AI UX Doesn’t Exist (But AI Can Make UX Work Less Miserable)

Good UX is deciding what matters, in what order, for whom, under constraints you don’t control. That’s the job.

AI in UX can help. But it doesn’t do the deciding.

Think of it as a tireless junior: fast, keen, occasionally delusional. Useful when supervised. Dangerous when left unsupervised.

I’m not anti-AI. I’m anti-theatre. I use AI UX tools every week, but never to replace judgment. Here’s how I actually use it, where I won’t, and why UX work still matters even as every second product claims to be “AI-powered.”


The Client Who Wanted AI to “Just Design the UX”

SaaS analytics platform. 32 employees. Founder called: “We want to use AI for UX design. Save time, move faster, let the AI handle the tedious stuff. How much would that cost?”

I asked what “tedious stuff” meant.

“All of it. The research. The wireframes. The copy. We’ll just feed it our requirements and it spits out the UX, right?”

No. That’s not how AI UX design works. That’s how you waste three months building what a probabilistic model thought you meant while your actual users suffer through broken flows.

Here’s what I proposed instead:

Traditional approach without AI:

  • 50 support tickets analyzed manually (6 hours)
  • 3 hours of user interview transcripts reviewed (4 hours)
  • Patterns identified, prioritized (3 hours)
  • Wireframes and copy iteration (8 hours)
  • Total: 21 hours over 5 days

Approach using AI UX tools correctly:

  • AI digests 50 support tickets → groups by blocked task (15 minutes)
  • I review groups, validate patterns (45 minutes)
  • AI transcribes and summarizes 3 hours of interviews (20 minutes)
  • I watch the contradictions AI flagged (1.5 hours)
  • Wireframes and copy iteration (still 8 hours, this is the actual work)
  • Total: 11 hours over 3 days

Time saved: 10 hours. Cost saved: $1,200 at my rate.

But here’s what AI didn’t do:

  • Decide which problems mattered most (I did)
  • Choose what not to fix (I did)
  • Write final copy for money-making flows (I did)
  • Prioritize against roadmap constraints (I did)
  • Sign off on solutions (I did)

AI handled search, summarize, scaffold. I handled decide, prioritize, promise.

Results after two weeks:

  • Support tickets about confused onboarding: -52%
  • Time from signup to first action: 8 minutes → 3 minutes
  • Onboarding completion: 41% → 67%

That’s what proper use of AI in UX design looks like. Not “AI does the UX.” AI removes friction so humans can make better decisions faster.

The founder wanted magic. I gave them process. They paid less and got better results.


UX Is About Choices, Not Pixels

UX isn’t pixels. It’s priority. It’s trade-offs. It’s choosing what not to ship.

Most of my day is spent saying no:

  • No to another step in onboarding
  • No to clever animation hiding slow queries
  • No to five CTA variations with different adjectives

Users don’t care how much effort you put in. They care that it works and that they understand what to do next.

In AI UX design specifically, the real interface is uncertainty. Models are probabilistic. They guess. Your job isn’t to pretend certainty. Your job is to make uncertainty usable:

  • Show confidence without drama
  • Admit failure without shame
  • Give people fast correction paths without losing their work

That’s product design, not press-release design. And no AI tool makes those decisions for you.


Where AI UX Helps — So I Can Think

I don’t ask a model to be my creative director. I ask it to remove friction so I get to decisions faster.

Here’s what UX design AI tools actually do well:

Call digestion.
One hour of user interviews becomes a first-pass summary I can scan on the train. Then I re-watch the sharp bits. Models love confident conclusions; I love contradictions. Both are useful.

Theme clustering.
Dump ten pages of notes in, get clusters to argue with. It won’t know which cluster matters, but it stops me drowning in post-its.

Edge-case generator.
“List 20 ways this form can go wrong.” It never finds all of them, but it finds enough to kick me into test mode.

Proto-data for prototypes.
Realistic names, transactions, and dates, so screens feel like software, not theatre. It’s magical how much faster stakeholders get the point when tables aren’t full of Lorem Ipsum and “John Doe.”

Empty states and error scaffolds.
First-pass microcopy for boring bits nobody wants to write. I always edit it, but starting from something beats starting from nothing.

Variant sweeps.
Ten headline directions in my tone. Nine go in the bin; one sparks the right angle.

Analytics hygiene.
Given a journey description, I ask for a draft event map. I still decide what matters, but I’m not starting from a blank page.

Handoff notes.
I design the component; the model expands my terse annotations into a friendly spec for developers. Not the decisions — just the wording for proper handoffs.

This is what AI in UX is good for: search, summarize, scaffold, simulate, sanity-check. The work becomes less miserable. I become harder to distract.

When doing UX design work, AI saves me from drowning in transcripts so I can focus on what the data means. That’s the value.


Two Repeatable Moves

These work. I use them weekly.

Move 1: Inbox Autopsy

Inputs: Last 50-100 support messages (subject, body, last seen screen/URL if you have it)

Prompt:

Group these messages by blocked task and last seen screen/URL. 
For each group, return: 
- blocked_task
- last_seen
- count
- representative_quote (max 15 words)

Do not invent fields. If unknown, write "unknown."

Output you want: Short table with 3-6 groups. Not a novel.

Keep/kill rules:

  • Keep groups touching money paths (signup, checkout, billing, core task)
  • Merge groups that are wording variants of same blockage
  • Kill anything with count = 1 unless catastrophic

Definition of done: One sentence per top group: “Fix X on Y screen.” That’s your mini backlog.

Next step: Write tiniest possible fix (copy change, hint, state) and metric to watch for a week.

Common model failure: It will try to summarize by topic (“confusion,” “issues”). Force blocked-task labels. That’s where the action lives.

This is how I found the three critical issues in the analytics platform story above. 50 tickets → 6 groups → 3 worth fixing immediately. AI did the grouping in 15 minutes; it would’ve taken me 6 hours manually.
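The keep/kill rules above are mechanical enough to script once the model has returned its groups. A minimal Python sketch, assuming the model’s output has already been parsed into dicts with the fields the prompt asks for; the `MONEY_PATHS` set and the `catastrophic` flag are illustrative assumptions, not part of the original workflow:

```python
# Keep/kill pass over AI-grouped support tickets (sketch, not a library).
# Assumes each group carries the fields the prompt asks for:
# blocked_task, last_seen, count.

MONEY_PATHS = {"signup", "checkout", "billing", "core task"}  # assumed labels

def keep_or_kill(groups):
    """Apply the keep/kill rules: drop one-offs, money paths first."""
    kept = []
    for g in groups:
        # Kill anything with count == 1 unless it's catastrophic.
        if g["count"] == 1 and not g.get("catastrophic", False):
            continue
        kept.append(g)
    # Money paths first, then by ticket volume: that's the backlog order.
    return sorted(
        kept,
        key=lambda g: (g["blocked_task"] not in MONEY_PATHS, -g["count"]),
    )

groups = [
    {"blocked_task": "signup", "last_seen": "/register", "count": 14},
    {"blocked_task": "export csv", "last_seen": "/reports", "count": 1},
    {"blocked_task": "billing", "last_seen": "/plan", "count": 6},
]
backlog = keep_or_kill(groups)
for g in backlog:
    print(f"Fix {g['blocked_task']} on {g['last_seen']} ({g['count']} tickets)")
```

The merge-wording-variants rule stays manual on purpose: deciding that two phrasings describe the same blockage is exactly the judgment call the model gets wrong.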

Move 2: Copy Contradiction Diff

Inputs: Paste two flows (or string lists) you suspect are fighting each other

Prompt:

Find contradictions between these flows. 
Flag phrases that promise different behavior (e.g., "autosave" vs "save").
Propose minimal wording change making both flows consistent.
Keep suggestions to 15 words each.

Output you want: Bullet list of contradictions with one-line fix per item

Keep/kill rules:

  • Keep contradictions touching state (save/unsaved, draft/published, success/failure)
  • Kill stylistic quibbles (Oxford commas, synonyms not changing behavior)
  • Prefer removal over addition — fewer words, clearer truth

Definition of done: One accepted change per contradiction, implemented both places, with note in component or content source of truth.

Common model failure: It over-edits. Ask for the minimal change and reject anything that alters behavior.

I used this on a project where onboarding said “automatic sync” but settings said “manual sync required.” AI flagged it in 30 seconds. It would’ve taken user complaints to surface otherwise.
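A crude deterministic pre-pass can catch the loudest state-term clashes before the model does the nuanced diff. A sketch, assuming a hand-maintained list of term pairs that promise different behavior; the `CONFLICTS` pairs below are examples, not an exhaustive vocabulary:

```python
# Flag strings from two flows that use conflicting state terms,
# e.g. "automatic sync" in onboarding vs "manual sync" in settings.
# One-directional and keyword-based on purpose: a pre-pass,
# not a replacement for the model's contradiction diff.

CONFLICTS = [  # assumed pairs that promise different behavior
    ("autosave", "manual save"),
    ("automatic sync", "manual sync"),
    ("draft", "published"),
]

def find_contradictions(flow_a, flow_b):
    hits = []
    for term_a, term_b in CONFLICTS:
        in_a = [s for s in flow_a if term_a in s.lower()]
        in_b = [s for s in flow_b if term_b in s.lower()]
        if in_a and in_b:
            hits.append({"promise": term_a, "contradiction": term_b,
                         "where_a": in_a[0], "where_b": in_b[0]})
    return hits

onboarding = ["Your data uses automatic sync.", "Welcome aboard!"]
settings = ["Manual sync required for external sources."]
for hit in find_contradictions(onboarding, settings):
    print(f"{hit['promise']!r} vs {hit['contradiction']!r}")
```

The output is a shortlist, not a verdict: each flagged pair still goes through the keep/kill rules above before anything gets reworded.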


Where I Don’t Use AI UX

Some things models can’t do. Won’t ever be able to do. Because they’re not compute problems — they’re human problems.

Prioritization.
Models are brilliant at options. Terrible at cost. Choosing what not to do is leadership, not a prompt. This is core UX/UI design work — deciding what ships and what waits.

Taste and hierarchy.
A model can lay bricks. It cannot design the room. Choosing scale, rhythm, and pace is the point.

Positioning and narrative.
Product truth is earned in calls, demos, and uncomfortable meetings. AI can tidy the words. It can’t decide what you stand for. Knowing who you’re designing for comes from humans, not models.

Guardrails.
“Sensitive content warning” is not a safety system. Deciding what your product refuses to do is a human job. AI just explains it politely.

Final copy for high-stakes UI.
I’ll happily take a draft. I will not outsource the promise. When users see “Confirm payment,” those words had better be right. No model writes that final version.

When doing UX design for AI products specifically, this boundary matters more. Start with the correction loop before the happy path. Show the model’s confidence honestly (bands beat fake percentages). Design latency states that tell the truth. Let people undo without punishment. This is where trust lives.
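The “bands beat fake percentages” point is cheap to implement. A minimal sketch; the thresholds and copy below are illustrative assumptions to tune against real model behavior, not recommendations:

```python
# Map a raw 0..1 confidence score to an honest band with plain copy,
# instead of showing users a fake-precise "87.3% confident".
# Thresholds (0.85, 0.5) are assumed for illustration.

def confidence_band(score: float) -> str:
    """Return user-facing copy for a model confidence score."""
    if score >= 0.85:
        return "Likely correct. Review before you rely on it."
    if score >= 0.5:
        return "Possibly correct. Worth a closer look."
    return "Low confidence. Treat this as a guess."

print(confidence_band(0.92))
print(confidence_band(0.60))
print(confidence_band(0.20))
```

Three bands is a starting point, not a rule; the honest part is the copy, which admits doubt at every level instead of only at the bottom one.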

And getting this wrong is expensive — both in user trust and in fixing broken promises later.


My Simple Rule

If the task is: search, summarize, scaffold, simulate, sanity-check
→ Give it to AI first

If the task is: decide, prioritize, promise, exclude, sign-off
→ That’s mine

This rule keeps me fast without making me lazy. It also keeps the team honest. If someone says “let’s have AI write the UX,” what they usually mean is “we don’t know what matters yet.”

Fine. Then we’re not done thinking.

For SaaS product design work specifically, AI helps most in research and documentation phases. Decisions about flows, priorities, and trade-offs? Still human work.


The Bit No AI UX Tool Solves

Most broken UX isn’t a tooling problem. It’s a leadership problem disguised as a Figma file.

No model will save:

  • Roadmap with no priorities
  • Funnel with no owner
  • Product that won’t choose who it’s for

AI can make you faster at the wrong thing. That’s not progress. That’s theatre with better lighting.

The inverse is also true: when strategy is clear, AI saves hours. I’ve shipped weeks faster because the grunt work moved out of the way — transcripts digested, variants generated, specs expanded — so the team could focus on the argument that mattered.

This is what good website design looks like too: clear strategy first, AI-assisted execution second, never reversed.


How to Actually Use AI in UX Design

Use AI like a very capable intern who never sleeps and occasionally lies. Helpful when supervised. Never in charge.

For AI UX design work: Your job is making uncertainty humane. Show what the model knows and doesn’t know. Design for correction, not just for success.

For UX design in AI products: Your job is admitting the machine’s limits before users discover them the hard way.

When talking about AI in UX design: Talk about outcomes, not magic. Fewer support tickets. Faster time-to-value. Clearer decisions. Not “AI revolutionizes UX” but “AI cleared 10 hours of grunt work so I could focus on the three decisions that mattered.”

I don’t need a robot to have taste for me. I need space to use mine. AI gives me that space.

The rest — the choosing, the responsibility, the “no” that keeps product honest — is still the job.


The Bottom Line

You can’t “AI the onboarding.”

You can use AI to clear fog around it: surface real verbs, reveal friction, scaffold empty states. So the human decision can be obvious and fast.

That’s the point of AI in UX design: less ceremony, more clarity.

The analytics platform founder learned this. They wanted AI to design everything; they got AI handling the tedious analysis while I made the actual decisions. They saved $1,200 in grunt work and shipped better UX in less time.

Because AI didn’t decide what mattered. I did.

That’s the job. That’s always been the job. AI just makes the job less miserable.

Use it for that. Nothing more. Nothing less.

__
DNSK WORK
Design studio for digital products
https://dnsk.work