UX Audit Checklist: 15 Signs Your Product Needs a Reset (Not a Redesign)

You’ve been shipping. The roadmap is moving, features are live, users are paying. But somewhere in the last few months the product started feeling… thick. Not broken – functional. Just harder to use than it needs to be. The onboarding that used to feel clean now has two extra steps nobody can explain. There’s a settings panel that three different designers have touched and it shows. The modal that was supposed to be temporary has been there for fourteen months.

This is not a redesign situation. A redesign would take six months, cost $150K minimum, and leave you explaining to investors why you stopped shipping features to change button colors.

What you need is a UX audit. A focused review of what’s actually broken, what’s accumulated debt without anyone noticing, and what can be fixed in days rather than quarters.

I’ve run this process on products across fintech, healthtech, logistics, and B2B SaaS. The patterns repeat: the same things break, in roughly the same order. The good news: most of them are fixable without rebuilding anything.

Here’s the full UX audit checklist – what to look for, why it matters, and what it actually costs when you don’t.


Start With the Screens Nobody Wants to Touch

Every product has them. The account settings panel that hasn’t been updated since launch. The billing page that got postponed four times and still looks like it was built by a different company. The onboarding flow that never quite worked but also never got prioritised because “users figure it out eventually.”

These are where your UX audit should begin – not on the polished surfaces, but on the avoided ones.

The logic is simple: if your own team skips past certain screens during demos, your users are encountering them daily. The avoided screens are where design debt lives. They’re the accumulated “we’ll fix it later” decisions that became permanent through inaction. They reveal more about your product’s actual health than anything in the happy path.

Make a list. Ask your support team which screens generate the most tickets. Ask your engineers which screens they dread touching. Ask your PMs which features never come up in roadmap discussions. The intersection of those three lists is your audit starting point.


Check for Modal Sprawl

Open your product and start clicking. Count how many modals can be open simultaneously.

One is fine. Two is a warning sign. Three means someone got comfortable with a pattern that doesn’t scale and applied it everywhere.

Modal sprawl starts innocently. One helpful confirmation prompt. One settings overlay. One quick edit form. Then a modal links to another modal, which contains a tooltip that expands, which has a button that opens a third overlay. Users find themselves three layers deep with no clear way back, clicking X buttons in the wrong order and ending up somewhere they didn’t intend.
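If you want a number rather than a feeling, a console snippet can count what’s currently stacked. A minimal sketch in TypeScript, assuming modals expose `role="dialog"` or use a native `<dialog>` element – the `.modal.is-open` class is a hypothetical stand-in for whatever your codebase actually uses:

```ts
// Count overlays currently mounted and open in the DOM.
// The selector list is an assumption - swap in your product's own classes.
const openOverlays = document.querySelectorAll(
  '[role="dialog"], dialog[open], .modal.is-open'
);
console.log(`Open overlays: ${openOverlays.length}`);
```

Click through the billing flow, the settings flow, and onboarding with this in the console, and note the highest number you see.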

I audited a product last year with eight stackable modals. The user journey for updating billing information passed through four of them sequentially. Support tickets for that flow ran at 3.2 times the platform average. When we collapsed the flow into a single dedicated page, support volume for billing dropped 61% in the first month.

The diagnostic test: if you can’t diagram the full modal flow on a whiteboard without apologising, it’s broken. The fix is rarely another modal. It’s usually a dedicated page, a slide-out panel, or a simplified flow that doesn’t require overlay logic at all.


Find the Tooltips That Became Crutches

Tooltips are a support mechanism. They’re for supplementary context – keyboard shortcuts, technical definitions, edge case explanations. They’re not a substitute for clear interface copy.

When tooltips start doing the job that labels and buttons should be doing, it means the interface stopped trusting itself. “Archive” needs a tooltip clarifying it doesn’t delete data? The button should say “Archive – data kept” instead. Six icons in a row each requiring a hover to understand their function? That’s a navigation problem, not a tooltip problem.

The practical issue is reach. Mobile users never see tooltips. Keyboard navigation users skip past them. Screen reader users get inconsistent behaviour depending on implementation. Every feature that relies on tooltip comprehension is a feature that’s invisible to a meaningful percentage of your users.

Run this test: disable all tooltips in your product for one day and watch session recordings. Every moment of hesitation, every repeated hover, every abandoned interaction – those are features that depend on explanation rather than clarity. Put them on the audit list.
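If your tooltips are CSS-driven, the test doesn’t need an engineering ticket. A minimal sketch, assuming common tooltip selectors – `[role="tooltip"]`, a `.tooltip` class, and `data-tooltip` attributes are guesses to adapt to your implementation:

```ts
// Hide CSS-driven tooltips for a test session by injecting a style rule.
const style = document.createElement("style");
style.textContent = `
  [role="tooltip"], .tooltip,
  [data-tooltip]::before, [data-tooltip]::after { display: none !important; }
`;
document.head.appendChild(style);

// Native browser tooltips can't be hidden with CSS - strip the attribute.
document.querySelectorAll("[title]").forEach((el) => el.removeAttribute("title"));
```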


Read the Empty States Out Loud

Open your product and trigger every zero state you can find. New user with no data. Empty search results. A feature with nothing in it yet. Read each empty state message aloud like a user encountering it for the first time.

Most empty states fail this test badly. “No items yet” with a generic illustration. “Nothing here” with a sad icon. These messages confirm the absence of data without doing anything useful with the moment.

Empty states are the highest-leverage, lowest-effort improvement in most UX audit checklists because they appear at exactly the moment when users are most uncertain – when they’ve arrived somewhere and nothing has happened yet. A good empty state answers three questions immediately: what is this feature for, why is it empty right now, and what should I do next.
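One way to make those three questions unavoidable is to encode them as a content contract that every empty state has to fill in. A minimal sketch – the shape and the example copy are illustrative assumptions, not a library API:

```ts
// A content contract that forces every empty state to answer all three questions.
interface EmptyStateCopy {
  what: string;                          // what is this feature for?
  why: string;                           // why is it empty right now?
  next: { label: string; href: string }; // what should I do next?
}

// Hypothetical example for a "Projects" zero state.
const projectsEmpty: EmptyStateCopy = {
  what: "Projects group your tasks, files, and teammates in one place.",
  why: "You haven't created a project yet.",
  next: { label: "Create your first project", href: "/projects/new" },
};
```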

The difference in activation is significant. Users who see a clear call to action in an empty state complete their first meaningful action at roughly twice the rate of users who see a generic placeholder. That’s not a design opinion – it’s the consistent finding across every product where I’ve run this comparison.

Fix the empty states before you touch anything else. They’re fast, they’re high impact, and they require no engineering work to spec.


Walk the Nav as a New User

Sign out of your product. Sign back in. Look at the navigation as if you’ve never seen it before.

Do the labels mean anything without context? Can you tell what each section contains before clicking into it? Is there a clear sense of hierarchy – primary actions prominent, secondary actions accessible but not competing?

Nav bloat is the most democratic form of design debt: it accumulates by consensus. Every team gets their section. Nobody argues for removal, because removal means someone’s feature is less visible. The result is a nav with 14 items where users regularly use 4.

Every additional navigation item costs real user time – not metaphorically, but measurably in session recordings where you can watch users scan a nav twice before finding what they needed. The cognitive load of parsing 14 options versus 5 options is substantial, and it compounds across every session, every day.

Audit your navigation against a simple test: could a new user, without any onboarding, navigate to the three most important areas of your product in under 60 seconds? Time it. If the answer is no, the nav needs work before anything else does.
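Timing it takes one listener. A minimal sketch that logs seconds from page load to each navigation click – it assumes the nav lives in a `<nav>` element, which may not match your markup:

```ts
// Log seconds from page load to each click inside the nav.
document.querySelector("nav")?.addEventListener("click", (event) => {
  const target = event.target as HTMLElement | null;
  const label = target?.textContent?.trim() || "(unlabelled)";
  console.log(`${(performance.now() / 1000).toFixed(1)}s -> ${label}`);
});
```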


Find the Copy That’s Afraid

Open three screens at random. Read every button label, every form field description, every confirmation message. Look for hedging language.

“You might want to consider saving your changes.” “Optionally, you can add team members here.” “This action may affect your settings.”

Hedging copy is a product confidence problem. It happens when teams are uncertain about what an action does, or uncertain whether users want to do it, and they resolve that uncertainty by making the language vague rather than by making the action clear.

The cost is real. Direct, confident copy consistently outperforms hedged copy in conversion tests – not because users prefer confident tone aesthetically, but because vague language creates decision hesitation. If the interface isn’t sure what the button does, the user definitely isn’t.

Find one piece of hedging copy per audit session and rewrite it. “Delete project” instead of “Would you like to remove this project?” “Save changes” instead of “You can optionally save if you want.” “Invite team” instead of “You might want to consider adding collaborators.”

It takes ten minutes. It compounds across every user who hits that screen.


Walk Core Flows With a Stopwatch

Pick the three most important tasks in your product. The thing a new user needs to do to get value. The thing a power user does every day. The thing that generates the most support tickets.

Time how long each takes from first click to completion. Then give the same tasks to someone who hasn’t used the product before and watch without commenting.

Every pause is a signal. Every repeated click is a signal. Every moment where they hover over a button without clicking – trying to predict what will happen before committing – is a signal.

You’re not optimising for speed. You’re optimising for sense. A task that takes 14 clicks when 4 clicks would accomplish the same thing isn’t slow because of click count – it’s slow because each extra click is a moment of uncertainty that the user has to resolve. Those moments add up, and users remember products that made them work harder than necessary.

Session recordings from tools like FullStory or Hotjar make this audit systematic. Look for rage clicks – rapid repeated clicks on the same element, usually indicating the user expected something to happen and it didn’t. Look for U-turn patterns – users navigating forward, then immediately back, indicating they went somewhere wrong. Both patterns point directly at UX debt that needs addressing.
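You can also catch rage clicks in your own QA sessions before reaching for a recording tool. A minimal sketch – the three-clicks-in-one-second threshold is an assumption to tune, not an industry constant:

```ts
// Flag rage clicks: three or more clicks on the same element within a second.
let lastTarget: EventTarget | null = null;
let clickTimes: number[] = [];

document.addEventListener("click", (event) => {
  const now = performance.now();
  if (event.target !== lastTarget) {
    lastTarget = event.target; // new element, reset the window
    clickTimes = [];
  }
  clickTimes = clickTimes.filter((t) => now - t < 1000);
  clickTimes.push(now);
  if (clickTimes.length >= 3) {
    console.warn("Possible rage click:", event.target);
  }
});
```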


Find the Dead Ends

Not every screen needs to be elaborate. But every screen should give users somewhere to go next.

Dead ends happen when teams build features without thinking about what users do after the feature completes its purpose. The export finished. The report generated. The form submitted. These are moments of completion where users are primed for the next action – and most products leave them staring at a success message with no direction.

Check every terminal state in your product. Every confirmation page. Every success state. Every “all done” moment. Ask: does this screen tell the user what changed, what’s possible now, and what makes sense to do next?

The fix is usually one additional element on a screen that already exists – a contextually relevant call to action, a link to the natural next step, a summary of what just happened paired with what it enables. Not complicated. Just considered.


Check for Styling Drift

Open your settings page, your dashboard, and your onboarding flow simultaneously. Do they look like they came from the same product?

Styling drift is inevitable in products that have been built over time by multiple designers. A button style gets introduced for one feature and not propagated. A new designer joins and brings their own spacing preferences. An urgent fix ships with “temporary” styling that becomes permanent.

The problem isn’t aesthetic – it’s cognitive. Users build a mental model of how your product works based on visual consistency. When a button looks different in two parts of the product, users pause to check whether it means something different. That pause is small but it’s friction, and friction at scale means users trust the product less than they should.

Run the audit: compare buttons, headings, form fields, and spacing across five randomly chosen screens. Note every inconsistency. The fix doesn’t require a full design system project – it requires a two-week cleanup pass with clear documented standards: these are the buttons, this is the spacing, this is how headings work. Then enforce it on every screen going forward.
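You can put a number on the drift before committing to the cleanup pass. A minimal sketch that groups every button on the current page by its computed style signature – the choice of properties is an assumption; extend it with whatever your standards actually specify:

```ts
// Count distinct button style variants on the current page.
const variants = new Map<string, number>();
document.querySelectorAll("button").forEach((button) => {
  const s = getComputedStyle(button);
  const signature = [s.fontSize, s.fontWeight, s.borderRadius, s.backgroundColor, s.padding].join(" | ");
  variants.set(signature, (variants.get(signature) ?? 0) + 1);
});
console.log(`Button style variants on this page: ${variants.size}`);
variants.forEach((count, signature) => console.log(`${count}x  ${signature}`));
```

Run it on the same five screens you’re comparing and note where the counts diverge.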


Identify Where Users Get Stuck But Don’t Complain

Support tickets tell you where users are confused enough to ask for help. They don’t tell you where users are confused and quietly give up.

The second category is larger and more dangerous. Users who hit confusion in a feature, can’t figure it out, and decide it’s not for them – they don’t generate a ticket. They just stop using that part of the product. You lose the activation, the habit formation, the word-of-mouth recommendation that would have come from a user who got value from the feature. You never find out why.

Look for this pattern in analytics: screens with high exit rates and low support volume. That combination means users are leaving without asking for help. Pull the session recordings for those exits. In most cases you’ll find a label that’s ambiguous, a flow that has an unexpected step, or an empty state that gives no guidance.
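Cross-referencing the two signals is easy to script once you have both exports. A minimal sketch – the data shape, the 40% exit threshold, and the ticket cutoff are assumptions to calibrate against your own baselines:

```ts
// Surface screens where users leave often but rarely ask for help.
interface ScreenStats {
  screen: string;
  exitRate: number; // 0-1, from your analytics export
  tickets: number;  // support tickets referencing this screen, per month
}

function silentFriction(stats: ScreenStats[]): ScreenStats[] {
  return stats
    .filter((s) => s.exitRate > 0.4 && s.tickets < 3)
    .sort((a, b) => b.exitRate - a.exitRate);
}
```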

One product I audited had a feature with 67% abandonment and zero support tickets. Users who arrived at a certain configuration screen saw a field labelled “Integration endpoint” with no explanation of what it expected. Most of them assumed the feature wasn’t relevant to them and left. Changing the label to “Paste your webhook URL here” and adding a one-line example dropped abandonment to 19%. The feature had been “broken” for 11 months without generating a single complaint.


Audit Your Primary CTA on Every Major Screen

Walk through your core product flows and look at every major screen’s primary call to action. Is it still pointing users in the right direction? Is it still the most important action on that screen?

CTAs drift out of alignment with user needs as products evolve. A button that was the right primary action at launch may no longer be the right primary action after six months of feature additions. Secondary actions accumulate and compete. The hierarchy that made sense in the original design gets eroded by subsequent releases.

The specific failure mode to look for: three competing CTAs on the same screen with no clear hierarchy. “Save,” “Save and Continue,” and “Save Draft” sitting side by side. Users spend real time trying to understand the difference when there often isn’t a meaningful one. Consolidate where possible, establish clear hierarchy where consolidation isn’t possible, and make sure the visual weight of your primary action is unambiguous.


Run the “Would We Ship This Today?” Test

Pick five screens at random. Show each one to your product team and ask a single question: if this were a new feature being reviewed before launch, would we approve it as-is?

This test cuts through rationalisation faster than any other diagnostic. Teams defend existing UI because it’s already built and changing it has a cost. But that’s sunk cost thinking. The question isn’t “should we change it” – it’s “would we build it this way now, knowing what we know about our users.”

When I run this exercise with product teams, the typical result is that 40 to 60% of existing screens wouldn’t pass a fresh review. They shipped under deadline pressure, they accumulated changes that weren’t coherent, or they were built for a user behaviour that turned out to be different from the actual behaviour.

Score each screen 1 to 5. Anything below 3 goes on the reset list. Don’t fix everything at once – prioritise by frequency of use and severity of the issue, and work through the list systematically.
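The prioritisation can be mechanical. A minimal sketch – the formula (traffic times badness, divided by effort) is one reasonable ordering, an assumption rather than the only defensible choice:

```ts
// Order the reset list: worse scores and heavier traffic rise, bigger fixes sink.
interface AuditItem {
  screen: string;
  score: number;          // 1-5 from the "would we ship this today?" review
  weeklySessions: number; // how often users hit this screen
  effortDays: number;     // rough estimate to fix
}

const priority = (i: AuditItem) => (i.weeklySessions * (5 - i.score)) / i.effortDays;

const resetList = (items: AuditItem[]) =>
  items.filter((i) => i.score < 3).sort((a, b) => priority(b) - priority(a));
```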


What the Audit Costs vs. What Ignoring It Costs

A proper UX audit across these areas takes 2 to 3 weeks of focused work. The output is a prioritised list of issues scored by severity and effort, with specific recommendations for each.

The cost of running it: roughly $8,000 to $15,000 for a thorough external UX audit depending on product complexity, or 3 to 4 weeks of internal designer time if you have the capacity.

The cost of not running it compounds differently for different products, but the pattern is consistent. Support volume grows because users keep hitting the same friction points. Activation rates plateau because new users encounter the same dead ends that existing users have learned to navigate around. The team loses confidence in the product incrementally – not in a dramatic way, but in the slow erosion that comes from knowing things are rougher than they should be.

One B2B SaaS I worked with delayed a UX audit for 14 months because “we don’t have time right now.” When we finally ran it, we found 23 distinct issues that had been generating support tickets the entire time. Conservative estimate of the support cost over 14 months: $34,000 in team time. The audit and remediation cost $18,000.

That’s the math. Not doing the audit isn’t free. It just invoices differently.


The Reset, Not the Redesign

A product design reset is not a redesign. It doesn’t require stopping feature development. It doesn’t require months of discovery or a new design system or stakeholder alignment sessions about visual direction.

It requires working through this list systematically. Week one: audit all areas, score severity, identify the top ten issues. Week two: fix the top five. Week three: test, measure, document patterns so the same issues don’t recur.

The product that comes out the other side isn’t unrecognisable. It’s just cleaner. Sharper in the places that matter. More confident in its own language. Less likely to lose users to friction that nobody put there intentionally but nobody removed either.

That’s what a UX audit checklist is actually for – not to find everything wrong, but to find the specific things that are costing you the most and fix them in the right order.

Start with the screens nobody wants to touch. Work outward from there.

__
DNSK WORK
Design studio for digital products
https://dnsk.work