You’ve probably noticed your AI design tools getting weirder. Here’s why that’s happening and what you can do about it.
Something strange has been happening with AI design tools over the past few months.
That background removal tool that used to work perfectly? Now it occasionally gives you results that look like someone attacked the edges with a chainsaw. The AI-generated stock photos have this uncanny quality where the hands have too many fingers or not enough. The color palette suggestions feel… off somehow.
You probably assumed it was just you being picky. Or maybe the tool had a bad update.
But there’s a bigger problem at play, and it’s not getting fixed anytime soon. It’s called AI model collapse, and it’s quietly affecting every AI-powered design tool you use.
Here’s what’s actually happening: AI tools are increasingly training on content that other AI tools created. And when AI eats AI output, it starts to degrade. Not slowly, either: the damage compounds with every generation.
This isn’t a future problem. It’s happening right now in the tools you’re using today.
What Model Collapse Actually Means (Without the Jargon)
AI model collapse is what happens when AI systems train on data that was generated by other AI systems.
Think of it like this: you make a photocopy of a document. Then you make a photocopy of that photocopy. Then another copy of that copy. By the tenth generation, the text is barely readable, the images are distorted, and you’ve introduced artifacts that weren’t in the original.
That’s essentially what’s happening with AI, except instead of photocopiers, it’s machine learning models training on their own output.
The result is steady “AI degradation”: the quality of AI-generated content drops each time a model trains on synthetic data instead of real human-created content.
And by some estimates, half or more of the content on the internet is now AI-generated. Which means AI tools are increasingly eating their own output whether they want to or not.
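If the photocopy analogy feels abstract, here’s a toy simulation of the mechanism. This is a deliberately minimal sketch of my own, not anyone’s actual training pipeline: the “model” simply memorizes the distribution of its training set, and each generation retrains on samples the previous one produced.

```python
import numpy as np

rng = np.random.default_rng(0)

styles = np.arange(50)            # 50 distinct "design styles"
probs = 1.0 / (styles + 1) ** 2   # long-tailed: a few common, many rare
probs /= probs.sum()

# Generation 0: a human-made training set, sampled from the true distribution.
sample = rng.choice(styles, size=1000, p=probs)
print(f"gen  0: {len(np.unique(sample)):2d} styles present")

for generation in range(1, 11):
    # "Training" here means learning the empirical distribution of the current
    # data; "generating" means sampling from it. So training on your own
    # output is just resampling with replacement.
    sample = rng.choice(sample, size=1000)
    print(f"gen {generation:2d}: {len(np.unique(sample)):2d} styles present")
```

The count of surviving styles can only fall: once a rare style drops out of one generation’s output, nothing downstream can ever produce it again. The tails vanish first, the output regresses toward a narrow average, and no amount of retraining on the loop’s own data brings the diversity back. That’s the photocopy effect in a dozen lines.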
Why This Is Already Affecting Your Design Work
You might be thinking: “Okay, but I’m not training AI models. I’m just using design tools. How does this affect me?”
Here’s how:
Your AI plugins are getting worse. That Figma plugin that generates UI components? It was trained on design files. Many of those design files now contain AI-generated elements. So the next version of that plugin is training on AI output, not human design decisions.
Stock photo generators are in a death spiral. AI image generators trained on stock photos. Then people started uploading AI-generated images to stock photo sites. Now AI is training on AI images, and the quality is noticeably degrading.
Look at hands in AI-generated images from 2024 vs. 2025. They’re getting worse, not better. That’s model collapse in action.
Design inspiration is contaminated. When you search for “modern dashboard design” or “SaaS landing page,” increasingly you’re seeing AI-generated examples. Which means when you reference these for inspiration, you’re being influenced by AI output that was influenced by AI output.
It’s feedback loops all the way down.
The Signs You’ve Probably Already Noticed
Let me describe some things you’ve probably experienced but couldn’t quite explain:
Weird edge cases that shouldn’t happen. Your AI background remover used to nail complex hair edges. Now sometimes it just… doesn’t. The algorithm seems to have forgotten how to handle certain scenarios it used to manage fine.
Increasingly generic output. Your AI writing assistant used to give you varied suggestions. Now everything sounds kind of samey. That’s because it’s training on increasingly homogenized content — AI output tends toward the mean.
Subtle wrongness you can’t pinpoint. That AI-generated illustration looks fine at first glance, but something feels off. The proportions aren’t quite right. The color relationships are slightly weird. It’s competent but soulless.
Consistency problems. Run the same prompt three times, and you get wildly different quality levels. That’s a sign the model’s learned distribution has drifted: it’s less firmly anchored to real examples than it used to be.
These aren’t bugs. They’re symptoms of AI degradation.
Why AI Companies Can’t Just Fix This
Here’s the part that makes this problem unsolvable with current approaches: AI companies need massive amounts of training data. The internet was that data source.
But now the internet is majority AI-generated content. So every time they retrain their models to “improve” them, they’re inadvertently including more AI output in the training data.
They can try to filter out AI-generated content, but:
- AI-generated content is increasingly hard to distinguish from human content
- The sheer volume makes manual curation impossible
- Users are incentivized to pass off AI content as human-made
- Even “human-created” content now often includes AI-assisted elements
It’s like trying to un-mix ingredients after you’ve baked a cake. The contamination is already throughout the system.
Some researchers estimate that by 2026, over 90% of online content will be AI-generated or AI-influenced. Which means model collapse is only going to accelerate.
What This Means for Your Daily Workflow
Practically speaking, here’s how AI model collapse affects the work you’re doing today:
Don’t trust AI for final output. Use it for ideation, rapid prototyping, or exploring directions. But always have a human (you) make the final decisions and refinements.
AI-generated designs are fine for “what if we tried this?” They’re not fine for “ship this to production.”
Question AI-generated research. User personas, competitive analysis, market research — if AI generated it, verify it against real human sources.
AI is increasingly trained on AI-generated research papers and reports. Which means you might be basing UX decisions on AI echoing AI.
Diversify your reference sources. Don’t just pull inspiration from Pinterest, Dribbble, or Behance anymore. Those platforms are increasingly polluted with AI-generated work.
Look at actual shipped products. Talk to real users. Reference physical design and art. Get outside the AI feedback loop.
Save your old AI outputs. If you generated something with AI a year ago and it was good, save it. The same prompt today might give you worse results due to AI degradation.
This sounds paranoid, but we’re already seeing it happen.
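To make that habit concrete, a small script is enough. Here’s a minimal sketch; the archive path, filename scheme, and record fields are my own convention, not any tool’s format:

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

ARCHIVE = Path("ai-output-archive")  # hypothetical location; use whatever suits you

def archive_output(tool: str, prompt: str, output_file: Path) -> Path:
    """Copy an AI output into the archive alongside a record of the prompt,
    tool, and date that produced it, keyed by a hash of the prompt so the
    same prompt can be compared across model versions later."""
    ARCHIVE.mkdir(exist_ok=True)
    key = hashlib.sha256(prompt.encode()).hexdigest()[:12]
    stamp = datetime.now(timezone.utc).strftime("%Y-%m-%d")
    dest = ARCHIVE / f"{key}-{stamp}-{output_file.name}"
    dest.write_bytes(output_file.read_bytes())
    record = {
        "tool": tool,
        "prompt": prompt,
        "output": dest.name,
        "saved_at": datetime.now(timezone.utc).isoformat(),
    }
    (ARCHIVE / f"{key}-{stamp}.json").write_text(json.dumps(record, indent=2))
    return dest
```

A year from now, rerunning an archived prompt and comparing against the saved result gives you direct evidence of whether a tool has degraded, instead of a vague feeling that it has.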
How to Build AI-Resistant Design Processes
Here’s what I’ve started doing in my own work to minimize the impact of AI model collapse:
1. Use AI as a Starting Point, Never an Endpoint
Generate options with AI. Then apply human judgment, refinement, and decision-making. The goal is to use AI efficiency while maintaining human creative direction.
Think of AI as a junior designer who works fast but needs significant art direction. Not as a replacement for your design thinking.
2. Verify Everything Against Reality
If AI tells you something about user behavior, confirm it with actual users. If AI generates a design pattern, check if it actually works in real products.
The same UX thinking that trains you to question assumptions applies here: test, don’t assume.
3. Keep Human-Created Reference Libraries
Build a collection of design inspiration that you know comes from actual human designers working on real products. Curate it manually. Reference it when AI output feels off.
This is like maintaining a design system that doesn’t degrade over time, except it’s a reference system for your own judgment.
4. Trust Your Instincts When Something Feels Wrong
If AI output feels slightly off, it probably is. Model collapse creates subtle wrongness that’s hard to articulate but easy to feel.
Your design intuition — developed through years of actually looking at and creating things — is more reliable than AI output trained on increasingly synthetic data.
5. Document Your Design Decisions
When you make a design choice, write down why. Not for anyone else — for yourself.
This builds a personal design knowledge base that isn’t contaminated by AI feedback loops. It’s your actual thinking, not AI echoing AI.
Why This Actually Makes Human Designers More Valuable
Here’s the paradox: AI model collapse is making human creative judgment more valuable, not less.
As AI tools degrade, a different set of skills becomes premium:
- Spotting when AI output has gone wrong
- Understanding why something feels off
- Making informed refinements
- Exercising genuine creative judgment
AI can’t do these things; it can only recognize patterns in its training data. And its training data is getting worse.
Remember the research showing design skills now surpassing coding in AI job requirements? This is why.
Companies need humans who can direct AI tools, curate their output, and make the decisions AI can’t make. Especially as AI reliability degrades.
The Uncomfortable Questions Nobody’s Answering
AI companies aren’t talking about model collapse publicly because it threatens the narrative that AI keeps getting better.
But here are the questions designers should be asking:
How do you verify your training data isn’t contaminated? Most AI companies can’t answer this. Because they don’t know.
What’s your plan when synthetic data exceeds real data? We’re already there for many content types. There is no plan.
How do you maintain quality as models train on their own output? The honest answer is: they can’t, with current approaches.
When AI degradation becomes obvious to users, what then? We’re starting to find out.
These aren’t hypothetical concerns. They’re affecting the tools you’re using right now.
What to Do About It (Practical Steps)
This week:
- Audit which design tools you’re using AI for
- Identify which outputs you’re accepting without human review
- Start applying more scrutiny to AI-generated elements
This month:
- Build a human-curated reference library
- Document your design decision-making process
- Test AI outputs against real user needs, not just aesthetic judgment
This quarter:
- Develop workflows that use AI for efficiency but human judgment for quality
- Train yourself to spot AI degradation signs in tools you use daily
- Position yourself as someone who can direct AI, not just use it
The designers who succeed aren’t the ones who reject AI or blindly trust it. They’re the ones who understand its limitations and build processes that leverage its strengths while compensating for its weaknesses.
Especially as those weaknesses grow.
The Bigger Picture
AI model collapse isn’t just a technical problem for AI researchers. It’s affecting every designer who uses AI-powered tools.
And it’s not getting fixed. The fundamental issue — AI training on AI output — is built into how these systems work at scale.
Which means the solution isn’t better AI. It’s better human oversight.
The same UX design principles that tell you to verify assumptions, test with real users, and trust your judgment over what “should” work? Those apply to working with AI tools too.
AI degradation makes your design intuition more valuable, not less. Your ability to spot when something’s off. Your understanding of what actually works versus what looks like it should work. Your judgment developed through years of actual practice.
These are the skills that don’t degrade. Because they’re based on reality, not on recursive feedback loops eating their own output.
The next time your AI design tool gives you output that feels slightly wrong, trust that instinct.
It’s probably not you being picky.
It’s model collapse doing what it does: slowly turning the internet into a photocopy of a photocopy of a photocopy.
And your job, as a designer, is to be the person who notices when the copies have gone bad.