Something broke between February and August 2025, and I have receipts.
I was working on client team headshots in August. Ran the first photo through my usual background removal tool — the same one I’d used reliably for two years. The edge looked chewed. Like someone attacked it with a chainsaw set to “drunk.”
My first thought: bad photo. Tried three more photos. Same weird artifacts.
Got suspicious.
Went back to a project from February 2025. Found the original raw photos I’d used. Re-uploaded the exact same photo to the exact same tool.
Got worse results in August than I got in February with identical input.
Same photo. Same tool. Worse output. Not slightly worse — noticeably worse.
That’s when I realized: my AI design tools aren’t improving. They’re degrading.
Here’s what’s actually happening, why it’s not getting fixed, and what I changed in my workflow because I can no longer trust AI tools for production work.
The Systematic Test That Proved It
I didn’t want to believe it, so I tested it properly.
Found 10 photos I’d processed between early 2024 and early 2025. I’d kept the originals and the AI outputs from when I first ran them. Re-ran all 10 through the background removal tool in August 2025.
Documented everything.
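If you want to run the same check on your own tools, here’s a minimal sketch of the comparison step. It assumes you kept the original AI outputs next to the re-runs with matching file names, and it leans on SSIM from scikit-image as a rough proxy for drift; the folder names and the 0.95 threshold are illustrative, and nothing replaces looking at the edges yourself.

```python
# Minimal sketch: compare archived AI outputs against fresh re-runs of the same inputs.
# Assumes pairs like outputs_then/photo_01.png and outputs_now/photo_01.png, saved at
# the same dimensions. SSIM only flags "these differ"; a human still judges better/worse.
from pathlib import Path

import numpy as np
from PIL import Image
from skimage.metrics import structural_similarity

OLD_DIR = Path("outputs_then")  # results saved when the photos were first processed
NEW_DIR = Path("outputs_now")   # the same photos re-run through the tool today

def load(path: Path) -> np.ndarray:
    return np.asarray(Image.open(path).convert("RGB"))

for old_path in sorted(OLD_DIR.glob("*.png")):
    new_path = NEW_DIR / old_path.name
    if not new_path.exists():
        continue
    score = structural_similarity(load(old_path), load(new_path), channel_axis=-1)
    flag = "INSPECT" if score < 0.95 else "ok"  # arbitrary threshold for triage
    print(f"{old_path.name}: SSIM {score:.3f} [{flag}]")
```

A low score doesn’t prove degradation on its own; it just tells you which pairs to pull up side by side.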
Clear pattern:
- 2024 photos: 7 of 10 gave worse results in August 2025 than original processing
- Early 2025 photos: 9 of 10 degraded when reprocessed
- One photo from March 2025 showed dramatic edge quality drop in August
Not “I think it’s worse.” Measurably worse. Side-by-side comparison worse.
My UX design brain kicked in: if the same input produces worse output over time, the system itself is degrading.
Started researching why. Found the term: AI model collapse.
And once I understood what it was, I started seeing it everywhere in the AI design tools I use daily.
What Model Collapse Actually Means (Without the Jargon)
AI model collapse is what happens when AI systems train on data that was generated by other AI systems.
Think of it like this: you make a photocopy of a document. Then you make a photocopy of that photocopy. Then another copy of that copy. By the tenth generation, the text is barely readable, the images are distorted, and you’ve introduced artifacts that weren’t in the original.
That’s essentially what’s happening with AI design tools, except instead of photocopiers, it’s machine learning models training on their own output.
The shorthand for it is “AI degradation”: the quality of AI-generated content decreases each time a model trains on synthetic data instead of real, human-created content.
And by some estimates, 50-60% of the internet is now AI-generated content. Which means AI design tools are increasingly eating their own output whether they want to or not.
My background removal tool? It was probably retrained on images that included AI-generated backgrounds or AI-processed edges. So the new version learned from AI output, not from human-judged quality standards.
The tool got “better” according to its training data. But the training data itself was degraded.
Why AI Design Tools Are Already Affecting Your Design Work
You might be thinking: “Okay, but I’m not training AI models. I’m just using design tools. How does this affect me?”
Here’s exactly how, based on what I’ve observed across multiple AI design tools:
Your AI plugins are getting worse.
That Figma plugin that generates UI components? It was trained on design files. Many of those design files now contain AI-generated elements. So the next version of that plugin is training on AI output, not human design decisions.
I tested this with a UI generation plugin I’d used on a project six months prior. Same prompt, worse components. The spacing felt off. The hierarchy was generic. It “worked” but felt soulless.
Stock photo generators are in a death spiral.
AI image generators were trained on stock photos. Then people started uploading AI-generated images to stock photo sites. Now AI is training on AI images, and the quality is noticeably degrading.
Look at hands in AI-generated images from 2024 vs. 2025. They’re getting worse, not better. That’s model collapse in action.
I noticed this when generating team placeholder images for a SaaS product design project. The hands had too many fingers. The facial proportions were slightly wrong. Nothing I could ship.
Design inspiration is contaminated.
When you search for “modern dashboard design” or “SaaS landing page,” increasingly you’re seeing AI-generated examples. Which means when you reference these for inspiration, you’re being influenced by AI output that was influenced by AI output.
It’s feedback loops all the way down.
I started keeping a manual reference library of designs I know came from real shipped products with real users. Because I can’t trust search results anymore.
The Signs I’ve Been Noticing Across Multiple AI Design Tools
Let me describe some things I’ve experienced that I couldn’t quite explain until I understood model collapse:
Weird edge cases that shouldn’t happen.
My background removal tool used to nail complex hair edges. Now sometimes it just… doesn’t. The algorithm seems to have forgotten how to handle scenarios it used to manage fine.
That’s not a bug. That’s the training data degrading because it includes AI-processed hair edges that were already slightly wrong.
Increasingly generic output.
I use an AI writing assistant for first-draft captions. It used to give varied suggestions with different tones. Now everything sounds kind of samey.
That’s because it’s training on increasingly homogenized content — AI output tends toward the mean. When AI trains on AI writing, it loses the edges that make writing interesting.
Subtle wrongness I can’t pinpoint.
Generated an illustration for a client deck. Looked fine at first glance. But something felt off. The proportions weren’t quite right. The color relationships were slightly weird. Competent but soulless.
I ended up paying a human illustrator because I couldn’t articulate what was wrong, but I knew it wasn’t shippable. That “wrong but can’t explain why” feeling? That’s model collapse creating subtle degradation.
Consistency problems.
Generate the same prompt three times, get wildly different quality levels. One’s great, two are garbage. That’s because the model’s confidence in its own output is degrading.
I tested this with my background removal tool. Same photo, run five times consecutively. Got three acceptable results, two unusable ones. In 2024, same test gave five acceptable results.
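If you want to put a number on that flakiness, here’s a tiny sketch, assuming you saved each run of the same photo as run_1.png through run_5.png: compute pairwise similarity between runs and look at the spread. Again, SSIM is only a proxy, and the accept/reject call stays human.

```python
# Minimal sketch: quantify run-to-run consistency for one input photo.
# Assumes the saved results live in consistency_test/run_1.png ... run_5.png
# at identical dimensions.
from itertools import combinations
from pathlib import Path

import numpy as np
from PIL import Image
from skimage.metrics import structural_similarity

runs = [np.asarray(Image.open(p).convert("RGB"))
        for p in sorted(Path("consistency_test").glob("run_*.png"))]

scores = [structural_similarity(a, b, channel_axis=-1)
          for a, b in combinations(runs, 2)]

# A low minimum means at least one run diverged sharply from the rest.
print(f"pairwise SSIM: min {min(scores):.3f}, mean {np.mean(scores):.3f}")
```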
These aren’t bugs. They’re symptoms of AI design tools degrading as they train on increasingly synthetic data.
Why AI Companies Can’t Just Fix This
Here’s the part that makes this problem unsolvable with current approaches: AI companies need massive amounts of training data. The internet was that data source.
But now the internet is majority AI-generated content. So every time they retrain their models to “improve” them, they’re inadvertently including more AI output in the training data.
They can try to filter out AI-generated content, but:
- AI-generated content is increasingly hard to distinguish from human content. Even AI detection tools are unreliable.
- The sheer volume makes manual curation impossible. You can’t manually verify billions of images or documents.
- Users are incentivized to pass off AI content as human-made. Stock photo sites, design portfolios, inspiration galleries: all contaminated with AI output presented as human work.
- Even “human-created” content now often includes AI-assisted elements. That design you’re looking at? A human made the layout, AI generated the illustrations, a human refined them. What’s “human” anymore?
It’s like trying to un-mix ingredients after you’ve baked a cake. The contamination is already throughout the system.
Some researchers estimate that by 2026, over 90% of online content will be AI-generated or AI-influenced. Which means model collapse is only going to accelerate.
My background removal tool getting worse between February and August? That’s a six-month degradation cycle. It’s getting faster.
What This Changed in My AI Design Tools Workflow
Practically speaking, here’s how AI model collapse changed the work I do today:
I can’t trust AI design tools for final output anymore.
I still use my background removal tool for the same jobs, but I no longer accept its output without review. Every single result gets manually checked and usually manually refined.
Time change: What took 5 minutes now takes 20 minutes (AI generation + manual QA and cleanup).
Is it worth it? Yes, because AI is still faster than manual masking. But I’ve stopped treating AI output as production-ready. It’s now a rough draft that needs human finishing.
I question AI-generated research.
User personas, competitive analysis, market research — if AI generated it, I verify it against real human sources.
I caught an AI-generated competitive analysis that was partly hallucinated. The “competitor features” it listed didn’t exist. The AI had trained on product descriptions and assumed capabilities based on category, not reality.
Research used as an excuse is bad enough. AI-generated research that’s actively wrong is worse.
I diversified my reference sources.
I don’t just pull inspiration from Pinterest, Dribbble, or Behance anymore. Those platforms are increasingly polluted with AI-generated work that looks good in thumbnails but falls apart under examination.
I look at actual shipped products. Talk to real users. Reference physical design and art. Get outside the AI feedback loop.
Same principle as not building hidden features — look at what people actually use, not what looks impressive in screenshots.
I save my old AI outputs.
If I generated something with AI a year ago and it was good, I save it. The same prompt today might give worse results due to AI degradation.
This sounds paranoid. But I’ve proven it with my background removal testing. Same input, worse output over time. So I keep the good outputs as reference.
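A low-effort way to do that, sketched below: copy each keeper into an archive folder with a small JSON sidecar recording the source file, the tool, and the date, so a re-run months later has something concrete to be compared against. The paths, field names, and helper function here are all hypothetical.

```python
# Minimal sketch: archive a "known good" AI output with enough metadata to re-test later.
# File names, fields, and the archive layout are illustrative, not any real tool's API.
import hashlib
import json
import shutil
from datetime import date
from pathlib import Path

ARCHIVE = Path("ai_output_archive")
ARCHIVE.mkdir(exist_ok=True)

def archive_output(output_path: str, source_path: str, tool: str, notes: str = "") -> Path:
    output = Path(output_path)
    digest = hashlib.sha256(output.read_bytes()).hexdigest()[:12]
    dest = ARCHIVE / f"{date.today().isoformat()}_{digest}{output.suffix}"
    shutil.copy2(output, dest)
    # Sidecar JSON records what produced the file, so a future re-run of the same
    # source photo can be compared against this exact result.
    dest.with_suffix(".json").write_text(json.dumps({
        "source_input": source_path,
        "tool": tool,
        "archived_on": date.today().isoformat(),
        "notes": notes,
    }, indent=2))
    return dest

archive_output("masked_headshot.png", "raw/headshot_03.jpg",
               tool="background removal (web app)", notes="clean hair edges")
```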
How I Built AI-Resistant Design Processes
Here’s what I actually do now to minimize the impact of AI model collapse:
1. Use AI as a Starting Point, Never an Endpoint
Generate options with AI. Then apply human judgment, refinement, and decision-making. The goal is to use AI efficiency while maintaining human creative direction.
I treat AI like a junior designer who works fast but needs significant art direction. Not as a replacement for product design thinking.
My background removal workflow now:
- AI generates initial mask (2 minutes)
- I review edges, fix artifacts (15 minutes)
- Final QA against original intent (3 minutes)
- Total: 20 minutes vs 5 minutes before, but output is actually shippable
2. Verify Everything Against Reality
If AI tells you something about user behavior, confirm it with actual users. If AI generates a design pattern, check if it actually works in real products.
The same UX/UI design thinking that makes you question assumptions applies here: test, don’t assume.
I caught AI suggesting a navigation pattern that looked clean but failed basic usability testing. It had trained on designs that looked good, not designs that worked.
3. Keep Human-Created Reference Libraries
I built a collection of design inspiration that I know comes from actual human designers working on real products. I curate it manually and reference it whenever AI output feels off.
This is like maintaining a design system that doesn’t degrade over time, except it’s a reference system for your own judgment.
When AI gives me generic output, I compare it against my human-created reference library. Usually shows me exactly what’s missing.
4. Trust Your Instincts When Something Feels Wrong
If AI output feels slightly off, it probably is. Model collapse creates subtle wrongness that’s hard to articulate but easy to feel.
Your design intuition — developed through years of actually looking at and creating things — is more reliable than AI output trained on increasingly synthetic data.
That chewed edge on my background removal result? My instinct said “wrong” immediately. Took me hours to prove it systematically, but my gut knew instantly.
5. Document Your Design Decisions
When I make a design choice, I write down why. Not for anyone else — for myself.
This builds a personal design knowledge base that isn’t contaminated by AI feedback loops. It’s my actual thinking, not AI echoing AI.
Also helps when AI suggests something that contradicts what I know works. I can point to documented decisions and outcomes, not just “this feels wrong.”
Why This Makes Human Designers More Valuable
Here’s the paradox: AI model collapse is making human creative judgment more valuable, not less.
As AI design tools degrade, these abilities matter more:
- Spotting when AI output has gone wrong. I now catch degraded output before it ships.
- Understanding why something feels off. Model collapse creates specific patterns of wrongness.
- Making informed refinements. Knowing what AI gets wrong helps fix it faster.
- Exercising genuine creative judgment. AI can’t decide what’s “good,” only what matches degraded training data.
These skills become premium. Because AI can’t do them — it can only recognize patterns in its training data. And its training data is getting worse.
Remember that research about design skills now surpassing coding in AI job requirements? This is why.
Companies need humans who can direct AI design tools, curate their output, and make the decisions AI can’t make. Especially as AI reliability degrades.
The designers who adapt fastest aren’t rejecting AI tools. They’re learning to supervise them properly.
The Uncomfortable Questions Nobody’s Answering
AI companies aren’t talking about model collapse publicly because it threatens the narrative that AI keeps getting better.
But here are the questions designers should be asking:
How do you verify your training data isn’t contaminated? Most AI companies can’t answer this. Because they don’t know. The internet is contaminated. They train on the internet.
What’s your plan when synthetic data exceeds real data? We’re already there for many content types. There is no plan. They’re hoping to solve it before users notice.
Too late. I discovered it in August. Proved it systematically with 10 photos.
How do you maintain quality as models train on their own output? The honest answer is: they can’t, with current approaches. Model collapse is a mathematical certainty once synthetic data dominates training sets.
When AI degradation becomes obvious to users, what then? We’re finding out now. My background removal tool didn’t announce “we’re degrading.” It just got worse. I had to discover it myself.
These aren’t hypothetical concerns. They’re affecting the AI design tools you’re using right now.
What to Actually Do About It (Based on What I Did)
This week:
- Audit which AI design tools you’re using for production work. Test them systematically like I did: re-run old projects, compare results, and document what you find.
- Identify which outputs you’re accepting without human review, and add a QA step.
- Start applying more scrutiny to AI-generated elements before shipping.
This month:
- Build a human-curated reference library of designs you know work. Real products, real users, real outcomes.
- Document your design decision-making process. Write down why you chose X over Y.
- Test AI outputs against real user needs, not just aesthetic judgment.
This quarter:
- Develop workflows that use AI design tools for efficiency but human judgment for quality. Like my 20-minute background removal process: AI speed + human QA.
- Train yourself to spot AI degradation signs in the tools you use daily.
- Position yourself as someone who can direct AI, not just use it. That’s the valuable skill as AI tools degrade.
The designers who succeed aren’t the ones who reject AI or blindly trust it. They’re the ones who understand its limitations and build processes that leverage its strengths while compensating for its weaknesses.
Especially as those weaknesses grow.
The Bigger Picture
AI model collapse isn’t just a technical problem for AI researchers. It’s affecting every designer who uses AI design tools.
And it’s not getting fixed. The fundamental issue — AI training on AI output — is built into how these systems work at scale.
Which means the solution isn’t better AI. It’s better human oversight.
The same UX design principles that tell you to verify assumptions, test with real users, and trust your judgment over what “should” work? Those apply to working with AI design tools too.
AI degradation makes your design intuition more valuable, not less. Your ability to spot when something’s off. Your understanding of what actually works versus what looks like it should work. Your judgment developed through years of actual practice.
These are the skills that don’t degrade. Because they’re based on reality, not on recursive feedback loops eating their own output.
The next time your AI design tool gives you output that feels slightly wrong, trust that instinct.
It’s probably not you being picky.
It’s model collapse doing what it does: slowly turning the internet into a photocopy of a photocopy of a photocopy.
And your job, as a designer, is to be the person who notices when the copies have gone bad.
I discovered it in August comparing to February results. Proved it systematically with 10 photos. Changed my workflow to compensate.
Six-month degradation cycle for a tool I’d trusted for two years.
It’s not getting better. Build your processes accordingly.
