I Wrote a Book About AI Sycophancy. I Didn’t Use AI to Write It.


Someone at an event, after I mentioned the book:

“You wrote about AI and design. Did you use AI to write it?”

No.

“Really? Not even for editing?”

No.

They looked confused. Not skeptical – just visibly puzzled, in the way people get when they encounter a choice they wouldn’t have made and can’t immediately categorise.

“Why not?”

Because the first chapter is about tools that agree with everything you show them. An AI editor would have found every draft compelling.

They laughed. I don’t think they thought I was joking.


Using AI to Write a Book About AI Would Have Been Its Own Argument

I wrote this book without AI. Every chapter, every argument, every edit. Four months, a notebook, and no tool that would tell me my drafts were compelling.

The most predictable version of this project: run the manuscript through ChatGPT, get told each section is “well-structured and insightful,” ship it. A book about AI sycophancy, produced with the help of a sycophantic AI. The irony would have been neat. The argument less so.

Anyone who’s tried it knows what happens. You get agreement. You get polish. You get “great point – you might also want to consider…” followed by something you already said, reworded. Occasionally it suggests adding an executive summary. The book has no executive.

What you don’t get is resistance. Not real resistance. Building a sustained argument means saying a thing, testing it, finding where it breaks, and deciding whether the break invalidates the claim or sharpens it. That process needs a counterforce – something pushing back. AI is not a counterforce. It’s a mirror with a better vocabulary than you.

The book is called “Looks Good to Me.”

That’s what the mirror says.

Available now:

Google Play Books ↗ – ebook

Amazon Kindle ↗ – ebook

Amazon ↗ – paperback

Why Using AI to Write a Book About AI Sycophancy Would Have Broken the Argument

There’s a practical reason too, beyond the irony.

The second thing the book examines – after sycophancy – is context loss. The way coherence erodes across sessions. The way a tool that doesn’t remember what you said thirty minutes ago can’t help you notice that chapter six contradicts something you established in chapter two.

Writing 115 pages of connected argument is the task where context loss is most expensive. Every earlier chapter has to pressure-test the next one. The argument about estimation failures is downstream of the argument about inverted baselines. The chapter on edges going extinct only lands if the reader understands the context decay problem first. This isn’t a collection of separate essays the way a blog is. It’s one argument distributed across nine chapters, where the structure itself is part of what’s being argued.

I needed something that could hold the whole shape of it at once. That tool was a notebook. It had no opinions about the argument.

The AI design tools available to designers and product teams right now are useful for a narrow set of tasks. Generating options quickly. Drafting copy variants. Handling repeatable, low-stakes decisions. What they are not useful for is sustained critical thinking that requires accumulating context and catching contradiction across a body of work. The context window isn’t the constraint. The architecture is.


What “Looks Good to Me” Is Actually About

“Looks Good to Me: On AI Sycophancy, Context Loss, and Inverted Baselines” is nine essays on what actually breaks when AI enters the design process.

Not predictions. Not 2030 speculation. Present observation – what’s happening now, in actual product teams, to actual design decisions.

The nine structural problems:

Sycophancy – tools that agree with everything you show them. Every variant is “clean.” Every direction is “strong.” You’re not getting feedback. You’re getting applause from something that has never cared about outcomes.

Context loss – the way coherence erodes across sessions. What you established in the first exchange is gone by the fifth. Design decisions accumulate. AI memory doesn’t.

Edges going extinct – the unusual case, the edge user, the scenario that doesn’t fit the pattern. AI output regresses toward the centre. The outliers – which is often where the real problems live in UX design – disappear quietly.

Inverted baselines – when broken becomes normal slowly enough that nobody flags it. The team adapts to what the tool produces rather than to what users actually need. The standard inverts. Nobody notices because the inversion is gradual.

Model collapse, estimation failures, inverted bus factors, and two more – which read better in order, as intended.

The book is 115 pages. It takes roughly two hours to read. It doesn’t have a solutions section, because most of these aren’t problems with solutions. If you find that frustrating, chapter four covers why. They’re problems you manage by being clear-eyed about what you’re actually working with – which requires, as a prerequisite, not having outsourced your clarity to the thing you’re trying to assess.

Available now:

Google Play Books ↗ – ebook

Amazon Kindle ↗ – ebook

Amazon ↗ – paperback

The AI Bubble Doesn’t Pause for Books About the AI Bubble

Using AI to write a book is, right now, an unremarkable choice. The tools are fast. The output is creditable. For writing that needs to exist rather than to argue – summaries, documentation, content at volume – it’s rational.

What’s less rational is using the same tools for work that requires sustained disagreement with received wisdom. The AI bubble is partly a confidence problem: tools that produce fluent, well-organised text create the impression that the thinking has already been done. It hasn’t. Fluency is not argument. Organisation is not logic.

The loudest AI optimists are usually the ones who’ve only seen it at its most agreeable: drafting, summarising, explaining things they already understood. The failure modes – sycophancy, context loss, the regression toward average output – show up when the work requires something harder than agreement. Using AI to write a book about AI means asking a sycophantic tool to help you document its own sycophancy. It will do this enthusiastically. Every draft will be well-structured. Every argument will be insightful. It will agree with you that AI tools have a sycophancy problem, and it will phrase that agreement beautifully. Ask it to push back. It will do that too, with equal enthusiasm, and then agree with you again.

Most AI criticism is also AI-assisted. Think-pieces about disruption, generated with the tools doing the disrupting. Critical analysis, smoothed into palatability by the thing being criticised. The bubble sustains itself partly because the commentary reads exactly like the tools it’s describing – and from the outside, fluent and well-organised text looks the same whether there’s a real argument underneath or not. That distinction matters. It’s also, not coincidentally, the first chapter of the book.

The SaaS product teams using these tools right now are making decisions that will be much more visible in two years than they are today. Some of those decisions are being made by people who’ve stopped asking whether the output is right and started asking only whether it looks right.

That distinction – between looking good and being good – is what the book is about.

The internships post covered what the AI disruption means for junior designers specifically. The book covers what it means for the design process broadly. Same argument, longer form, fixed in time.

That last part matters. Blog posts get updated. Arguments shift as tools shift. A book is a stake in the ground: this is what was true in early 2026, documented while it was happening, by someone running a UX/UI design practice and watching these patterns emerge in real product teams.

Using AI to write a book about AI sycophancy would have missed the point. Writing it without the tools, about the tools, was the minimum requirement for honesty.

Apparently that makes you a curiosity at events.


The book took four months. The arguments in it took longer – they started accumulating the moment I watched a product team spend thirty minutes iterating on AI-generated wireframes that all looked like the same wireframe.

“Looks good to me,” someone said.

I wrote it down.

__
DNSK WORK
Design studio for digital products
https://dnsk.work