Last year, a SaaS company paid me to fix their onboarding. Churn was 23% monthly. Brutal.
First call, the CS lead opened their dashboard. Gorgeous interface. Real-time health scores. Customer journey maps. Risk alerts. Engagement tracking across seven touchpoints.
“We have complete visibility.”
I asked if they knew why customers were leaving.
She pulled up someone who’d canceled yesterday. Health score the day before: 76/100. Green. Healthy. Low risk.
“The algorithm didn’t flag them.”
They were paying $2,400/month for an algorithm that was confidently wrong.
I asked to see the actual product. Five minutes in: customers couldn’t export their data. The button was buried under Settings → Data → Advanced → Custom Reports. Three customers had opened support tickets about it that week.
CS team sent them documentation. Marked tickets “resolved.”
Customers canceled anyway.
The CS software tracked every interaction. Counted every touchpoint. Calculated health scores. Showed excellent engagement right up until cancellation.
It just didn’t notice that the engagement was frustrated clicking from a customer trying to find a button.
Turns out “user frantically clicking everything” and “user loves your product” look identical to an algorithm.
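Here’s why. Most health scores boil down to a weighted sum of event counts, and counts carry no intent. A minimal sketch (the event names and weights are invented for illustration, not any vendor’s actual formula):

```python
def health_score(events):
    """Score = capped weighted sum of event counts. Intent is invisible."""
    # Invented weights -- real tools differ, but the shape is the same.
    weights = {"login": 2, "feature_used": 3, "help_article": 1}
    return min(100, sum(weights.get(e, 0) for e in events))

# A genuinely happy user with a steady routine:
happy = ["login"] * 8 + ["feature_used"] * 12

# A frustrated user: 23 logins, 14 features tried, 6 help articles --
# all of it desperation, hunting for one buried button:
frustrated = ["login"] * 23 + ["feature_used"] * 14 + ["help_article"] * 6

print(health_score(happy))       # 52
print(health_score(frustrated))  # 94 -- the desperate user looks *healthier*
```

The more frantically a customer clicks, the better their score gets. The model can’t tell exploration from panic because it never sees the question the customer is trying to answer.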
The Metrics That Lie
What the dashboard showed:
- Touchpoints: 15/month ✓
- Health score: 78/100 ✓
- Feature adoption: 12/20 features ✓
- Response time: 4.2 hours ✓
- NPS: 42 ✓
What actually happened:
- Customer churned
- Product didn’t work
- CS spent 15 hours engaging with someone already mentally gone
- Health score green until 3 days before cancellation
- Dashboard said “excellent account management”
The software measured activity. Not whether that activity prevented a single cancellation.
Like measuring button clicks instead of whether users accomplished anything.
When Health Scores Miss Everything
Case 1: The 83/100 Who Left
Logged in 23 times one week. Used 14 features. Opened 6 help articles. Replied to check-in emails.
Algorithm verdict: Excellent engagement! Healthy! Green!
Reality: Clicking everything trying to find the one thing they bought your product for. Every login was frustration. Every feature was “maybe this one?” Every article was desperation.
Canceled two days later.
CS team: “But the dashboard didn’t flag any risk?”
The customer flagged plenty. In support tickets the algorithm didn’t read.
Case 2: The Perfect Playbook Execution
15 touchpoints in 4 months. 100% completion. Check-ins sent. Resources shared. Reviews scheduled.
Customer replied twice:
- “Having trouble with this feature”
- “Please stop emailing me”
CS software counted both as successful engagement.
Customer churned. CS rep got praised for high completion rates.
This is what happens when you optimize for team activity instead of customer outcomes. Perfect scores while losing customers.
The Project That Taught Me
That company? They paid me $8,000 for a UX audit. Two weeks. Found 23 problems. Prioritized them. Built prototypes for critical fixes.
The export button? Top of the list. Maybe 4 hours to fix.
Three months later I followed up.
“We’ve been really busy with CS initiatives. Focused on improving health scores.”
They’d spent those months:
- Implementing new engagement playbooks (12 touchpoints!)
- A/B testing check-in emails
- Building custom risk dashboards
- Training on new CS software features
The export button was still buried. Customers still opening tickets. CS still sending documentation. Health scores still green until cancellation.
$2,400/month for CS software. $8,000 for solutions. Implemented the software. Ignored the solutions.
Six months later: job listing for new CS Manager. The old one burned out.
Churn rate in the listing: “currently 26%, goal 15%.”
It got worse. The CS software just documented why with prettier charts.
What CS Software Actually Does
It creates busy work disguised as strategy
Alert: “Customer health declining!”
CS team: Schedule call! Send resources! Log touchpoint! Update notes! Adjust playbook! Monitor score!
None of this fixes the actual problem: the product doesn’t do what they need.
But it feels like progress. Software generates reports showing activity. Everyone thinks success is happening.
It’s not. It’s failure documentation with better UX.
It turns people into scores.
Through dashboards, customers become:
- Health: 67 (needs intervention)
- Engagement: declining (trigger playbook 3)
- Adoption: 45% (send tips)
- Risk: medium (schedule check-in)
CS stops asking “what problem are they solving?” and starts asking “how do we improve their score?”
The software determines who you’re designing for. And it’s not the customer.
It measures activity, not outcomes.
15 touchpoints logged. Zero problems solved. Dashboard shows “excellent engagement.”
CS team thinks they’re succeeding while customers quietly leave.
It creates false confidence.
“Our health scores are mostly green!”
Great. What’s your churn rate?
“…still working on that.”
CS software gives detailed dashboards, algorithms, real-time alerts. Feels like control. Actually just expensive awareness of problems you’re not fixing.
What Actually Works (And It’s Not Software)
Fix the product.
If customers struggle, it’s probably your product, not your outreach cadence.
No amount of check-in emails makes confusing interfaces clear. No algorithm predicts churn better than “does this actually work for you?”
Most CS problems are UX debt disguised as relationship problems.
Ask about goals, not features.
CS software pushes feature adoption. “They’re only using 40%!”
Maybe that 40% is all they need. Maybe the other 60% solve problems they don’t have.
Stop optimizing for adoption. Start optimizing for goal completion.
(CS software doesn’t measure that, funny enough.)
Measure outcomes, not activity.
Real success metrics:
- Are they accomplishing what they hired your product to do?
- Is it getting easier?
- Would they recommend it?
- Are they renewing?
CS software metrics:
- Did we touch base 12 times?
- Did they click our email?
- Did they log in?
- Did they use features we wanted?
Not the same thing.
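The gap is easy to see when you put both views of the same account side by side. A toy sketch (field names and thresholds invented for illustration):

```python
# One account, two verdicts. Field names and thresholds are made up.
account = {
    "touchpoints_this_quarter": 15,
    "email_clicks": 9,
    "logins": 23,
    "features_tried": 14,
    "goal_completed": False,  # did they accomplish what they bought it for?
    "renewed": False,
}

def activity_healthy(a):
    # What CS dashboards check: did *we* do things, did *they* click things?
    return a["touchpoints_this_quarter"] >= 12 and a["logins"] >= 10

def outcome_healthy(a):
    # What actually matters: did the product do its job, and did they stay?
    return a["goal_completed"] and a["renewed"]

print(activity_healthy(account))  # True  -- dashboard shows green
print(outcome_healthy(account))   # False -- customer is already gone
```

Same data, opposite conclusions. The first function can be green forever on an account the second function correctly calls dead.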
React to problems, don’t predict them.
“This customer will churn in 30 days!”
Based on what? Login frequency? Feature adoption?
Neither predicts churn better than: “Are you getting value?”
If no, fix the problem. Don’t schedule three more touchpoints hoping their score improves.
When It Actually Helps (Rarely)
CS software isn’t useless. It works when:
You have 500+ customers and need organization.
At scale, you need systems. CS software is CRM for post-sale. Fine.
But organization isn’t insight. Knowing which customers exist isn’t knowing if they’re successful.
Your product works and you’re optimizing edges.
If customers generally succeed, CS software helps find patterns.
If your product has fundamental problems, CS software just documents them with better charts.
You treat it as a todo list, not a crystal ball.
Use it for: “Customer asked for help, follow up Tuesday.”
Don’t use it for: “Health score 65, trigger automated playbook, hope algorithm prevents churn.”
One is organized support. The other is metrics theater.
The Reality Check
If you’re spending thousands monthly on CS software but customers still leave, the software isn’t the problem.
Your product is.
CS software can’t fix:
- Confusing interfaces
- Features nobody finds
- Broken workflows
- Missing functionality
- Poor onboarding
- Products designed for personas instead of people
It only documents those problems with impressive dashboards while your CS team checks in with already-frustrated customers.
Customer success isn’t a software problem. It’s a product problem.
If your product works, customers succeed. If it doesn’t, no touchpoints save them.
CS software just makes losing customers feel data-driven.
Which is somehow worse than losing them without detailed reports explaining exactly how you failed.
Next time your CS team presents health scores showing everything’s fine but churn is terrible, ask: “Are we measuring our activity or their outcomes?”
If it’s your activity, the software is working perfectly.
Your customers just aren’t.
