The research is done.
Six weeks. Twelve user interviews. A 47-slide synthesis deck with themes color-coded in four shades of blue. The researcher presents to the cross-functional team on a Thursday afternoon. Everyone nods. Someone says “this is really valuable.” Someone else says they’d love to dig into the data more when they have time.
The deck goes into Confluence under Research > 2025 > Q3.
Design sprint starts Monday. The designer opens Figma. Opens the wireframes from the kickoff meeting four weeks ago.
The direction hasn’t changed.
I’ve been in that room. Not as the researcher. As the person watching the deck go into Confluence.
The question of UX research vs UX design gets framed as a skills comparison. Two disciplines. Complementary. Research informs design. Design generates hypotheses. Research tests them. It’s a loop. Every conference talk about UX design has a slide showing this loop. The arrows are very clean.
Nobody draws a slide showing what the loop looks like when research takes six weeks and the sprint is already booked.
A proper research cycle – recruiting, screening, running interviews, transcribing, synthesizing, presenting – runs four to seven weeks. A design sprint runs two to four. These timelines don’t fit inside each other unless research starts before the project does, which requires someone to have predicted the project. Most companies have one researcher, maybe two, running reactive studies on three products simultaneously while also sitting in on every design review “to stay close to the work.”
Translation: to make sure the design decisions they can’t influence at least make sense to them.
The UX Research vs UX Design Loop Is on Every Diagram. It Isn’t in Any Calendar.
The double diamond shows it. The design thinking cycle shows it. Every UX/UI design process framework has the little arrows that loop back. Research feeds design feeds validation feeds research. Tidy. Sequential. Implying that someone, at some point, sat down and said: we won’t start designing until we know what users need.
That meeting did not happen.
The meeting that happened was a kickoff. Someone shared a rough direction in Figma – a few flows, some reference screens, a general shape of the thing. Everyone said it looked promising. Someone asked whether they should wait for research before going further. The PM said research was running in parallel. Research had not started. Recruiting had not started. The screener had not been written.
Discovery is scheduled for weeks one through three. Research kicks off in week two once recruiting is sorted. Research synthesis lands in week six. Design hits stakeholder review in week five. The timeline means design is already in front of the VP when the researcher is still cleaning up transcripts and tagging affinity notes alone on a Friday afternoon.
Research and design share a project name in Jira. They do not share a timeline. The org chart shows them as parallel tracks feeding into delivery. The Gantt chart shows delivery starting while one of those tracks is still running.
Nobody updates the Gantt chart to fix this. The sprint is already booked. It was booked before the kickoff call ended.
What Actually Happens to the UX Research Report
The report is real. This part matters – the work is real, the insights are real, the six weeks were real.
It’s 47 slides. Executive summary upfront. Themes organized by frequency. User quotes with names redacted and job titles included for context. A “key findings” slide. Three recommendations at the end, specific and actionable.
The designer read the executive summary and two of the theme sections. Roughly forty minutes on a Tuesday between two other tasks. Three quotes got pulled for a stakeholder presentation. One became a Notion card in the research repository under “Insights > To Prioritize.”
The stakeholders found the quotes “interesting.”
Someone asked whether the findings changed anything.
Long pause.
Stakeholder: “We’ll definitely take these into account going forward.”
Translation: the direction doesn’t change. It was set in the kickoff meeting, before the research ran, by a whiteboard session and a competitor audit. Research confirmed some of it. Contradicted two flows that had already been handed off to development.
Nobody went back.
The three recommendations – specific, prioritized, with direct user quotes attached to each one – were added to the design backlog under a column called “Research Insights – To Review.” The column has eleven cards in it. Eight are from studies run in 2024. Nobody is assigned to any of them. There’s no sprint allocation. It is not technically neglected; it’s just not part of any current project, and it won’t become part of one because adding it to a sprint would require someone to take something else out, and nothing is coming out of the sprint.
The report was last opened in November. Four people have accessed it since it was published. Three of them were on the research team.
Research and Design Report to the Same Person. They Don’t Work on the Same Timeline.
Research reports to the VP of Product. Design reports to the VP of Product. On paper, same team. In practice, two separate calendars that share a Slack channel.
A “research-design sync” exists. It was weekly from January to March. Biweekly from April. “As needed” from July.
Translation: it stopped happening.
Nobody cancelled it. It still exists in the calendar. Sometimes someone declines. Sometimes it just sits there with no attendees confirmed, no agenda, and everyone too busy to reschedule and too aware of the optics to delete it. It has been rescheduled four times. The current slot is Thursday at 4:30 PM. Nobody has ever attended a meeting that started at 4:30 PM on a Thursday.
The gap between UX research vs UX design isn’t in the job descriptions. It’s in the calendar.
The last time research presented findings and design discussed implications in the same room was Q2. Fourteen people attended. Eleven had other priorities and never followed through on any of the action items. The three who did were the research team, who sent a follow-up email in August that received two emoji reactions and no replies.
This is the UX iceberg problem with a budget attached. The visible part – screens, prototypes, shipped features – gets funded and tracked and reviewed. The part below – the evidence, the synthesis, the questions that should have changed the direction – exists in a shared drive folder that nobody has the time or mandate to act on.
It’s a fucking uncomfortable thing to say at a company that has “we put users first” in its values. So nobody says it cleanly. Both teams keep doing their jobs. The researcher runs the studies. The designer ships the screens. Occasionally the two things are related. More often they are parallel tracks that converge only in the retro, where someone notes that next quarter the team should “work more closely with research.”
Next quarter the sprint plan is already full.
UX Research Gets Cut First. The Order Tells You Something.
When the budget tightens, research goes before design does.
Not because the company doesn’t believe in research. Because research outputs are invisible to anyone who hasn’t read them, and most people haven’t read them. You can see a product with no design. You can’t see a product built on wrong assumptions until it’s shipped and the conversion numbers come back.
Using AI for UX research has made this easier to rationalize. Run the questions through an LLM. Get a synthesis in four hours. The synthesis reflects the questions asked, which reflect the assumptions already held, which are the things research was supposed to challenge. But it’s faster. And faster is what the sprint needed.
In the quarterly planning meeting, someone calls this an efficiency gain. That’s the word used: efficiency. There is a slide with logos. Nobody asks what happens when the people writing the research questions already know the answer they’re hoping for. The meeting runs on time.
Research headcount gets cut. AI tools get added. The insights confirm the direction. The direction ships. Eighteen months later someone commissions a research study to find out why users aren’t converting, and a researcher spends six weeks producing a 47-slide deck that goes into Confluence under Research > 2027 > Q1.
The debate about UX research vs UX design assumes a feedback loop exists between the two. It assumes the 47-slide synthesis deck changes the wireframes. It assumes the researcher and the designer are working on the same timeline, toward the same decisions, with the same ability to stop the sprint when the evidence says to.
Most companies have one researcher attending design reviews “to stay close to the work.” That’s not a feedback loop. That’s a product designer vs UX designer org chart with an extra calendar invite. Research runs. Design ships. The report is in Confluence under Research > 2025 > Q3, last opened by the person who put it there.
The researcher knows this. The designer knows this. The VP of Product knows this. None of them are wrong to keep doing their jobs. The researcher runs rigorous studies. The designer ships thoughtful screens. The VP hits the quarterly roadmap. These are all real things that happened. They just didn’t happen to each other.
Both teams know this is the reality.
Neither says it in the all-hands.
