LLM traffic converts 3× better than Google search
58% of buyers now start their research in ChatGPT or Gemini, not Google. Most startups aren't showing up there yet.
The ones that are get cited by the AI tools their buyers, investors, and future hires already use. And they convert at 3×.
Download the free AEO Playbook for Startups from HubSpot and get the exact steps to start showing up. Five minutes to read.

Dear Readers,
I was against synthetic user research until I tried it on the GrowthDesigners.co website. In last week’s newsletter, I reviewed one of the many synthetic user testing tools so you didn’t have to. The results convinced me that synthetic users aren’t replacing real humans when it comes to testing. But they’re very helpful for auditing your product and exposing weak thinking.
But you don’t need to pay for a product to try this yourself. This week, we're featuring a community member who built his own synthetic user feedback setup using Claude Code. Then we bring in the critics, because the backlash to synthetic research is real, pointed, and worth reckoning with.
~Scott Christensen
Founder @ Sendsight | GrowthDesigners Community Co-Lead

DIY With Claude Code (Featuring Travisse Hansen, Head of Product at Boostly)
Travisse Hansen is the Head of Product and Product Marketing at Boostly and runs ClaudeFluent, a live virtual class on getting the most out of Claude. He built an open-source synthetic user feedback skill for Claude Code that you can run from an IDE or terminal. Watch his full walkthrough here.
How it works
You define personas (an entrepreneur, a product manager, a direct-to-product marketer, a customer success leader, etc.) and point them at any product or page. Claude assumes each persona, navigates the experience, and gives you two layers of feedback:
Stream of consciousness simulation. Real-time reactions as the persona moves through the product. Travisse found this surprisingly useful for putting yourself in the headspace of different user types.
Summary feedback. A structured recap of observations, friction points, and impressions for each persona.
Results are compiled into a document automatically and dropped into your docs folder or a new user-research folder.
Getting started (5-10 mins)
Install the skill. Tell Claude to install it directly. Just say "please install the skill" with the skill file loaded.
Set up your personas. The persona section starts empty or with a template. Fill it in with your own users or point it at your existing personas.
Run it. Start a Claude session (ideally with the Chrome extension active for deeper interaction) and type something like
run synthetic feedback against [your product] as though you were trying to [user goal].
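To make the persona step concrete, here's a sketch of what an entry in the skill's persona section might look like. This is purely illustrative: the field names and format below are our own, not the skill's actual schema, so check Travisse's repo for the real template.

```markdown
<!-- Hypothetical persona entry; adapt to the skill's own template -->
## Persona: Time-strapped product manager
- Role: PM at a ~50-person SaaS company
- Goal: Decide in under 10 minutes whether this tool fits the team's workflow
- Context: Comparing three vendors; has budget authority up to $500/month
- Skepticism: Distrusts pricing hidden behind "Contact sales"
```

The more specific the goals and skepticism you give each persona, the less generic the stream-of-consciousness feedback tends to be.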
Where Travisse found it most useful
At Boostly, the team ran synthetic feedback against a pricing revamp and a new website:
Pricing visibility skepticism. The synthetic personas flagged that pricing wasn't visible by default, creating user skepticism before they even engaged.
Missing enterprise discounts. Even though Boostly offers discounts for larger organizations, they weren't listed anywhere. The personas noticed.
Quick validation loops. Instead of waiting for a research cycle, Travisse could spot-check changes in minutes. "100 times faster than gathering the same personas live."
As Travisse put it: "I found this tremendously useful for quick validation and spot checking your thinking."

Not So Fast: The Case Against Synthetic Users
We've shown you what synthetic testing can do. Now let's hear from the people who think it's dangerous, misleading, or at best a distraction from real research.
The backlash is louder than you might expect, and it's coming from some of the most respected names in UX.
"Synthetic users seem to care about everything. This is not helpful."
Nielsen Norman Group (Maria Rosala and Kate Moran) ran a head-to-head comparison of synthetic vs. real users in 2024. Their finding: "Real people care about some things more than others. Synthetic users seem to care about everything." They also found synthetic users to be sycophantic, one-dimensional, and too shallow to generate genuine insight. In one tree-testing exercise, ChatGPT "vastly outperformed most real people," proving it wasn't modeling real behavior at all, just performing well on the task.
"Absolutely the wrong direction"
Jared Spool, founder of Center Centre and UIE and one of the most influential voices in UX, has been blunt: he called synthetic users "absolutely the wrong direction for UX professionals to go" and dismissed the underlying tools as "some imaginary random poetry generator."
"One hour with a real patient revealed more than three weeks of AI"
IDEO published "The Case Against AI-Generated Users," describing how "one hour with a real patient and physician revealed far more depth and complexity than three weeks of AI-generated information could convey." Their conclusion: "The solution isn't to make up fake people, but to get better access to real people."
The deeper concerns
Beyond individual critiques, several themes keep surfacing:
Sycophancy. Synthetic users tend to praise everything. They don't push back the way a frustrated real user does. You miss the emotional texture: the sighs, the confusion, the workarounds.
No unknown unknowns. AI can only reflect what's in its training data. The whole point of user research is to discover what you didn't know to ask about. Synthetic users can't surprise you in the ways that matter most.
Bias amplification. As Vitaly Friedman (Smashing Magazine) warned, AI-first research "strengthens biases, supports hunches and amplifies stereotypes" rather than challenging them.
Ethics. Erika Hall, author of Just Enough Research, frames it as a moral question: "It is unethical, indefensible, and also unnecessary, to create a product or service that affects other people without having conversations with representatives of those populations."
Plausible nonsense. The outputs look convincing even when the underlying measurement is weak. Konstantinos Papangelis (RIT professor) calls this "epistemic freeloading: appropriating the language and authority of scientific research while abandoning its standards."
And the academic research is catching up to these concerns. A systematic literature review published this month (Kuric et al., March 2026) synthesized 182 studies on LLM-generated participants and identified four core issues: cognitive misalignments, distortions, misleading believability, and overfitting/contamination.
Their conclusion: despite various technical improvements, the "fidelity improvements they demonstrated remain modest." The researchers propose treating synthetic participants as "heuristic-like" tools rather than human equivalents, which, notably, aligns closely with where we land below.
These aren't fringe opinions. They represent a meaningful portion of the research community.

So Where Do We Actually Land?
After testing the tools, talking to builders, AND hearing the critics, here's our honest take:
Use synthetic feedback for:
Spot-checking pricing pages, onboarding flows, and landing pages before a deeper study
Generating hypotheses about where friction exists
Rapid iteration and "testing the test," refining interview guides before fielding with real people
Edge-case behavior simulation across different persona types
Catching the obvious stuff your team is too close to see
Don't use it as a replacement for:
Deep empathy research and understanding complex human motivation
Decisions that require accurate population estimates or individual-level prediction
Sensitive domains where model bias or lack of lived experience can mislead
Validating findings that will drive major, irreversible product bets
Discovering unknown unknowns: the things you didn't know to ask about
The emerging consensus, and one we share, is a hybrid approach: use synthetic for speed and breadth early, then reserve human research for ground truth and nuance when the decision is expensive or irreversible. It's not either/or. It's knowing which tool fits the moment.
Growth is hard. Pushing metrics is hard. But the barrier to getting directional user feedback just got a lot lower, as long as you know what it can and can't tell you.
We Want Your Take
We brought this debate to Hot Take Tuesday in the Slack community.
Are you pro-synthetic, anti-synthetic, or somewhere in between? Have you run your own comparison of synthetic vs. human insights?
Drop in and tell us.
Scott Christensen is the co-founder of Sendsight, a product that automatically maps, audits, and improves lifecycle journeys across email and product.
He previously worked in growth design at Mercury and Expedia, and co-leads the GrowthDesigners.co community.
Growth Design School is Now On Demand

Growth Design School is our community-created program that provides a unique blend of fully remote learning, including 5 hours of video content, case studies, customized assignments, collaborative debriefs, and more!
We switched from a live cohort to a fully on-demand model, which drops the price from $950 to $290 to make it more accessible to all.
Learn More - https://www.growthdesigners.co/school


