There's a new approach spreading through product teams that's fundamentally changing how we understand customers. It's called the AI digital twin—and according to new research from Growth Unhinged, leading companies are already using it to simulate customer reactions to products before they launch.
The idea is deceptively simple: train an AI on real customer data—sales calls, support tickets, reviews, feedback—and use it as an always-available research participant. Instead of waiting weeks to schedule interviews or paying thousands for a focus group, you ask your digital twin what it thinks.
And apparently, it works.
The Customer Research Bottleneck
Here's a dirty secret in product management: most teams don't do nearly enough customer research. They know they should. They say they will. But when you're juggling roadmap pressure, stakeholder requests, and a backlog that never shrinks, the research gets pushed.
A study by the Product Management Festival found that while 87% of product managers believe customer research is critical, only 23% conduct it regularly. The gap between intention and action is massive.
Why? Because traditional research is slow and expensive. Recruiting participants takes weeks. Scheduling interviews requires coordination gymnastics. Analysis eats up hours. By the time you have answers, the window for decision-making has often closed.
This creates a predictable failure mode: teams ship features based on assumptions. Some assumptions are right. Many are wrong. And you only find out which is which after you've burned engineering cycles and frustrated customers.
Enter the Digital Twin
Kieran Flanagan at HubSpot has been experimenting with a different approach. As detailed in the 2026 State of AI for GTM report, he's built what he calls a "digital twin" of his customers using Claude.
The system runs on three layers:
Real customer data. Not synthetic personas or imagined user stories—actual transcripts from sales calls, reviews from G2, objection notes from the CRM. This grounds the AI in genuine buyer language instead of what marketers think customers might say.
Behavioral insights. The AI surfaces patterns—motivations, emotional drivers, objections, buying triggers. You're not asking "would they buy?" You're mapping why they buy or why they walk away.
Anchor statements. This is clever. Instead of asking the AI to rate something 1-5 (which produces useless "3" answers), Flanagan creates reference statements in customer language, ranging from "this sounds too complex, not worth switching" to "this is exactly what we've been looking for." The AI responds naturally, then indicates which anchor it's closest to.
The result? A persistent research participant you can test any campaign, feature, or message against—instantly.
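To make the anchor-statement idea concrete, here is a minimal sketch of how you might assemble such a prompt. The two extreme anchors come from the article; the middle three, the function name, and the example concept are hypothetical illustrations, not Flanagan's actual setup:

```python
# Hypothetical anchor statements written in customer language.
# A 1-5 numeric scale invites a safe "3"; named anchors force a position.
ANCHORS = {
    1: "This sounds too complex, not worth switching.",
    2: "Interesting, but I don't see how it fits our workflow.",
    3: "I'd want to see a demo before forming an opinion.",
    4: "This solves a problem we've been working around for months.",
    5: "This is exactly what we've been looking for.",
}

def build_anchor_prompt(concept: str) -> str:
    """Assemble a prompt that asks the twin to react in its own words,
    then pick the anchor statement closest to its reaction."""
    anchor_lines = "\n".join(f"{k}. {v}" for k, v in ANCHORS.items())
    return (
        "React to the following concept as the customer you represent:\n\n"
        f"{concept}\n\n"
        "First respond naturally, in your own words. Then state which of "
        "these anchor statements is closest to your reaction:\n"
        f"{anchor_lines}"
    )

prompt = build_anchor_prompt("Usage-based pricing with a free starter tier")
```

The prompt string is then sent to whichever LLM hosts your twin; the twin's free-text response plus its chosen anchor gives you both qualitative color and a rough quantitative read.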
Why This Actually Works
The skeptic in you is probably raising an eyebrow. Can an AI really simulate customer reactions accurately?
The answer depends entirely on what you feed it. As Flanagan puts it: "Synthetic data in, synthetic insights out."
The digital twin approach works because it's not asking AI to imagine customers from scratch. It's asking AI to pattern-match against real customer behavior you've already captured. The AI isn't inventing opinions—it's reflecting patterns that exist in your actual data.
This echoes research from PyMC Labs and Colgate-Palmolive, which found that AI can accurately predict purchase intent when trained on sufficient behavioral data. The twin is only as good as the voice-of-customer data you train it on.
For product teams sitting on mountains of support tickets, NPS verbatims, and sales call recordings, this is a goldmine waiting to be extracted.
From Weeks to Minutes
The practical implications are significant.
Traditional customer validation might look like this:
- Define research questions (1 day)
- Write screener and discussion guide (2 days)
- Recruit participants (1-2 weeks)
- Schedule and conduct interviews (1 week)
- Analyze and synthesize findings (3-5 days)
Total: 3-4 weeks, plus budget for incentives and potentially recruiting services.
With a well-trained digital twin:
- Load your data into a project (done once)
- Ask your question
- Get an immediate response with reasoning
Total: minutes.
This doesn't replace deep qualitative research. You still need real conversations to discover things you didn't know to ask about. But for validation—checking whether your messaging lands, whether a feature concept resonates, whether objections you're anticipating are real—the speed difference is transformative.
The Voice of Customer Problem
Here's where it gets interesting for product teams specifically.
Most organizations are drowning in customer feedback. It arrives through support tickets, sales calls, social mentions, app reviews, NPS surveys, feature requests, community forums. Each channel captures a sliver of truth. But synthesizing across channels? That requires either a dedicated VoC program (expensive) or manual heroics (unsustainable).
The digital twin approach suggests a different path: instead of manually synthesizing feedback, you train an AI on all of it. The AI becomes your synthesized customer voice, accessible on demand.
This addresses one of the most persistent complaints from product managers: they know customers are telling them things, but the insights are scattered across systems and nobody has time to connect the dots.
When the dots are connected into a digital twin, patterns emerge that individual feedback items obscure. You start seeing themes. Contradictions resolve into segments. The confused noise of customer feedback becomes a coherent signal.
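Connecting those dots can start as something mundane: normalizing every channel into one record shape before any AI sees it. The sketch below is illustrative; the field names and channel mappings are made up, and real schemas differ per tool:

```python
from dataclasses import dataclass

@dataclass
class FeedbackItem:
    source: str   # e.g. "zendesk", "gong", "g2"
    text: str     # the customer's own words, verbatim
    segment: str  # whatever segmentation you have, even a rough one

def normalize(raw_records: list[dict]) -> list[FeedbackItem]:
    """Collapse channel-specific schemas into one unified corpus.
    The mappings here are hypothetical examples, not real tool exports."""
    items = []
    for r in raw_records:
        if r["channel"] == "support_ticket":
            items.append(FeedbackItem("zendesk", r["body"], r.get("plan", "unknown")))
        elif r["channel"] == "review":
            items.append(FeedbackItem("g2", r["review_text"], r.get("company_size", "unknown")))
    return items

corpus = normalize([
    {"channel": "support_ticket", "body": "Onboarding took us three weeks.", "plan": "pro"},
    {"channel": "review", "review_text": "Great support, confusing setup."},
])
```

Once feedback lives in one shape, themes can be counted across channels instead of within them, and the unified corpus becomes the training material for the twin.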
Practical Implementation
If you're intrigued, here's how to start building your own digital twin:
Start with what you have. You don't need perfect data. Sales call transcripts from Gong or Chorus. Support tickets from Zendesk or Intercom. Reviews from G2 or Capterra. NPS verbatims. Feature request comments. Whatever you've got.
Load into a project. Claude Projects, ChatGPT's custom GPTs, or any LLM that supports persistent context. The key is keeping the data accessible across conversations.
Write grounding instructions. Tell the AI what it's simulating. Something like: "You represent customers of [product]. Your responses should reflect the actual patterns in the provided customer data—their language, concerns, priorities, and objections. Don't imagine what a customer might say; reflect what they actually say."
Use anchor statements. For quantitative-ish feedback, create 5 reference points in customer language. This prevents the AI from defaulting to wishy-washy middle answers.
Test and calibrate. Ask questions you already know the answers to. If customers consistently complain about onboarding in your real data, does the twin surface that? Calibrate until it reflects reality.
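The steps above can be sketched as two small helpers: one that builds the grounding instructions from your corpus, and one that runs a crude calibration check against themes you already know are real. Everything here is an assumption for illustration: the product name, the excerpts, and the substring-based theme matching (a real calibration pass would be more careful):

```python
def grounding_instructions(product: str, excerpts: list[str]) -> str:
    """Build the system prompt that keeps the twin anchored to real data,
    following the grounding language suggested in the article."""
    data = "\n".join(f"- {e}" for e in excerpts)
    return (
        f"You represent customers of {product}. Your responses should reflect "
        "the actual patterns in the customer data below: their language, "
        "concerns, priorities, and objections. Don't imagine what a customer "
        "might say; reflect what they actually say.\n\n"
        f"Customer data:\n{data}"
    )

def calibration_check(twin_answer: str, known_themes: list[str]) -> list[str]:
    """Return the known themes the twin failed to surface. Naive substring
    matching, purely to illustrate the idea: if real customers complain
    about onboarding, the twin's answer should mention it."""
    answer = twin_answer.lower()
    return [t for t in known_themes if t.lower() not in answer]

system = grounding_instructions("AcmeApp", ["Onboarding took us three weeks."])
missing = calibration_check(
    "Customers mention onboarding friction and pricing confusion.",
    ["onboarding", "data export"],
)
# `missing` holds themes the twin did not mention; a non-empty list
# means the twin needs more data or better instructions for those topics
```

The `system` string is what you would load once into a Claude Project or custom GPT; the calibration loop is what you repeat until the twin's blind spots shrink to an acceptable level.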
The Catch
This approach has limitations worth acknowledging.
It reflects the past. Your digital twin knows what customers have said, not what they'll say when the market shifts or new competitors emerge. It's a snapshot, not a crystal ball.
It can't discover unknowns. Real research uncovers surprises—insights you couldn't have anticipated. The twin only surfaces patterns from data you've already collected. If you're missing crucial customer segments or contexts, the twin will be blind to them too.
Data quality matters enormously. If your sales team only talks to certain customer types, your twin represents those customers. If support tickets skew toward angry customers, your twin skews negative. Garbage in, garbage out.
It's not a replacement for relationships. Some of the best product insights come from longitudinal relationships with customers—watching them evolve, understanding their context over time. A digital twin is transactional in a way that human research relationships are not.
Where This Is Going
The Growth Unhinged report suggests that 53% of leaders are seeing little to no impact from AI in their go-to-market efforts. But a small group is pulling ahead dramatically. The difference often comes down to context—teams that connect AI to their actual customer intelligence versus teams using generic tools with generic prompts.
Digital twins represent a specific instance of this broader pattern: AI gets powerful when grounded in your data, your customers, your context.
For product teams, this suggests a strategic imperative: start treating customer feedback as training data, not just insights to be read and forgotten. Every support ticket, every sales call, every review is a data point that could be teaching your AI to understand customers better.
The companies that build robust digital twins of their customers will iterate faster, validate cheaper, and make fewer assumptions. They'll still get things wrong—but they'll get fewer things wrong, and they'll find out faster.
Making It Work
If there's one takeaway, it's this: the value of AI for customer understanding scales with the quality and completeness of your customer data.
Teams that have fragmented feedback scattered across a dozen systems with no synthesis strategy will struggle to build useful digital twins. Teams that aggregate, organize, and connect their customer voice data have a foundation to build on.
This is why tools that centralize customer feedback matter more than ever. Not because dashboards are useful (though they can be), but because aggregated data is trainable. The corpus of your customer voice becomes an asset—one that compounds in value as AI gets better at extracting meaning from it.
The best time to start building that corpus was years ago. The second best time is now. Every piece of feedback you capture today is teaching material for the AI systems of tomorrow.
Customer research is no longer about occasional deep dives. It's about building a persistent, always-learning understanding of your customers—one that's ready when you have a question.
And the question-asking is about to get very, very fast.
