Your company probably shipped an AI feature last quarter. Maybe it was a smart recommendation engine. Perhaps an AI-powered search. Or that chatbot everyone in leadership was excited about.
Here's the uncomfortable question: Do you actually know if it's working?
If you're like 82% of organizations, the honest answer is no.
The Measurement Gap Nobody Talks About
A new report from Thomson Reuters dropped this week with a statistic that should concern every product leader: only 18% of organizations actually track ROI on their AI tools. Even more striking, 40% admitted they don't measure it at all, and another 40% don't even know whether their company measures it.
Let that sink in. We're in 2026. Companies are spending millions on AI initiatives. And the vast majority have no idea if those investments are paying off.
But here's where it gets worse.
Among the 18% who do track metrics, the focus is almost entirely internal. According to the Thomson Reuters data, organizations overwhelmingly prioritize:
- Internal cost savings (77%)
- Employee usage rates (64%)
- Employee satisfaction (42%)
Meanwhile, the metrics that actually matter for product success are barely on the radar:
- Client/customer satisfaction (26%)
- External revenue generation (23%)
- New business won due to AI (17%)
We've built sophisticated AI systems and then measured success by whether employees log in to use them. That's like measuring the success of a restaurant by how much the chefs enjoy cooking, while ignoring whether anyone actually eats the food.
The Customer Feedback Blind Spot
The Thomson Reuters report surfaced another revealing data point: 60% of corporate tax professionals and 67% of corporate legal professionals don't actually know whether AI was used in the work delivered to them.
Think about that from a product perspective. Your customers are receiving AI-enhanced outputs, and they have no visibility into how AI shaped their experience. They can't tell you if the AI helped or hurt. They don't know what to praise or critique.
This isn't unique to professional services. It's happening across every industry shipping AI features:
- The e-commerce site that can't tell if AI recommendations are driving conversions or annoying customers
- The SaaS platform with an AI assistant that nobody knows how to evaluate
- The fintech app with AI-powered insights that might be valuable or might be noise
Without systematic customer feedback, AI features exist in a measurement vacuum. And products built in measurement vacuums tend to fail—just slowly enough that nobody sounds the alarm.
Why Traditional Metrics Miss the Point
When Elizabeth Beastrom, president of tax and accounting professionals at Thomson Reuters, was asked about the internal focus, she explained that "in the early stages of adoption, firms tend to focus on the metrics that are easiest to observe and quantify."
She's right. Employee usage is easy to track. Cost savings are relatively easy to calculate. Customer impact? That requires actually talking to customers.
But "easy to measure" and "worth measuring" are different things.
Here's the trap product teams fall into:
Phase 1: Ship an AI feature with excitement and fanfare.
Phase 2: Track internal metrics. Usage is up! Engineers are using the tools!
Phase 3: Declare victory based on adoption numbers.
Phase 4: Six months later, wonder why customer churn hasn't improved and expansion revenue is flat.
The problem is that internal metrics create an illusion of success. High usage doesn't mean high value. Employees might be using an AI tool because they're required to, or because it's novel, or because it's there—not because it's genuinely making their work better for end customers.
The Voice of Customer Imperative
What would it look like if product teams measured AI success the way it should be measured?
It starts with customer feedback. Not NPS scores sent quarterly. Not satisfaction surveys that sit at 3% response rates. Real, continuous, qualitative feedback that captures how customers actually experience your AI features.
1. Capture feedback at the moment of AI interaction
When a customer uses an AI-powered feature, that's when you need to understand their experience. Not two weeks later. Not in an annual review. Right then.
Did the AI recommendation help them find what they needed? Did the automated response answer their question? Did the predictive feature surface something valuable?
Most companies have no mechanism to capture this. The customer interacts with AI, forms an opinion, and that opinion evaporates into the void.
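The capture mechanism doesn't need to be elaborate. As a minimal sketch (the event fields and function names here are illustrative, not from any particular analytics SDK), point-of-interaction feedback can be as simple as logging a verdict tied to the feature the customer just used:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical event shape: "feature_id" and "verdict" are illustrative
# names, not part of any specific product analytics library.
@dataclass
class AIFeedbackEvent:
    feature_id: str          # which AI capability was used, e.g. "smart_search"
    verdict: str             # "up" or "down" from an in-app thumbs prompt
    comment: str = ""        # optional free-text from the customer
    ts: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

events: list[AIFeedbackEvent] = []

def record_feedback(feature_id: str, verdict: str, comment: str = "") -> AIFeedbackEvent:
    """Capture the customer's reaction right after an AI interaction."""
    if verdict not in ("up", "down"):
        raise ValueError("verdict must be 'up' or 'down'")
    event = AIFeedbackEvent(feature_id, verdict, comment)
    events.append(event)  # in production this would flow to your analytics store
    return event

record_feedback("smart_search", "up", "Found exactly what I needed")
record_feedback("recommendations", "down", "Completely irrelevant results")
```

The point is timing, not sophistication: the prompt fires immediately after the AI interaction, while the opinion still exists.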
2. Connect feedback to specific AI capabilities
Generic satisfaction scores are useless for AI improvement. You need to know exactly which AI features are landing and which are missing.
This means tagging feedback to specific capabilities:
- "The smart search understood what I was looking for" → AI search is working
- "The recommendations were completely irrelevant" → AI recommendations need work
- "The automated summary saved me an hour" → AI summarization is valuable
Without this granularity, you're optimizing in the dark.
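As a rough sketch of what that tagging looks like, here is a keyword-based mapper from free-text feedback to capability tags. Real voice-of-customer platforms use ML classifiers for this; the capability names and keyword lists below are assumptions for illustration:

```python
# Hypothetical capability taxonomy; real systems would use trained
# classifiers rather than keyword matching.
CAPABILITY_KEYWORDS = {
    "ai_search": ["search", "found", "looking for"],
    "ai_recommendations": ["recommendation", "recommended", "suggested"],
    "ai_summarization": ["summary", "summarize", "summarized"],
}

def tag_feedback(comment: str) -> list[str]:
    """Map a piece of free-text feedback to the AI capabilities it mentions."""
    text = comment.lower()
    return [cap for cap, keywords in CAPABILITY_KEYWORDS.items()
            if any(kw in text for kw in keywords)]
```

Even a crude version like this lets you split satisfaction by feature instead of averaging everything into one useless score.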
3. Track sentiment trends over time
AI features evolve. Models get updated. Training data shifts. What worked last month might not work this month.
Continuous feedback tracking lets you catch regressions before they become churn. If sentiment around your AI assistant suddenly dips, you can investigate immediately—not discover it in quarterly business reviews when customers have already left.
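One simple way to operationalize this is a regression alarm that compares recent sentiment against a longer baseline. This is a sketch under assumptions: the window sizes and the 0.15 drop threshold are illustrative, and sentiment is assumed to be scored in [0, 1]:

```python
def sentiment_dip(scores: list[float], recent_n: int = 7,
                  baseline_n: int = 30, drop: float = 0.15) -> bool:
    """Return True if the average of the last `recent_n` sentiment scores
    has fallen more than `drop` below the preceding `baseline_n`-score
    baseline. Scores are assumed to be daily averages in [0, 1]."""
    if len(scores) < recent_n + baseline_n:
        return False  # not enough history to judge a trend
    baseline = sum(scores[-(recent_n + baseline_n):-recent_n]) / baseline_n
    recent = sum(scores[-recent_n:]) / recent_n
    return baseline - recent > drop

# A stable month followed by a week-long dip should trip the alarm.
history = [0.8] * 30 + [0.5] * 7
```

Wire an alert to this kind of check and a model update that quietly degrades the experience surfaces in days, not quarters.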
4. Close the loop between feedback and iteration
Here's where most companies fail even when they do collect feedback: the insights don't reach the people who can act on them.
Customer feedback about AI features needs to flow directly to the product and ML teams making decisions about those features. Not filtered through three layers of reporting. Not summarized in a quarterly deck. Direct, actionable, timely.
From Measurement Gap to Competitive Advantage
The Thomson Reuters report reveals an industry-wide blind spot. But blind spots create opportunities.
The companies that figure out how to measure AI impact through the customer lens—not just the internal efficiency lens—will build a compounding advantage.
Here's why: AI features are increasingly table stakes. Everyone's shipping them. The differentiation isn't in having AI; it's in having AI that demonstrably improves customer outcomes.
When you can prove that your AI features increase customer satisfaction, reduce time-to-value, and drive better outcomes, you can:
- Command premium pricing for AI capabilities
- Build customer loyalty through genuine value delivery
- Iterate faster based on real-world feedback
- Avoid the "spray and pray" approach to AI investment
Meanwhile, your competitors are still measuring usage rates and wondering why their expensive AI initiatives aren't moving business metrics.
Making Customer Feedback Operational
Knowing you should collect customer feedback and actually doing it at scale are different challenges.
Traditional approaches—surveys, interviews, support ticket analysis—are too slow and too sparse for the pace of AI iteration. By the time you've analyzed last quarter's feedback, you've already shipped three model updates.
This is where AI itself becomes part of the solution. Modern voice-of-customer platforms can:
- Aggregate feedback automatically from support conversations, in-app signals, social mentions, and direct submissions
- Extract themes and sentiment without requiring manual analysis for every piece of feedback
- Surface patterns specific to AI features by connecting customer language to product capabilities
- Alert teams in real-time when sentiment shifts or new issues emerge
The irony is rich: using AI to measure whether your AI is working. But it's also practical. The volume of customer signals in modern products is too high for manual analysis. You need systematic, automated approaches to feedback synthesis.
The Path Forward
The Thomson Reuters data should be a wake-up call. If your organization is part of the 82% not tracking AI ROI, you're flying blind on some of your most significant product investments.
But the solution isn't just "track more metrics." It's track the right metrics. And the right metrics start and end with the customer.
Here's a simple framework to start:
This week: Identify every AI-powered feature in your product. List them. Be specific.
This month: For each AI feature, define what "success" looks like from the customer's perspective. Not usage. Not adoption. Actual customer value.
This quarter: Implement systematic feedback collection tied to those AI features. Start simple—even a thumbs up/down after AI interactions is better than nothing.
Ongoing: Build feedback review into your AI iteration cycle. Before shipping model updates, review customer sentiment. After shipping, track if sentiment improves.
Conclusion
The AI adoption curve has been steep. The AI measurement curve hasn't kept up.
Most companies are investing heavily in AI capabilities while remaining willfully ignorant about whether those capabilities create customer value. They measure what's easy—usage, costs, internal sentiment—while ignoring what matters: customer impact.
This measurement gap won't last forever. As AI features proliferate and differentiation becomes harder, the companies that can prove customer value will win. The ones that can't will wonder where their AI investments went.
The fix isn't complicated. Listen to your customers. Systematically, continuously, at the point of AI interaction. Let their feedback guide your AI roadmap.
Because in the end, the only ROI metric that matters is whether your customers are better off.
Building products that actually deliver value starts with understanding what your customers experience. Pelin helps product teams transform scattered customer feedback into clear insights—so you can measure what matters and build what works.
