There's a fascinating contradiction emerging in enterprise AI. According to Adobe's 2026 AI and Digital Trends report, 78% of organizations expect agentic AI to handle at least half of their customer support interactions within 18 months. That's not a pilot program. That's a fundamental shift in how companies plan to operate.
But here's the uncomfortable part: only 31% have implemented a measurement framework for agentic AI. Nearly half either have no framework in place or aren't sure one exists.
The ambition is there. The infrastructure isn't.
For product teams, this gap matters enormously: when support goes autonomous but your feedback loops stay manual, you're flying blind into the most significant customer experience transformation in a decade.
The 2–5 Second Window
Adobe's research reveals something that product managers intuitively know but rarely see quantified: half of customers give promotional content only 2–5 seconds to capture their interest. Emails, ads, social posts—the window to relevance is brutally short.
And when you miss that window? Disengagement isn't gradual. It's immediate.
This isn't just a marketing problem. It's a product problem. Every touchpoint with your customer—from onboarding emails to in-app messages to support responses—now operates under this same unforgiving timeline.
The organizations winning aren't necessarily faster. They're more relevant. And relevance requires understanding your customers at a depth that most enterprises still can't achieve.
Why Pilots Never Become Products
Adobe's findings echo what many of us have watched unfold: generative AI is delivering tactical wins. 70% of organizations report improved personalization. 64% see better lead generation. 59% report improved customer retention.
But only 36% consider themselves ahead of the curve in digital CX maturity.
The pattern is familiar. AI experiments succeed in isolation. Then they hit the enterprise—and stall.
The culprit is usually mundane: data fragmentation, uneven alignment between leadership and practitioners, and measurement frameworks that don't exist. Adobe reports that across workflows, most organizations have no active use of agentic AI. Fewer than a quarter are running limited pilots. Organization-wide adoption sits at just 16% for customer support and 13% for brand discovery.
Enterprise software is littered with promising pilots that never scaled. The difference between a demo and a deployment isn't the AI model—it's everything around it.
The Misalignment Problem
Here's where Adobe's findings get uncomfortable. Nearly one-third of organizations report that executives and day-to-day practitioners are misaligned on AI strategy. Another 47% say alignment is only partial.
The top driver? Executive misunderstanding of AI (61%).
This creates a predictable dysfunction. Executives emphasize revenue growth and customer satisfaction—legitimate priorities. Practitioners focus on operational realities like content creation and activation. Both matter. But when teams don't share definitions, decision rights, or success metrics, AI programs fragment into disconnected wins.
The result: organizations prioritize "easy" metrics (cost savings, productivity) even when the real stakes are customer trust, relevance, and retention.
Product teams feel this acutely. You need customer insights to make decisions. But the data lives across twelve different systems, owned by departments that don't talk to each other, measured by metrics that don't align with product outcomes.
What Customers Actually Want
Adobe asked customers about their comfort with AI. The findings are instructive:
43% would interact with a brand's AI concierge. That's genuine curiosity. But customers also set hard boundaries. They're uncomfortable with agents making purchasing decisions. They're skeptical of agent-to-agent interactions. They pull back when they discover content is AI-generated or when they learn they're interacting with AI unexpectedly.
The most important trust factor customers cite: the ability to switch to a human at any time.
This matters for product teams designing feedback loops. Your customers may tolerate AI handling routine interactions. But they expect—and deserve—human attention when stakes are high. The question isn't whether to automate customer intelligence. It's which signals require human synthesis.
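To make that distinction concrete, here's a minimal sketch of a routing rule for customer signals. The signal taxonomy, confidence threshold, and route_signal policy are all illustrative assumptions, not Adobe's framework or any vendor's implementation:

```python
from dataclasses import dataclass

# Illustrative signal taxonomy; a real product's categories will differ.
HIGH_STAKES = {"churn_risk", "billing_dispute", "security_concern"}

@dataclass
class Signal:
    category: str                          # e.g. "churn_risk"
    confidence: float                      # classifier confidence, 0.0-1.0
    customer_asked_for_human: bool = False

def route_signal(signal: Signal) -> str:
    """Decide whether a signal is safe to synthesize with AI or needs a human.

    Hypothetical policy: honor explicit human requests first (the top
    trust factor customers cite), then escalate anything high-stakes
    or low-confidence.
    """
    if signal.customer_asked_for_human:
        return "human"            # never block the exit ramp to a person
    if signal.category in HIGH_STAKES:
        return "human"            # high stakes call for human judgment
    if signal.confidence < 0.8:   # assumed threshold, tune per product
        return "human_review"     # AI drafts, a human approves
    return "ai"                   # routine and confident: safe to automate

print(route_signal(Signal("how_to_question", 0.95)))  # -> ai
print(route_signal(Signal("churn_risk", 0.99)))       # -> human
```

Note the ordering: the customer's explicit request for a human wins before anything else, which is exactly the trust factor Adobe's respondents ranked highest.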
The Data Quality Gap No One Wants to Fix
Adobe's report is blunt about the real blocker: most organizations have the technology for generative AI but lack the data foundation for agentic AI.
Less than half say their data quality and accessibility are adequate for AI in general. Fewer say they have a shared customer data platform capable of supporting agentic AI.
And yet—data quality, unification, and governance rank far below other AI investment priorities. Organizations admit data limits AI progress, then invest elsewhere.
This is how "AI transformation" becomes a year of pilots and a shelf of decks.
For product teams, the data gap creates a specific problem: you can't understand customers you can't see clearly. When identity, consent, and signal quality are inconsistent, every AI-powered insight inherits that inconsistency. Your churn predictions are built on fragmented data. Your feature prioritization reflects incomplete feedback. Your customer segments describe ghosts.
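At the schema level, "seeing customers clearly" starts with one normalized record type that every source feeds into. Here's a minimal sketch, assuming hypothetical field names and source systems; the stubbed resolve_identity step is where the real, unglamorous work lives:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class FeedbackRecord:
    """One normalized record, whatever system the signal came from."""
    customer_id: str       # resolved identity, not a per-system ID
    source: str            # "support_ticket", "sales_call", "in_app", ...
    timestamp: datetime
    text: str
    consent_granted: bool  # consent travels with the record, not in a silo

def resolve_identity(raw: dict) -> str:
    """Stub: map per-system IDs (email, account ID, device ID) to one
    customer. In practice this is the expensive work the report says
    organizations keep deferring."""
    return raw.get("email", "unknown").lower()

def normalize_ticket(raw: dict) -> FeedbackRecord:
    # Hypothetical support-ticket shape; adapt to your actual systems.
    return FeedbackRecord(
        customer_id=resolve_identity(raw),
        source="support_ticket",
        timestamp=datetime.fromisoformat(raw["created_at"]),
        text=raw["body"],
        consent_granted=raw.get("consent", False),
    )
```

Carrying identity and consent on every record is the design choice that matters: any AI built downstream inherits whatever this layer gets wrong.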
What This Means for Product Teams
The Adobe report points to an uncomfortable truth: most organizations are building AI capabilities without building the foundations those capabilities require.
If you're a product leader navigating this environment, here's what matters:
Relevance windows are collapsing. Every customer touchpoint now operates under 2–5 second attention constraints. This affects onboarding, support, in-app messaging—everything. The teams winning aren't just faster; they're more precisely relevant.
Pilots without playbooks stay pilots. The gap between AI experiment and AI deployment is rarely technical. It's operational: measurement frameworks, escalation paths, cross-functional ownership. If you're running AI pilots for customer insights, define what "working" means before you scale (see the sketch after these takeaways).
Internal alignment is the real bottleneck. When executives and practitioners disagree on AI strategy, customer intelligence programs stall. Before adding another tool, ensure your team has shared definitions, decision rights, and success metrics.
Data quality isn't optional. Agentic AI for customer experience requires unified data: identity resolution, consent management, signal quality. If your customer data is fragmented across twelve systems, your AI insights inherit that fragmentation.
Trust is a feature, not a checkbox. Customers accept AI assistance but demand human escalation. Build feedback loops that distinguish between signals AI can synthesize and signals that require human judgment.
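As promised above, one way to pin down what "working" means before a pilot scales is to declare success criteria as explicit, reviewable configuration that executives and practitioners sign off on together. Every metric name and threshold below is a placeholder assumption, not a benchmark from the report:

```python
# Hypothetical pilot criteria, declared up front so executives and
# practitioners are arguing about the same numbers.
PILOT_CRITERIA = {
    "resolution_rate_min": 0.70,    # AI resolves >= 70% of routine tickets
    "csat_floor": 4.2,              # customer satisfaction must not dip
    "escalation_rate_max": 0.25,    # human escalations stay under 25%
    "wrong_answer_rate_max": 0.02,  # hard cap on confidently wrong replies
}

def pilot_is_working(metrics: dict) -> bool:
    """Every criterion must hold; cost savings alone (the 'easy'
    metric) can't carry the decision to scale."""
    return (
        metrics["resolution_rate"] >= PILOT_CRITERIA["resolution_rate_min"]
        and metrics["csat"] >= PILOT_CRITERIA["csat_floor"]
        and metrics["escalation_rate"] <= PILOT_CRITERIA["escalation_rate_max"]
        and metrics["wrong_answer_rate"] <= PILOT_CRITERIA["wrong_answer_rate_max"]
    )
```

The point isn't the specific numbers; it's that they exist, in writing, before anyone argues about scaling.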
The Readiness Imperative
Adobe's 78% figure represents an expectation, not a capability. Most organizations expect AI agents to handle half their customer support within 18 months—but most haven't built the infrastructure to make that possible.
The window is narrowing. Early adopters are already connecting customer signals across channels, building unified data foundations, and designing AI workflows with measurement built in. Late movers will find their customer insights increasingly lagging behind the experiences those customers actually want.
The question for product teams isn't whether agentic AI is coming to customer experience. It's whether your organization will have the customer intelligence to make it work.
Because AI doesn't replace the need to understand your customers. It amplifies whatever understanding—or misunderstanding—you already have.
Product teams using Pelin unify customer feedback from support tickets, sales calls, user interviews, and in-app behavior into actionable insights—building the data foundation that agentic AI requires.
