Tracing failure is the norm, not the exception
Some 95% of Chief Data Officers said they could not trace agent decisions end-to-end for regulators if pressed today; only 5% said they had achieved full traceability across production deployments.
“When it works, success is claimed quickly. When it fails, the blame is lonelier,” said one respondent quoted in the report. CDOs take 46% of the credit for AI successes but inherit 56% of the blame when agent-driven systems misfire.
Pilots collapse before scale reaches boardrooms
More than half of organisations, 52%, have delayed AI deployments over concerns about reasoning opacity, workforce trust and integration snags, and almost 58% said fewer than half of their AI agent pilots survive past the proof-of-concept stage.
The data also shows leadership miscalculations widening the gap: C-suite executives overestimate agent accuracy by 68% and underestimate production timelines by 73%.
Hallucinations disrupt jobs and operations
A total of 59% said their teams had faced operational disruptions in the past 12 months due to hallucinations, logic breakdowns or flawed agent outputs. Nearly 75% of data leaders said trust is their biggest blocker. More than one in three, 38%, said they expect agent accuracy above 80%, even though many agents fall below that threshold in live pilots.
Boardrooms and data chiefs converge on one point: 91% believe internal or “shadow AI” tools are active in their organisations, often without governance visibility or internal review. Data leaders say this raises execution risk faster than executive oversight can keep pace.
AI agents are scaling into real companies and real jobs, but confidence, tracing and explainability are gating factors for mission-critical and regulated deployments. The technology is accelerating. The teams deploying it say trust is not.