AI Superpowers: Still Stuck in Pilot Purgatory?
The Hype vs. The Reality
The marketing blitz around AI's "superagency" – the idea that AI will turn every worker into a hyper-productive genius – is in full swing. McKinsey, among others, is pushing this vision, citing a potential $4.4 trillion productivity boost in its report "Superagency in the workplace: Empowering people to unlock AI’s full potential." They surveyed thousands of employees, crunched the numbers, and concluded that workers are ready, even eager, for AI to transform their jobs.
But here's where the numbers start to smell a little fishy.
The McKinsey report states that only 1% of companies are "mature" in their AI deployment, meaning AI is fully integrated and driving substantial results. One percent. That's not a typo. The other ninety-nine percent are still figuring it out, stuck in pilot purgatory, or throwing money at the problem and hoping something sticks. RSM’s Middle Market AI Survey 2025: U.S. and Canada notes that 91% of organizations use generative AI somewhere in their business, yet many leaders remain stuck at the pilot stage.
Agentic AI isn’t replacing humans. It’s redefining how humans and machines collaborate to create value.
The gap between the "superagency" promise and the current reality is vast. It's like promising everyone flying cars when most people are still stuck in traffic with a flat tire.
Agentic AI: Trust, But Verify the Data
The Missing Link: Data Literacy
The problem isn't the technology itself. Agentic AI is evolving rapidly, with models now capable of reasoning, planning, and even acting autonomously. Microsoft Dynamics 365 Supply Chain Management, for example, now includes supplier communications agents that handle routine procurement workflows, reducing manual follow-ups and exception handling. The real bottleneck, according to Andrew Beers, chief technology officer at Tableau, is data fluency.
He says, “In order for there to be AI success, people will have to change their relationship with data." That's putting it mildly.
The McKinsey survey revealed that high-achieving organizations were three times more likely to trust AI over their own intuition than low-achieving ones. (I'll admit, that makes me a little nervous.) But here's the rub: simply trusting AI isn't enough. As Tulia Plumettaz, director of data science at Wayfair, points out: “We have a widespread culture of experimental validation. We don’t accept an answer of, ‘The model said so.’ No. Model outcomes are continuously scrutinized through live testing and validation.”
In other words, it's not about blindly accepting the AI's pronouncements; it's about understanding the data behind them, questioning the assumptions, and validating the results. This requires a level of data literacy that most organizations simply don't possess.
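What does "don't accept 'the model said so'" look like in practice? A minimal sketch, with made-up numbers and a hypothetical demand-forecasting model: score the model's predictions against observed outcomes, and against a naive baseline, before trusting it. The specific metric and data here are illustrative assumptions, not from any report.

```python
# Hypothetical example: a demand-forecast model vs. observed outcomes.
# The habit matters more than the model: score predictions against
# reality AND against a trivial baseline before trusting them.

actuals = [102, 98, 110, 95, 105, 120, 90, 100]      # observed demand (made up)
model_preds = [100, 97, 112, 96, 103, 118, 92, 101]  # the model's forecasts (made up)
naive_preds = [actuals[0]] + actuals[:-1]            # baseline: repeat yesterday's value

def mae(preds, actuals):
    """Mean absolute error: the average size of the miss."""
    return sum(abs(p - a) for p, a in zip(preds, actuals)) / len(actuals)

model_err = mae(model_preds, actuals)
naive_err = mae(naive_preds, actuals)

print(f"model MAE: {model_err:.2f}, naive MAE: {naive_err:.2f}")
# Only trust the model if it beats the trivial baseline on live data.
print("model beats baseline" if model_err < naive_err else "investigate before trusting")
```

A model that can't beat "repeat yesterday's value" on live data hasn't earned anyone's trust, no matter what the demo showed. That's the kind of check a data-literate team runs by reflex.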
Blind Faith in AI: A Recipe for Disaster?
The "Trust But Verify" Paradox
The McKinsey report highlights a critical paradox: employees trust their own employers to deploy AI ethically (a 71% trust level) more than they trust universities or tech companies. That's great, but trust without understanding is a recipe for disaster.
What happens when an AI model makes a biased decision, reinforcing existing inequalities? What happens when a "hallucination" leads to a costly error? If employees lack the data literacy to identify these problems, the "superagency" dream quickly turns into a super-sized mess.
And this is the part of the McKinsey report that I find genuinely puzzling. They acknowledge the importance of data literacy, but they don't quantify it. They don't measure the current level of data fluency across different organizations or industries. They simply assume that employees are "ready" for AI, even though the evidence suggests otherwise.
AI Readiness: Comfort vs. Competence?
A Methodological Critique
Here's my question: how was "readiness" even measured? Was it a simple survey asking employees whether they felt comfortable using AI? If so, that's a deeply flawed metric. Feeling comfortable with a technology is not the same as understanding its limitations and potential pitfalls. It's like feeling comfortable driving a car without knowing how the engine works. You might get from point A to point B, but you're likely to crash along the way.
To be more exact, 47 percent of C-suite leaders say their organizations are developing and releasing gen AI tools too slowly, citing talent skill gaps as a key reason for the delay.
A Bridge Too Far?
The "superagency" vision isn't inherently wrong. AI has the potential to augment human capabilities, automate tedious tasks, and unlock new levels of productivity. But that potential won't be realized unless organizations invest in data literacy, foster a culture of critical thinking, and move beyond the hype.
The numbers suggest that we're still a long way from that reality.
The "So What?" Test
Here's the test: if AI is truly going to revolutionize the workplace, we need to see more than pilot projects and optimistic surveys. We need tangible evidence of increased productivity, improved decision-making, and reduced costs. We need organizations embracing data literacy as a core competency, not just a buzzword.
Until then, the "superagency" remains a delusion, a marketing fantasy that obscures the hard work and critical thinking required to make AI truly transformative.
A Dose of Reality
The truth is, most companies are still in the very early stages of AI adoption. They're experimenting with different tools, exploring various use cases, and trying to figure out how to integrate AI into their existing workflows. That's perfectly fine. But let's not pretend that we're already living in the "superagency" future. We're not even close.
It's All Smoke and Mirrors
The numbers speak for themselves: the "superagency" narrative is built on a foundation of hype and wishful thinking.