Stop Buying AI Licences. You’re Doing It Wrong.
MIT research shows 95% of AI pilots fail. Futurist Andrew Grill knows exactly why—and how to fix it.
“I don’t get asked to speak at companies that have got it sorted,” Andrew Grill tells me. “I’m asked to come in because the doctor needs to triage the patient.”
Grill isn’t exaggerating. He’s completed about 80 engagements this year and keeps seeing the same troubling patterns. MIT research says 95% of AI pilot projects fail. Why? Companies treat AI as a software update rather than a transformation.
Grill wrote Digitally Curious for executives who often delegate AI decisions to IT out of discomfort. At the NewsRewired conference in London in November, after an “Ask Me Anything” session, he said, “I have a hunger for Q&A. It tests my knowledge and makes me work for my money.”
Executives aren’t using the tools themselves.
The fundamental problem? C-suite leaders aren’t personally using AI. “They need to have that aha moment where they realise, ‘I didn’t know that AI could do that,’” Grill says. “When they have that moment, they go, ‘Oh, now I get it. Now I can use this for my business.’”
He understands their hesitance. Many executives are older and delegate to IT, so they never fully experience AI themselves. Grill recalls working for someone in the ’90s who relied on printed emails and handwritten replies but eventually adapted to new technology.
“I don’t know a CEO in the world today that wouldn’t at least tap out an email on their own phone. They know the power of immediacy, but it’s taken them a while to get there.”
AI won’t wait for them to catch up.
The Copilot problem
Grill paid extra for Microsoft 365 Copilot but used it only a few times, finding other tools better.
At a 300,000-person company, the CIO bought just 8,000 Copilot licences; he couldn’t justify buying more. “Everyone has Word, Excel and Outlook because that works. Guess what? They’ve been trained on how to use those tools.” With Copilot, companies just roll it out. Employees open it, try one bad prompt, get one mediocre result, and never open it again.
Buying Copilot licences isn’t doing AI. “You need to train for curiosity, not compliance.”
Four barriers (process is the killer)
Grill identifies four barriers to AI adoption: training, budget, data, and process.
Budget is easy. Companies find money for what saves money. Data is the real roadblock. Poorly structured or biased data can lead to AI hallucinations. You can’t build a skyscraper on a swamp.
Process is the killer. “We’re trying to pave cow paths using AI tools for old workflows, and that’s a waste.”
For example, a distribution company holds a weekly two-hour stock-review meeting with 30 people: 60 person-hours a week that restructuring the process could largely eliminate.
Companies stuck in broken processes don’t have time to fix them. Inefficient processes are precisely why they’re so busy.
What the 5% do differently
The successful 5% start with a boring problem. “They identify a boring, expensive, repetitive bottleneck. Some job or task that people don’t like doing—expenses, compliance, that sort of stuff.”
Failed projects put tech first. Someone sees a demo and says, “We need that.” There’s no business case—just endless pilots.
Key takeaway: Focus on solving real business problems, not just implementing new technology. Measure AI success in terms of hours saved and dollars retained. Start with achievable goals and scale up.
Universities train students for outdated jobs.
Grill’s blunt about education. Teachers banned AI over cheating fears “because they don’t know what to do otherwise.” Result? “Kids have been conditioned: AI’s cheating. Don’t touch it.”
Students graduate without day-one skills. Grill says it’s like joining a company not knowing Word, PowerPoint, or Excel.
For journalism students, it’s worse. “Universities are training students for a world that no longer exists. Business will have to retrain every graduate from day one.”
When I point out that I was told calculators were cheating in the late ’70s, he laughs. “I bet you spell-check every day. Is that cheating? Because AI powers it.”
The watch test
Before our interview, Grill asked ChatGPT 5.1 to draw a clock showing 7:22. It drew a clock showing 10:10.
Why? Every watch ad uses 10:10—the “smile position.” AI has seen thousands of those images and assumes that’s correct. “If it can’t get that right, when is it going to take over?”
You can’t trust every AI answer. “You know when it’s wrong only if you’ve seen the right answer before.”
Key takeaway: Always verify AI output, either by sampling it regularly or by using another AI as a quality check. Reliability matters.
On AI-generated content, Grill’s sceptical. “If everyone uses the same models or the same prompts, everything looks the same. It’s called the magnet of mediocrity.”
At a journalism conference, the CEO of a major publisher compared an AI-written obituary for Robert Redford with a human-written one. The human version was 5% better. “Now, 5% better is better than AI slop. Maybe it had a bit more emotion in it.”
Brand safety isn’t just about avoiding offensive content. “It’s about avoiding bland content.”
Key takeaway: Human content is valued for originality and emotional depth. Expect a shift toward appreciating non-AI output.
What should CEOs and executive teams actually do?
Three things.
One: Personally use the tools. Every executive must develop hands-on AI experience to guide strategy. Experiment with tools like NotebookLM and image generators to understand their impact.
Two: Establish an AI Council across the executive team—legal, HR, technology, and creative—meeting monthly to share AI learnings and explore cross-departmental opportunities.
Three: Dedicate a failure budget. Executives must empower experimentation by allocating resources for innovative pilots—even without guaranteed ROI.
Key takeaway: Ring-fencing a budget for failure allows for experimentation, learning, and innovation without fear of missing ROI targets.
What’s changed since the book
Grill published Digitally Curious in September 2024. Knowing it was already out of date when he submitted the manuscript that April, he created the book’s own GPT; he thinks it’s the first book to have one. A QR code in the print edition links to it. “People can ask the book any question they want,” says Grill.
What he couldn’t squeeze into the book: agentic AI. AI agents that can actually do tasks. “Since I published it, they’ve become more and more prevalent.”
He asked an AI agent to gather all his speaking engagements into a spreadsheet; automating the routine task took about five minutes.
Amazon recently blocked Perplexity from using its shopping agent. “When Amazon blocks something, you know they’re onto something.”
His prediction for 2026? “The AI initiatives won’t all be around tech. They’ll be around the process, because companies need to get AI-ready.”
Waiting won’t work.
What Grill sees across industries: “This pattern of wait and see. We’ll see what other people do. It’s quite a dangerous approach.”
The winners ask how this will break their business model, then try it themselves.
Key takeaway: Success stems from curiosity, process reimagining, and critical thinking, not merely AI adoption.
“When people hear me talk, they go, ‘You just gave me that push I needed to look at that ChatGPT thing a bit more deeply.’”
Final takeaway: Curiosity and a willingness to explore AI matter more than the number of tools or licences you own.
If you want to learn more about Andrew Grill, you can subscribe to his Digitally Curious podcast, buy his book on Amazon, or contact him via his website.
Thanks, Andrew, for being so generous with your time.
John, great to talk to you, and great questions following my Ask Me Anything at NewsRewired.