The Guardian’s AI Strategy: Caspar Llewellyn Smith on Navigating Digital Disruption
The Guardian’s Chief AI Officer on licensing deals, falling search traffic, and why AI can’t replace journalists.
Most UK and European publishers are developing their AI strategies. Some block AI crawlers. Others negotiate licensing deals. The Guardian appointed Caspar Llewellyn Smith as its first Chief AI Officer to address a fundamental challenge: how does a news organisation survive when AI models use its journalism for training data, search engines send less traffic, and readers get answers without visiting the site?
Llewellyn Smith is unequivocal about the scale: “This might not be the most consequential technology ever created, but its impact on the publishing and media industry is absolutely the biggest thing any of us will see in our lifetimes.”
A Two-Decade Journey Through Digital Transformation
Llewellyn Smith is a Cambridge University graduate who dropped out of City University’s postgraduate journalism course two-thirds of the way through to take a job at The Daily Telegraph. He started at the bottom, as the most junior person on the Saturday arts and books section.
His first boss pushed him to write, and he did, covering music and arts. In 2003, he moved to The Guardian to launch Observer Music Monthly. “A super fun way to spend your 30s,” he said. What was less fun was watching the entire music press collapse around him as digital killed the business model.
Back in 2003, with unprecedented access to the world’s biggest stars, he interviewed Beyoncé and other global artists. Shortly afterwards, however, those stars no longer needed journalists. They had Spotify, Instagram and other social networks, which allowed them to communicate directly with their fans and control the narrative. The music press lost relevance, and Llewellyn Smith watched it happen up close.
The lessons stuck. From Observer Music Monthly, he moved up: Head of Culture for Guardian News & Media, then editor of theguardian.com and The Guardian’s other digital platforms, then Director of Digital Strategy. By 2015, he’d made the executive committee.
He spent five years as Chief Product Officer before taking the AI role. The connection between building magazines and building apps? Both require balancing competing elements, deciding how much advertising readers will tolerate, and presenting journalism in ways people actually want to consume it.
A Unique Role in British Journalism
The Guardian is among the first major UK newspapers to appoint a dedicated Chief AI Officer. The role caps Llewellyn Smith’s 20-year run at The Guardian, during which he’s lived through multiple waves of digital disruption.
So what does a Chief AI Officer actually do all day?
“It’s a pretty all-encompassing topic,” Llewellyn Smith admits. There are four areas of focus.
First, external work establishing licensing agreements with AI companies, making sure The Guardian gets paid when its journalism trains large language models. Second, internal deployment of AI tools across the newsroom and commercial operations. Third, strategic thinking about what AI means for journalism’s future and The Guardian’s business model. Finally, ensuring that all departments and functions within The Guardian understand the potential impact of AI and have the tools and training to succeed in an ever-changing landscape.
The Guardian has set up its own AI Council—senior editors and technologists wrestling with fundamental questions about journalism in an AI-saturated world. It’s not just about implementing technology.
Rolling Out AI Tools (Carefully)
The Guardian has given its staff enterprise versions of ChatGPT and Gemini. But there are guardrails. Journalists can’t paste confidential information into these tools—data governance prevents sensitive material from being shared with third parties.
“We’ve got these tools across to everyone, and we’ve had around 79% adoption thanks to the extensive and well received training which was rolled out,” Llewellyn Smith says.
“Everyone can use them for brainstorming, drafting, and researching. It’s even just doing things like getting email inboxes under control. These things aren’t necessarily big or sexy, but they add up, and cumulatively they should make all of our lives a little bit easier and a little bit more productive, which is good.”
The organisation is also using Google’s NotebookLM. Journalists can research within a prescribed set of documents in what Llewellyn Smith calls “a really safe environment.” The pattern: use AI for productivity gains while maintaining strict boundaries around editorial integrity.

The Licensing Fight
How AI companies use journalistic content for training is contentious. Llewellyn Smith is blunt: “Our journalism gets used to train these models. We need to be remunerated for that. We should have a say in whether it gets used or not. There needs to be a value exchange for the relationship to work.”
The Guardian secured a licensing deal with OpenAI. But they’re selective about negotiations. “We’ve done a handful of deals, and we’ve also chosen not to do some because we haven’t been happy with the terms. We’re fortunate that we have a robust business and we are in control of these situations. I think we’re very well positioned at the Guardian to navigate this new news ecosystem in which LLMs will play a bigger role in all our lives.”
Why does remuneration matter? Because AI fundamentally threatens journalism’s value proposition. Search referral traffic, historically a major audience driver, is already declining as AI-powered search features provide direct answers without requiring clicks through to source articles. If you remove the friction from accessing information, you remove people’s need to visit its source, which is a massive concern for everyone who publishes information on the internet.
What AI Can’t Do
Llewellyn Smith is clear about AI’s limitations in journalism. “It cannot do original reporting. It’s not very funny. And a lot of our journalists are really witty, and the journalism is characterised by the quality of the writing.”
So, where does The Guardian use AI? Parsing huge datasets for investigative work. Transcribing interviews. Improving website accessibility through better alt text. Tasks that free journalists to focus on original reporting and uncovering wrongdoing.
“If we can use it in beneficial ways, creating more time for us to do the things that we should be able to do best—that’s the goal,” he says. “But there are certain things it’s not going to be able to do. We still absolutely depend on human beings for that.”
His AI Toolkit
Llewellyn Smith uses both ChatGPT and Gemini extensively for brainstorming and drafting. When writing, he’ll workshop ideas with Gemini—asking it to suggest improvements or rephrase sections, then pose the same questions to ChatGPT to compare responses.
He’s also testing Claude on a personal laptop disconnected from Guardian systems. “Supposedly the state of the art in terms of what it can do,” he notes, though he says all these models are “converging somewhat in their capabilities.”
His advice? “It’s just important that people get their hands on this thing, to understand the technology and get an intuitive feel for the things it’s good at and things it’s bad at.”
2026 and Beyond
The conversation around artificial general intelligence (AGI) has shifted. A year ago, many predicted AGI might arrive in 2026 or 2027. That view has “slightly receded.” Llewellyn Smith doesn’t expect AGI within the next 18 months, though he acknowledges the remarkable progress frontier models have made in the three years since ChatGPT launched.
Instead, he predicts adoption will be “spotty or jagged”: successful in some business applications, patchy in others. Progress will come with ongoing concerns about big tech’s role in our lives and legitimate questions about sustainability.
The impact on the media will be substantial. “Progress will look patchy—two steps forward, one step back—over the next 12 to 18 months,” Llewellyn Smith says. “But people will be ill-advised to think that the media isn’t going to look significantly different within the next five years.”
The Human Element
What about aspiring journalists worried that AI will replace them? Llewellyn Smith offers measured reassurance. Yes, journalism “remains embattled” and was already difficult before AI. No, it’s not the profession to enter if you want to get rich. But The Guardian will keep using AI to augment content and build better products while relying on humans to write.
“If you want to do what we’re really here to do—speak truth to power, find out things about bad people—we still absolutely depend on human beings to do that,” he says.
The challenges are real. So is the value of skilled, ethical journalism in an increasingly automated world.