AOP Tech Talk: AI in Publishing, Revenue, Responsibility and the Race for Trust
Publishers are moving beyond experiments to execution. At the AOP Tech Talk, leaders revealed how AI is driving revenue, reducing bias and redefining what audiences can trust.
Based on discussions at the AOP Tech Talk on “Artificial Intelligence”, held in London on 9th October 2025, featuring Ivar Krustok (Delfi Meedia), Iva Johan (Bernadette), Mario Lamaa (Immediate Media), Domenico Palmieri (Condé Nast), Camealia Xavier-Chihota (The Digital Voice), Sam Kumar (esbconnect), Amy Arnell (The Brandtech Group), and Kamran Vahabi (GumGum).
Introduction
Few topics divide publishers more sharply or unite them more urgently than artificial intelligence. Once a speculative talking point about newsroom automation, AI has become a strategic pillar for audience engagement, advertising yield, and editorial productivity.
At the AOP Tech Talk titled Artificial Intelligence, industry leaders from Immediate Media, Condé Nast, Delfi Meedia, Bernadette, GumGum, The Digital Voice, esbconnect and The Brandtech Group gathered to ask not whether AI belongs in publishing, but how to use it profitably and responsibly.
The session moved briskly from revenue and workflow optimisation to issues of bias, governance, and accountability. The discussions revealed a clear pattern: the publishers winning with AI aren’t necessarily the ones with the most sophisticated technology, but those matching tools to practical problems and keeping human oversight at the core.
Workflow Innovation and Practical Wins
Ivar Krustok, Chief AI and Innovation Officer at Delfi Meedia, Estonia’s largest digital publisher with more than 200,000 paying subscribers, opened with a pragmatic approach. His team organises AI projects into three streams: workflow optimisation, long-term LLM initiatives, and data utilisation.
One of his earliest lessons: start small. “The wins come from tools that fix everyday bottlenecks, not the grand projects that sound impressive on stage,” he said.
At Delfi, that philosophy delivered tangible results. A two-day sprint produced an internal editorial tool that integrates Google Keyword Planner and Search Trends APIs, letting journalists identify trending topics without navigating Google’s cumbersome interfaces. A fact-checking integration with Norwegian startup Factivers now flags potential inaccuracies before publication: an automated checkpoint rather than a final arbiter of truth.
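To make the trending-topics idea concrete, a minimal sketch of the pattern might look like the snippet below, assuming the unofficial pytrends wrapper for Google Trends rather than Delfi’s actual integration; the candidate topics, locale, and ranking logic are purely illustrative.

```python
# Illustrative sketch: rank candidate story topics by recent search interest,
# using the unofficial pytrends wrapper for Google Trends (pip install pytrends).
# Topics, timeframe and locale are placeholders, not Delfi's editorial tool.
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-GB", tz=0)

candidate_topics = ["electricity prices", "school holidays", "flu season"]
pytrends.build_payload(candidate_topics, timeframe="now 7-d")

interest = pytrends.interest_over_time()  # pandas DataFrame, one column per topic
if not interest.empty:
    # Rank topics by average search interest over the past week.
    ranking = interest[candidate_topics].mean().sort_values(ascending=False)
    for topic, score in ranking.items():
        print(f"{topic}: {score:.0f}")
```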
Perhaps the most effective innovation came from an unexpected place: data dashboards. Krustok’s team noticed that while departments spent months building complex visual dashboards, few employees actually looked at them. So they built a “data agent” that delivers plain-language summaries directly in Microsoft Teams, answering questions like “which campaigns underperformed this week?” in seconds.
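The underlying pattern is simple enough to sketch: summarise the week’s metrics with an LLM and push the result into a Teams channel via an incoming webhook. The model name, webhook URL, and query below are assumptions for illustration, not Delfi’s implementation.

```python
# Minimal sketch of a "data agent" that answers a plain-language question and
# posts the summary to Microsoft Teams. Assumes the OpenAI Python SDK and a
# Teams incoming-webhook URL; both are stand-ins, not Delfi's actual stack.
import requests
from openai import OpenAI

TEAMS_WEBHOOK_URL = "https://example.webhook.office.com/..."  # hypothetical URL
client = OpenAI()  # expects OPENAI_API_KEY in the environment

def post_campaign_summary(weekly_metrics_csv: str) -> None:
    """Turn raw campaign metrics into a short summary and post it to Teams."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; swap for whatever is available
        messages=[
            {"role": "system",
             "content": "Summarise ad campaign metrics in plain language for non-analysts."},
            {"role": "user",
             "content": f"Which campaigns underperformed this week?\n\n{weekly_metrics_csv}"},
        ],
    )
    summary = response.choices[0].message.content
    # Teams incoming webhooks accept a simple JSON payload with a "text" field.
    requests.post(TEAMS_WEBHOOK_URL, json={"text": summary}, timeout=10)
```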
“Speed matters,” Krustok said, “but reliability matters more.” His own experiment indexing 25 years of archive content in four languages proved the point. The system was powerful, but expensive to query, slow at scale, and still prone to hallucinations. “A tool that occasionally invents quotes,” he admitted, “is worse than no tool at all.”
The takeaway was pragmatic: prioritise tools that solve specific pain points, even if they’re unglamorous. For many publishers, AI’s true productivity gains come from invisible automation rather than headline-grabbing experiments.
Ivar Krustok at the AOP TechTalk in London, 9th October 2025.
AI for Growth, Not Just Efficiency
If Delfi’s work showed how AI can streamline operations, Mario Lamaa, Managing Director of Data and Revenue Operations at Immediate Media, argued that the real opportunity now lies in driving top-line growth.
Speaking alongside Domenico Palmieri, Associate Director of Global Ad Analytics at Condé Nast, and Iva Johan, Head of Strategy at Bernadette, Lamaa said the conversation inside most publishers has finally shifted: “We’ve moved from how much can we save to how much can we earn.”
Immediate’s internal tool First Draft captures that shift. Originally conceived as an archive management system, it now uses generative AI to create initial versions of articles drawn from decades of print and digital content. A food feature that once took two or three days to rework for a new audience can now be republished internationally in ten minutes, with courgettes swapped for zucchinis and ounces for grams.
This localisation capacity is no small matter for a brand like Good Food, which already dominates the UK but has limited penetration abroad. “It’s not about recycling content,” Lamaa said. “It’s about scaling your brand voice in new markets.”
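The mechanics behind that kind of localisation are straightforward to sketch: pass an archive article and a target market to an LLM with clear rewriting rules. The prompt, model, and function below are illustrative assumptions, not Immediate’s First Draft.

```python
# Hedged sketch of archive localisation: adapt a UK food article for another
# market. Prompt wording and model name are assumptions, not First Draft itself.
from openai import OpenAI

client = OpenAI()

def localise_article(article_text: str, target_market: str = "US") -> str:
    """Rewrite an archive article for a new market: ingredient names, units, spelling."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model
        messages=[
            {"role": "system", "content": (
                "You adapt UK food articles for other markets. Swap ingredient "
                "names (courgette -> zucchini), convert units, adjust spelling, "
                "and keep the brand voice and structure intact."
            )},
            {"role": "user", "content": f"Target market: {target_market}\n\n{article_text}"},
        ],
    )
    return response.choices[0].message.content
```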
He also pointed to Immediate’s Prism Assistant, a conversational AI system that lets sales teams access first-party data insights instantly. Historically, pulling reports took days; now, account managers can query data in natural language and deliver insights to clients the same afternoon. Roughly 40% of Immediate’s digital ad revenue now involves first-party data activation, and the target is nearly 100%.
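A stripped-down version of that pattern, natural-language question in, first-party data out, might look like the sketch below; the schema, model, and guardrail are illustrative assumptions rather than how Prism Assistant actually works.

```python
# Simplified sketch of natural-language querying over first-party data:
# an LLM drafts a read-only SQL query against a known schema, and the code
# runs it. Schema, table and model are invented for illustration.
import sqlite3
from openai import OpenAI

client = OpenAI()
SCHEMA = "campaigns(campaign_id, advertiser, week, impressions, clicks, revenue)"

def answer_question(question: str, db_path: str = "first_party.db") -> list:
    prompt = (
        f"Schema: {SCHEMA}\n"
        f"Write one read-only SQLite SELECT statement that answers: {question}\n"
        "Return only the SQL."
    )
    raw = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model
        messages=[{"role": "user", "content": prompt}],
    ).choices[0].message.content
    # Strip any markdown fencing the model may add around the SQL.
    sql = raw.strip().removeprefix("```sql").removeprefix("```").removesuffix("```").strip()

    if not sql.lower().startswith("select"):
        raise ValueError("Refusing to run anything other than a SELECT")  # basic guardrail

    with sqlite3.connect(db_path) as conn:
        return conn.execute(sql).fetchall()
```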
Condé Nast, meanwhile, has been refining AI-driven personalisation within its subscription ecosystem. Palmieri cited The New Yorker, where 65% of revenue now comes from subscriptions, as a testbed for dynamic paywalls and recommendation algorithms that present readers with precisely timed offers.
The Financial Times’s adaptive paywall reportedly boosted revenue per user by 60%. Gartner’s Ask Gartner chatbot, available only to paying subscribers, shows how conversational AI can strengthen loyalty by providing value locked behind the paywall.
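Stripped to its essentials, a dynamic paywall is a scoring decision: estimate a reader’s propensity to subscribe and choose between a free article, a tighter meter, or a timed offer. The toy below invents its own signals and weights; it is not Condé Nast’s or the FT’s model.

```python
# Toy illustration of a dynamic-paywall decision: score subscription propensity
# from simple engagement signals, then pick an action. Features, weights and
# thresholds are invented for illustration only.
from dataclasses import dataclass
import math

@dataclass
class ReaderSignals:
    articles_last_30d: int
    newsletter_subscriber: bool
    visits_this_week: int

def subscription_propensity(r: ReaderSignals) -> float:
    """Hand-tuned logistic score in [0, 1]; a real system would learn its weights."""
    z = (
        -3.0
        + 0.08 * r.articles_last_30d
        + 1.2 * (1 if r.newsletter_subscriber else 0)
        + 0.3 * r.visits_this_week
    )
    return 1 / (1 + math.exp(-z))

def paywall_decision(r: ReaderSignals) -> str:
    p = subscription_propensity(r)
    if p > 0.6:
        return "show_offer"     # high intent: present a precisely timed offer
    if p > 0.3:
        return "tighten_meter"  # warming up: reduce the free-article allowance
    return "free_article"       # low intent: keep building the reading habit

print(paywall_decision(ReaderSignals(articles_last_30d=25, newsletter_subscriber=True, visits_this_week=4)))
```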
“The future of publishing,” Palmieri said, “isn’t about cutting costs with automation; it’s about scaling relevance.”
Iva Johan, who admitted to “outsourcing her life to AI for the past year,” offered the consumer’s perspective. She now asks ChatGPT the questions she once Googled. “The entire funnel from awareness to conversion can happen in one conversation,” she said. For publishers, that’s both a threat and an opportunity: how do you surface within an AI-mediated dialogue when there’s no clickthrough?
Johan urged publishers to think conversationally: “What’s your tone of action? How do you respond in a machine interface?” For her, AI requires brands to define personality, not just tone of voice.
Domenico Palmieri, Mario Lamaa, Iva Johan, and Alastair Lewis at the AOP TechTalk in London, 9th October 2025.
Bias and the Trust Deficit
Where the first panels focused on growth, Amy Arnell, Generative AI Business Manager at The Brandtech Group, shifted attention to the blind spots that could undermine it.
Around 40% of AI training data contains bias. In 2023, generative models produced three times as many men as women when asked for “CEOs” or “leaders.” Facial-recognition systems misidentified women of colour 34% more often than lighter-skinned men. “These aren’t anomalies,” Arnell said. “They’re inherited structures.”
The problem starts with who builds the technology: 92% of AI developers are male, and just 1.2% of AI venture funding last year went to female-founded teams. That homogeneity, Arnell warned, seeps into design decisions, product training, and ultimately content.
She told a revealing story: a podcasting AI that produced show notes attributing all substantive dialogue to the male co-host, relegating his female colleague to the role of “introducer.” When challenged, the AI explained that the man had more online presence, so it “assumed” he was the main voice. “The algorithm didn’t malfunction,” Arnell noted. “It just learned our biases too well.”
Sam Kumar, CTO at esbconnect, reinforced the point from a technical angle. Large publishers, he said, have implemented content safeguards, but few systems recognise temporal bias: how norms from thirty years ago differ from today’s. “When you train on archives, the model doesn’t know that 1994 was a different world,” he said. “Context ages.”
The Guardian, Kumar added, mitigates this by prominently dating older articles. But AI summaries tend to strip away timestamps, presenting all information as contemporaneous truth. That flattening of time, where historical language reads like current reporting, creates subtle distortions that chip away at trust.
Arnell called it “the invisible tax of AI”: every bias embedded in past content compounds when automated at scale. For publishers, that’s not only an ethical risk but a commercial one. “Trust is the only currency that compounds faster than revenue—and can evaporate just as quickly.”
Sam Kumar, Amy Arnell, Kamran Vahabi, and moderator Camealia Xavier-Chihota at the AOP TechTalk in London, 9th October 2025.
Governance and Accountability
If bias represents the moral hazard of AI, governance defines the practical one.
Kamran Vahabi, Publisher Development Director, EMEA & APAC at GumGum, put it bluntly: “Once your logo sits on an AI-generated article, you own every word. The algorithm can’t take the blame.”
He argued that publishers must apply the same editorial accountability to machine-assisted content as to human-written stories. “Audiences don’t care whether it was written by GPT or Greg; they care whether it’s accurate.”
Domenico Palmieri noted that the EU has taken a principle-based regulatory approach, while the UK still relies largely on self-governance. Transparency, he said, is now the foundation of reader trust. “If an article is AI-assisted, say so clearly,” he added. “A discreet label at the end isn’t enough if readers can’t tell what it means.”
The idea of watermarking came up repeatedly, but Palmieri and others questioned its practicality once content is paraphrased or summarised across multiple sources.
Camealia Xavier-Chihota, Marketing and Social Media Director at The Digital Voice, cited Chris Daly of the Chartered Institute of Marketing: “Human oversight is the missing link. Without it, we’re just letting algorithms amplify our worst habits.”
Her concern was broader than fact-checking. “AI shouldn’t ghost the audience,” she said. “If we automate empathy out of the process, we lose what made media human.”
Iva Johan linked this back to brand experience: “AI will magnify whatever values are already in your organisation. If you don’t prioritise diversity or ethics, your AI tools won’t either.”
The consensus was clear: transparency labels, diverse teams, and regular audits must become operational norms, not afterthoughts. Publishers cannot afford to wait for regulators to dictate standards.
Where Publishers Go from Here
Across the panels, a pragmatic blueprint for AI in publishing emerged:
Start with real bottlenecks. Translation, transcription, and workflow assistants consistently deliver measurable value.
Prototype fast, deploy carefully. Krustok’s two-day builds showed the power of quick experimentation but also the need for guardrails before newsroom adoption.
Measure adoption, not hype. A simple tool used daily beats a complex system everyone ignores.
Build diversity into the process. From training data to hiring, diversity isn’t ethical window-dressing—it’s functional risk management.
Embrace transparency. Labelling AI-assisted work builds trust, even when the technology is invisible.
Accept the tension between personalisation and serendipity. Over-tailored content may raise engagement but narrows worldviews.
Ivar Krustok summarised the philosophy well: “AI should make the boring parts faster so the meaningful parts can stay human.”
That line captured the day’s subtext. For all the technical sophistication on display, the thread running through every example, whether First Draft at Immediate, Condé Nast’s dynamic paywalls, or Delfi’s data agents, was profoundly human. AI isn’t replacing judgement; it’s reshaping where that judgement is applied.
As the discussion closed, the mood was sober but optimistic. No one doubted AI’s capacity to transform publishing. The challenge now is to ensure that transformation strengthens, rather than corrodes, the trust on which the industry depends.