The Measurement Paradox
Why marketers’ confidence in their own numbers has flatlined—and what it means for budgets, trust, and the future of the industry
Here’s something odd. Marketers have more data than they’ve ever had. More tools, more dashboards, more ways to track every click and conversion. You’d think they’d be brimming with confidence about what’s actually working.
They’re not.
New research from EMARKETER and TransUnion, based on a survey of 196 US marketers conducted in July 2025, reveals a profession stuck in neutral. Yes, 62% say they have some confidence in their performance metrics. But that’s where the good news ends. More than half (54%) report no change in their confidence year-on-year. Worse, 14% say it’s actually declined.
Think about that for a moment. We’re living through an era of unprecedented technological sophistication in marketing. The tools keep getting better, the data keeps flowing. Confidence should be climbing. Instead, it’s flatlining. In some cases, going backwards.
Too much of everything
The problem isn’t lack of data. It’s fragmentation. Nearly half of marketers (49.5%) point to siloed or incomplete data as the main reason they question their measurement accuracy. Close behind: cross-channel deduplication issues (48%) and the limitations of walled-garden platforms (41%).
Jeremy Rose, who heads up unified marketing measurement at Bayer, puts it bluntly: “The key to unified measurement is unified data, and that starts with breaking down the walls between systems that were never designed to work together.”
He’s talking about the real world of marketing technology: a sprawling mess that’s grown through acquisitions and partnerships rather than any coherent plan. Your CRM speaks one language. Your advertising platforms speak another. Your analytics tools? Something else entirely. It’s a Tower of Babel situation, and critical information gets lost in translation.
“Interoperability is the ability for data to move between platforms and systems in a consistent, usable way, and it is no longer optional given the complexity of today’s marketing ecosystem,” Rose says. He’s right. Organisations need to invest in the infrastructure that makes these connections possible: clean rooms for secure collaboration, identity resolution to stitch customer journeys together, APIs that let tools talk to each other.
When Rose talks about data flowing “without friction,” he means something specific: the ability to deduplicate, calibrate, and compare results in ways that support actual decision-making. Not just pretty charts. Real answers.
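Cross-platform deduplication can be illustrated with a toy example. The sketch below collapses conversions that two platforms both claim for the same resolved identity within a short time window; all field names, identifiers, and figures are invented for illustration, not taken from any real platform’s export format.

```python
from datetime import datetime, timedelta

# Hypothetical conversion exports from two ad platforms. In practice these
# would come via clean rooms or APIs after identity resolution.
platform_a = [
    {"id": "u1", "ts": "2025-07-01T10:00:00", "channel": "search"},
    {"id": "u2", "ts": "2025-07-01T11:30:00", "channel": "search"},
]
platform_b = [
    {"id": "u1", "ts": "2025-07-01T10:02:00", "channel": "social"},
    {"id": "u3", "ts": "2025-07-01T12:00:00", "channel": "social"},
]

def deduplicate(events, window_minutes=15):
    """Collapse conversions reported by multiple platforms for the same
    resolved identity within a short time window into a single event."""
    events = sorted(events, key=lambda e: (e["id"], e["ts"]))
    window = timedelta(minutes=window_minutes)
    unique = []
    for event in events:
        ts = datetime.fromisoformat(event["ts"])
        if unique:
            last = unique[-1]
            same_person = last["id"] == event["id"]
            close_in_time = ts - datetime.fromisoformat(last["ts"]) <= window
            if same_person and close_in_time:
                continue  # both platforms claimed the same conversion
        unique.append(event)
    return unique

combined = deduplicate(platform_a + platform_b)
print(len(combined))  # 3 unique conversions, not the 4 platform-reported ones
```

Without this step, summing the two platforms’ reports would double-count the u1 conversion that both channels touched, which is exactly the inflation Rose is warning about.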
The problem gets messier as the boundaries between digital and physical blur. Consumers shop across both. But only 17% of marketers rate their ability to measure consistently across online and offline channels as poor or very poor. That sounds good until you look at where confidence actually sits. It’s lowest for influencer and creator marketing (44.4% lack confidence), in-store and offline activity (38.3%), and social platforms (35.2%). Exactly the channels that matter most—and measure worst.
There’s an interesting split between agencies and brands, too. Agencies are more likely to cite cross-channel deduplication as a major barrier. Brands, meanwhile, point to lack of internal expertise. Different problems, same outcome: not enough confidence in the numbers.
When trust breaks down
Measurement problems don’t stay in the marketing department. They metastasise.
Six in ten marketers (60.2%) say their internal stakeholders question the validity of their metrics at least sometimes. One in five (20.4%) aren’t even sure if non-marketing colleagues trust their performance reports. Another 12.2% know their reporting is trusted only a little—or not at all.
This scepticism has teeth. Over a quarter of marketers (28.6%) report that between 11% and 20% of their budget has been reallocated or put at risk in the past year because of doubts about measurement accuracy. When the C-suite doesn’t trust your numbers, they don’t trust your budget requests either.
The data shows a clear pattern: marketers whose internal stakeholders trust their performance reporting “a lot” are far more likely to say none of their budget has been jeopardised. Trust protects budgets. Lack of trust puts them at risk. It’s that simple.
Earning that trust isn’t just about producing accurate numbers. It’s about building confidence in the measurement methods themselves—and then amplifying that trust across the organisation. Marketing teams that stay isolated, constantly defending their methodologies, only reinforce the doubt. It becomes a vicious cycle.
Prove it or lose it
With budgets under pressure and trust in short supply, marketers have responded by focusing relentlessly on ROI. Two-thirds (67.4%) say proving incremental return on investment has become more pressing in today’s economy. It’s the top measurement challenge, full stop.
Brian Silver, executive vice-president of marketing solutions at TransUnion, doesn’t mince words: “Marketers are always being asked to do more with less. And even though many marketing organisations are realising the value of metrics beyond just return, incremental ROI is still one of the most important measures of a campaign’s success, especially when budgets are shrinking.”
Nearly 30% of marketers face moderate to significant cuts to their measurement and analytics budgets for the next fiscal year. Economic uncertainty is forcing hard choices. In that environment, proving performance isn’t just important—it’s existential.
The top measurement priorities for the next 12 months cluster around this reality: aligning marketing metrics to business outcomes (66.3%), improving cross-channel attribution accuracy (55.1%), and reducing time to insight (36.2%). Everything points towards justifying spend and proving value.
But there’s a risk in this obsession with short-term ROI. Silver argues that marketers need to evolve beyond channel-level metrics towards enterprise-level outcomes. They need attribution models that account for customer journeys spanning multiple touchpoints and devices. “To get a clear view of performance across all touchpoints, marketers should be taking a hard look at their tech stacks and how interoperable they are, where their data is coming from, and how quickly they can move from insight to action,” he says.
The goal shouldn’t just be justifying yesterday’s spending. It should be guiding tomorrow’s allocation. But that requires a different mindset and better infrastructure.
The AI bet
Faced with budget cuts and mounting pressure, many marketers are turning to artificial intelligence as a lifeline. Half of those surveyed have adopted—or plan to adopt—AI and machine learning to automate their reporting. The top use case? Analysing data and creating reports (40% of US marketers cite this).
The pitch is compelling: automate the repetitive stuff, surface insights faster, do more with less. And there’s an interesting correlation in the data: marketers whose internal stakeholders trust their reporting are more likely to have adopted or be planning to adopt AI automation. It’s not a replacement for credibility. It’s an amplifier.
But let’s not get ahead of ourselves. Research from BearingPoint in April 2025 found that whilst every C-level executive surveyed expects to implement AI-driven customer insights by 2028, only 9% are currently using or planning to use AI for marketing performance analytics in 2025. That figure’s expected to triple to 29% by 2028, but we’re still talking about a three-year gap between aspiration and reality.
The challenge isn’t just adopting the technology. It’s integrating it thoughtfully into existing workflows and governance structures. AI isn’t a magic wand. It’s a tool that requires careful implementation.
Rethinking the stack
More than a quarter of marketers (26.5%) are dissatisfied with their current measurement tech stack. That dissatisfaction is driving change.
Platform-provided attribution is still the most common methodology—65.8% use it. But marketers aren’t relying on it exclusively anymore. They’re supplementing it with incrementality testing and experiments (52%) and marketing mix modelling (49.5%). These aren’t niche approaches anymore. They’re becoming standard.
Investment patterns tell the story. Nearly half of marketers (46.9%) plan to increase spending on MMM over the next year. Another 34.7% plan to invest more in multitouch attribution. These are the methodologies marketers consider most reliable—and they’re getting the resources.
Silver sees this as part of a broader shift: “The days of monolithic measurement are over. The most effective measurement strategies are going to feature AI-enabled data collection and data management, which will serve as a foundation for bringing together core methodologies, like MMM, MTA, and incrementality testing.”
No single methodology can answer every question. Platform attribution is good for near-term optimisation but tends to overstate credit. MMM captures broader patterns but lacks granularity. Incrementality testing provides causal certainty but takes time and resources. The smart approach is to use multiple methods, each one covering the blind spots of the others.
Think of it like a diversified investment portfolio. You’re not putting everything into one asset class. You’re balancing near-term tactical tools with long-term strategic models, platform-specific insights with cross-platform analysis, modelled attribution with experimental validation. It’s more complex, sure. But it produces more robust answers.
The testing gap
Here’s a startling finding from the research: nearly half of marketers (49%) adjust their media strategy only once a quarter or less. Once a quarter! In an environment where consumer behaviour shifts weekly and competitors move constantly, that’s glacial.
Static reporting can’t capture marketing’s full impact. Yet many organisations remain stuck in quarterly review cycles that made sense 20 years ago but are hopelessly outdated now.
The alternative is always-on experimentation. Incrementality testing, lift studies, geo-holdout tests: these approaches let marketers isolate true cause-and-effect rather than just observing correlations. Instead of relying entirely on modelled or platform-reported attribution, you can test different measurement methods to figure out which tactics actually deliver incremental value.
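The arithmetic behind a geo-holdout test is straightforward, and a minimal sketch makes the cause-and-effect logic concrete. The market names and conversion counts below are invented purely for illustration; a real test would also need matched markets and a significance check.

```python
# Geo-holdout sketch: ads run in the test markets and are withheld from the
# control markets, so the gap between the two estimates incremental impact.
test_geos = {"austin": 1200, "denver": 950, "portland": 1100}    # ads on
control_geos = {"tucson": 800, "omaha": 780, "raleigh": 820}     # ads held out

def incremental_lift(test, control):
    """Average conversions per exposed market minus the held-out baseline,
    expressed as a fraction of that baseline."""
    test_avg = sum(test.values()) / len(test)
    control_avg = sum(control.values()) / len(control)
    return (test_avg - control_avg) / control_avg

lift = incremental_lift(test_geos, control_geos)
print(f"Estimated incremental lift: {lift:.1%}")
```

The key design choice is that the baseline comes from markets where the campaign never ran, not from a model’s counterfactual, which is what gives the estimate its causal weight.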
The evidence from experiments compounds over time. Each test adds to a library of insights. That library strengthens both decision-making and stakeholder trust. You’re not just reporting correlations anymore—you’re demonstrating causation.
Some 36.2% of marketers say reducing time to insight is a top priority. That recognition is important. The organisations that can move quickly from insight to action, testing hypotheses rapidly and adjusting based on results, will be the ones that capitalise on emerging opportunities. But that requires different workflows, not just better tools. It means embedding experimentation into routine operations rather than treating it as an occasional special project.
The transparency problem
Measurement tools are only as valuable as the trust they inspire. And too often, marketers undermine that trust by presenting results as definitive whilst glossing over the assumptions underneath.
When numbers shift unexpectedly or contradict stakeholders’ intuitions, that opacity invites scepticism. A better approach is to be explicit about data sources, assumptions, and limitations. Share not just what the numbers say but how you arrived at them.
Rose makes this point emphatically: “One of the biggest mistakes is to have multiple different models saying contradictory things when it comes to performance. The most important thing we can do is to provide a single source of truth that brings together the best parts of our different measurement methodologies in one place. That makes it easy to align across teams, build trust, and more quickly and confidently react to changing marketing conditions and consumer behaviour.”
Transparency means showing your working. It helps align expectations and reduces internal scepticism. And here’s the paradox: acknowledging uncertainty can actually strengthen credibility. Stakeholders generally respond better to honest assessment of limitations than to false precision. When results are unfavourable, transparent methodology provides the context for productive discussion rather than defensive arguments.
It also helps to document everything clearly. Accompany your reporting with a methodology brief that summarises data sources, assumptions, and margins of error. This positions marketing as a reliable partner in driving business outcomes. It makes it harder for stakeholders to dismiss results out of hand.
Culture eats strategy
Silver argues that fixing measurement isn’t just a technical problem. It’s a cultural one.
“Innovation in marketing measurement starts with a strong data foundation,” he says. “But the real transformation happens when organisations pair that foundation with cultural change. That means breaking down silos between marketing, analytics, IT, and finance so measurement isn’t stuck in a corner but embedded into every stage of decision-making.”
He’s talking about measurement as a forward-looking decision system rather than a backward-looking scorecard. One that guides marketers on where to invest, what to test, how to grow. Building that kind of system requires shared accountability across functions, openness to experimentation, and a commitment to treat insight as a core input to strategy.
These cultural shifts are hard. They threaten established hierarchies and comfortable routines. But organisations that manage it gain real competitive advantage.
One practical step: establish a recurring “measurement council”, a cross-departmental working group that reviews metrics, shares perspectives, and aligns KPIs to business priorities. This creates a single version of truth whilst reducing internal scepticism through inclusive dialogue. It ensures measurement questions get input from diverse viewpoints rather than staying siloed in the marketing analytics team.
The council approach works because it makes measurement a shared responsibility. When finance, IT, marketing, and business leadership all contribute to defining what success looks like, everyone has skin in the game. And when the numbers come in, there’s collective ownership rather than finger-pointing.
What needs to happen
The plateau in measurement confidence revealed by this research isn’t inevitable. The challenges are real: fragmented data, internal scepticism, budget pressure. But they’re solvable.
What’s needed is investment in interoperable infrastructure that actually connects disparate systems. Adoption of complementary measurement methodologies that balance each other’s weaknesses. Incorporation of experimental frameworks that provide causal evidence rather than just correlation. And cultivation of organisational cultures that value transparency and cross-functional collaboration.
None of this is easy. But the alternative is worse—continuing erosion of trust, shrinking budgets, and diminishing influence. Marketing’s strategic role depends on its ability to demonstrate value. And that demonstration depends on measurement.
The stakes are high. When measurement breaks down, marketing loses its seat at the table. Business decisions get made without marketing input. Resources flow to areas that can demonstrate clear returns. Marketing becomes a cost centre rather than a growth driver.
Organisations that solve the measurement puzzle won’t just defend existing budgets—they’ll earn the resources and trust necessary to seize emerging opportunities. They’ll move faster than competitors because they can identify what works and double down on it. They’ll waste less money on ineffective tactics because they can spot failures quickly.
The solution isn’t any single technology or methodology. It’s a comprehensive rethinking of measurement’s role. That means treating it not as a reporting exercise but as a strategic capability—one that combines rigorous methodology with transparent communication, technical sophistication with cultural change, near-term optimisation with long-term vision.
Some practical recommendations emerge from the research:
Set aside 5-10% of your media budget for structured experiments each quarter. Small, continuous tests compound over time, creating a library of evidence that strengthens decision-making and stakeholder trust.
Use AI to accelerate time-to-insight: things like simulating attribution scenarios or running anomaly detection. But layer in human oversight to pressure-test assumptions. This hybrid model helps leaders trust the outputs instead of questioning them.
Make your methodology explicit. Accompany every report with a brief that summarises data sources, assumptions, and margins of error. This positions marketing as credible and makes it harder for stakeholders to dismiss results.
Establish that cross-departmental measurement council. Meet regularly to review metrics, share perspectives, and align KPIs to business priorities. This creates a single version of truth and reduces scepticism through inclusive dialogue.
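The anomaly-detection idea in the AI recommendation above can start as something very simple. This sketch flags days whose conversion counts sit far from the series average; the daily figures and the threshold are invented, and a production system would use more robust statistics, but the human-in-the-loop pattern is the point: the code flags, an analyst investigates.

```python
import statistics

# Illustrative daily conversion counts, with a sudden dip on the last day.
daily_conversions = [410, 395, 402, 388, 420, 405, 398, 260]

def flag_anomalies(series, z_threshold=2.0):
    """Flag indices whose value deviates from the series mean by more than
    z_threshold standard deviations -- a first-pass screen for a human
    analyst, not an automatic verdict."""
    mean = statistics.mean(series)
    stdev = statistics.stdev(series)
    return [i for i, v in enumerate(series) if abs(v - mean) / stdev > z_threshold]

print(flag_anomalies(daily_conversions))  # flags the final day's dip
```

Even a crude screen like this shifts the team from quarterly reviews towards the always-on posture the research recommends, because the unusual days surface themselves rather than waiting for the next reporting cycle.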
The bigger picture
Marketing’s evolution from art to science hinges on measurement. Without it, marketing remains stuck making intuitive guesses rather than data-informed decisions. With it, marketing becomes a disciplined growth engine.
The current plateau in confidence isn’t the end of that evolution. It’s a necessary reckoning with the challenges. And those challenges, whilst substantial, are surmountable.
The organisations willing to confront these challenges honestly will emerge stronger. They’ll invest in better infrastructure. They’ll embrace methodological pluralism rather than looking for one perfect tool. They’ll foster transparency rather than hiding behind jargon. They’ll build cross-functional alignment rather than operating in silos.
And they’ll recognise something fundamental: in an age of data abundance, the scarcest resource isn’t information. It’s the wisdom to interpret it well. That wisdom comes from combining multiple perspectives, testing rigorously, acknowledging limitations, and being transparent about what you do and don’t know.
The measurement paradox—more data, less confidence—won’t resolve itself. It requires deliberate action. But for organisations that take that action, the payoff is substantial: not just better measurement, but better marketing. And ultimately, better business results.