
CTV Attribution Isn’t Broken. The Industry’s Measurement Model Is.
At nearly every major advertising gathering over the past two years, connected TV has been described the same way: powerful, fast-growing, and frustratingly hard to measure. That framing is understandable. Streaming has captured a huge share of viewing time, ad dollars continue to move into CTV, and marketers still do not have universal agreement on how to assign credit when TV exposure drives a website visit, an app install, a store visit, or a sale. As of March 2025, streaming accounted for 43.8% of total TV usage in the U.S., underscoring just how central the channel has become.
According to Ad Age, the industry’s favorite conclusion—that CTV attribution is inherently “vexing”—is too soft on the real problem. The issue is not that connected TV cannot be measured. The issue is that too many advertisers are still trying to measure a digital, cross-device channel with outdated rules, siloed reporting, and inconsistent standards.
That distinction matters. If CTV were truly unmeasurable, the market would be built mostly on faith. Instead, buyers are pouring more money into the channel precisely because it offers addressability and the possibility of business-outcome measurement. IAB’s 2025 outlook identified CTV as one of the fastest-growing channels and explicitly tied buyer interest to personalization and measurement capabilities. At the same time, IAB’s 2024 digital video study found that measurement issues remain prevalent in CTV and online video, especially around transparency, co-viewing, and viewability, with buyers asking sellers for more first-party data access, better performance solutions, and new standards.
That is not a contradiction. It is the state of a maturing channel. CTV can be measured. It just cannot be measured casually.
The first mistake is confusing platform reporting with attribution. OEMs, streaming apps, DSPs, and retail media environments all generate useful data, but each sees only part of the consumer journey. A platform may know an ad was served. Another may know a household later visited a site. A third may know a transaction occurred. None of those views, on its own, is attribution. Attribution is the disciplined process of connecting exposure to outcome under disclosed assumptions and known limitations. The Media Rating Council’s outcomes and data quality standards put transparency at the center of that task, requiring disclosure of methodology, reliability, and limitations rather than treating black-box outputs as settled truth.
The second mistake is pretending consumer behavior is linear. CTV rarely behaves like paid search, where a click and a conversion may happen within the same session. TV is often an initiating or accelerating touchpoint. A household sees an ad on the big screen, then later searches on a phone, clicks an email, types in a URL on a laptop, or walks into a store. Even a basic OTT attribution explainer makes the core point clearly: viewers move across screens, and multi-touch approaches are designed to capture the cumulative effect of those exposures better than simplistic first-touch or last-touch models. The same explainers also highlight a major limitation of streaming-era data: much of it is household-level, not person-level, which means attribution can be directionally strong while still carrying identity ambiguity inside the home.
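To make the difference concrete, consider a toy sketch of how last-touch and linear multi-touch models would credit the same journey. The touchpoint names and the journey itself are illustrative assumptions, not any vendor’s actual model:

```python
from collections import defaultdict

def last_touch(path):
    """Assign all conversion credit to the final touchpoint."""
    return {path[-1]: 1.0}

def linear_multi_touch(path):
    """Split conversion credit evenly across every touchpoint in the path."""
    share = 1.0 / len(path)
    credit = defaultdict(float)
    for touch in path:
        credit[touch] += share
    return dict(credit)

# A hypothetical household journey: CTV ad first, search and email later.
journey = ["ctv", "branded_search", "email"]

print(last_touch(journey))          # {'email': 1.0} -- CTV gets zero credit
print(linear_multi_touch(journey))  # each touchpoint gets roughly a third
```

Under last-touch, the CTV exposure that may have initiated the journey receives no credit at all, which is exactly the distortion multi-touch approaches try to correct.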
That is why the real conversation should be about measurement architecture, not whether attribution is possible at all.
A credible CTV measurement stack starts with exposure and identity resolution. In practice, that often means some combination of IP matching, household graphs, site pixels, conversion APIs, and view-through logic. Vendors have built products around this because the market demanded an answer to the obvious question: what happens after the TV ad is shown?
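In its simplest form, that exposure-to-outcome logic is a join between an ad server’s exposure log and a pixel or conversion-API log, keyed on a household identifier (often an IP) and bounded by a view-through window. The sketch below is a deliberately simplified illustration under those assumptions; the log shapes, IPs, and seven-day window are hypothetical, not any platform’s actual schema:

```python
from datetime import datetime, timedelta

# Toy logs; in practice these come from ad servers and site pixels / conversion APIs.
exposures = [
    {"household_ip": "203.0.113.7", "ts": datetime(2025, 3, 1, 20, 0)},
    {"household_ip": "198.51.100.4", "ts": datetime(2025, 3, 1, 21, 15)},
]
conversions = [
    {"household_ip": "203.0.113.7", "ts": datetime(2025, 3, 3, 9, 30)},
    {"household_ip": "192.0.2.99",  "ts": datetime(2025, 3, 2, 12, 0)},
]

def view_through_matches(exposures, conversions, window=timedelta(days=7)):
    """Match each conversion to a prior CTV exposure from the same household
    IP that occurred within the view-through window."""
    matched = []
    for conv in conversions:
        for exp in exposures:
            if (exp["household_ip"] == conv["household_ip"]
                    and timedelta(0) <= conv["ts"] - exp["ts"] <= window):
                matched.append((exp, conv))
                break  # credit at most one exposure per conversion in this sketch
    return matched

print(len(view_through_matches(exposures, conversions)))  # 1 matched household
```

Everything that makes real identity resolution hard lives outside this sketch: shared and dynamic IPs, household graphs, co-viewing, and privacy constraints all erode the cleanliness of that join.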
That kind of approach is useful, but it should not be mistaken for the whole answer. IP and household matching can be powerful, especially for DTC and lead-gen marketers, yet it remains only one layer of truth. Shared devices, dynamic IP behavior, privacy changes, and household-level ambiguity all mean deterministic-looking attribution still needs validation. The MRC’s emphasis on data quality and disclosed limitations is important here because it forces the industry to separate operational measurement from causal proof.
This is where multi-touch systems such as Northbeam and Rockerbox have gained traction. Their value is not that they magically “solve” CTV, but that they place CTV inside a broader measurement framework alongside paid social, search, email, affiliate, and other channels. Rockerbox positions itself as a unified measurement platform that combines attribution, MMM, and incrementality testing, while Northbeam markets a stack built around multi-touch attribution, MMM, and incrementality as well. Both are effectively responding to the same market reality: view-based and TV channels matter, but they cannot be evaluated in isolation or by last click.
That aligns with where the standards conversation is headed. In 2024, IAB Tech Lab released the VAST CTV Addendum to support features intended to improve video ad delivery and measurement, particularly in technically messy environments such as server-side ad insertion. In 2026, IAB launched Project Eidos, which explicitly includes cross-channel outcomes, attribution, and incrementality as one of its core workstreams. The message from the standards side is not that CTV is unknowable. It is that the ecosystem still needs common language, cleaner data flows, and interoperability.
That brings the industry to the most important distinction in modern TV measurement: attribution is not the same thing as incrementality.
Attribution estimates who should get credit. Incrementality asks the harder question: would the business outcome have happened anyway?
For marketers serious about CTV performance, incrementality is the gold standard. It is the closest thing advertising has to a controlled experiment. Geo holdouts, audience holdouts, scale tests, and matched-market designs can isolate the lift generated by TV versus what would have happened in the absence of exposure. Rockerbox now explicitly supports geo holdouts and scale-up designs, and Northbeam has rolled out automated incrementality testing to sit alongside attribution and MMM.
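The arithmetic at the heart of a matched-market readout is simple: compare conversion growth in exposed markets against growth in holdout markets that saw no CTV spend. The numbers below are invented for illustration, and real designs add market matching, significance testing, and longer baselines:

```python
def incremental_lift(test_conversions, test_baseline,
                     holdout_conversions, holdout_baseline):
    """Estimate lift by comparing conversion growth in exposed (test) markets
    against matched holdout markets with no CTV spend."""
    test_growth = test_conversions / test_baseline
    holdout_growth = holdout_conversions / holdout_baseline
    return test_growth / holdout_growth - 1.0

# Hypothetical readout: test markets grew 20% over baseline, holdouts grew 5%.
lift = incremental_lift(1200, 1000, 1050, 1000)
print(f"{lift:.1%}")  # 14.3%
```

The point of the holdout term is exactly the question incrementality asks: if demand grew 5% everywhere anyway, only the growth above that baseline can be credited to the TV spend.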
This is also where many internal marketing teams have the right instinct when they talk about monitoring DMA-level orders. DMA analysis is useful. It can show where demand moves after a campaign launches, where creative is resonating, and whether exposed markets are outperforming controls. But DMA monitoring, by itself, is not the same as causal proof. Circana notes that DMA-level modeling is broader and can reduce forecasting accuracy relative to more granular data. That does not make DMA analysis worthless; it makes it an operating layer rather than the final court of truth. Used well, DMA readouts are excellent for directional diagnosis and regional optimization. Used alone, they can overstate certainty.
A more mature hierarchy is emerging. At the bottom is platform reporting, which is fast and useful but partial. Above that is touchpoint monitoring through systems like Northbeam or Rockerbox, which helps marketers understand pathing, assisted conversions, and blended contribution. Above that is deterministic household or visit matching, which sharpens exposure-to-outcome visibility for TV. At the top sits incrementality, which determines whether the channel is truly creating lift. That stack is not theoretical anymore. It is increasingly how sophisticated teams evaluate CTV in practice.
The broader market is already pointing in this direction. IAB’s Measurement Center is now focused on cross-channel measurement, attribution, incrementality, and MMM, and recent IAB commentary has warned that widely adopted solutions still miss on rigor, speed, and efficiency, with CTV often underrepresented in MMM frameworks. That is not an argument against CTV. It is an argument against lazy measurement.
So yes, the CTV ecosystem is still in transition. The pipes are still being standardized. The identities are still imperfect. The seller landscape is still fragmented. But the leap from “not yet perfect” to “too vexing to measure” is where the industry loses the plot.
CTV attribution is not broken. The old way of measuring media is.
The advertisers winning in connected TV are not waiting for a mythical universal dashboard. They are building layered measurement systems: exposure matching, touchpoint visibility, incrementality testing, and regional monitoring tied to actual business outcomes. That is what turns CTV from an awareness line item into an accountable growth channel.
And that is the real story of TV right now. Not that attribution is impossible. That the bar for doing it correctly has finally caught up with the medium.