What MMM Actually Measures
Marketing Mix Modelling (MMM) is a statistical technique that uses historical time-series data to decompose observed business outcomes (revenue, sales volume, units sold) into contributions from marketing activities and non-marketing baseline factors. It answers: of our total revenue each week, how much is attributable to each marketing channel, how much to price and promotions, how much to macroeconomic conditions, and how much to the organic baseline?
Unlike attribution modelling — which tracks individual user journeys — MMM operates at the aggregate level. It does not know what any individual customer did; it observes that when marketing spending patterns changed, business outcomes changed in corresponding ways, and it builds a regression model that captures those relationships.
This aggregate approach is both MMM's strength and its limitation. Its strength: it can include TV, radio, outdoor, and other offline channels that digital attribution cannot track; it captures long-run effects through adstock modelling; it is privacy-safe and cookie-independent. Its limitation: it requires significant historical data, cannot detect what caused a single conversion, and produces estimates with confidence intervals — not the false precision of single-number ROAS calculations.
Data Requirements
MMM requires a minimum of 2–3 years of weekly data to produce reliable models. Models built on shorter histories produce unstable coefficient estimates because there is insufficient variation in the independent variables to separate their effects. The minimum data set for a basic MMM:
| Data Type | Granularity | Notes |
|---|---|---|
| Dependent variable (revenue/sales) | Weekly | Must be net of returns; consistent currency and accounting treatment throughout |
| Marketing spend by channel | Weekly | Gross spend; all channels including those with small budgets |
| Impressions or GRPs (for media) | Weekly | Better than spend alone for channels with variable CPMs |
| Price index | Weekly | Average selling price or price relative to competitors |
| Distribution / availability | Weekly | Store count, availability index, or similar |
| Seasonality indicators | Weekly | Holiday flags, weather indices for weather-sensitive categories |
| Competitor spend (estimated) | Weekly | Nielsen Ad Intel, WARC, or Kantar media spend data |
| Macroeconomic indicators | Weekly/Monthly | Consumer confidence, CPI, unemployment — relevant for the category |
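The checks implied by this table can be sketched as a small validation pass before modelling. The column names, the `_spend` suffix convention, and the 104-week threshold below are illustrative assumptions, not a standard schema:

```python
import pandas as pd

# Hypothetical column names -- adjust to your own data warehouse schema.
REQUIRED_COLUMNS = (
    "week_start", "revenue_net", "tv_spend", "digital_spend",
    "price_index", "distribution_index",
)

def validate_mmm_input(df: pd.DataFrame, min_weeks: int = 104) -> list[str]:
    """Basic sanity checks before modelling: roughly two years of
    contiguous weekly rows, no missing columns, no negative spends."""
    problems = []
    missing = set(REQUIRED_COLUMNS) - set(df.columns)
    if missing:
        problems.append(f"missing columns: {sorted(missing)}")
        return problems
    if len(df) < min_weeks:
        problems.append(f"only {len(df)} weeks; need at least {min_weeks}")
    weeks = pd.to_datetime(df["week_start"]).sort_values()
    if not (weeks.diff().dropna() == pd.Timedelta(weeks=1)).all():
        problems.append("gaps or duplicates in the weekly index")
    spend_cols = [c for c in df.columns if c.endswith("_spend")]
    if (df[spend_cols] < 0).any().any():
        problems.append("negative spend values found")
    return problems
```

A pass like this catches the date-gap and channel-definition problems described below before they silently distort coefficients.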
Data quality is the primary determinant of MMM quality. Spend data with inconsistent channel definitions (sometimes digital includes agency fees, sometimes it does not), revenue data that excludes certain product lines, or date-shifted data (spend recorded at invoice date rather than activity date) are common problems that produce systematically wrong models. A significant portion of any MMM project is data cleaning and harmonisation — do not underestimate it.
Model Specification
The foundational MMM equation is a multiplicative (log-log) regression: log(Sales) = α + β₁·log(TV_GRPs) + β₂·log(Digital_Spend) + β₃·Price + β₄·Seasonality + ε. Logging the media variables captures the diminishing returns relationship (each additional pound of spend drives less incremental revenue) more accurately than a linear specification, and lets the media coefficients be read directly as elasticities.
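A toy version of this specification can be fitted with ordinary least squares on synthetic data. The variable names and the ground-truth elasticities below are invented for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 156  # three years of weekly observations

tv_grps = rng.uniform(50, 500, n)
digital_spend = rng.uniform(1_000, 20_000, n)
price = rng.uniform(8, 12, n)

# Assumed ground-truth elasticities used to simulate sales
# (unknown in practice -- recovering them is the point of the model).
log_sales = (2.0 + 0.15 * np.log(tv_grps) + 0.08 * np.log(digital_spend)
             - 0.05 * price + rng.normal(0, 0.02, n))

# Design matrix: intercept, log(TV), log(digital), price.
X = np.column_stack([np.ones(n), np.log(tv_grps), np.log(digital_spend), price])
beta, *_ = np.linalg.lstsq(X, log_sales, rcond=None)

# In a log-log model the media coefficients are elasticities: a 1%
# increase in TV GRPs lifts sales by roughly beta[1] percent.
print(np.round(beta, 3))
```

With real data the fit is far messier (collinearity, sparse variation), which is exactly why the extensions described below exist.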
Modern MMM typically extends this base with: adstock transformations (capturing carryover effects of advertising); saturation functions (imposing diminishing returns at high spend levels); hierarchical Bayesian estimation (incorporating prior beliefs about parameter ranges to stabilise estimates where historical data is sparse); and structural time series components (separating trend, seasonality, and cyclical patterns from marketing effects).
The Bayesian approach to MMM — now the dominant methodology, exemplified by Google's Meridian (Meta's Robyn, by contrast, uses regularised regression) — imposes prior distributions on parameters based on documented empirical evidence about advertising response. This addresses the collinearity problem (when channels are always spent together, their individual effects cannot be estimated reliably from the data alone) by incorporating external evidence about likely effect sizes.
Adstock and Saturation
Adstock is the mathematical representation of advertising carryover — the documented phenomenon that advertising effects persist beyond the period of exposure. A TV campaign running in January continues to affect purchasing behaviour in February and March, because the brand awareness, recall, and purchase consideration built during the campaign decay gradually rather than vanishing when the flight ends.
The adstock rate (decay parameter, typically expressed as the percentage of effect retained each week) varies by channel and category. Documented adstock patterns: TV advertising typically has carryover rates of 60–80% per week (effects persist for several months); digital channels have carryover rates of 30–60% (shorter persistence, faster decay); promotions and price activity typically have near-zero adstock (effects are immediate and do not persist).
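The weekly carryover described above is usually implemented as a geometric adstock transform. A minimal sketch, using a hypothetical 70% weekly retention rate (within the TV range quoted above):

```python
import numpy as np

def geometric_adstock(spend: np.ndarray, retention: float) -> np.ndarray:
    """Carryover transform: this week's adstock = this week's spend
    plus `retention` times last week's adstock (0 <= retention < 1)."""
    out = np.empty_like(spend, dtype=float)
    carry = 0.0
    for t, x in enumerate(spend):
        carry = x + retention * carry
        out[t] = carry
    return out

# A single burst of 100 GRPs in week 0 at 70% weekly retention decays
# geometrically: weeks 0-4 carry 100, 70, 49, 34.3, 24.01.
burst = np.array([100.0, 0, 0, 0, 0])
print(geometric_adstock(burst, 0.70))
```

The retention parameter is normally estimated from data (or constrained by priors), not fixed by hand as here.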
Saturation captures diminishing returns — the relationship between spend level and response. At low spend, each additional pound drives significant incremental sales. At high spend, the marginal response declines. The Hill transformation is the most common mathematical form for saturation: Response = spend^α / (spend^α + K^α), where α controls curve steepness and K sets the inflection point. The fitted values of these parameters tell you where on the response curve current spending sits — and therefore whether increasing or decreasing spend will produce better marginal returns.
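A sketch of the Hill transformation with assumed parameter values (K = £50k half-saturation point, α = 1.5), showing how the marginal response shrinks as spend doubles:

```python
import numpy as np

def hill_saturation(spend, K, alpha):
    """Hill transform: response = spend^alpha / (spend^alpha + K^alpha).
    K is the half-saturation point (response = 0.5 at spend == K);
    alpha controls how sharp the bend in the curve is."""
    spend = np.asarray(spend, dtype=float)
    return spend**alpha / (spend**alpha + K**alpha)

# Doubling spend from 25k to 50k adds far more response than doubling
# again from 100k to 200k -- diminishing returns in action.
for s in (25_000, 50_000, 100_000, 200_000):
    print(s, round(float(hill_saturation(s, K=50_000, alpha=1.5)), 3))
```

In a fitted model the transformed spend, not raw spend, enters the regression, and K and α are estimated per channel.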
Diminishing Returns Response Curves
Response curves are the most actionable output of an MMM — they show the relationship between spend level and revenue response for each channel, enabling direct budget optimisation. A channel operating on the flat part of its response curve (high spend, minimal incremental response) should have budget reduced; a channel operating on the steep part (low spend, strong incremental response per pound) should have budget increased.
Reading response curves requires understanding three regions: the steep increasing region (low spend, high marginal ROI — the brand is under-investing in this channel); the diminishing returns region (moderate spend, declining but still positive marginal ROI — the typical operating range); and the saturation plateau (high spend, near-zero marginal response — the brand is over-investing).
The budget optimiser calculation: given the response curves for all channels and a total budget constraint, the mathematically optimal allocation equates the marginal return of the last pound spent across all channels. If moving £1 from TV to digital increases total revenue, that reallocation improves efficiency until the marginal returns equalise. This is the core budget optimisation calculation that MMM enables.
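One way to see the marginal-return-equalisation logic is a greedy allocator over assumed response curves: give each increment of budget to whichever channel's next pound earns the most. The curve parameters below are hypothetical, not fitted values:

```python
def hill_response(spend, K, alpha, scale):
    """Revenue response for one channel: a scaled Hill curve (assumed shape)."""
    return scale * spend**alpha / (spend**alpha + K**alpha)

# Hypothetical fitted curve parameters for two channels.
channels = {
    "tv":      dict(K=400_000.0, alpha=1.2, scale=2_000_000.0),
    "digital": dict(K=150_000.0, alpha=1.2, scale=1_200_000.0),
}

def optimise(total_budget, step=1_000.0):
    """Greedy allocation: at each step, fund the channel whose next
    `step` pounds adds the most revenue. Near the optimum, marginal
    returns across channels converge."""
    alloc = {name: 0.0 for name in channels}
    for _ in range(int(total_budget / step)):
        gains = {
            name: hill_response(alloc[name] + step, **p)
                  - hill_response(alloc[name], **p)
            for name, p in channels.items()
        }
        alloc[max(gains, key=gains.get)] += step
    return alloc

alloc = optimise(500_000.0)
print({k: round(v) for k, v in alloc.items()})
```

Production optimisers (Robyn's allocator, Meridian's) use proper numerical optimisation and propagate parameter uncertainty, but the equal-marginal-return principle is the same.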
Revenue Decomposition
Revenue decomposition is one of the most useful MMM outputs — it breaks total revenue into its component contributions: baseline (what the business would earn without any marketing activity), each marketing channel's contribution, price effects, distribution effects, and seasonality. This answers the fundamental question: of our £10M in quarterly revenue, how much can we actually attribute to marketing versus what we would have earned anyway?
The baseline is often startlingly large for established brands — typically 50–70% of revenue is baseline in mature categories. This reflects the accumulated brand equity, distribution strength, and loyal customer base that generates sales regardless of current marketing activity. The remaining 30–50% is the total marketing contribution. Decomposing that 30–50% across channels, promotions, and other factors reveals the true marketing efficiency picture.
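A counterfactual decomposition can be sketched for a toy multiplicative model: switch each driver to its "off" reference level and measure the revenue that disappears. The coefficients, reference levels, and the rescaling step below are illustrative assumptions (the rescaling is one common reporting convention, not the only one):

```python
import math

# Hypothetical fitted multiplicative model: sales = e^a * tv^b1 * dig^b2.
a, b1, b2 = 10.0, 0.12, 0.08

def predict(tv, dig):
    """Predicted weekly sales; media floored at 1.0 as the 'off' reference."""
    return math.exp(a) * max(tv, 1.0)**b1 * max(dig, 1.0)**b2

tv, dig = 300.0, 10_000.0           # this week's adstocked GRPs / spend
actual = predict(tv, dig)
baseline = predict(1.0, 1.0)        # every channel at its reference level

# Counterfactual deltas: revenue lost if one channel is switched off.
raw = {"tv": actual - predict(1.0, dig),
       "digital": actual - predict(tv, 1.0)}

# Deltas don't sum exactly in a multiplicative model, so rescale them
# to fill the actual-minus-baseline gap before reporting.
scale = (actual - baseline) / sum(raw.values())
contrib = {k: v * scale for k, v in raw.items()}

print(round(baseline), {k: round(v) for k, v in contrib.items()})
```

After rescaling, baseline plus channel contributions reproduces actual revenue exactly, which is what makes the waterfall chart add up.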
A common error: interpreting the baseline as "free money" that does not require investment to maintain. The baseline is largely the result of past marketing investment building brand equity. Cutting all marketing does not preserve the baseline — it erodes it. Ehrenberg-Bass Institute's documented research on brand equity decay shows that brands that stop advertising lose baseline sales gradually over 12–24 months.
Budget Optimiser
The MMM budget optimiser takes the fitted response curves for each channel and calculates the allocation that maximises total revenue given a budget constraint. The optimisation can be run at: current total budget (find the best allocation of what we have); budget expansion (what additional revenue does an extra £1M generate and where should it go?); budget reduction (if we must cut 20%, where do we cut with minimum revenue impact?); and unconstrained (what is the budget level that maximises ROI?).
Optimiser outputs should always include confidence intervals — the model's estimates of parameter uncertainty mean the optimal allocation is itself uncertain. A robust optimiser presents the expected revenue for a range of allocations around the optimal point, showing whether the model is confident in the recommendation or whether results are relatively flat across a range of plausible allocations.
Model Validation and Calibration
MMM validation has two components: in-sample fit (does the model reproduce historical sales patterns?) and external calibration (do the model's channel contribution estimates align with independent measurements?). In-sample fit is necessary but not sufficient — a model can fit the historical data well while producing unreliable forward-looking channel estimates.
External calibration against incrementality test results is the gold standard for MMM validation. If your Google Ads holdout test shows incremental ROAS of 2.5× and your MMM attributes an implied 5× ROAS to Google Ads, the model needs re-specification. Meridian's Bayesian framework explicitly supports calibration constraints — you can input known incrementality estimates as prior distributions that constrain the channel coefficients to plausible ranges.
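The underlying logic of calibration can be illustrated with a simple precision-weighted (normal-normal conjugate) update — a sketch of the idea, not Meridian's actual interface; all numbers are invented:

```python
# Combine an MMM-implied ROAS estimate with an independent holdout-test
# estimate, weighting each by its precision (1 / variance).
mmm_mean, mmm_se = 5.0, 1.5      # MMM-implied ROAS and its standard error
test_mean, test_se = 2.5, 0.4    # geo-holdout test result (more precise)

w_mmm, w_test = 1 / mmm_se**2, 1 / test_se**2
post_mean = (w_mmm * mmm_mean + w_test * test_mean) / (w_mmm + w_test)
post_se = (w_mmm + w_test) ** -0.5

# The posterior is pulled strongly towards the tighter test estimate.
print(round(post_mean, 2), round(post_se, 2))
```

Because the test's standard error is much smaller, the calibrated estimate lands near 2.5×, not 5× — the direction of correction the paragraph above describes.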
MMM Limitations and Common Errors
MMM limitations practitioners must understand: it cannot separate the effects of channels that are always moved together (if TV and digital always increase together during peak periods, their individual contributions cannot be reliably estimated); it produces aggregate estimates that mask geographic and audience-level variation; it cannot easily capture the synergistic effects between channels; and it is a lagging instrument — models refreshed only once or twice a year miss intra-year tactical changes.
Common errors: treating MMM output as precision estimates rather than ranges; ignoring the model's confidence intervals in budget decisions; using MMM models built on pre-2020 data without recalibration for the significant structural changes in consumer behaviour during and after the COVID period; and presenting MMM results to stakeholders as definitive answers rather than probabilistic estimates that inform decisions.
Meridian and Robyn: Open-Source MMM Tools
Google's Meridian and Meta's Robyn are the two primary open-source MMM frameworks. Meridian ships as a Python library and Robyn as an R library, enabling organisations to build and maintain their own MMM without vendor dependency.
Meridian (Google, 2024) uses Bayesian hierarchical modelling with built-in support for geo-level modelling, calibration from incrementality tests, and response curve visualisation. Its key advantage is explicit incrementality test integration — it allows Bayesian updating of channel contribution priors based on holdout test results, directly addressing the calibration gap in traditional MMM.
Robyn (Meta, 2021) uses ridge regression with evolutionary algorithm budget optimisation. It includes automated model selection and is widely deployed across the industry due to its maturity and active development community. Robyn's strength is its budget optimiser and the Robyn Analyst API for automated scenario analysis.
Both tools require statistical and Python/R competency to implement and maintain. The models themselves are commodities; the skill is in data preparation, specification decisions, validation, and translating model outputs into business decisions.
Sources & References
All frameworks, models, and data in this guide draw from peer-reviewed research, official documentation, and documented practitioner case studies.
Google Meridian: Google's official open-source Bayesian Marketing Mix Modelling framework.
Meta Robyn: Meta's official open-source MMM framework with budget optimisation.
Binet and Field: documented research on marketing effectiveness — foundational for MMM interpretation.
WARC: documented practitioner guide on MMM specification and interpretation.