What Brand Safety Is
Brand safety in advertising means ensuring that a brand's ads do not appear in content environments that could damage the brand's reputation, embarrass the organisation, or associate the brand with content it finds objectionable. In programmatic advertising, where automated buying can place ads on thousands of different sites and pages without manual review, brand safety is an active, technology-enabled discipline rather than a property of any individual placement.
Brand safety incidents have generated significant press coverage over the years: major brands appearing alongside extremist content on YouTube (The Times investigation, 2017); ads appearing next to fake news content; ads served on sites with bot traffic generating impression counts with no human viewers. These incidents prompted the industry to develop formal standards and measurement frameworks.
The Real-World Risks
Brand safety risks in programmatic fall into several categories, each with different sources and different controls:
| Risk Type | Examples | Primary Control |
|---|---|---|
| Inappropriate content adjacency | Ad appearing next to extremist content, graphic violence, adult content, hate speech | GARM category blocking + verification vendor pre-bid filtering |
| Contextual mismatch | Funeral services ad appearing next to a tragic news story; alcohol ad appearing in children's content | Keyword exclusions + content category exclusions |
| Made-for-advertising (MFA) sites | Low-quality sites created solely to generate ad impressions — no real editorial content | Site allowlists; IAS/DoubleVerify MFA filtering |
| Invalid traffic (IVT) | Bot traffic generating fake impressions with no human viewers | Verification vendor IVT filtering; ads.txt checking |
| Political/controversial content | Ads adjacent to divisive political content that could associate the brand with a political position | Political content exclusion categories |
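The ads.txt check listed against invalid traffic above is mechanically simple: fetch the publisher domain's `/ads.txt` file and confirm the selling account appears in it. Below is a minimal sketch of parsing ads.txt records and checking one authorization; the sample file contents and domain names are illustrative, not from a real publisher.

```python
# Minimal sketch: parse ads.txt text and check whether a given
# (ad system, seller account) pair is an authorized seller.

def parse_ads_txt(text):
    """Return a set of (ad_system, account_id, relationship) records."""
    records = set()
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()        # strip comments
        if not line or "=" in line.split(",")[0]:
            continue  # skip blanks and variable lines like contact=...
        fields = [f.strip() for f in line.split(",")]
        if len(fields) >= 3:
            records.add((fields[0].lower(), fields[1], fields[2].upper()))
    return records

# Illustrative ads.txt contents, not a real publisher file.
SAMPLE = """
# ads.txt for example-publisher.com
greenadexchange.com, 12345, DIRECT, d75815a79
silverssp.com, 9675, RESELLER
contact=adops@example-publisher.com
"""

records = parse_ads_txt(SAMPLE)
authorized = ("greenadexchange.com", "12345", "DIRECT") in records
print(authorized)  # True: the exchange is a listed direct seller
```

In practice this lookup runs against the live file at `https://<domain>/ads.txt`; a seller absent from the file is a signal of unauthorized or spoofed inventory.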
GARM Brand Safety Standards
The Global Alliance for Responsible Media (GARM) — a cross-industry initiative under the World Federation of Advertisers — published the Brand Safety Floor and Suitability Framework, which became the industry-standard classification system for brand safety risk levels.
The GARM framework defines two levels of risk:
Brand Safety Floor: Content categories that all advertisers should avoid — no legitimate advertiser should ever want their ad adjacent to this content. Includes: child sexual abuse material; dangerous/extremist content promoting violence against people or organisations; terrorism; and hate speech. DSPs and verification vendors universally block these categories.
Brand Suitability: A spectrum of content risk that is appropriate for some brands and not others, depending on the brand's values and audience. GARM defines a set of risk categories (debated social issues, arms/firearms, crime, death/injury, online piracy, hate speech, offensive language, drugs, gambling, adult/explicit sexual content), each with three risk tiers (high, medium, and low risk within the category). Brands configure their suitability settings based on their specific sensitivities.
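A suitability configuration of this kind can be sketched as a mapping from category to the highest risk tier the brand will tolerate. The category labels and tier names below are simplified stand-ins, not the official GARM taxonomy strings, and the sample policy is hypothetical.

```python
# Illustrative sketch of a GARM-style brand suitability policy:
# for each category, the highest risk tier this brand will accept.

TIER_ORDER = {"low": 1, "medium": 2, "high": 3}

# None means the category is blocked at every risk level.
brand_policy = {
    "debated_social_issues": "low",
    "arms_firearms": None,          # blocked entirely for this brand
    "crime": "low",
    "offensive_language": "medium",
    "gambling": None,
}

def is_suitable(category, tier, policy=brand_policy):
    """Return True if content at (category, tier) is acceptable."""
    if category not in policy:
        return True                 # category not restricted by this brand
    max_tier = policy[category]
    if max_tier is None:
        return False
    return TIER_ORDER[tier] <= TIER_ORDER[max_tier]

print(is_suitable("crime", "low"))      # True
print(is_suitable("crime", "medium"))   # False
print(is_suitable("gambling", "low"))   # False
```

The point of the tiered structure is that two brands can share the same taxonomy while accepting very different slices of it.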
Blocklists and Allowlists
A blocklist (also called an exclusion list) is a list of specific domains, apps, or content categories that the advertiser has excluded from their programmatic buying. Blocklists prevent ads from appearing on explicitly identified problematic sites — whether because the site has brand safety issues, low viewability, high fraud rates, or any other reason the advertiser has identified.
An allowlist (inclusion list) is the opposite — a list of specific domains or apps where ads are permitted to appear. An allowlist approach provides maximum brand safety (ads only appear on pre-approved, explicitly vetted sites) at the cost of significantly reduced scale (no sites outside the list are reached). Allowlists are most appropriate for campaigns with strict brand safety requirements or limited budgets where quality matters more than reach.
Practical management: most sophisticated programmatic buyers maintain a rolling blocklist updated based on: monthly verification vendor reports identifying poorly-performing or brand-unsafe sites; category exclusions for content types inconsistent with the brand; and minimum viewability thresholds (excluding sites where historical viewability is below 50%, for example). Building and maintaining a blocklist is ongoing work — new sites continuously enter the programmatic marketplace and must be evaluated.
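The scale trade-off between the two list types can be sketched in a few lines. The domain names below are hypothetical placeholders.

```python
# Sketch of applying a blocklist vs an allowlist to candidate domains
# before bidding. All domain names are hypothetical.

blocklist = {"mfa-clickfarm.example", "low-viewability-news.example"}
allowlist = {"quality-news.example", "trusted-magazine.example"}

def passes_blocklist(domain):
    return domain not in blocklist

def passes_allowlist(domain):
    return domain in allowlist

candidates = ["quality-news.example", "mfa-clickfarm.example",
              "random-site.example"]

# Blocklist mode: everything except known-bad sites is biddable.
print([d for d in candidates if passes_blocklist(d)])
# Allowlist mode: only pre-vetted sites are biddable.
print([d for d in candidates if passes_allowlist(d)])
```

Note the asymmetry: the unknown `random-site.example` passes the blocklist but fails the allowlist, which is exactly the safety-versus-scale trade described above.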
Keyword Exclusions
Keyword exclusions prevent ads from appearing on pages that contain specific words or phrases — regardless of whether the page's overall content category is blocked. A page about crime reporting that contains the word "murder" might be appropriately classified as "news" by the content taxonomy, but a brand that wants to avoid any association with violent content would exclude the keyword "murder" to ensure no ad appears on any page containing it.
Keyword exclusion lists require careful calibration. Over-aggressive keyword exclusions significantly reduce scale: blocking every page that contains the word "violence" will exclude large portions of legitimate news content, sports reporting (boxing, MMA), and cultural commentary. The goal is to exclude clearly inappropriate contexts without blocking broad swathes of legitimate, brand-appropriate inventory. Verification vendors provide contextual keyword exclusion services that assess keyword context rather than simple keyword presence — reducing the scale impact of exclusions.
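A naive presence check makes the over-blocking problem concrete. This sketch is deliberately simplistic: real verification vendors classify keyword context with NLP rather than matching substrings, and the example pages are invented.

```python
# Sketch of a naive keyword-presence exclusion check, illustrating
# why simple matching over-blocks relative to contextual classification.
import re

excluded_keywords = {"murder", "terror"}

def page_blocked(page_text, keywords=excluded_keywords):
    """Block the page if any excluded keyword appears anywhere in it."""
    words = set(re.findall(r"[a-z]+", page_text.lower()))
    return not words.isdisjoint(keywords)

news_story = "Police charge suspect with murder after downtown incident"
sports_story = "A thrilling comeback in the boxing title fight"

print(page_blocked(news_story))    # True: routine crime reporting blocked
print(page_blocked(sports_story))  # False
```

The news story here is factual journalism a brand might well accept, yet the presence check blocks it; contextual keyword services exist precisely to recover that inventory.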
Verification Vendors: IAS and DoubleVerify
Independent verification vendors provide a third-party measurement layer on top of programmatic campaigns, measuring viewability (was the ad actually visible to a human?), brand safety (did the ad appear in brand-appropriate content?), and invalid traffic (was the impression generated by a bot rather than a real person?). The two market leaders are Integral Ad Science (IAS) and DoubleVerify (DV).
Verification vendors integrate with DSPs in two ways: pre-bid filtering, where the verification vendor flags inventory as unsafe or low-viewability before the bid is submitted, preventing the impression from being bought; and post-bid measurement, where every served impression is independently audited and brand safety and viewability data is reported after delivery.
Pre-bid filtering is more efficient — it prevents wasteful spend on inventory that does not meet standards — but requires the DSP to have an active integration with the verification vendor's data. Post-bid measurement provides the most granular reporting and serves as a check on DSP-reported numbers (which may differ from independent measurement).
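The pre-bid decision described above amounts to gating the bid on the vendor's flags for that inventory. In this sketch, `vendor_flags` stands in for an IAS or DoubleVerify pre-bid segment lookup (the real integrations and thresholds differ), and the domains are hypothetical.

```python
# Hedged sketch of a pre-bid gate combining verification vendor flags.
# `vendor_flags` is a stand-in for a real pre-bid segment lookup.

def should_bid(bid_request, vendor_flags):
    """Decline to bid when pre-bid signals mark the inventory unsafe."""
    flags = vendor_flags.get(bid_request["domain"], {})
    if flags.get("brand_unsafe", False):
        return False
    if flags.get("ivt_risk", 0.0) > 0.05:            # >5% IVT risk (illustrative cutoff)
        return False
    if flags.get("predicted_viewability", 1.0) < 0.5:  # illustrative floor
        return False
    return True

vendor_flags = {
    "fine-site.example":  {"ivt_risk": 0.01, "predicted_viewability": 0.7},
    "botty-site.example": {"ivt_risk": 0.40, "predicted_viewability": 0.6},
}

print(should_bid({"domain": "fine-site.example"}, vendor_flags))   # True
print(should_bid({"domain": "botty-site.example"}, vendor_flags))  # False
```

Because the check runs before the bid is submitted, the unsafe impression is never bought at all, which is the efficiency argument for pre-bid over post-bid.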
Contextual Targeting as Brand Safety
One of the most effective brand safety strategies is not just blocking problematic content but actively targeting only content that is brand-appropriate. This is the brand suitability frame: instead of defining brand safety by what to avoid, define it by what to seek. A children's educational brand using contextual targeting restricted to educational, family, and entertainment content categories will, by definition, not appear on inappropriate content — because it only buys inventory in appropriate categories.
This approach — restrictive contextual targeting rather than broad contextual with extensive exclusions — provides better brand safety with less operational overhead. The trade-off is scale: restricting to a narrow set of content categories reduces the addressable inventory universe. For campaigns where brand safety is the primary concern and scale is secondary, this approach often produces better outcomes than broad buying with extensive exclusions.
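Inclusion-based targeting reduces to a membership test against a small approved category set, with no exclusion logic at all. The category labels below are illustrative, not a specific vendor taxonomy.

```python
# Sketch of inclusion-based contextual targeting: bid only when the
# page's content category is in a small approved set.

allowed_categories = {"education", "family", "entertainment"}

def bid_eligible(page_category):
    return page_category in allowed_categories

pages = ["education", "news_crime", "entertainment", "gambling"]
print([c for c in pages if bid_eligible(c)])
# Only 'education' and 'entertainment' pass; the rest are simply
# never bought, so no exclusion list is needed for them.
```

Compare this with a blocklist approach: there is no list of bad categories to maintain, because anything outside the approved set is out by default.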
Brand Suitability vs Brand Safety
Brand safety (avoiding genuinely harmful content) and brand suitability (the broader question of which content environments align with the brand's values and audience) are related but distinct concepts. A content environment can be brand-safe (not harmful) but still brand-unsuitable for a specific advertiser.
Example: a page about responsible gambling is brand-safe (not extremist, violent, or illegal) but brand-unsuitable for a children's education brand. A news article about a violent crime is brand-safe as reported factual journalism but brand-unsuitable for a brand selling baby products that wants to maintain a positive, nurturing image. Brand suitability configurations should be brand-specific — built around the brand's actual audience, values, and content sensitivities rather than copied from industry defaults.
Measuring Brand Safety
Brand safety is measured through verification vendor reporting: the percentage of impressions that were served on content within the defined safe or suitable categories. Standard reporting metrics include: brand safety rate (percentage of impressions on brand-safe content); viewability rate (percentage of impressions meeting IAB viewability standard); invalid traffic rate (percentage of impressions identified as bot or invalid traffic); and placement quality scores by site domain.
Monthly verification reporting should be reviewed to identify: sites with consistently low brand safety rates (candidates for blocklist); content categories that are generating high IVT rates (potential fraud patterns); and discrepancies between DSP-reported impressions and verification vendor measured impressions (large discrepancies may indicate measurement or ad serving issues).
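The headline rates above are straightforward ratios over the impression log. This sketch uses a simplified four-impression log with illustrative field names; real vendor reports are aggregated per domain and per campaign.

```python
# Sketch of computing headline verification metrics from a simplified
# impression log. Field names, domains, and values are illustrative.

impressions = [
    {"domain": "quality-news.example", "brand_safe": True,  "viewable": True,  "ivt": False},
    {"domain": "quality-news.example", "brand_safe": True,  "viewable": False, "ivt": False},
    {"domain": "mfa-site.example",     "brand_safe": False, "viewable": True,  "ivt": True},
    {"domain": "mfa-site.example",     "brand_safe": True,  "viewable": True,  "ivt": False},
]

n = len(impressions)
brand_safety_rate = sum(i["brand_safe"] for i in impressions) / n
viewability_rate = sum(i["viewable"] for i in impressions) / n
ivt_rate = sum(i["ivt"] for i in impressions) / n

print(f"brand safety rate: {brand_safety_rate:.0%}")  # 75%
print(f"viewability rate:  {viewability_rate:.0%}")   # 75%
print(f"IVT rate:          {ivt_rate:.0%}")           # 25%
```

Breaking the same ratios out per domain is what turns this report into blocklist candidates: a domain-level brand safety rate well below the campaign average is the signal to investigate.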
Building a Brand Safety Framework
A practical brand safety framework for programmatic campaigns:
- Define brand suitability tiers. Using the GARM framework as a starting point, specify which content risk tiers are unacceptable (blocked), which require campaign-specific evaluation, and which are acceptable. Document this as a brand safety policy.
- Configure DSP category blocks. Apply the GARM brand safety floor blocks (always) and brand suitability category exclusions (brand-specific) at the DSP line item level.
- Activate verification vendor pre-bid filtering. Connect IAS or DoubleVerify pre-bid segments to block low-viewability, high-IVT, and brand-unsafe inventory before bidding.
- Build an initial blocklist. Start with industry-shared blocklists (most DSPs provide these) and add brand-specific exclusions based on past campaign data.
- Review monthly. Monthly verification reports identify new problematic placements; update the blocklist accordingly. Review keyword exclusion lists seasonally to catch new terminology.
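The monthly review step can be sketched as a threshold scan over the verification report. The report data and the 90% threshold below are illustrative; the right cutoff depends on the brand's policy.

```python
# Sketch of a monthly review: flag domains whose measured brand safety
# rate falls below a threshold as blocklist candidates.
# Report figures and the 90% threshold are illustrative.

site_report = {
    "quality-news.example": {"impressions": 120_000, "brand_safe": 119_500},
    "gray-zone.example":    {"impressions": 40_000,  "brand_safe": 33_000},
    "mfa-site.example":     {"impressions": 25_000,  "brand_safe": 15_000},
}

THRESHOLD = 0.90

candidates = sorted(
    domain
    for domain, stats in site_report.items()
    if stats["brand_safe"] / stats["impressions"] < THRESHOLD
)
print(candidates)  # domains to review for the blocklist
```

The flagged domains are candidates for review rather than automatic blocks; a human check guards against penalising a site over a one-off measurement discrepancy.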
Sources & Further Reading
All frameworks, data, and examples in this guide draw from official documentation, peer-reviewed research, and documented practitioner case studies. We learn from primary sources and explain them in our own words.
World Federation of Advertisers' official GARM Brand Safety Floor and Suitability Framework documentation.
IAS official documentation on brand safety measurement, verification, and contextual classification.
DoubleVerify's official documentation on brand safety and brand suitability standards.
IAB Tech Lab's documented supply chain verification standards including ads.txt and sellers.json.