The brand safety crisis has seen ads from major brands placed next to harmful content time and again, eroding the very trust that turns a product into a brand. For that reason, brand safety has long been treated as a problem primarily for advertisers.

At last year’s Cannes Lions, Unilever’s Chief Marketing Officer Keith Weed declared that “a brand without trust is just a product.” Indeed, ad agencies have been hiring senior brand safety officers and launching collaborative groups such as the Advertising Protection Bureau (APB) to tackle this head-on. Fortune 500 brands are adopting practices and implementing technologies to stay ahead of the curve, albeit with limited success. 

But it is actually publishers who create the meeting ground where consumers and marketers encounter content. Advertisers have increasingly been demanding that the entire industry, including publishers and platforms, make the adjustments needed to provide a transparent, brand-safe environment.

While ads.txt has gone a long way toward cleaning up the supply side of digital advertising, it has not addressed the brand-safe content conundrum. Advertisers have started turning their attention to this as well, as in the WFA’s argument that “…it’s not just about knowing that budgets have been well spent. We also need to be reassured that brand and consumer interests are protected in these new platforms.”

Why this is a make-or-break moment for publishers

Simply put, the onus is no longer on advertisers alone; publishers now have a more prominent role in the brand safety process. They need to provide a fully transparent picture of their content so advertisers can make informed decisions about where and when they place their ads.

But do publishers have the right insight and data about their content? It depends on the content source and format. Content published on websites typically comes from three sources: the first is originally produced content; the second is syndicated content, which is taking a growing piece of the pie as production costs balloon and editorial teams shrink; and the third is user-generated content (UGC), which many sites are starting to publish with varying degrees of moderation.

One would think that originally produced content comes with endless data: a publisher must know whether an article includes offensive language or whether a video fresh out of production contains sexual content. For the most part, the knowledge exists, but very little of it is ever turned into metadata and attached to the content.

The demand for content, particularly video, is far outpacing what even the biggest publishers can produce. There is a reason YouTube is the second-largest search engine in the world. Publishers are gradually turning to syndicated content to stem the tide of users lost to social platforms, with additional content produced by other media outlets, content syndication agencies, and a growing number of tech vendors that automatically produce and syndicate video.

These sources often use a mix of tech tools and manual taggers to enrich content and submit it for syndication with metadata attached.

UGC, including content from influencers, is the final and biggest source of digital content. It is no wonder that it is also the biggest source of non-brand-safe content. Despite years of pressure from advertisers, brands keep finding their messages tethered to unsavoury content about ISIS, child abuse and, most recently, white supremacy.

What solutions can technology provide?

Numerous companies have sprouted up around point solutions, each addressing a single aspect of the landscape. These point solutions alone cannot deliver the clarity and understanding needed for successful brand safety protection. There are tools built on Natural Language Processing (NLP), which can parse the semantics of keywords or entire texts. Image recognition, meanwhile, catches most visual violations but can be expensive to run, which puts it out of reach for smaller publishers.
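As a rough illustration of why keyword matching alone falls short, the sketch below contrasts a naive keyword filter with a second, semantic pass. The keyword list, the example article, and the `semantic_classifier` stub are hypothetical placeholders rather than any particular vendor's tool; a real deployment would plug a trained NLP model in behind the same interface.

```python
# A minimal sketch: keyword matching vs. a semantic second pass.
# Keywords, categories and the classifier stub are illustrative assumptions.

UNSAFE_KEYWORDS = {"shooting", "attack", "abuse"}  # illustrative only

def keyword_flag(text: str) -> bool:
    """Naive point solution: flag any article containing a listed keyword."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return bool(words & UNSAFE_KEYWORDS)

def semantic_flag(text: str, semantic_classifier) -> bool:
    """Second pass: ask a semantic model whether the flagged text is truly unsafe.

    `semantic_classifier` is assumed to return the probability that the text
    describes real-world violence rather than, say, film production.
    """
    return semantic_classifier(text) > 0.5

if __name__ == "__main__":
    article = "The director spent the weekend shooting a film on location."

    # Keyword matching alone produces a false positive here...
    print(keyword_flag(article))  # True -- "shooting" is on the list

    # ...which a semantic pass (stubbed out here) would be expected to clear.
    dummy_model = lambda text: 0.05  # stand-in for a real NLP classifier
    print(semantic_flag(article, dummy_model))  # False
```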

When it comes to video, most point solutions struggle to strike the right sensitivity balance. At one extreme, a clip with no explicit sex is categorized as safe even though it contains implicit sexual content. At the other, an overly sensitive model flags any scene with a bikini as inappropriate, generates too many false positives, and blocks appropriate content time and time again.
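The trade-off can be made concrete with a toy example. The scores and labels below are invented for illustration, but they show how a flagging threshold set too low produces false positives on harmless footage, while one set too high misses implicit violations.

```python
# Toy illustration of the sensitivity trade-off: each clip has an invented
# model score for "adult content" and a human ground-truth label.

clips = [
    {"name": "beach_vlog",     "score": 0.45, "unsafe": False},  # bikinis, harmless
    {"name": "implicit_scene", "score": 0.55, "unsafe": True},   # nothing explicit, still unsafe
    {"name": "cooking_show",   "score": 0.05, "unsafe": False},
    {"name": "explicit_clip",  "score": 0.95, "unsafe": True},
]

def evaluate(threshold: float):
    """Count missed violations and false positives at a given flagging threshold."""
    missed = sum(1 for c in clips if c["unsafe"] and c["score"] < threshold)
    false_pos = sum(1 for c in clips if not c["unsafe"] and c["score"] >= threshold)
    return missed, false_pos

for threshold in (0.3, 0.5, 0.7):
    missed, false_pos = evaluate(threshold)
    print(f"threshold={threshold}: missed violations={missed}, false positives={false_pos}")
```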

While each of these tools is good at detecting part of the equation, none can provide full brand safety coverage on its own. But we are finally at a point where the right mix of tools and technology, working together, can provide coverage accurate enough for brands. The winning solutions align the work and the massive data outputs of several point solutions into one cohesive, actionable output, and they do so in real time, at scale and with dependable accuracy.
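One possible shape for such an aggregation layer is sketched below, assuming each point solution emits a per-category confidence score. The signal names, categories and thresholds are assumptions made for illustration, not a reference to any specific product; in practice each signal would come from a separate vendor or model.

```python
# A minimal sketch of merging several point solutions into one actionable output.

from dataclasses import dataclass

@dataclass
class Signal:
    source: str      # e.g. "nlp", "image_recognition", "video_analysis"
    category: str    # e.g. "violence", "adult", "hate_speech"
    score: float     # model confidence that the category applies, 0..1

# Per-category thresholds let a publisher be stricter on some risks than others.
THRESHOLDS = {"violence": 0.6, "adult": 0.5, "hate_speech": 0.3}

def brand_safety_verdict(signals: list[Signal]) -> dict:
    """Merge point-solution outputs into a single verdict an ad server can act on."""
    flagged = {}
    for s in signals:
        threshold = THRESHOLDS.get(s.category, 0.5)
        if s.score >= threshold:
            # Keep the strongest evidence per category, whichever tool produced it.
            flagged[s.category] = max(s.score, flagged.get(s.category, 0.0))
    return {"safe": not flagged, "flagged_categories": flagged}

if __name__ == "__main__":
    signals = [
        Signal("nlp", "violence", 0.2),
        Signal("image_recognition", "adult", 0.4),
        Signal("video_analysis", "hate_speech", 0.7),
    ]
    print(brand_safety_verdict(signals))
    # {'safe': False, 'flagged_categories': {'hate_speech': 0.7}}
```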

Where does the industry go from here? 

The average consumer, unaware of the intricacies of digital advertising, believes that every ad placement is chosen directly by the brand. Any association with odious content is therefore taken as a direct reflection of the brand, and even a single brand safety incident can damage years of goodwill, hurt brand perception and dent the bottom line.

Given this, advertisers are rightfully treating brand safety as an urgent priority. They are turning to advanced technology tools, such as content data management platforms and third-party verification solutions. In the short term they will continue to rely on tech vendors and platforms that can deliver contextual data, but ultimately advertisers will push this investment cost and responsibility onto publishers.

Publishers and even platforms like YouTube and Facebook need to prepare for this pending shift of responsibility and invest in the next generation of brand safety solutions.

Their tools must also integrate the contextual data with the rest of their data and — perhaps most importantly — share it with advertisers. This is an important piece of the puzzle that will help advertisers protect their brands and regain consumer trust.