We’re well into 2018, and the state of Brand-Safety and Ad-Fraud is far from ideal, to put it mildly. Despite using various ad-verification solutions, the world’s leading digital advertisers remain highly compromised, finding their ads repeatedly served to fake bot users or alongside negative brand content. Our analysis of some of the web’s most reputable publishers has shown that advertisers are still exposed to fraud levels of up to 30%, causing serious damage to their ROI. Avoiding negative content association is becoming an ever greater challenge too, causing real panic and leading advertisers to divert serious portions of their ad-spend towards safer offline channels. A leading global sports brand has even revealed to us that it plans to pull ads from news sites and focus on “brand-safe” websites with less negative content.
With so many verification solutions out there, why aren’t things getting better?
What’s fundamentally flawed in today’s solutions is the near-complete reliance on simplistic technology and dated practices, which generate poor results and offer after-the-fact damage reports rather than real protection. Here are a few practices that should be retired if we want to usher in the next generation of Brand-Safety:
01 Impression sampling
Advertisers running campaigns at massive scale pose a real challenge for verification vendors. Checking every single impression, analyzing the content on each page and verifying each user’s authenticity millions of times over, is a daunting prospect. For this reason, the common industry practice is to sample a portion of the impressions (sometimes as little as 1% of actual traffic) and make probabilistic assumptions based on that sample. We have run multiple tests comparing the results of sampling with those of comprehensive, per-impression analysis, and found massive discrepancies between the two. The bottom line: if you’re not checking every impression, every time, you’re not delivering accurate results.
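To see why thin samples mislead, here is a minimal simulation (all numbers are made up for illustration): fraud often arrives in bursts, such as a botnet hammering a single placement, and a 1% sample both swings from draw to draw and, crucially, cannot tell you which specific impressions were invalid.

```python
import random

random.seed(7)

# Simulated traffic: 100,000 impressions with a contiguous 10% fraud burst,
# the way a botnet attack actually concentrates rather than spreading evenly.
N = 100_000
impressions = [False] * N
for i in range(20_000, 30_000):
    impressions[i] = True

true_rate = sum(impressions) / N  # exactly 10%

# Re-draw a 1% sample many times, as a sampling-based vendor effectively
# does across reporting periods, and watch the estimate swing.
estimates = []
for _ in range(100):
    sample = random.sample(impressions, 1_000)
    estimates.append(sum(sample) / 1_000)

print(f"true fraud rate: {true_rate:.1%}")
print(f"1% sample estimates range: {min(estimates):.1%} to {max(estimates):.1%}")
```

Even when the sampled estimate happens to land near the true rate, it only yields an aggregate number; per-impression analysis is what lets you block or claw back the specific invalid impressions.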
02 Keyword lists
The key component in the struggle for better brand-safety is training our algorithms to analyze content at a near-human level, or at least to a level where they can scan an article and label it as negative and brand-damaging. This complex analysis must also be performed at huge scale, at incredible speed, and cost-efficiently if we are to enable real prevention of negative placements. Yet the majority of Brand-Safety vendors today circumvent this challenge by simply uploading comprehensive lists of negative keywords and flagging content whenever those words happen to appear in it. The problem with this prevalent practice is that it provides very little accuracy: with no real NLP capabilities, there is no understanding of context, and no way to truly determine whether a piece of content is safe.
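A toy sketch makes the accuracy problem concrete (the keyword list and articles below are invented for illustration): a bare keyword match flags a harmless product review just as readily as genuinely negative news, because the word alone carries no context.

```python
# Hypothetical negative-keyword list of the kind described above.
NEGATIVE_KEYWORDS = {"bomb", "crash", "shooting"}

def keyword_flag(text: str) -> bool:
    """Flag content if any negative keyword appears anywhere in it."""
    words = {w.strip(".,:;!?").lower() for w in text.split()}
    return not NEGATIVE_KEYWORDS.isdisjoint(words)

safe_article = "Our review of this lavender bath bomb: a crash course in relaxation."
unsafe_article = "Authorities respond to a shooting downtown."

print(keyword_flag(safe_article))    # True: a false positive on safe content
print(keyword_flag(unsafe_article))  # True: correctly flagged
```

The matcher cannot distinguish the two cases; only contextual (NLP-level) analysis of the sentence around the keyword could tell a bath bomb from a threat.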
03 Limited data analysis
Fighting fraud is an extremely difficult challenge. Fraudsters are constantly devising new and, frankly, innovative ways to remain undetected while scaling up their activity. Yet while the bad guys are bringing their A game, the good guys are relying more and more on pre-purchased lists of fake IPs, using them to flag suspicious traffic. This practice isn’t necessarily a bad one, and should in fact be used as an additional measure, but it cannot be the key weapon of choice against SIVT (sophisticated invalid traffic). The good guys should be examining heaps of additional data: behavioral anomalies at both the user and network level, data discrepancies, and proprietary honeypots (bot-traps) set up to proactively prevent fake traffic from being served.
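The layering idea can be sketched as a simple scoring function (the signal names, thresholds and weights below are illustrative assumptions, not any vendor’s actual model): the IP blocklist contributes one signal among several, rather than delivering the verdict on its own.

```python
# Documentation-range IPs standing in for a pre-purchased blocklist.
KNOWN_BAD_IPS = {"203.0.113.7", "198.51.100.23"}

def suspicion_score(event: dict) -> int:
    """Combine a static blocklist with simple behavioral signals."""
    score = 0
    if event["ip"] in KNOWN_BAD_IPS:
        score += 2                       # blocklist hit: a hint, not proof
    if event["time_on_page_s"] < 1.0:
        score += 1                       # humans rarely bounce this fast
    if event["mouse_moves"] == 0:
        score += 1                       # no pointer activity at all
    if event["requests_per_min"] > 120:
        score += 2                       # inhuman request rate
    return score

bot = {"ip": "192.0.2.1", "time_on_page_s": 0.2,
       "mouse_moves": 0, "requests_per_min": 300}
human = {"ip": "192.0.2.2", "time_on_page_s": 34.0,
         "mouse_moves": 57, "requests_per_min": 4}

print(suspicion_score(bot), suspicion_score(human))  # 4 0
```

Note that the bot is caught purely on behavior here, despite its IP appearing on no list, which is exactly the gap a blocklist-only approach leaves open; honeypot hits would simply be one more signal feeding the same score.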
04 Scraping, cataloguing and indexing
If you’ve heard of “pre-bid” Brand-Safety, it’s important to understand the practices behind these kinds of solutions. Essentially, entire web pages and sites are scraped, and their content is analyzed, indexed, labeled and ultimately given some form of “Brand-Safety Score”. This supposedly allows advertisers to buy pre-filtered, brand-safe inventory without any worries. In reality, though, these practices do not account for the dynamic nature of websites and cannot keep up with the new content that is constantly being produced. The only way to provide real protection is to perform a live, real-time analysis of every impression generated. This is a far more complicated task, but one that offers a far better chance of catching brand-damaging exposure before it occurs.
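The staleness failure mode can be shown in a few lines (the URL, classifier and page contents are all made up; a real system would involve crawlers and NLP models): a pre-bid index stores one score per URL at crawl time, so when the page changes afterwards, the cached score is what gets served.

```python
def classify(content: str) -> str:
    # Stand-in for real content analysis.
    return "unsafe" if "tragedy" in content.lower() else "safe"

# Crawl-time state of a hypothetical news page.
pages = {"news.example.com/story": "Local bakery wins award."}

# Pre-bid indexing: score computed once and cached per URL.
index = {url: classify(body) for url, body in pages.items()}

# The page is updated after the crawl: breaking news replaces the story.
pages["news.example.com/story"] = "Tragedy strikes as story develops."

url = "news.example.com/story"
print("indexed score:", index[url])            # stale: safe
print("live analysis:", classify(pages[url]))  # current: unsafe
```

The cached verdict and the live verdict disagree the moment the content moves, which is why only per-impression, real-time analysis can track a news site’s churn.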
One of the major repercussions of the digital Brand-Safety crisis is the erosion of advertisers’ trust. Given the lack of transparency throughout the supply chain, one would expect full transparency from the verification vendors whose goal is to restore trust and protect advertisers. Yet this is not the case today: many players provide little to no data about what content or traffic they flag as invalid. The “it’s a black box and we can’t expose exactly why we flagged this” excuse just won’t cut it anymore. Advertisers expect, and frankly deserve, to know which inventory has been flagged and the exact reason why. They should be given enough data to perform their own analysis and verify that the verification efforts are genuine.