How a rise in bad bots sees marketers waste billions in online ad spend
Bots are automated programs designed to carry out specific tasks at a rate that humans simply cannot match.
There are bots on Twitter that replace the word “blockchain” in headlines with “Beyoncé”.
Or bots that tweet every new word that appears in the New York Times (recent words include “overtouching”, “deadass” and “doomscrolling”).
Other bots colorize black-and-white photos on Reddit, or connect you with a customer service agent.
Good Bots vs Bad Bots
In total, bots account for approximately 45% of all web traffic. This army of bots can be further divided into good bots and bad bots.
Good bots are industrious worker bees. They automate tasks such as web crawling, updating sports scores and weather, and a host of other things that make the web easier to use. These include:
- SEO: Search engine crawler bots crawl, catalogue, and index web pages (e.g. Googlebot)
- Website Monitoring: Checking loading times, downtime, and so on
- Aggregation: Gathering information from various websites or parts of a website and collating it in one place
- Updates: updating information, such as sports scores and weather
Bad bots are used by bad actors and fraudsters to carry out account hijacking, web scraping, financial data theft, and DDoS attacks (and, as we shall see, to drain billions in advertising spend across display, PPC, and paid social media campaigns). These include:
- Spam: e.g. posting in the comments sections of websites and leaving fake reviews
- DDoS: Bots can be used to take down your site with a denial of service attack
- Ad Fraud: Bots can be used to click on your ads automatically
CHEQ and the University of Baltimore economics department showed that even opportunistic bots are set to cost businesses $10 billion in 2020. They affect everything from poker sites to film review sites. This includes:
- Classifieds sites, where competitors steal listings by scraping content, hurting traffic and revenues
- eCommerce attacks, where bots scrape price information in real time and use it as competitive intelligence
- Bots racking up fake transactions
- Bots faking reviews, which damage products and brands
- Ticketing sites hurt by denial of inventory, spinning and scalping, seat map inventory scraping, fan account takeover, and fraud
Brands hit by bot attacks
In one lawsuit, Ticketmaster acknowledged that the cost of bots “restricting other customers’ use of the site through their abusive conduct, using number and letter generators to gain access to events” is extremely difficult to ascertain. The ticketing giant argued that accessing more than 1,000 pages of the site, or making more than 800 reserve requests, in a 24-hour period made potentially bad actors liable for damages of twenty-five cents ($0.25) for each page request or reserve request. In total, the company claimed a loss of over $5,000 in a one-year period.
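The per-request damages formula is simple enough to sketch. The request volumes below are illustrative assumptions, not figures from the case:

```python
RATE_PER_REQUEST = 0.25  # claimed damages per page request or reserve request (USD)

def claimed_damages(page_requests: int, reserve_requests: int) -> float:
    # Every page request and reserve request is billed at $0.25
    return (page_requests + reserve_requests) * RATE_PER_REQUEST

# A hypothetical bot making 15,000 page requests and 6,000 reserve requests
# over a year racks up damages consistent with the "over $5,000" claim
print(claimed_damages(15_000, 6_000))  # 5250.0
```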
In another example, between December 1, 2018 and March 31, 2019, the gambling site Partypoker reported confirming and closing 277 bot accounts and redistributing $734,852.15 to affected players, underlining both the financial and administrative costs of fighting back.
Doing basic bot detective work on your site
Detecting damaging (bad) bot traffic can start with your Google Analytics. In particular, you need to check these metrics:
- Bounce rate: bots will often hit one page and leave
- Page views: a user viewing 50–60 pages is not human
- Page load times: if load times suddenly slow down and your site feels sluggish, this could indicate a jump in bot traffic, or even a DDoS (Distributed Denial of Service) attack
- Average session duration: two seconds is a classic visit time for most bots
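These heuristics amount to a simple filter over your analytics rows. The sketch below is a minimal illustration; the session fields and thresholds are assumptions for the example, not a product feature:

```python
# Hypothetical analytics export: one dict per session
sessions = [
    {"id": "a1", "page_views": 1,  "duration_s": 2,   "bounced": True},
    {"id": "b2", "page_views": 12, "duration_s": 340, "bounced": False},
    {"id": "c3", "page_views": 58, "duration_s": 60,  "bounced": False},
]

def looks_like_bot(session: dict) -> bool:
    # Classic bot signatures from the metrics above: a one-page,
    # ~2-second hit-and-run, or an implausibly high page-view count
    if session["bounced"] and session["duration_s"] <= 3:
        return True
    if session["page_views"] >= 50:
        return True
    return False

suspicious = [s["id"] for s in sessions if looks_like_bot(s)]
print(suspicious)  # ['a1', 'c3']
```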
Protecting your entire site
You can give bots some direction about what to do on your site. This basic move involves setting up a simple text file called robots.txt, which tells search engine crawlers which pages or files they can or cannot request from your site. It is used mainly to avoid overloading your site.
The obvious drawback, of course, is that bad bots will simply ignore your robots.txt file. This is particularly true of malware bots and email address scrapers that cause damage to sites.
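For illustration, a minimal robots.txt might look like this (the bot name and paths are placeholder examples):

```text
# Block a known aggressive scraper entirely (though it may not comply)
User-agent: BadScraperBot
Disallow: /

# Allow everyone else, but keep crawlers out of admin and search-results pages
User-agent: *
Disallow: /admin/
Disallow: /search

# Ask well-behaved crawlers to pause between requests (not all honor this)
Crawl-delay: 10
```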
To protect against scraped content, false transactions, fake reviews, and barrages against ticketing sites, you will need to engage a tech solution. Some of the most common are CHEQ, Distil Networks, and Akamai.
Bots and Digital advertising fraud
Now, on to the effect of bots on one of the biggest-spending departments: marketing. Marketers are set to lose $23 billion annually in online ad spending because of ad fraud, largely as a result of bot traffic. Individual bot attacks can devastate marketing budgets. In the past decade, a number of bots have damaged ad spend. These are listed from the smallest reported daily losses for marketers to the highest.
- Drainerbot (2019) – an app-based fraud operation that used infected code on Android devices to deliver fraudulent, invisible video ads to the device. It is estimated to have wasted $100 a day of marketers’ ad budgets.
- DNSChanger (2011) – malware that infected computers by modifying their Domain Name System entries to point toward its own rogue name servers, which then injected its own advertising into web pages.
- Clickbot.A (2006) – a botnet that infected 100,000 machines in a month, using victims’ computers to automatically click on PPC ads.
- Judy (2017) – auto-clicking adware found on 41 apps developed by a Korean company, which generated large amounts of fraudulent clicks on advertisements.
- HummingBad (2016) – malware which at its peak controlled 10 million Android devices globally and raked in $300,000 a month.
- Fireball (2017) – malware that infected over 250 million computers, costing $10,000 a day.
- Chameleon (2013) – One of the first ad fraud bots identified, the botnet stole more than $6 million per month by generating fake customer clicks on online display ads.
- Hyphbot (2017) – thousands of publishers had fake versions of their websites created by bots, a technique called “domain spoofing”. Brands inadvertently bought advertising space on these fake sites, meaning advertisers wasted money and publishers missed out on ad dollars.
- 3ve (2018) – 3ve used the malware packages Boaxxe and Kovter to infect a network of PCs. They were spread through emails and fake downloads, and once infected, the bots would generate fake clicks on online advertisements, sparking an FBI investigation.
- Methbot (2015) – controlled by a single group based in Russia and operating out of data centers in the US and the Netherlands, this “bot farm” generated $3 to $5 million in counterfeit inventory per day by targeting the premium video advertising ecosystem.
Bots in PPC and Paid Social Campaigns
The challenges of bot attacks in programmatic display advertising have been well documented, not least in our report The Economic Cost of Online Ad Fraud. However, almost no attention has been paid to protecting advertising spend from bot attacks on paid search (such as Google AdWords and Bing) or paid social (think Facebook, Pinterest, and Twitter). This is despite the fact that, globally, paid search remains the most-used digital advertising channel, attracting 47% of total digital ad revenues. Google search alone was worth $98 billion in 2019, and marketing spend on these channels continues to grow.
It has been shown that 14% of all clicks across PPC campaigns are invalid, with bots playing a significant role in damaging campaign metrics and creating wasted spend.
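As a rough back-of-the-envelope sketch of what that rate means (the budget figure is hypothetical), the invalid-click share translates directly into wasted budget:

```python
INVALID_CLICK_RATE = 0.14  # share of PPC clicks found to be invalid

def wasted_ppc_spend(budget: float, invalid_rate: float = INVALID_CLICK_RATE) -> float:
    # Assumes invalid clicks consume budget in proportion to their share of clicks
    return budget * invalid_rate

print(round(wasted_ppc_spend(100_000)))  # 14000 -- $14k of a $100k PPC budget
```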
Complex, human-like bots are able to replicate human mouse movements, scroll pages, interact with and click on ads, and install apps. By invading your entire customer relationship management (CRM) system, they skew conversion rates and render your analytics meaningless. For instance, Adaeze Okpoebo, lifecycle marketing manager at NASDAQ-listed software company LogMeIn, says bot clicks in particular caused “many issues”, not least “sales reps following up on false leads, and bad and misleading reporting.” Groupon, which invested nearly $400 million annually in marketing, including paid online advertising, had to be vigilant in preventing fraud.
Marina Vafaei, former risk management team lead at Groupon, said: “I have managed risk and prevented fraud on more than $1.7 billion in sales. We constantly had to be aware of the threat of bots attacking the site in a massive network that served 26.5 million active customers in the US alone.”
Eliminating bots in paid search and paid social campaigns
Cybersecurity company CHEQ has launched the first cybersecurity and AI-based advanced click fraud protection solution for paid search and paid social campaigns. Winner of The Drum Best Search Solution 2020, CHEQ for PPC is the first solution to block invalid clicks across platforms including Google Ads, Facebook, Pinterest, and Twitter ad campaigns.
This builds on CHEQ’s bot-busting cybersecurity engine, used by clients such as Dentsu, Outbrain, and Spark Foundry to detect botnets like the ones described above. This includes checking over 1,000 different user parameters to block ad fraud in real time, including device/browser/OS fingerprinting and sophisticated honeypots (proprietary bot traps). This makes it uniquely able to catch “sophisticated invalid traffic” as set out by the Media Rating Council (MRC) in its Invalid Traffic Detection and Filtration Guidelines.
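CHEQ’s traps are proprietary, but the general honeypot idea is easy to illustrate: plant an element that human visitors never see or touch, and treat any interaction with it as a bot signal. The field name and logic below are assumptions for the sketch, not CHEQ’s implementation:

```python
# Hypothetical hidden form field, rendered invisible to humans via CSS
# (e.g. `display: none`). Naive form-filling bots populate every field.
HONEYPOT_FIELD = "website_url"

def is_honeypot_hit(form_data: dict) -> bool:
    # Any non-empty value in the hidden field marks the submission as bot traffic
    return bool(form_data.get(HONEYPOT_FIELD, "").strip())

print(is_honeypot_hit({"email": "user@example.com"}))                      # False
print(is_honeypot_hit({"email": "x@y.z", HONEYPOT_FIELD: "http://spam"}))  # True
```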
Under the MRC guidance, “general invalid traffic” consists of traffic identified through routine means of filtration. “Sophisticated invalid traffic” (SIVT) consists of more difficult-to-detect cases that require advanced analytics, multipoint collaboration/coordination, and significant intervention to analyze and identify. Looking at bot traffic, SIVT attacks typically involve bots masquerading as legitimate users, according to the MRC guidelines. Sophisticated invalid traffic has vastly overtaken general invalid traffic, with bots leading the attack.
Capturing bot activity
For the first time, bot activity on campaigns can also be seen.
CHEQ For PPC includes a heatmap of every single user that visited your site from ad campaigns across Google, Facebook, Amazon, Baidu, Bing, LinkedIn, Snap, Twitter, Pinterest, Yahoo, and Yandex. This shows real human users, as well as how bots interact with your reports or landing pages, through their movement and scrolling. Customers can toggle between and compare authentic users and fraudulent, non-human, invalid users.
This is part of the wider transparency required in detecting the good, the bad, and the bot. CHEQ provides analysis of blocked impressions and supplies log-level data (user agents, IP addresses, and timestamps), ensuring bots are removed from campaign attribution. The use of machine learning also enables CHEQ to predict and eliminate future threats.
Bots are a large problem on the internet in general, and in advertising in particular. The ubiquitous bot is used by fraudsters to drain billions from ad campaigns, replacing real customers with junk traffic and severely damaging the chance for real users to interact with your brand or buy your goods and services. To protect your site and your ads, look no further than CHEQ: Request A Demo.