
Bot Friday: How Bots and Fake Users Impact Retailers on Black Friday

Black Friday is a big deal. The retail ‘holiday’ now encompasses nearly an entire week and generates revenue numbers equal to the GDP of a small country. For retailers, the holiday season is often a make-or-break quarter, and a good Black Friday, Small Business Saturday, or Cyber Monday can be key to success.

And while blockbuster deals can still get people out to the brick-and-mortar stores, Black Friday is increasingly an online affair: American consumers spent $8.9 billion online during Black Friday 2021 and $10.7 billion on Cyber Monday, and spending is expected to surpass those figures in 2022.

But where money goes, cybercriminals typically follow, and bad actors have found plenty of ways to take advantage of retailers’ investments in Black Friday through various forms of bots, web scrapers, and fraudulent traffic.

Last year, we discovered that bots and fake users made up 35.7% of all online shoppers on Black Friday. Among the forms of fake traffic we uncovered were malicious scrapers and crawlers, sophisticated botnets, fake accounts, click farms, proxy users, and illegitimate users committing eCommerce-related fraud.

As we approach the 2022 holiday shopping season, we analyzed how bots and fake users affected eCommerce sites on previous Black Fridays, and combined that information with current fake traffic rates to uncover the potential financial and operational impacts retailers can expect this coming Black Friday. The results are published in our new report, How Bots and Fake Users Impact Sales on Black Friday 2022.

To build our report, we analyzed data from 233 million eCommerce site visits originating from all source types (direct, organic, paid) across a 6-month span (January – June 2022) and studied the validity of each site visit. From there, we were able to draw inferences from typical site traffic numbers, consumer spending patterns, and media spending in the eCommerce space.

$368M Could be Lost to Fake Clicks on Retail Ads

Bots and fake users frequently click on advertisements they encounter online, whether to commit ad fraud, drain marketing budgets, or simply scrape a website for competitive intelligence. This can happen on paid search platforms, on advertisements on social media networks, and on other forms of display and text ads.

The eCommerce industry is certainly not immune to these actions. Based on the standard rates of fraud that are encountered across retailer websites from paid sources, analyzed alongside the volume and frequency of advertising clicks during the holiday season, CHEQ predicts that retailers will lose about $368 million to fraudulent clicks this Black Friday alone.

Get the Full Story in Our New Report

Invalid traffic is a year-round problem, but Black Friday and the holiday shopping season are a period of increased activity among cybercriminals, and retailers should be prepared to deal with ad fraud, skewed metrics, and cart abandonment.

To learn more about fake traffic and how it can affect eCommerce websites this holiday season, read the full Black Friday report, available here.

Bots and fake users on the internet can commonly interact with websites that companies use to run their businesses. This phenomenon – known as the Fake Web – can drain budgets, skew important metrics, infiltrate databases, and generally make it more difficult to run an organization online. The issues caused by the Fake Web can sometimes become difficult to address because there are so many different types of threats that engage with content in different ways. These threats include, but are not limited to: botnets, account hijackers, click farms, carding attackers, and other forms of invalid traffic. Throughout this article, we will focus specifically on the issue of click spamming, how it compares to other types of spam, and what businesses can do to address it. 

What is spam?  

When some people think of ‘spam,’ images of mass quantities of unwanted emails entering a person’s inbox might come to mind. Email spamming is a very common and long-standing type of spam, but today spambots use a variety of channels, not just email. An example that recently caught the attention of Elon Musk during his attempted Twitter acquisition was bots spamming users on social media by commenting on posts, sending messages, and otherwise flooding user feeds.

A completely different type of spam is called “click spamming.” It shares some qualities with other forms of spam because it deals with mass quantities of invalid or unwanted actions; in this case, however, those actions are clicks. Click spamming begins by hijacking another user’s web session or browser in order to impersonate them. The bad actor can then repeatedly click on various links throughout the internet while appearing to be a legitimate human user, often going undetected.

Paid ads click spamming 

Click spamming can occur on paid advertisements when a malicious user imitates a legitimate human user or gains access to their real click via the installation of malware on their device. The bad actor might then choose to repeatedly click on advertisements in order to drain a company’s budget, access discounts or coupons intended for other users, or enter a website page so they can further infect it or commit additional fraud.

Organic click spamming 

As with the click spamming that occurs on paid ads, malicious users can also hijack clicks on organic links. Organic links include anything a regular user might click on outside paid promotions or ads: for example, a link to an article, a link shared on a social media platform, or a link in the non-paid section of a search engine results page.

Who commits click spamming?  

Because impersonating another user and taking over their clicks can be automated, click spamming is often committed by bots. However, malicious humans can also find ways to enter someone else’s account and take over their clicks. While click spamming is a specific type of action, it can be done either one-by-one by a malicious human or at large scale across multiple devices by automation tools. Each user and bot may also have different intentions or reasons for click spamming, which we outline below.

Why would anyone commit click spamming? 

When a bot or malicious user click spams, they might simply seek to overwhelm a server so that a site crashes and other users can’t access it. This is one way bad actors can sabotage a business’s online operations. These users also frequently seek access to additional information: perhaps they are clicking on a specific ad or link because they want to get behind it and access some type of discount or free item. Additionally, click spammers might spam clicks on their own ads so they can submit those clicks as invalid to their ad platform and gain refunds for “fake clicks” they actually committed themselves. In general, the end game of click spamming is usually to gain either information or funds.

What damage does this cause? 

Click spamming can cause headaches, as well as financial and operational issues, for businesses and end-users alike. The following are four examples of some common consequences of click spamming. 

Paid advertising damages 

When spammers click on another company’s advertisements, that organization pays CPC costs for an invalid user that has neither the intention nor the ability to convert. That budget is now lost and cannot be spent on legitimate users with a much higher likelihood of becoming customers. Additionally, if click spamming continues, campaign optimizations can become skewed toward additional invalid users, and audiences can become so polluted that they are no longer effective. Even if a click spamming attack only occurs once, the damage can continue by infecting future campaigns through these learned optimizations.

On-site damages

Whether click spamming happens via a hijacker spamming clicks on paid ads or repeatedly clicking on organic links, that fake user can then arrive on a website, where they can continue to damage a brand in a variety of ways. Bots and malicious users may choose to fill out forms, submit actions multiple times, or click around on the site in order to test gated pages or learn how the website is set up. If these malicious users are mistaken for users with legitimate interest, these actions might cause a company to revamp its website content and creative to better serve these bots, which opens it up to additional damage in the future.

Analytics damages 

When bots and fake users arrive on a website, they disrupt the company’s analytics and source of truth. Since most business decisions today are made based on data, even a small percentage of invalid traffic can skew metrics and lead to poor decision-making and company performance. Now imagine bots and fake users are not just occasionally accessing a website, but clicking at rapid speeds and impersonating legitimate users – yes, click spamming – which causes analytical damage at an even higher rate. For this reason, businesses that care about protecting their business intelligence systems are rightfully concerned about click spamming and other high-volume cyber threats.
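
As a back-of-the-envelope illustration of that skew (all traffic and conversion figures below are invented for the example):

```python
# Illustrative only: how invalid traffic distorts a measured conversion rate.
human_visits = 7_000
human_conversions = 210          # real conversion rate among humans: 3.0%
bot_visits = 3_000               # bots almost never convert

true_rate = human_conversions / human_visits
measured_rate = human_conversions / (human_visits + bot_visits)

print(f"true conversion rate:     {true_rate:.1%}")      # 3.0%
print(f"measured conversion rate: {measured_rate:.1%}")  # 2.1%
```

With 30% bot traffic, the site’s real 3.0% conversion rate reads as 2.1% – and any budget or optimization decision based on the measured number inherits that error.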

Damages to the legitimate user 

As previously mentioned, companies are not the only ones who suffer from click spamming. Since click spamming is often committed by a malicious user taking over the clicking actions of a legitimate user or impersonating a regular internet user, that real user suffers damage as well. Their accounts may be flagged for malicious activities, their IP or server could end up blocked from some sites they want to visit, and their overall internet experience could become more limited. Not to mention, if a malicious user was able to hijack their clicks, perhaps they can find a way to hijack additional information about that user which could ultimately lead to account takeover or even identity fraud. 

[Get a free Invalid Traffic Scan. Plug CHEQ in for free and see how many bots and fake users are in your funnel.]

How can businesses identify click spamming? 

An initial sign can be an influx of clicks – on a specific ad, a variety of ads, or organic links – from a single user or IP. While a single user might choose to visit a business’s website more than once, it is typically done over time rather than in rapid succession. So if an organization notices strange user behavior on these links and assets, it might be worth it to investigate further. Additionally, since many bad actors continue to commit other types of fraud once they arrive on a website, site operators can check their analytics and heat maps for unusual patterns or user behaviors on-site as well. Awareness is the first step in combating click spamming and protecting both the organization and the end customer. 
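
One way to operationalize that “rapid succession” signal is a simple sliding-window counter. This is an illustrative sketch, not CHEQ’s detection method, and the threshold values are arbitrary:

```python
from collections import defaultdict, deque

def make_click_monitor(max_clicks=10, window=60.0):
    """Flag an IP that clicks the same asset more than max_clicks
    times within `window` seconds."""
    history = defaultdict(deque)  # (ip, asset) -> recent click timestamps

    def record_click(ip, asset, ts):
        q = history[(ip, asset)]
        q.append(ts)
        while q and ts - q[0] > window:   # drop clicks outside the window
            q.popleft()
        return len(q) > max_clicks        # True means "suspicious"

    return record_click

monitor = make_click_monitor(max_clicks=3, window=10.0)
flags = [monitor("203.0.113.7", "/ad/black-friday", t) for t in range(6)]
print(flags)  # from the 4th rapid click onward, the threshold is exceeded
```

In practice a real system would also weight signals like user agent, session age, and on-site behavior, but even this crude counter separates one-off repeat visitors from burst clickers.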

What can be done if a business is under a click spam attack? 

If a company suspects they are currently under a click spam attack, there are some immediate protective measures they can take. First, if the attack appears to be coming from a specific IP or location, they might choose to block that from their advertisements to avoid additional budget drainage. If the click spammer appears to also be taking harmful actions on the website, the company might add a CAPTCHA form or other form and page protections so the user cannot make it further down the funnel. If the invalid user has converted and entered any databases, it is wise to monitor that contact’s behaviors and remove them from any future marketing campaigns or audiences.
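
As a minimal sketch of the IP-blocking step, a server-side check using Python’s standard ipaddress module might look like this; the denylisted addresses are documentation-range examples, not real attackers:

```python
import ipaddress

# Hypothetical denylist: exact IPs and CIDR ranges flagged during an attack.
DENYLIST = [ipaddress.ip_network(n)
            for n in ("203.0.113.7/32", "198.51.100.0/24")]

def is_blocked(ip: str) -> bool:
    """Return True if the client IP falls in any denylisted network."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in DENYLIST)

print(is_blocked("203.0.113.7"))    # True  (exact match)
print(is_blocked("198.51.100.42"))  # True  (inside the /24 range)
print(is_blocked("192.0.2.1"))      # False
```

The same denylist would typically also be mirrored into the ad platform’s IP-exclusion settings so flagged addresses stop consuming budget at the source.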

How can businesses be more proactive in the future? 

While the actions mentioned previously can help avoid a major internal crisis when click spamming is actively occurring, they are all very reactive and tactical actions. Organizations that are looking to be more proactive against click spamming and other cyber attacks should consider installing some type of cybersecurity software. Previously, many companies put this responsibility entirely on the CISO and IT department, but now that it has become more apparent that cyber threats impact businesses holistically, operations and marketing professionals are pushing for better protection as well. Specifically, many organizations are turning to go-to-market security to secure their entire business.

Over the past decade, live streaming has gone from a niche platform for gamers to a cultural phenomenon and streamers have gone from microcelebrities to household names with follower counts surpassing the biggest names in Hollywood.

Twitch, the predominant live streaming platform, has grown from approximately 500,000 concurrent viewers in 2016, to over 2.76 million concurrent viewers in 2021, and amassed over 24 billion hours watched in 2021.

Those extraordinary viewership numbers have enticed other media giants to enter the market–YouTube, Facebook, and Instagram all now offer live-streaming capability.

In this online economy, views are money, and where money goes, fraud often follows.

In the shadow of the billion-dollar streaming industry is a booming marketplace for fake views generated by view bots and viewbotting services. These bots are cheap and easy to use, and for unscrupulous streamers, the reward often outweighs the risk.

What is viewbotting?

A Twitch streamer accidentally exposes himself using viewbotting software.

Viewbotting is a form of invalid traffic in which pieces of automated software (bots) are used to view streaming videos or live streams in order to artificially boost the view count.

Most view bots are simple scripts that open a video in a headless browser, but more complicated viewbotting services may also create fake accounts to mimic logged-in viewers, and incorporate a chatbot capability that will spam the stream’s chat or comments section with artificial banter to make audience numbers appear more legitimate.

Why do creators use view bots?

Alongside subscribers, views are one of the top metrics for success in the social media economy. Views determine whether a video is monetized on YouTube, where a video will rank in search results, and also act as a form of social proof–users are much more likely to click on a video that has lots of views.

Basically, the more views a YouTuber or Twitch streamer gets, the higher their earning potential.

To make matters worse, most streaming sites operate on a kingmaker system, promoting the streams with the most views and engagement at the top of their directories and search results. For newbie creators, this system can make it particularly difficult to break out and build viewership, as they are stuck streaming to small audiences without any chance of being promoted by the algorithm.

The Twitch directory prioritizes streams with high view counts.

With all of these factors in mind, it’s easy to imagine how tempting it can be for creators to use view bots to boost their viewership numbers and incite streaming platforms to promote their content.

It also doesn’t help that view bots are relatively easy to set up and use, and that it’s easy to maintain a layer of plausible deniability when using them (more on that later).

How does viewbotting work?

Most viewbotting is carried out either by viewbotting scripts that a user sets up themselves, or by paid viewbotting services that offer thousands of views for low prices. Let’s break it down.

Viewbotting scripts

Technically savvy streamers can easily write a script to run a headless browser to open a stream or video for a certain duration, thus creating a view. This can be scaled up by hosting hundreds or even thousands of instances on a cloud service such as AWS.

Open-source viewbotting scripts are available on GitHub.

Viewbotting services

For streamers unable or unwilling to create their own view bot scripts, there are dozens of viewbotting services that offer thousands of views for low prices, often with free trials available to entice curious streamers.

Viewbotting services offer thousands of bots that will view your stream for a monthly cost as low as $10. These services offer a higher level of control than rudimentary scripts, such as the capability to add or remove viewers instantly, set viewer join and leave intervals, and choose the region from which “viewers” originate. Using view bot services is against the terms of service of major streaming platforms like Twitch and YouTube.

Streambot is a popular viewbotting service. Such services violate the TOS of most streaming platforms.

Malicious viewbotting

While it is a different use of the same technique, malicious viewbotting is also worth mentioning.

Malicious viewbotting is when people send bots to a stream other than their own, in an attempt to get the streamer banned from Twitch, or simply lower their credibility. This technique is so common that Twitch provides a FAQ for streamers to deal with it.

Malicious viewbotting is unlikely to result in a ban for the targeted streamer, but Twitch recommends reporting instances right away.

Perhaps the largest impact of malicious viewbotting is that it creates an attribution problem for Twitch, which may have difficulty discerning whether a user is viewbotting their own channel, and deserving of a ban, or is the victim of malicious viewbotting.

How to spot view bot use

Spotting unsophisticated viewbotters is not particularly difficult if you know what to look for. Typically, a simple view bot is just that: a bot that views videos. As such, it won’t increase any other metrics, such as engagement, chat activity, or subscriber numbers. If a streamer or content creator routinely gets high viewership numbers with relatively low engagement and subscribership, it’s a fair sign that foul play could be involved. A formulaic, generic chat with repeated comments like “awesome stream” and “love this” is also a dead giveaway.
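
A crude version of that heuristic can be sketched as follows; the thresholds and channel numbers are invented for illustration, not drawn from any platform’s actual detection rules:

```python
def looks_botted(views, chat_messages, new_followers,
                 min_views=1_000, max_engagement_ratio=0.005):
    """Flag streams with high viewership but disproportionately low
    engagement -- a possible (not conclusive) sign of view bots."""
    if views < min_views:          # small streams are too noisy to judge
        return False
    engagement = (chat_messages + new_followers) / views
    return engagement < max_engagement_ratio

print(looks_botted(views=5_000, chat_messages=8, new_followers=2))     # True
print(looks_botted(views=5_000, chat_messages=150, new_followers=40))  # False
```

A ratio check like this produces false positives on genuinely quiet audiences, which is why platforms combine it with network-level signals rather than relying on engagement alone.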

How viewbotting hurts go-to-market efforts

Not only is viewbotting a dishonest way to get ahead on streaming platforms and unfair to other streamers who worked hard to build an audience, it also degrades the value of the platform for advertisers.

Like other fake engagement techniques, such as click fraud and lead-gen fraud, viewbotting is a form of Ad Fraud. Essentially, viewbotting steals from advertisers who paid for placement that was ostensibly supposed to reach real humans, not bots.

Under the Twitch Partner Program, high-profile streamers can earn revenue from ads displayed on their channel in the form of a commission for every ad impression. If those impressions are generated by bots, not real people, then the ad budget used to create and place those ads has essentially been wasted.

If it costs $2,000 for 100,000 impressions on YouTube, and 15-20% of those impressions are fake, that’s $300-400 wasted. Considering most YouTube advertising campaigns measure impressions in the millions, the cost of those fake impressions can add up fast. Budget is also spent faster when it’s wasted on fake impressions, causing brands to potentially miss out on opportunities with genuine customers.
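
Working through that arithmetic in code (the campaign figures are hypothetical):

```python
# Hypothetical campaign: $2,000 buys 100,000 impressions ($20 CPM).
cost = 2_000.0
impressions = 100_000

for fake_rate in (0.15, 0.20):
    fake_impressions = int(impressions * fake_rate)
    wasted = cost * fake_rate
    print(f"{fake_rate:.0%} fake -> {fake_impressions:,} bot impressions, "
          f"${wasted:,.0f} of spend wasted")
```

Scale the same rates to a million-impression campaign and the waste grows to $3,000-4,000 before any remarketing dollars are counted.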

And if advertisers aren’t vigilant in weeding out bot impressions, it’s possible to waste even more budget on remarketing efforts, sending good money after bad bots to retarget “viewers” that never existed in the first place.

But the waste doesn’t end there. False impressions also make their way into advertisers’ data, skewing their metrics and tainting the data used to make further advertising decisions.

[See how CHEQ secures your data, on-site conversion, and paid marketing from bots, fake traffic, and malicious users. Schedule a 30-minute demo today!]

Can you block viewbotting?

Unfortunately, since viewbotting takes place on streaming services, such as Twitch and YouTube, it’s up to those services to fight viewbotting and hold fraudsters accountable. Luckily, major streaming platforms are taking the problem very seriously, and have taken steps to ensure quality viewership and punish those who would cheat to get ahead.

Twitch actively monitors for viewbotting, and will indefinitely ban any user caught using view bots to boost their own streams. And though malicious viewbotting complicates this policy, the company has had some success: Twitch banned over 15 million bot accounts last year.

The live streaming giant has also successfully sued multiple creators of view bots, putting pressure on companies operating in the quasi-legal bot services market.

And while you can’t block viewbotting on the native platform, you can block bot traffic from Google, Facebook, or YouTube campaigns using go-to-market security tools.

The high cost of ad fraud and fake traffic

This kind of invalid traffic is a far-reaching problem that has grown to affect every corner of the internet and shows no signs of slowing down.

In our recent research report, we found that, on average, 22.3% of unique site visits across all industries are bots. But the gaming industry blew that average out of the water with an astounding 66% invalid traffic rate on unique site visits.

66 percent of unique site visits on gaming websites are fake traffic.

Audience pollution at that level can have major impacts on marketing data, efficiency, and even the bottom line. If 66% of site visitors are bots, then two-thirds of your marketing data is contaminated, and it’s impossible to make good marketing decisions based on bad data.

And beyond wasting time, those fake users, views, and ad clicks have real financial consequences. Advertisers are estimated to lose $68 billion in wasted ad spend by the end of 2022, according to Juniper Research.

Protect Your Pipeline with Go-to-Market Security

Considering that 41% of all web traffic is invalid traffic, when you advertise on any video-based platform–or any web-based platform, for that matter–chances are good that your ads will be exposed to bots.

Knowing which impressions or clicks are real potential customers, and which are bots, fraudsters, or bad actors is critical to the integrity of your campaigns, data, and go-to-market efforts writ large.

To fight the fake web, marketers need tools that provide end-to-end protection of their go-to-market efforts and customer journey. That’s where CHEQ Paradome can help.

Paradome’s bot mitigation engine uses advanced fingerprinting and multi-layered security models to monitor and block bot activity, so you can monitor the authenticity of all traffic entering your funnels and pipelines, whether via paid marketing campaigns, organic search, affiliate programs, or any other method.

With CHEQ Paradome, marketers can be confident of the integrity of all data flowing through their advertising audiences, campaign analytics, CRM, DMP, and CDP segments, and business intelligence systems, and rest assured that decisions are made on real, solid data.

Within two months of partnering with CHEQ Paradome, the UserWay team reduced their unqualified traffic by 56% and redirected their ad spend towards real businesses seeking solutions for digital accessibility.

To learn more, and find out what’s real (and what’s not) in your funnel, get started with a demo of CHEQ Paradome today.