
Why Is Twitter's Ad Fraud So High?


With the recent news involving Elon Musk’s offer to buy Twitter being put on hold because of worries about fake accounts on the platform, now may be a good time to talk about the problem of ad fraud on social media, fake social media accounts, and what you can do to protect your business.


How Does Ad Fraud Affect Social Media?

Bots—autonomous programs designed to carry out specific tasks on behalf of a person—are an ever-present part of the internet. In fact, some estimates state that the majority of the traffic on the internet comes from bots (64%) instead of humans (36%). Many of the bots on the internet are benign—such as the web crawlers that Google uses to find new web pages and index them for online search results.

However, just as with any tool, there are those who would use bots for nefarious purposes. Take, for example, the social media bots used to run fake social media accounts.

Social bots running fake accounts can harm a company’s social media presence and advertising efforts by:

  • Driving low-quality engagement with social posts.
    Many basic bots only give likes (or another platform’s equivalent) or view posts. Even the ones that leave comments tend to leave low-quality ones like “great stuff!” or “I love this [content type]!” This can be a signal to the platform that the content created is of low quality—keeping it from being recommended to the platform’s users in their main browsing feeds.

  • Skewing the company’s social media subscriber and performance data. 
    Because social platforms often look at who subscribes to your social media channels when recommending content, having an excessive amount of bot followers can be a huge problem. For example, if a bunch of bots that also follow accounts with NSFW content are subscribed to a social media account for a company that specializes in children’s education or entertainment, that could cause the platform to associate the company with the NSFW content (or vice versa). This can result in the organization’s content being recommended to unsuitable users who won’t engage with it—further driving down the channel’s engagement statistics.

  • Draining a company’s advertising budget by clicking on social ads without providing any ROI.
    Because the bots aren’t real consumers, they’ll never convert on the ads they “click” on, but companies still end up paying for those clicks.

  • Sharing false or misleading information to hurt the company’s reputation. 
    Bots on social media can be used to spread fake news and negative reviews/comments on a company’s profile. Real users might see this false information and assume that it’s true—hurting a company’s marketing efforts. After a while, real users might start repeating or sharing the misinformation from the bot posts.

How Much Ad Fraud Is on Twitter?

Estimates of ad fraud on Twitter vary depending on who you ask and which profiles you’re checking. An analysis by SparkToro (a market research company) found that, of 44,058 active Twitter accounts it assessed, 19.42% were fake. The same source estimated that 23.42% of the 26.8 million active followers of Elon Musk’s Twitter account were fake.
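
To put those percentages in rough absolute terms, here is a quick back-of-the-envelope calculation that simply multiplies the figures cited above; it is an illustration of scale, not an independent estimate.

```python
# Back-of-the-envelope: scale of fake followers implied by the SparkToro
# figures cited above (illustrative only, not an independent measurement).
musk_active_followers = 26_800_000   # active followers cited above
fake_follower_rate = 0.2342          # 23.42% estimated fake

fake_followers = musk_active_followers * fake_follower_rate
print(f"Implied fake followers: {fake_followers:,.0f}")
# Implied fake followers: 6,276,560
```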

Socialmediahq reported that “the political Twitterverse is fake” and that “the median average percentage of fake Twitter followers appears to be 41 percent for political figures of all stripes and ideologies.”

MoPub, Twitter’s mobile app publishing and promotion platform meant to help monetize downloadable apps for iOS and Android devices, has a major ad fraud problem. A recent test of MoPub found that nearly 80% of the traffic it generated was fraudulent.

What About Twitter’s Claim to Have Less Than 5% Fraud?

In a CNBC article discussing Elon Musk’s decision to put the Twitter acquisition on hold pending a check for fraudulent accounts, it was noted that there were a few problems with the sampling of Twitter’s traffic that Musk proposed. As University of Washington professor Carl T. Bergstrom put it: “a sample size of 100 is orders of magnitude smaller [than] the norm for social media researchers studying this sort of thing. The biggest issue Musk would face with this approach is known as selection bias.”

In the article, CNBC shared a tweet in which Musk stated, “I picked 100 as the sample size number, because that is what Twitter uses to calculate <5% fake/spam/duplicate.” The article could not confirm whether that is accurate, noting only that Twitter declined to comment, but if it were true, the methodology behind Twitter’s test would be remarkably poor.

Why is such a small sample size a problem?

Because a sample of just 100 accounts out of hundreds of millions of active users gives you nothing resembling a fair and accurate representation of the platform’s users. A sample that small carries a huge margin of error, and if the accounts aren’t chosen truly at random, the estimate also suffers from what Professor Bergstrom called “selection bias” (also known as sampling bias).

Data from Statista puts Twitter’s monetizable daily active users (mDAU) worldwide at about 229 million in Q1 2022. A sample of 100 accounts would cover roughly 0.00004% of those users. So, if that tiny sliver just happened to be made up entirely of real accounts, the remaining 99.99996% could be riddled with fraud and you would never know it from that sample.

Imagine if the results of a national political poll in the USA were based on just 100 random people in a single state like Massachusetts, which is only 27% Republican, or Wyoming, which is only 25% Democrat (source: Pew Research). With a sample that small relative to the population of the whole country, the odds of reaching only people from one side or the other would be greatly increased.

To produce a reliable estimate and avoid selection bias, the sample for any fraud check needs to be much larger than a hundred, or even a few hundred, representative accounts. Tens or hundreds of thousands of accounts need to be tested to create a sample that can overcome the risk of bias and be considered a reliable representation of Twitter’s user base.
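
To see just how little a sample of 100 can tell you, here is a minimal sketch (not Twitter’s or SparkToro’s actual methodology) that computes the 95% confidence margin of error for an estimated fake-account rate. It assumes simple random sampling and uses a 5% rate purely for illustration, since that is the figure Twitter reports; note that it captures only random sampling error, and selection bias would make matters worse.

```python
import math

def margin_of_error(sample_size: int, assumed_rate: float = 0.05, z: float = 1.96) -> float:
    """95% confidence margin of error for a sampled proportion (normal approximation)."""
    return z * math.sqrt(assumed_rate * (1 - assumed_rate) / sample_size)

for n in (100, 1_000, 10_000, 100_000):
    points = margin_of_error(n) * 100
    print(f"n = {n:>7,}: a 5% estimate could be off by roughly ±{points:.2f} points")

# n =     100: a 5% estimate could be off by roughly ±4.27 points
# n =   1,000: a 5% estimate could be off by roughly ±1.35 points
# n =  10,000: a 5% estimate could be off by roughly ±0.43 points
# n = 100,000: a 5% estimate could be off by roughly ±0.14 points
```

In other words, with only 100 accounts, a reported rate of 5% is statistically indistinguishable from anything between roughly 1% and 9%, and that is before accounting for how those 100 accounts were chosen.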

Why Is Ad Fraud on Twitter So High?

There are a number of reasons why fraud on any social media platform may grow. For example:

1. The Platform Could Have Insufficient Anti-Fraud Measures

When there are millions of accounts to manage on a given platform, policing them all becomes an enormous task—especially if the tools for testing accounts are lacking. Social media platforms are rarely incentivized to accurately test for fraud, either.

One of the major selling points of any social media app or platform (to advertisers) is its audience. Being able to say that they can put your ad in front of millions of potential customers is key for any social network’s ad service. So, why would they want to shrink that number by eliminating fake accounts on their platform?

Case in point: Ghost Data once tested the video-sharing app TikTok to see whether it could reliably identify fraudulent video views and other interactions generated by bots. Across tens of thousands of views, Ghost Data noted that TikTok’s “detection systems have failed to detect the views generated by the software.” TikTok is reported to have hundreds of millions of monthly users, making it a prime choice for marketers promoting their company’s goods and services.

A social network that is thinking ahead, on the other hand, has a strong reason to eliminate fake accounts: it wants to be able to advertise the best possible ROI for its advertising platform. By removing bot accounts and making sure they can’t click on marketers’ ads, the platform can ensure that companies advertising through its network get better results for each dollar spent, which encourages repeat business.

2. Because Bot Accounts Are So Easy to Set Up

You might think that setting up social media bots to manage fake profiles would be difficult. If so, then you’d be wrong. Bots are incredibly easy to acquire these days—even for people with no expertise in computer programming.

A search of the Dark Web using the Tor browser can quickly land would-be fraudsters on hidden websites that sell all of the tools they need to commit ad fraud on social media. It isn’t even all that expensive: Kaspersky notes that botnets cost an average of $0.50 per bot, so a botnet of 1,000 bots capable of running tens of thousands of fake social media accounts might cost $500 or less. That isn’t a high bar for an ambitious fraudster looking to collect money for “generating” ad clicks and app installs.

Anyone with a modest amount of coding experience could easily set up a bot that performs a basic action like interacting with a post or ad on social media, switches to a different account using a table of stolen or fake account information, and repeats the process endlessly.

Because of this, even when fake accounts created using bots are found, the fraudster can easily set up yet another botnet and keep going like nothing happened.

3. Because Bot Fraud Can Be Profitable

If someone can buy a ready-made botnet for committing social media fraud for $500, how much money could they make in a month using those bots? It depends on the platform and the advertiser being defrauded, but it can be an incredibly high number.

For example, the cost per action for a promoted tweet typically runs between $0.50 and $2 (Source: WebFX). So, 10,000 fraudulent actions could cost you between $5,000 and $20,000.

If the fraudster behind the clicks were part of an affiliate program where you paid them $0.30 per click, then those 10,000 clicks could net them $3,000. That would be six times the amount they invested in a $500 botnet to get started. If the fraudster goes undetected, then they could easily earn tens of thousands of dollars—all while pretending to help your business.
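
Putting the example figures above together, here is a minimal sketch of the economics; the botnet price, affiliate payout, and click volume are the illustrative numbers quoted in this article, not measured data.

```python
# Fraud economics using the example figures above (illustrative only).
BOTNET_COST = 500.00                # ~1,000 bots at about $0.50 each (Kaspersky estimate)
AFFILIATE_PAYOUT_PER_CLICK = 0.30   # paid to the affiliate per "click"
FAKE_CLICKS = 10_000                # fraudulent clicks generated

advertiser_cost_low = FAKE_CLICKS * 0.50   # promoted-tweet cost per action, low end
advertiser_cost_high = FAKE_CLICKS * 2.00  # promoted-tweet cost per action, high end
fraudster_revenue = FAKE_CLICKS * AFFILIATE_PAYOUT_PER_CLICK
fraudster_profit = fraudster_revenue - BOTNET_COST

print(f"Advertiser pays:  ${advertiser_cost_low:,.0f} to ${advertiser_cost_high:,.0f}")
print(f"Fraudster earns:  ${fraudster_revenue:,.0f} "
      f"(${fraudster_profit:,.0f} profit on a ${BOTNET_COST:,.0f} botnet)")
# Advertiser pays:  $5,000 to $20,000
# Fraudster earns:  $3,000 ($2,500 profit on a $500 botnet)
```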

How to Detect Fake Accounts, Influencer Fraud, and Other Social Media Fraud

So, how can you detect and stop fake accounts, instances of influencer fraud, and other fraud-related problems on social media platforms? You could try manually vetting your followers on social media, but that would be immensely difficult and time-consuming.

The best way to stop ad fraud on social media is to use an ad fraud solution that can check traffic in real time to positively identify fraud before you end up paying money for bad leads and false ad interactions on social platforms.

Anura can accurately detect bots and other forms of invalid traffic while eliminating the risk of false positives. Every time activity is flagged as fraud, you’re provided with all of the data supporting that conclusion—allowing you to confront fraudsters and cut them out of your affiliate marketing campaigns.

Why let fraudsters take your money? Start protecting your marketing budget from fraudsters now!
