Mobile advertising has grown tremendously in recent years and has become a huge business, with hundreds of billions of dollars spent each year. According to data collected by WARC (World Advertising Research Center), worldwide digital ad spend amounted to $333.5 billion in 2019 and is expected to keep rising in the years to come.

Unfortunately, wherever there is money, there will always be those who try to get their hands on it. Ad fraud is expected to become the second-largest source of criminal income by 2025, behind only drug trafficking. This is a worrying statistic: it already costs the digital advertising industry around $39 billion per year (and counting), according to the WFA (World Federation of Advertisers).

Just like the advertising industry itself, fraud is growing fast, with fraudsters extending their reach into more industries every day. What's more, we not only see an increase in fraudulent activity as such; we also see fraudsters reacting much faster. Where in summer/autumn 2018 it took them two months on average to respond to the protection solutions countering their attacks, it now takes them only 2-3 days. This development points to increasingly large operations backed by data-savvy professionals.

Now, back to the actual fraud. When we talk about fraud, we need to understand that it doesn't have just one face. Ad fraud is a multifaceted creature spreading its influence into different areas and industries, and each industry has its own "typical" strategies that fraudsters use to make money. Let's have a look at the most "popular" types of fraud in performance marketing:

  • Bot fraud
  • Invalid traffic
  • Spamming
  • Fake clicks
  • Ad stacking

There are, of course, many more, but these appear to be the most common. Given that bot traffic consistently ranks near the top of this list, we would like to explore it in more detail in this article.

What are bots?

Mobile fraud bots are clever programs, often running on servers (or mobile devices), that attempt to simulate specific tasks like ad clicks, in-app engagement and installs while disguising themselves as legitimate users. There are also other forms of bots, for example malware located on a user's device. These malware programs seek to generate fraudulent clicks, fake ad impressions and in-app engagement, and can even go as far as faking in-app purchases without the user's knowledge.

Distinguishing bots from humans is a complex task. Bot developers have learnt to create sophisticated and elaborate software, quickly adopting new technologies and deliberately designing their bots to bypass fraud detection solutions.

How they work:

Server-based bots usually operate via emulators (device-simulating software), trying to reproduce an active user's behaviour: interacting with ads, completing app installation funnels and, in some cases, even going as far as deep in-app events like a purchase or a subscription.

These programs observe and learn user behaviour patterns and then apply them in their activity whilst trying to go under the radar of fraud protection solutions.

The most recently developed bots are already almost indistinguishable from human usage, and detecting them is impossible without genuine bot-detection expertise. They have prompted the need for tools and solutions that can determine the intent of traffic, instead of simply analysing traffic volume and known bot signatures.
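To make that contrast concrete, here is a minimal Python sketch of the difference between matching known bot signatures and scoring the behavioural "intent" of a session. Everything in it is hypothetical: the signature list, the feature names and the thresholds are illustrative assumptions, not how any real detection product works.

```python
# Illustrative contrast: signature lookup vs. a simple behavioural score.
# All signatures, thresholds and feature names below are made up.

KNOWN_BOT_SIGNATURES = {"HeadlessChrome/91.0", "EmulatorUA/1.2"}  # hypothetical

def matches_signature(user_agent: str) -> bool:
    """Old-style check: flag only traffic carrying a known bot signature."""
    return user_agent in KNOWN_BOT_SIGNATURES

def intent_score(events: list) -> float:
    """Behavioural check: score how scripted a session's timing looks.

    Illustrative heuristics: humans show variable gaps between taps and
    rarely fire events faster than ~300 ms apart; scripts are metronomic.
    """
    if len(events) < 2:
        return 0.0
    gaps = [b["t"] - a["t"] for a, b in zip(events, events[1:])]
    too_fast = sum(1 for g in gaps if g < 0.3) / len(gaps)
    mean = sum(gaps) / len(gaps)
    variance = sum((g - mean) ** 2 for g in gaps) / len(gaps)
    # Many sub-300 ms gaps plus near-zero variance -> likely scripted.
    return too_fast + (1.0 if variance < 0.01 else 0.0)

# A bot with a clean user agent and perfectly even 100 ms clicks:
session = [{"t": 0.0}, {"t": 0.1}, {"t": 0.2}, {"t": 0.3}]
print(matches_signature("Mozilla/5.0"))  # evades the signature list
print(intent_score(session) > 1.0)       # but its behaviour flags it
```

The metronomic click stream sails past the signature list yet fails the behavioural check, which is the shift the paragraph above describes; production systems naturally use far richer features and models than this toy score.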

Moving forward, we expect the next generation of bots to make extensive use of artificial intelligence (AI), making them even harder to spot. To counter them, the "good guys" will need to step up their game and develop AI-based detection algorithms – this is the only way to really fight bot fraud in the near future.

Bot distribution:

Furthermore, we now observe a trend where bots are distributed in increasingly elaborate ways to escape detection. Traditional security solutions mostly rely on IP reputation, assuming that any malicious activity from an IP address means that all activity from that IP address is likely to be hostile. Therefore, IP blacklists were created and bots operating from the blacklisted IPs were blocked.
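The traditional approach can be sketched in a few lines of Python (illustrative only; the addresses come from documentation-reserved IP ranges):

```python
# Minimal sketch of traditional IP-reputation blocking: one malicious
# hit from an address puts it on a blacklist, and every later request
# from that address is rejected.

blacklist = set()

def report_malicious(ip: str) -> None:
    """Record an address once any malicious activity is seen from it."""
    blacklist.add(ip)

def allow_request(ip: str) -> bool:
    """Admit a request only if its source IP has a clean reputation."""
    return ip not in blacklist

report_malicious("203.0.113.7")        # datacentre IP caught once
print(allow_request("203.0.113.7"))    # all further traffic from it is blocked
print(allow_request("198.51.100.42"))  # an unlisted IP passes unchallenged
```

One malicious hit condemns an address for good, which works only as long as bots keep reusing the same IPs.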

Today, bots are more and more often distributed via residential IPs, which benefit from excellent reputations, making it very hard to distinguish requests made by bots as opposed to the ones made by real users. This development means that IP-based blocking tactics are no longer effective as bot operators can now easily and cheaply rotate through thousands or even millions of different IPs.

How to protect yourself

The description above shows very clearly that bots are now perfectly capable of mimicking human behaviour, which calls for drastic changes in the way we approach bot detection. And this is not necessarily a bad thing. The "good guys" are as intelligent and ingenious as the "bad guys" (if not more so), and they come up with solutions that not only detect existing fraud but also find ways to prevent it from ever reaching its destination.

There are already solutions on the market that make increasing use of AI, and they will only get better and smarter. Organisations are well served by dedicated fraud detection and protection solutions, as they usually hold large volumes of data that could be at risk. Do your research and see what is suitable for your organisation. If you need guidance, feel free to contact us and we will be happy to help.