Computers are an increasingly vital part of everyday life because they accelerate and facilitate the processing of information -- making lives and labor easier. Technological advancements, however, have also led to the creation of Internet bots.
Some of you may be asking, what is a bot, anyway? These software applications empower their users to quickly and efficiently complete various repetitive online and offline tasks.
Similar to other technologies, bots can be used for both beneficial and harmful purposes. In this post, we’ll examine some of the good bots and the bad bots. Then we’ll explore a number of measures that performance marketers can take to defend against bad bots that skew campaign results, distort data, and waste ad spend, eating into bottom lines.
Good internet bots can be extremely beneficial to your brand. Legitimate bots lie behind the success of many search engines, including Google and Yahoo. This is because search engines usually create their rankings based on information collected from bots, also known as crawlers, spiders, or spiderbots.
Crawlers allow companies such as Google to collect information on roughly 30 trillion individual webpages. To visualize that, if a web page were represented by an actual page of paper, a stack of those papers would be nearly 2,000 miles tall.
The collected information amounts to about 100 million gigabytes. Collecting this much data over that many pages is something individuals simply could not accomplish given their physical and time limitations, making these bots extremely necessary.
Another important function of good bots is to identify copyright violations. Bots launched by copyright holders allow them to quickly discover illegal copies of their content. Major brands often utilize the power of bots to detect unlawfully uploaded books, music, videos, images, and other copyrighted materials.
Bots can scan large volumes of data sources and collect specific information based on predefined criteria. For example, bots are able to simultaneously monitor thousands of websites containing information about weather, news, sports, or road traffic.
Bots can automate not only the collection of information and data sourced from web pages, but can even engage in customer support. More specifically, chatbots are able to automatically answer customer enquiries. Chatbots usually rely on a predefined database of phrases to reply to messages sent to them.
Some chatbots, like Amazon’s Alexa and Apple’s Siri, use artificial intelligence to make increasingly sophisticated determinations, making their communication with humans more effective and helpful.
As with any effective invention, bad actors can re-tool bots for malicious purposes in a variety of ways. Here are just a few types of bad internet bots to look out for:
Bad bots can fill out forms with the aim of accessing restricted content and making it publicly available. This, in turn, can cut into the revenue of brands that rely on premium content or subscription models.
Spam bots will collect email addresses from internet sources and send unsolicited spam emails, links, and other kinds of spam. For example, these spam bots can fill out comment forms and flood inboxes.
Spy bots contain spyware that collects data about a person or computer without permission. They are primarily used for data collection and surveillance purposes, and can be difficult to detect, hurting user experience and skewing analytics for marketers.
Zombie computers are compromised machines that hackers can access and control from anywhere in the world; a collection of them under one attacker’s control is called a botnet. While hackers control these computers, their users usually have no idea.
Bad chatbots emulate human interaction by engaging in conversation. These chatbots attempt to acquire personal information and often reside on service websites like dating sites, messaging apps, or chat rooms.
A more costly way nefarious individuals can utilize click bots is to generate impressions and/or clicks on advertisements, like digital banners and videos. This forces brands and performance marketers to pay for engagement on advertisements that were never actually seen by human eyes.
If you’ve used pay-per-click ads, you’ve probably run into click bots. These bots flow through the web, clicking on ads without converting, racking up higher ad spend. A click bot’s only agenda is to click on your ads, making its creators money while costing you money.
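To make that click-without-converting pattern concrete, here is a minimal Python sketch of one common heuristic (not any specific vendor’s method): flag traffic sources that deliver plenty of clicks but essentially no conversions. The thresholds are illustrative assumptions, not recommended values.

```python
def suspicious_sources(clicks_by_source, conversions_by_source,
                       min_clicks=100, max_cvr=0.001):
    """Return traffic sources whose conversion rate is suspiciously low.

    Hypothetical thresholds: need at least `min_clicks` before judging,
    and flag anything converting at or below `max_cvr` (0.1%).
    """
    flagged = []
    for src, clicks in clicks_by_source.items():
        if clicks < min_clicks:
            continue  # not enough data to judge this source
        cvr = conversions_by_source.get(src, 0) / clicks
        if cvr <= max_cvr:
            flagged.append(src)
    return flagged
```

In practice this is only a first-pass signal; legitimate upper-funnel traffic can also convert poorly, which is why flagged sources belong on a watchlist rather than an instant blocklist.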
Identifying malicious internet bots, of course, is only half the battle. How brands and performance marketers actually thwart the bad bots they’ve identified is the difference between a valuable campaign that drives results and leads versus a campaign that spends money with little or no return.
CAPTCHA stands for “Completely Automated Public Turing test to tell Computers and Humans Apart.” Most bots can’t solve today’s advanced CAPTCHA challenges, though CAPTCHAs can add friction for legitimate users. By placing a CAPTCHA on your site, marketers can make it harder for bots to “do their job.”
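One way to limit that friction is to challenge only visitors whose behavior looks automated, so most humans never see a CAPTCHA at all. Here is a minimal Python sketch of that idea using a sliding-window request counter; the window size and threshold are illustrative assumptions.

```python
import time
from collections import defaultdict, deque

# Hypothetical threshold: more than 10 requests in 10 seconds looks automated.
WINDOW_SECONDS = 10
MAX_REQUESTS = 10

_recent = defaultdict(deque)  # ip -> timestamps of recent requests

def should_challenge(ip, now=None):
    """Return True if this visitor should be shown a CAPTCHA."""
    now = time.time() if now is None else now
    q = _recent[ip]
    q.append(now)
    # Drop timestamps that have fallen outside the sliding window.
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()
    return len(q) > MAX_REQUESTS
```

A real deployment would pair a trigger like this with an actual CAPTCHA service’s challenge and server-side verification step.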
While geo-targeting can help brands choose which locations see ads, it’s not foolproof. Marketers should consider blocking IP addresses from countries and locations they don’t serve. Additionally, one can block the known addresses of bad bots to keep them from coming back for another attack.
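A simple IP blocklist can be sketched with Python’s standard `ipaddress` module. The ranges below are reserved documentation-only addresses standing in for real bad-bot hosts or unserved regions; a real deployment would load its list from a maintained geo-IP or threat feed.

```python
import ipaddress

# Hypothetical blocklist: ranges for regions you don't serve
# plus known bad-bot addresses (documentation ranges used here).
BLOCKED_NETWORKS = [
    ipaddress.ip_network("203.0.113.0/24"),   # stands in for known bad-bot hosts
    ipaddress.ip_network("198.51.100.0/24"),  # stands in for an unserved geography
]

def is_blocked(ip_str):
    """Return True if the visitor's IP falls inside any blocked range."""
    ip = ipaddress.ip_address(ip_str)
    return any(ip in net for net in BLOCKED_NETWORKS)
```

Checking whole CIDR ranges rather than individual addresses matters because bots rotate through neighboring IPs within the same network.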
A lot of businesses make the mistake of permanently blacklisting accounts flagged as fraudulent. While this might seem to work initially, a watchlist is the better option.
Perhaps an IP address is setting off a red flag now but caused no issues previously. Companies may not want to write it off completely; malware, for example, may have infected the computer of an otherwise good prospect. By keeping an eye on accounts like this, marketers can allow them back into workflows once the issues have been corrected.
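The watchlist idea above can be sketched in a few lines of Python: instead of banning a flagged IP outright, record it and re-admit it after a clean probation period. The probation length and in-memory storage are assumptions for illustration; a production system would persist this state and confirm the cleanup rather than relying on elapsed time alone.

```python
import time

class Watchlist:
    """Track flagged visitors without permanently blacklisting them."""

    def __init__(self, probation_seconds=86400):  # hypothetical 24h probation
        self.probation = probation_seconds
        self._flagged = {}  # ip -> time of last red flag

    def flag(self, ip, now=None):
        self._flagged[ip] = time.time() if now is None else now

    def is_suspect(self, ip, now=None):
        """Still on probation? Route to extra checks rather than blocking."""
        now = time.time() if now is None else now
        ts = self._flagged.get(ip)
        if ts is None:
            return False
        if now - ts > self.probation:
            del self._flagged[ip]  # clean period elapsed; back into normal workflows
            return False
        return True
```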
By continually testing site speed, media planners and performance marketers can recognize when sites slow down. Slow site speed can often be a sign that bad bots are hammering a site or have infected its code. However, without continual monitoring or routine testing, there’s no benchmark against which to measure current speed.
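One way to build that benchmark is to keep a history of routine page-load measurements and flag any reading that falls well outside it. A minimal Python sketch, assuming load times in milliseconds and a hypothetical three-sigma alert threshold:

```python
from statistics import mean, stdev

def speed_alert(history_ms, latest_ms, sigma=3.0):
    """Flag a slowdown when the latest page-load time exceeds the
    historical mean by more than `sigma` standard deviations.

    `history_ms` is the benchmark built up from routine testing;
    `sigma=3.0` is an illustrative assumption, not a recommendation.
    """
    baseline = mean(history_ms)
    spread = stdev(history_ms)
    return latest_ms > baseline + sigma * spread
```

A slowdown alert doesn’t prove bot activity on its own, but it tells you when to start looking.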
Traffic filtration helps block bad bots before they interact with a site. Filtering fraudulent traffic before it hits the site saves performance marketers time and money by ensuring that ads are appearing to real people, not just bots.
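As an illustration of pre-site filtering, here is a deliberately naive Python sketch that scores each request on a few hypothetical signals (user-agent tokens, request rate, cookie support) before serving it. Commercial filtration products weigh far more signals than this.

```python
# Tokens that commonly appear in self-identified automation user agents.
BOT_UA_TOKENS = ("bot", "crawler", "spider", "headless")

def allow_request(user_agent, requests_last_minute, has_cookies):
    """Return True if the request looks human enough to serve.

    The rate limit (60/min) and cookie check are illustrative
    assumptions; real filters use many more behavioral signals.
    """
    ua = (user_agent or "").lower()
    if any(token in ua for token in BOT_UA_TOKENS):
        return False  # self-identified automation
    if requests_last_minute > 60:
        return False  # inhuman request rate
    if not has_cookies:
        return False  # many simple bots never keep cookies
    return True
```

Note that rules this simple also block well-behaved crawlers like search-engine spiders, so a real filter would whitelist the good bots discussed earlier.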
Based on hundreds of data points per visitor, Anura can distinguish clicks generated by bots, malware, and human fraud from clicks generated by legitimate website visitors. Anura achieves this through machine learning functionality that validates lead conversions against the traffic data.
Anura does not rely on ineffective vanity metrics, such as NHT (non-human traffic) and viewability; it delivers a black-and-white determination of whether web traffic is real or fraudulent.