Protecting Applications Against Malicious Bots

With the growth of the Internet, businesses have become increasingly reliant on their websites as a means of interacting with current and potential customers. Instead of interacting in person or over the phone, the average consumer will often visit an organization’s website as their first (and possibly only) point of contact with that organization. On today’s Internet, consumers can make online purchases, manage their bank accounts, and take a variety of other actions via web applications.

One significant threat to application security is the malicious bot. Bots, automated scripts designed to interact with an application, can have benign or malicious intentions. Identifying and blocking malicious bots is important for protecting applications against a variety of attacks; however, it is often difficult to differentiate bot traffic from human traffic and benign bots from malicious ones.

Introduction to Bots

A bot is an automated program designed to interact with an application or a human being. Bots can be built for either benign or malicious purposes. An example of a benign bot is the chatbot that pops up in the bottom corner of many websites, offering to answer questions or provide assistance. In many cases, these windows aren’t designed to connect the visitor directly to a customer service agent.

Instead, the chatbot uses machine learning and pre-programmed responses to try to answer the customer’s question before passing them on to a human agent. This allows the organization’s customer service department to be more scalable since “free” bots deal with the majority of questions and human operators are only needed for more complex support issues.
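
As a rough illustration, a simple version of this pattern can be sketched as keyword matching against pre-programmed answers, with a fallback to a human agent. The topics, canned responses, and matching logic below are hypothetical and not a description of any particular chatbot product:

    # Minimal sketch of a rule-based chatbot with human escalation.
    # Topics and canned responses are illustrative only.
    FAQ_RESPONSES = {
        "hours": "We are open Monday through Friday, 9am to 5pm.",
        "refund": "Refunds are processed within 5-7 business days.",
        "shipping": "Standard shipping takes 3-5 business days.",
    }

    def answer(question: str) -> str:
        """Return a canned answer if the question mentions a known topic;
        otherwise escalate the conversation to a human agent."""
        text = question.lower()
        for topic, response in FAQ_RESPONSES.items():
            if topic in text:
                return response
        return "Let me connect you with a customer service agent."

    print(answer("What are your hours?"))        # handled by the bot
    print(answer("My order arrived damaged."))   # escalated to a human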

Not all bots are designed for benign purposes. Cybercriminals have embraced automation to make their attacks more scalable and effective. Instead of manually probing a web page for vulnerabilities or testing a set of user credentials to determine if they are legitimate, a cybercriminal automates the process. With access to a botnet of compromised machines, which are increasingly easy to acquire thanks to the proliferation of insecure Internet of Things (IoT) devices and cheap cloud computing, the cybercriminal doesn’t even need to pay for the infrastructure required to run their bots. They can also launch their attacks from a wide variety of IP addresses, making them more difficult to identify and block.

The Growth of the Bot

Over time, bot traffic has made up an increasing percentage of traffic on business networks. A recent study determined that 45% of all network traffic aimed at a web application or service is likely bot traffic. However, not all of that bot traffic is malicious: about 17% of total traffic is associated with benign bots, while a larger share (28%) is malicious bot traffic.

These malicious bots are not all created equal. In general, bots can be classified into four generations:

  • Script bots (16%): The simplest type of bot, easily detectable
  • Headless browsers (46%): Bots that can store cookies and run JavaScript code
  • Human-like bots (23%): Bots that attempt to mimic human keystrokes and mouse movements but lack “humanlike randomness”
  • Distributed bots (15%): Bots that can realistically mimic mouse movements and use many different User-Agents (identifiers for different browsers/programs) to mimic different users

In the beginning, bots were easy to differentiate from humans since they lacked the randomness that characterizes human behavior, mouse movements in particular. Over time, however, bots have become more sophisticated, making them more difficult to detect and block.
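
To make these distinctions concrete, the sketch below scores an incoming visitor against a few of the signals described above: the User-Agent string, cookie and JavaScript support, and the variability of mouse movement. The signal names, thresholds, and generation labels are illustrative assumptions, not the behavior of any specific bot-management product:

    import statistics

    # Hypothetical request signals a bot-management layer might collect.
    # Field names and thresholds are illustrative assumptions.
    def classify_visitor(user_agent: str, accepts_cookies: bool,
                         ran_javascript: bool, mouse_deltas: list[float]) -> str:
        """Roughly map a visitor onto the bot 'generations' described above."""
        known_automation = ("curl", "python-requests", "scrapy", "wget")
        if any(tool in user_agent.lower() for tool in known_automation):
            return "generation 1: simple script bot"

        if not (accepts_cookies and ran_javascript):
            return "likely generation 1-2: no cookie/JavaScript support"

        if not mouse_deltas:
            return "possible generation 2: headless browser (no pointer activity)"

        # Human mouse movement tends to show irregular, varied step sizes;
        # scripted movement is often suspiciously uniform.
        variability = statistics.pstdev(mouse_deltas)
        if variability < 1.0:
            return "possible generation 3: scripted, low-randomness movement"

        return "probably human (or a generation 4 bot mimicking one)"

    print(classify_visitor("python-requests/2.31", False, False, []))
    print(classify_visitor("Mozilla/5.0", True, True, [3.2, 11.8, 1.4, 22.0, 6.5]))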

The Challenge of Bot Detection

Traffic to an organization’s web application or service falls into three main categories: human traffic, benign bot traffic, and malicious bot traffic. Human and benign bot traffic should be allowed and even encouraged, while malicious bot traffic should be identified and blocked. However, it is not always easy to differentiate correctly between these three types.

Bot detection is an active area of research. Since bots can accurately mimic human behavior, websites have turned to increasingly difficult CAPTCHAs, such as Google’s image-recognition challenges, to differentiate human traffic from bot traffic. However, it has been demonstrated that machines are often as good as, if not better than, humans at solving these challenges, meaning that CAPTCHAs have only limited effectiveness for detecting and blocking automated visitors to a service.

Bots and Application Security

Bots account for 45% of the traffic to many businesses’ web applications, and over a quarter of all traffic to these applications comes from malicious bots. These malicious bots serve a variety of purposes, including launching Distributed Denial of Service (DDoS) attacks, scanning applications for exploitable vulnerabilities, and testing stolen or breached credentials to determine whether they are valid for a given application.

As bots have grown more sophisticated, many organizations struggle to differentiate between benign and malicious bot traffic, and even between bot and human traffic. As a result, these organizations may unintentionally block legitimate traffic or allow malicious connection attempts to reach their web applications. Since it is difficult to determine whether traffic originates from a human or a bot, organizations need to combat bot-driven threats to their applications by focusing on the potential impacts of an attack rather than its source.

For example, by deploying a strong DDoS protection solution, an organization combats one of the main threats associated with bots. By applying behavioral analytics to account activity, an organization can identify when credentials are being used maliciously. As bots become more sophisticated and the coming “fifth generation” of AI-enabled bots becomes a reality, organizations will become less and less able to correctly identify bots. Protecting the organization requires a focus on detecting the effects of attacks by malicious bots rather than their delivery mechanism.
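
As a minimal sketch of that kind of impact-focused monitoring, a service could flag likely credential-stuffing behavior by tracking failed logins per source over a sliding time window. The window size, threshold, and function names below are assumptions chosen for illustration, not a reference to any specific product:

    import time
    from collections import defaultdict, deque

    # Sliding-window counter of failed logins per source IP.
    # The 5-minute window and 20-failure threshold are illustrative values.
    WINDOW_SECONDS = 300
    FAILURE_THRESHOLD = 20
    _failures = defaultdict(deque)

    def record_failed_login(source_ip, now=None):
        """Record a failed login and return True if the source should be
        flagged as a likely credential-stuffing bot."""
        now = time.time() if now is None else now
        window = _failures[source_ip]
        window.append(now)
        # Drop failures that have aged out of the window.
        while window and now - window[0] > WINDOW_SECONDS:
            window.popleft()
        return len(window) >= FAILURE_THRESHOLD

    # Example: a single source hammering the login endpoint gets flagged.
    for attempt in range(25):
        flagged = record_failed_login("203.0.113.7", now=1000.0 + attempt)
    print("flagged:", flagged)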