Good bots vs. Bad bots

  • Bots mutate as cyber-criminals play a game of cat and mouse with security solution vendors
  • Over 40% of consumers will not make online transactions with a website that has been breached


BOTS, short for robots, have officially overrun the Internet. According to a recent report by Statista, more than half of all website visits come from bots – software applications or automated scripts that perform repetitive tasks that would be too mundane or time-consuming for humans.

While bots generate a large share of internet traffic, it’s important to note that more than half of all bots are malevolent. You may have heard of botnets too – a botnet is a collection of computers infected by bots.

While some bots are necessary to power search engines and digital assistants, others sniff out vulnerabilities, infect and control vulnerable machines, launch denial-of-service attacks, steal data and commit fraud, among other things.

In 2016, Cybersecurity Malaysia reported a whopping 27 million cyber-security cases involving botnets in the country. With more local businesses turning to e-commerce as a key marketing and sales channel, it’s important for them to ensure bad bots are stopped and good bots are facilitated to sustain revenue-generating web traffic.

Distinguishing the bad, the good & the ugly

Identifying bots is not the only challenge for website owners; telling a good bot from a bad one is just as hard. For example, a server might interact with a shopping bot unaware that it’s a fake designed to steal customers’ credit card numbers and other personal information.

If customers find out that their financial or personal information has been compromised, an online retailer could receive a substantial blow to its brand reputation.

What’s worse, bots mutate constantly as cyber-criminals play a game of cat and mouse with security solution vendors. As soon as a security vendor detects one type of bad bot, hackers come up with new ways around that protection.

Bots have become progressively more sophisticated at circumventing the detection algorithms used to uncover them. Some of the common attacks include:

  • DoS/DDoS attacks — Bots and botnets are often used to launch network-layer denial of service (DoS) and distributed denial of service (DDoS) attacks. These attacks flood a website with requests that impact performance and can even bring the site down.
  • Spam Bot attacks — Bots collect email addresses and hit them with tons of spam emails. Alternatively, they gather user names and passwords, employing these credentials to take over the account and use it to spread malware.
  • Injection attacks — Injection attacks, such as Cross-Site Scripting (XSS), insert malicious scripts into trusted websites, which in turn deliver the scripts to the victim’s browser.
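
To make the injection category concrete, here is a minimal sketch of output escaping – a standard defence against stored XSS. It uses Python's standard library; the `render_comment` helper is a hypothetical name for illustration:

```python
import html

def render_comment(user_input: str) -> str:
    # Escape HTML metacharacters so a payload such as
    # <script>...</script> is displayed as inert text rather
    # than executed in the visitor's browser.
    return "<p>" + html.escape(user_input) + "</p>"

# A bot-submitted comment carrying a script payload:
payload = '<script>steal(document.cookie)</script>'
safe = render_comment(payload)  # tags become &lt;script&gt;... and stay inert
```

Escaping on output is only one layer; real sites pair it with input validation and a Content Security Policy.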

Criminal bots often start with “reconnaissance missions” that look for unprotected computers to attack.

Bots research targets, learning what browsers and third-party apps they use to understand the environment and its vulnerabilities.

On the other hand, we have the good bots that perform vital functions on the internet. This means it’s not enough to block bots; cyber-security solutions must also facilitate good ones. Good bots include:

  • Search engine bots that crawl websites, check links, retrieve content and update indices
  • Commercial enterprise bots that crawl websites and retrieve information
  • Feed fetcher bots that retrieve data or RSS feeds that can be displayed on websites
  • Monitoring bots that monitor various performance metrics on websites
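
One common way to facilitate a good bot is the reverse/forward DNS check that search engines such as Google document for their crawlers: reverse-resolve the visiting IP address, check that the hostname belongs to the crawler's domain, then forward-resolve that hostname and confirm it maps back to the same IP. A hedged Python sketch (the allowlist and function names are illustrative, not an exhaustive list of good-bot domains):

```python
import socket

# Assumed allowlist for this example; real deployments maintain
# one entry per crawler they want to admit.
GOOD_BOT_DOMAINS = ('.googlebot.com', '.google.com')

def hostname_is_allowed(host: str) -> bool:
    # A genuine Googlebot's reverse-DNS hostname ends in
    # googlebot.com or google.com; a lookalike such as
    # "fakegooglebot.com" fails the leading-dot check.
    return host.rstrip('.').endswith(GOOD_BOT_DOMAINS)

def is_verified_good_bot(ip: str) -> bool:
    """Two-step check: reverse-resolve the IP, verify the domain,
    then forward-resolve the hostname and confirm it maps back
    to the same IP. (Requires live DNS; sketch only.)"""
    try:
        host = socket.gethostbyaddr(ip)[0]          # reverse lookup
        if not hostname_is_allowed(host):
            return False
        return ip in socket.gethostbyname_ex(host)[2]  # forward lookup
    except OSError:
        return False
```

The double lookup matters because a bad bot can trivially forge its User-Agent header, but it cannot make an arbitrary IP reverse-resolve into the crawler's domain.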

Protect web applications from bad bots without impacting performance

The good news? There are bot manager solutions that can keep ecommerce and other sites securely up and running, sustaining revenue-generating web traffic by stopping bad bots and facilitating good ones.

They also help ensure fast customer experiences by enabling ongoing monitoring and tuning of bot management policies to protect web applications without impacting performance.

At Limelight Networks, we have the Limelight Web Application Firewall (WAF) Advanced Bot Manager that is part of the Limelight Cloud Security Services architecture, which delivers a defense-in-depth strategy to protect websites from cyberattacks. This helps in several ways:

  • Protect brand reputation — Security breaches have a lasting impact on brand reputation, with more than 40% of consumers saying they will no longer make online transactions with a website that has previously been breached. Strengthen web application security by identifying and eliminating bad bots and protecting customer data from intrusion.
  • Keep customers coming back for more — Consumers have higher engagement with web sites that offer faster performance. Improve user experience by blocking resource-draining bots and providing the fastest online experiences.
  • Defend against emerging security threats — Ongoing monitoring and tuning of bot management policies ensures an optimal security profile to protect web applications against new and emerging threats.

A solid bot manager can help detect bots that are infecting and controlling vulnerable machines. Once malicious bots find a vulnerable compute resource, they can infect that machine and report back to a command-and-control (CnC) system on the internet.

The CnC system then uses the victim’s compute resource to carry out various automated tasks. The compute resources most often compromised and conscripted into botnets are home internet routers, connected cameras, and other Wi-Fi-enabled home internet devices.

That’s not all. Once a bot has infected a host machine, it can steal personal and private information such as credit card numbers or bank credentials and send them back to the hacker.

These attacks can damage brand reputation, as big brands like LinkedIn have learned. In 2016, the professional networking company suffered a huge bot attack that resulted in the loss of its members’ personal data – as many as 400 million people worldwide may have been affected.

Data thieves also use bots for brute force attacks, in which they automatically attempt thousands of potential username/password combinations until they find the right one to break into a website and wreak havoc.
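
A standard first line of defence against such brute force attempts is rate limiting failed logins per source IP. A minimal sliding-window sketch in Python (the class name and thresholds are assumptions for illustration, not any vendor’s defaults):

```python
import time
from collections import defaultdict, deque

class LoginThrottle:
    """Sliding-window throttle: block a source IP after too many
    failed logins within a short interval. Illustrative sketch;
    production systems also add lockouts, CAPTCHAs and alerting."""

    def __init__(self, max_attempts=5, window_s=60.0):
        self.max_attempts = max_attempts
        self.window_s = window_s
        self.failures = defaultdict(deque)  # ip -> timestamps of failed logins

    def record_failure(self, ip, now=None):
        self.failures[ip].append(time.time() if now is None else now)

    def is_blocked(self, ip, now=None):
        now = time.time() if now is None else now
        q = self.failures[ip]
        while q and now - q[0] > self.window_s:  # expire old failures
            q.popleft()
        return len(q) >= self.max_attempts
```

Even a modest threshold like this turns a thousands-per-minute credential-guessing run into a handful of attempts per window, while leaving a human who mistypes a password once or twice unaffected.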

Not only that, bots also target content publishers and ecommerce sites: web scraping bots steal, exploit and sometimes republish content without authorization.

For example, online retailers display prices, inventory availability, product reviews, custom photography, and product descriptions to educate customers and encourage them to purchase. Competitors might use content gleaned from web scraping to undercut prices and attract customers.

Distinguish bots from humans without demanding further actions from the end-users

Because bots are automated scripts, many bot protection methods start by determining whether the entity requesting a connection or accessing/posting content is human.

Developers have created human interaction challenges, which distinguish humans from bots by observing behaviours indicative of human or bot traffic.

The system monitors events such as where the mouse is going, where in a box the user is clicking, how much time the user spends on a site or a page, and other activities, and uses those observations to determine the source of the activity. For example, a bot might click the exact same pixel in a search box every time – something a human would never do.
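
That “same pixel every time” signal can be reduced to a toy heuristic: measure the spread of recorded click coordinates and flag sources with near-zero jitter. A deliberately simplified Python sketch (the threshold is an assumption; real bot managers combine many such signals):

```python
from statistics import pstdev

def looks_automated(click_points, min_spread=1.0):
    """Flag a visitor whose clicks show near-zero positional jitter.
    click_points is a list of (x, y) pixel coordinates. Toy heuristic
    only: humans jitter by a few pixels, a naive bot does not."""
    if len(click_points) < 3:
        return False  # too little evidence either way
    xs = [p[0] for p in click_points]
    ys = [p[1] for p in click_points]
    return pstdev(xs) < min_spread and pstdev(ys) < min_spread

bot_clicks = [(120, 48)] * 10                              # same pixel every time
human_clicks = [(118, 45), (124, 50), (121, 47), (119, 52)]  # natural jitter
```

Of course, sophisticated bots add randomised jitter to defeat exactly this check – which is why the article pairs behavioural signals with the machine-based challenges described next.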

Today, JavaScript challenges and device fingerprinting can be used to distinguish bots from humans without demanding any action by the user or interfering with normal page operation. These solutions do not negatively impact page latency or load times.

Having a variety of bot detection mechanisms that include human interaction challenges (CAPTCHA, behavioural usage patterns) as well as machine-based challenges (JavaScript, device fingerprinting, traffic shaping) is the optimal way to separate good bots from bad bots.

Bot management: Best practices

In addition to managing bot traffic, bot managers such as the Limelight WAF Advanced Bot Manager offer capabilities that make the solution simple to implement, deploy and administer. Hosted in the cloud, this flexible solution eliminates the need for IT organisations to install and manage hardware and software.

Capabilities that enable ongoing monitoring and tuning of bot management policies ensure you always have the optimal security profile to protect your web applications without impacting performance.

A real-time dashboard, reporting, analytics and alerts notify your security personnel of any bot attacks, so they can quickly remediate the situation.

Jaheer Abbas is the regional director for SE Asia and ANZ at Limelight Networks.


