Bots and their impact on Cyber Security

Bots now make up around half of all internet traffic. But what are they?

A bot, short for ‘web robot’, is simply a software application that runs automated commands over the internet.

The first bots were created in the late 1980s but have become much more common as the internet has developed in scale and maturity.

At first, they were intended as a way to save human labour when performing fairly routine, monotonous tasks.

Good bots

Google’s web crawler, which indexes pages on the internet, is a bot. This piece of software is an integral part of how SEO, or search engine optimisation, works. It visits websites and records the information listed on each page, and Google’s ranking algorithm then uses that record to judge how well or how badly a page performs. The pages with the best SEO appear higher in Google’s search results.
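The core of a crawler is simple: fetch a page, note what it says, and collect the links to follow next. The sketch below illustrates just the recording step using Python’s built-in HTML parser on a static page; the page content and structure are invented for illustration, and a real crawler would fetch pages over HTTP and follow each collected link.

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Records the page <title> and every <a href> it finds --
    the two most basic pieces of information a crawler notes."""
    def __init__(self):
        super().__init__()
        self.links = []
        self.title = None
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)
        elif tag == "title":
            self._in_title = True

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title = data.strip()

# Hypothetical page, parsed in place instead of fetched over HTTP.
page = """<html><head><title>Example shop</title></head>
<body><a href="/products">Products</a> <a href="/contact">Contact</a></body></html>"""

collector = LinkCollector()
collector.feed(page)
```

After feeding the page in, `collector.title` holds the page title and `collector.links` the outgoing links the crawler would queue up next.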

An email out-of-office reply is another example of a bot. Instead of having to email every person who sends you a message while you are on holiday, you can instead use an email service’s built-in bot to send a professional, automatic reply.

Now, any business can create its own bot in a matter of minutes.

The most common user-created examples are chatbots on websites where customer service is required 24/7.

Chatbots usually operate on a limited set of guidelines or rails, like an automated answering machine that can detect certain types of language and respond with text to the person asking a question.

Instead of employing customer service agents around the clock, a chatbot can save a business serious money, so what was once a frightening prospect is now widely accepted.

83% of customers say they are happy to shop online where a chatbot is in use, and the technology is expected to save companies more than $8bn by 2022.

Bot, or human?

Amazon’s Alexa, the voice assistant inside the Echo smart speaker, is another example of a bot. When you say ‘Alexa’ or ‘OK, Google’, you’re talking not to a human being, but to a machine. This is a piece of software housed in a friendly-looking box that is entirely automated and designed to respond in a certain way to human speech.

There are also secondary bots (called ‘Skills’) which you can set up on your Alexa to do things like read a round-up of the day’s news, order your groceries, or give you an update on your bank balance.

Voice assistants like Alexa, however, highlight the inherent fragility of bots. In September 2017, viewers of the long-running American cartoon series South Park were shocked when their Google Home and Amazon Echo devices responded to on-screen commands, adding a series of scatological items to their shopping lists.

While these pieces of automated software can make a passable attempt at mimicking human language, they can usually only operate in narrowly defined ways. Programmers can script a greeting response like ‘Hi’ or ‘Hello’, and a text-based chatbot can recognise a question when it sees a question mark, but chatbots do not have any critical thinking functions and for the most part cannot improvise.
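That scripted, rule-based behaviour can be sketched in a few lines. This is an illustrative toy, not how any particular vendor’s chatbot works: the greeting words, replies, and fallback message are all invented for the example, and real chatbots use far larger rule sets.

```python
def chatbot_reply(message: str) -> str:
    """A toy rule-based chatbot: match a few scripted patterns,
    and fall back to a canned response when nothing matches."""
    # Normalise and split into words, stripping simple punctuation.
    words = [w.strip("?!.,") for w in message.lower().split()]

    # Scripted greeting rule, as described above.
    if any(w in ("hi", "hello", "hey") for w in words):
        return "Hello! How can I help you today?"

    # Crude question detection: a trailing question mark.
    if message.strip().endswith("?"):
        return "Good question - let me connect you with an agent."

    # No rule matched, so the bot cannot improvise an answer.
    return "Sorry, I didn't understand that. Could you rephrase?"
```

Ask it anything outside its rails (`chatbot_reply("blorp")`) and it can only apologise and ask you to rephrase, which is exactly the limitation described above.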

As Artificial Intelligence develops, we may see bots that can improvise, or use slang terms as their dictionaries expand to better copy how a human might write or speak. The most human-like chatbot in the world is currently Mitsuku, a four-time winner of the Loebner Prize Turing Test.

When bots go bad

As with any technology, bots can be employed for crime just as easily as they are used for handy day-to-day business tasks.

Twitter bots, for example, run and control their own social media feeds. These programs can post tweets automatically, ‘like’ other posts, follow accounts, or send direct messages.

As many as 48 million Twitter accounts – some 15% of the total – are thought to be bots rather than humans.

While most activity is benign, bots can be used to spam links to harmful websites designed to steal your identity, post fake reviews or comments on your website, or even carry out co-ordinated denial-of-service attacks to stop users accessing popular sites.

Social media bots can also be employed to bully or harass companies online by responding negatively to every post they make, distorting the general public’s view of a service or product.

Botnets and worse

Spam comments and dodgy pranks are just the tip of the iceberg. From 2016 onwards, the Mirai botnet launched devastating attacks on large portions of the internet.

This was a worldwide network of infected machines, each with a portion of its processing power diverted to launching DDoS attacks.

Unfortunately, botnets are growing more popular among cyber criminals because they are very cheap to set up and can offer extremely lucrative ransom or theft rewards.  

Even worse, bots can be used to post political messaging designed to affect the way people vote.

Most famously, Russia’s shadowy troll farm, the Internet Research Agency, employed hundreds of people to create thousands of bot accounts on YouTube, Facebook and Twitter, posting made-up, extremist, or intentionally divisive fake news with the intent of influencing the 2016 US presidential election.

Automate your defences

The threat from malicious bots is severe and ever-growing.

The National Cyber Security Centre warned in 2018 that hostile, Russian-sponsored hackers were exploiting basic and widespread weaknesses in UK networks to carry out ‘man-in-the-middle’ attacks: stealing intellectual property, silently diverting computing power, and sometimes planting viruses or scripts that lie dormant until they can be used in massive co-ordinated cyber attacks.

If your managed security services provider is not watching for bot attacks, then there are serious holes in your defences. Remember, bots don’t discriminate. If you or your business has something worth stealing, you’re automatically a target.