How to protect your team against AI phishing emails

Artificial Intelligence (AI) is already making our lives easier in so many ways, but of course it is also being used against us

In the 1440s, the invention of the printing press sparked fears of false prophets. In 1825, when the Stockton and Darlington Railway opened, there were fears that travelling at such speed would cause the human body to melt or be ripped apart. In the late 1800s, people were scared the invention of the telephone would destroy society. As radio and TV dawned in the 1900s, the concerns were that we would all be brainwashed or that society would degrade.

Technology is a tool, developed by people, for people. The human development and use of that technology defines whether it is a force for good or for bad. Artificial Intelligence (AI) is already making our lives easier in so many ways, but of course it is also being used against us.

For cyber criminals, it is a force multiplier, adding speed, scale, and sophistication to their cyber attacks.

For a long time, cyber security advice has warned people to look out for poor spelling and grammar as a red flag for scams and phishing emails. Armed with LLM assistants, attackers are no longer limited by their native language when crafting convincing phishing emails. They can now conduct social engineering attacks at greater speed and scale, and with a new level of sophistication.

Email is still the most common way in which cyber criminals try to socially engineer us, manipulating us into clicking phishing links, downloading malicious documents, giving away our credentials or pushing us into unknowingly transferring money into the hands of fraudsters. However, for some time now, criminals have been broadening out their ways of contacting us. Knowing that we are savvier to phishing emails (and have better technical defences in place), they increasingly use phishing SMS texts, social media messages and phone calls.

They are now using Artificial Intelligence to make these scams more convincing, too. Deepfake technology enables voice and face swapping in ways that science fiction writers of the past could have only dreamt of.

The term ‘deepfake’ was coined in 2017 by a Reddit user of the same name. At that time, convincingly swapping faces and voices with deepfake technology took technical skill, time, and a lot of voice or image data. Now, the barrier to entry has been lowered: multiple websites and apps make it possible to create deepfakes without skill, time or even much data.

While the level of sophistication varies, deepfakes are already having an impact on cyber security.

When it comes to defence, we are at a challenging point in the AI era, partly because the abuse of AI is growing so rapidly. We need to raise awareness of how cyber criminals are using AI, to help our families, friends and teams understand just how convincing some of these AI-enabled attacks are becoming.

There are currently some common tell-tale signs of a deepfake video. Physiological factors can be a giveaway, including if the subject is not blinking or not turning their head, or if there are distortions around the face, especially when something (such as a hand) passes in front of it.

Ultimately, verifying the identity of those we are communicating with is our best line of defence. We cannot trust based on sight and sound alone. Instead, we need to develop – and encourage – digital critical thinking. Be tuned into whether a communication is unexpected or unusual, be aware when your emotional buttons are being pressed and take a pause to verify identities and information before trusting what you are seeing or hearing.

AI shows how cyber criminals are using technology to evolve their tactics, and we must do the same to advance our defences. The standard advice to check spelling and grammar as a way of spotting social engineering is increasingly unreliable and, even worse, it can give a false sense of security.

When we can’t believe our eyes and ears, an anti-scam mindset becomes even more critical.

Dr Jessica Barker