Deepfakes becoming indistinguishable from reality

Tony Anscombe, Chief Security Evangelist, and Cameron Camp, Security Researcher at ESET, explain that as deepfakes become indistinguishable from reality, the potential for misuse of synthetic content becomes virtually endless. So what can you do to avoid falling victim to deepfake fraud?

A deepfake rendition of a loved one saying they’ve been kidnapped paints a grim picture of what deepfakes – videos fabricated from real footage – may bring next. After a machine learning system ingests the droves of images created every day, à la Instagram selfies, along with audio from webinars, conference presentations or the narrated commentary of vacation videos on YouTube, it can reconstruct a convincing image, video and voice of almost anyone – and then produce specially crafted fake communications suggesting that the person is in deep trouble.

Technology wasn’t supposed to do this – it was supposed to help.

It starts with fake phone calls: audio synthesized from recordings of your boss’s voice, asking you to wire large sums of money. The next generation of deepfakes promises voices too clear and convincing to be disputed.

Feed enough data into a machine learning system and that voice becomes scarily close to reality, as witnessed in 2019 when an audacious real-time audio-based attack duped a UK-based energy company out of US$243,000.

Presenting on the subject at Black Hat USA 2021, Dr. Matthew Canham, Research Professor of Cybersecurity at the Institute of Simulation and Training, University of Central Florida, stated that there has been an 820% increase in e-gift card bot attacks since the COVID-19 lockdowns began, often impersonating a boss instructing a worker to order the cards. The attack starts with a generic opening – ‘are you busy?’ – and when the victim responds, the perpetrator moves the discussion to another channel, such as email, away from the automation of the bot.

The gift card scheme, conducted over text and email, is a basic social engineering attack; layered with deepfake technology that lets a malicious actor spoof video and audio to impersonate a boss or colleague, a request for action can cause far more significant problems. A phishing attack taking the form of a video conversation with someone you believe is real is becoming a very real prospect. The same goes for a deepfake video of a supposedly kidnapped loved one.

Dr. Canham also pointed out that deepfake technology can be used to accuse people of things they never did. A video showing someone behaving inappropriately could have real consequences for that person despite being forged. Imagine a scenario where a colleague makes an accusation and backs it up with video or voice evidence that seems compelling. It may be difficult to prove it’s not real.

This may sound out of reach of the average person, and today it remains challenging to create. Yet in 2019, journalist Timothy B. Lee, writing for Ars Technica, spent just US$552 creating a reasonable deepfake video from footage of Mark Zuckerberg testifying to Congress, replacing his face with that of Lieutenant Commander Data from Star Trek: The Next Generation.

Trust your own eyes and ears?

Dr. Canham suggested a few very useful proactive steps that we can all take to avoid such scams:

  • Create a shared secret word with people whose requests you may need to verify: for example, a boss who may instruct employees to transfer money could agree on a verbally communicated word known only to them and the finance department. The same goes for those at risk of kidnapping: a proof-of-life word or phrase that signals the video is real.
  • Agree with employees on actions that you will never ask them to take; if ordering gift cards is a ‘never-do’ action, then make sure everyone knows this and that any such request is a fraud.
  • Use multiple channels to verify any request. If the communication starts by text, then validate it by reaching out to the person on a number or email address you already know to be theirs, not one provided in the initial contact.
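The shared-secret idea from the first step can be implemented with ordinary password-handling hygiene: never store or compare the word in plain text. The sketch below is a minimal, hypothetical illustration in Python – the names (`SALT`, `verify_secret_word`, the word "bluebird") are invented for this example and are not from Dr. Canham’s talk.

```python
import hashlib
import hmac

# Hypothetical example: the boss and the finance department agree on a
# secret word in person. Only a salted hash of it is kept on record.
SALT = b"example-salt"  # in practice, use a random salt per secret
EXPECTED_DIGEST = hashlib.pbkdf2_hmac("sha256", b"bluebird", SALT, 100_000)

def verify_secret_word(candidate: str) -> bool:
    """Return True only if the offered word matches the agreed secret."""
    digest = hashlib.pbkdf2_hmac("sha256", candidate.encode(), SALT, 100_000)
    # compare_digest performs a constant-time comparison,
    # avoiding timing side channels
    return hmac.compare_digest(digest, EXPECTED_DIGEST)

print(verify_secret_word("bluebird"))  # correct word
print(verify_secret_word("robin"))     # wrong word
```

The point of hashing rather than storing the word itself is that a compromised record doesn’t hand the attacker the secret; the word is only ever revealed when spoken by the genuine party.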

Technology used to create malicious deepfake video or audio is an opportunity that cybercriminals are unlikely to miss, and, as witnessed in the example of the UK-based energy company, it can be very financially rewarding. The proactive steps suggested above are a starting point; as with all cybersecurity, it’s important that we all remain vigilant and begin with an element of distrust when receiving instructions, until we can validate them.