Is Deepfake Ransomware a Real Concern?


What is ransomware and why should you be sure you are protected?

What’s your IT department’s top priority this year? If you’re like most businesses, you’re prepping for the Google Core Updates rolling out in May. Google hopes to further improve visitors’ experiences on websites with these changes.

Your team has no doubt worked to reduce your website’s Largest Contentful Paint (LCP) and First Input Delay. In this respect, Google’s goals are aligned with business goals.

It makes sense. With six out of ten clients leaving a brand due to one bad experience, you cannot allow any customer service element to slip.

The question is, “Has your team gone far enough?” Are you prepared for the latest cyber threats out there? We’re confident that you’ve done an outstanding job with recognized threats, but what about the latest strategies?

In this post, we’ll look at one of the most concerning of those threats, deepfake ransomware.

What is Deepfake Ransomware?

What Are Deepfakes?

Deepfakes are audio recordings, images, or videos that have been synthetically altered. The criminal draws source material from the victim’s social media accounts and other online sources. The most common goal is to discredit the victim or to blackmail them.

Most of us are familiar with the deepfake pornography scams. These attackers compile videos from images of the victim and stock pornography. Initially, it was pretty easy to see that the video was doctored. Advances in artificial intelligence have made it far more difficult to detect the difference.

The advances have led to criminals becoming bolder. The first reported use of this technology in a fraud case was in 2019. In this incident, the attackers used AI to spoof the voice and speech patterns of the company head.

The “Head” contacted the CEO requesting a transfer of around $243,000. The CEO complied because he recognized the voice and accent. Had the fraudsters not asked for more money, it would have taken some time to detect the fraud.

The point is that these kinds of attacks combine well with other malicious acts like whaling and blackmail.
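The standard countermeasure against this kind of voice-spoof fraud is out-of-band verification: any payment request that arrives over a spoofable channel, or exceeds some threshold, must be confirmed through a second, independent channel before money moves. Below is a minimal sketch of such a rule; the channel names and the $10,000 threshold are illustrative assumptions, not a real policy engine.

```python
# Illustrative sketch of an out-of-band verification rule for
# transfer requests. Channel names and threshold are assumptions.

APPROVED_CHANNELS = {"signed_email", "in_person"}  # assumed trusted channels
REVIEW_THRESHOLD = 10_000  # assumed amount that triggers a second approval

def requires_callback(amount: float, channel: str) -> bool:
    """Return True when a transfer request must be confirmed
    through a second, independent channel before execution."""
    if channel not in APPROVED_CHANNELS:
        # Voice and video can be spoofed with deepfakes, so requests
        # over those channels always need out-of-band confirmation.
        return True
    return amount >= REVIEW_THRESHOLD
```

Under this rule, the $243,000 phone request from the incident above would have been flagged for callback regardless of how convincing the voice sounded.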

A 2020 study by Crime Science shows the potential for abuse of this technology is growing. The researchers listed the following abuses as causing the most harm:

  • Tailored phishing
  • Video or audio impersonation
  • The disruption of AI-based systems
  • Blackmail
  • Weaponized driverless vehicles
  • AI-generated fake news

How Does Ransomware Factor Into Things?

To understand that, we may need to shift our perceptions of ransomware overall. When we hear the term, we automatically think of malware that encrypts your files and locks you out of your computer.

There’s a lot more potential for the software, however, especially in the Zoom age. Let’s go over a few examples to see what might happen.

Ransomware Example 1

An attacker accesses the testimonial videos on your website. They alter the client’s words so that the client appears to endorse a competitor. You then receive a sample video via email along with the ransom demand.

If you don’t pay, the “client” will disseminate the video, decrying the one on your site as a fake. Whether the claim is true won’t matter; it’ll still plant a seed of doubt in those who see it.

Example 2

Say that your company now provides video support. Learning a trick from DDoS attacks, the attacker creates a series of bot-operated video calls to your support desk. It wouldn’t be difficult using AI and fake videos. Even if the calls are complete nonsense, they could tie operators up for a minute or more, flood the call center, and prevent legitimate clients from accessing support.

Unlike with a conventional DDoS attack, however, it is difficult to mount a defense against these bogus calls, because each one looks like a legitimate human caller.
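Basic flood controls still help at the margins. One common building block is a per-source token bucket: each caller ID earns tokens over time, a call consumes one, and sources that dial faster than the refill rate are throttled. The sketch below assumes the support desk can identify a calling source; the rate and burst values are placeholder assumptions, and a determined attacker rotating source identities would still get through, which is exactly why these calls are hard to defend against.

```python
import time
from collections import defaultdict

# Minimal per-source token-bucket throttle for inbound call requests.
# Rate and burst values are illustrative assumptions.

class CallThrottle:
    def __init__(self, rate: float = 1.0, burst: int = 3):
        self.rate = rate    # tokens refilled per second, per source
        self.burst = burst  # maximum burst of calls from one source
        self.tokens = defaultdict(lambda: float(burst))
        self.last = defaultdict(time.monotonic)

    def allow(self, source: str) -> bool:
        now = time.monotonic()
        elapsed = now - self.last[source]
        self.last[source] = now
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens[source] = min(
            self.burst, self.tokens[source] + elapsed * self.rate
        )
        if self.tokens[source] >= 1:
            self.tokens[source] -= 1
            return True
        return False
```

A bot hammering the desk from one identity exhausts its bucket quickly, while occasional legitimate callers are unaffected.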

Ransomware Example 3

Our third example is a twist on the current pornography blackmail scam. A bad actor scrapes all the photos and videos of you they need from online sources. They could use the compiled footage to make it appear that you’re committing a crime or using racial slurs on company time.

If you don’t pay the ransom, they’ll release the video to the media. Even if you’re a junior consultant, the fallout can be dramatic.

As a twist, they may create footage of a family member and threaten to release it if you don’t comply.

Example 4

This is probably the most likely scenario. Bad actors will create a video or audio message from someone high up in the company. They may use this in a whaling attack like the one we discussed earlier.

For maximum damage, however, they’re more likely to use the video to spread ransomware. The message may urge the recipients to complete a document online or to download and run a file. When they do, they load ransomware onto their computers.

The individual will then have to pay a ransom to regain access.
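One simple mitigation for this scenario is to never run a file just because a message, however convincing, tells you to. Instead, verify its cryptographic digest against a list published by IT through a separate, trusted channel. The sketch below is a hypothetical illustration of that idea; the `TRUSTED_DIGESTS` allowlist and its distribution mechanism are assumptions, not a real product feature.

```python
import hashlib
from pathlib import Path

# Hypothetical allowlist of SHA-256 digests for approved executables.
# In practice IT would publish these through a separate trusted channel
# (e.g. the company intranet), never inside the message that delivered
# the file.
TRUSTED_DIGESTS: set[str] = set()

def is_trusted(path: str) -> bool:
    """Return True only if the file's SHA-256 digest is on the
    separately published allowlist."""
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    return digest in TRUSTED_DIGESTS
```

Even a perfect deepfake of the boss can’t forge a file’s digest, so the check holds regardless of how persuasive the accompanying video is.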

Automation and AI Make for Frightening Bedfellows

Have you ever wondered why we see more generic ransomware attacks than lucrative whaling attacks? The reason is that the latter requires far more work. The potential reward is higher, but the attacker has to research the victim carefully.

Ransomware, as it stands now, can be set on autopilot. The program spreads until it finds a vulnerability to exploit.

Now let’s bring an AI-based ransomware program into the mix. Not only could this change the attack vector, but the software would improve as it goes along.

In this scenario, the software learns how to select promising targets and check their systems for vulnerabilities. The program may scan the victim’s computer for audio and video footage. The AI component can then create new content based on stock footage to which it already has access.

It may then send the “incriminating” footage to the victim with a ransom demand.

What makes this more alarming is that there’s no need for human intervention at any stage. AI drives everything from the victim selection to the creation of the footage.

Is Deepfake Ransomware a Legitimate Concern?

The answer is, undoubtedly, yes. Not only is it a concern, but it’ll redefine ransomware as we know it. As ransomware learns to adapt to its surroundings, it’ll become more dangerous. Paired with automation, it’ll increase the victim pool significantly.

What scares us the most, however, is that defending against this type of attack is very difficult.

Veronika Filipkova

Veronika is a professional content editor and CCO @SupportYourApp. While working @SupportYourApp, she got to know many aspects of customer support and likes to share her knowledge with everyone around the world. Veronika is passionate about business and customers, and has published many articles in this area.