June 13, 2024

Lie to Me, AI

The Way Deepfakes Are Used for Fraud, Hooliganism and Revenge


While there is still debate about what should count as strong or general AI and when it will surpass humans, machine-learning systems have already reached a level that lets them deceive people effectively. Scammers use AI to fake other people's identities with synthetic photos, audio and video files, that is, deepfakes. The number of deepfake attacks grew by 3,000% over the past year, and warnings from law enforcement agencies confirm that these techniques work. How are AI fakes used, and how can you protect yourself from them?


Deepfakes: What Are They

The term deepfake refers to synthetic photos, audio and video files created by generative neural networks that realistically depict people chosen by the fake's authors. The word is a blend of deep learning, the neural-network training technique, and fake.

The term became popular in 2017, when a Reddit user of the same name published a large collection of celebrity porn in which the faces of famous people were superimposed on existing porn films using publicly available neural-network algorithms.

Deepfakes can produce a video or audio clip in which a person says the right words and performs certain actions. To do this, the neural network needs a video or audio recording of another person performing the required actions, as well as sample photos, audio and video files from which it can learn the characteristic features of the appearance, voice, facial expressions and pronunciation of the person being faked. Depending on the technology, training may require very little data: as little as 60 seconds of voice recording and about ten high-quality photos can be enough to generate photos and videos. The more samples, the more convincing the fake.

Fake Celebrities, Friends and Colleagues

Deepfakes quickly found use in advertising pyramid schemes. Criminals created videos in which dubious investments were promoted by Elon Musk, Oleg Tinkov and famous singers and actors. Many of these fakes had unconvincing facial expressions and unnatural voices, but that did not stop them from deceiving investors. In Australia alone, the damage from such schemes over the past year amounted to $8 million.

This year, scammers in Russia have been calling victims on behalf of relatives, friends or direct superiors. To do this, they create an account in a messenger app, upload the name and photo of the impersonated person to the profile, and then send voice messages or make audio calls to victims from that account. A fake boss may instruct you to make an urgent bank transfer to a counterparty, while fake friends and relatives ask for money for urgent needs.

For now, criminals mostly limit themselves to voice spoofing, since it is easier. However, the technology is developing rapidly, and face replacement in fake video calls is already possible. In the best-known case, an employee of a financial institution was convinced to make a $25 million payment during a "video call with colleagues" in which everyone except the victim was a deepfake.

Blackmail via Deepfakes and Attacks on Reputation

Deepfakes have made it easier to produce compromising photos: scammers can send a letter to the victim's work address containing synthetic defamatory images and demand a ransom for not publishing them. In March and April, the Singapore police received 70 complaints about such crimes; one victim transferred almost $15,000 to the criminals.

Up to 98% of all deepfakes created are porn content. Many of these videos are far from harmless, as they feature celebrities from art, sports and politics without their consent, damaging their image and career. But you don't have to be a celebrity to fall victim to a deepfake: more than 30 teenage girls have been targeted with fake photos created by a publicly available "undressing" app. A couple of photos of the victim's face taken from social media are enough to create such an image.

Fake passports and IDs can be generated online in a minute for as little as $15. The neural network supplies the fake not only with realistic data, a photo and a signature, but also gives the image the look of a real ID photographed on a table or mat. Such fakes are used primarily for fraudulent registrations with financial institutions, but also in fraudulent charity collections to make the request more credible.


Protecting Yourself from Deepfakes

To avoid becoming the "protagonist" of a deepfake, there is only one defense: avoid publishing photos and videos that can be used to train a neural network, primarily on social media. Group photos and low-resolution pictures with small faces are not suitable for training, but portraits, large photo galleries, and long audio and video recordings in which your speech can be heard are useful to scammers.

As for detecting deepfakes sent to you, they can sometimes be revealed by minor generation errors: unnatural facial expressions and a strange background in video; metallic overtones, missing breathing and absent natural micro-pauses in audio. Yet this depends heavily on the neural network used, so you shouldn't rely entirely on your ability to recognize a deepfake. There are already dedicated services that analyze videos for "fakeness", but their verdict is not a guarantee either. Therefore, it is better to follow general anti-fake advice, both new and old:

  • double-check unexpected requests from colleagues and relatives by contacting them through another communication channel or meeting in person;
  • don't trust celebrity recommendations when making investment decisions;
  • double-check worrisome news and resonant information in authoritative sources;
  • protect your privacy on the Internet comprehensively so as not to give scammers any information about yourself.