Deepfakes (a portmanteau of the words “deep learning” and “fake”) started in 2017 as a series of seemingly innocent antics of the Reddit community r/deepfakes – a group of internet pranksters producing fake, hyper-realistic videos on a completely new level, which took the internet by storm.

What are deepfakes?

Deepfakes typically involve replacing a person’s face in a video – or voice in an audio recording – with someone else’s, and celebrities and politicians are the most common victims. While the original intention behind producing deepfakes was rarely malign, that is changing today. The near-impossibility of telling a fabricated image or video from the real thing poses many new ethical and legal challenges that are only beginning to crop up. Among other issues, a person’s likeness in a deepfake is typically used without their consent or knowledge.

Here are the three most common types of deepfakes:

  • Re-enactment – deep learning is used to manipulate the features of someone’s face, with no face swapping
  • Generation – entirely new videos or images of faces that do not belong to a real person are created
  • Speech synthesis – advanced software is used to create a model of someone’s voice

The advent of the internet presented us with multiple ways of disseminating false information, but deepfakes take it to a whole new level. The consequences have not fully materialized yet, but the most obvious include public humiliation of celebrities, defamation, misinformation, and a devaluation of truth that is especially dangerous in today’s tense political climate.

Combating deepfakes

Even the most elaborate efforts to develop methods of detecting deepfake videos typically amount to a cat-and-mouse game. There are currently a few popular detection approaches, but whenever the detection algorithms get better, so do the deepfakes. This is partly because the most popular techniques rely on algorithms similar to the ones used to build the deepfakes in the first place.

Researchers are using automated systems that examine videos for errors and irregularities that would indicate a fake, such as irregular eye-blinking patterns or unnatural lighting. Computer-generated characters tend either to blink too frequently or not to blink at all.
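The blink-frequency heuristic can be sketched in a few lines. The sketch below assumes blink timestamps have already been extracted by some face-analysis tool (that step is out of scope here), and the rate bounds are illustrative assumptions, not established forensic thresholds.

```python
# Sketch of a blink-rate heuristic for flagging possible deepfakes.
# Assumes blink timestamps (in seconds) were already extracted by a
# face-analysis tool; the bounds below are illustrative, not canonical.

def blinks_per_minute(blink_times, clip_seconds):
    """Average blink rate over the clip."""
    if clip_seconds <= 0:
        raise ValueError("clip length must be positive")
    return len(blink_times) * 60.0 / clip_seconds

def looks_suspicious(blink_times, clip_seconds, low=8.0, high=30.0):
    """Flag clips whose blink rate is implausibly low or high.

    People blink roughly 15-20 times per minute at rest; generated
    faces often blink far less (or far more) often than that.
    """
    rate = blinks_per_minute(blink_times, clip_seconds)
    return rate < low or rate > high

# A 60-second clip with only two detected blinks gets flagged;
# one with a plausible ~15 blinks per minute does not.
print(looks_suspicious([12.0, 47.5], 60.0))                  # → True
print(looks_suspicious([t * 4.0 for t in range(15)], 60.0))  # → False
```

In practice such a check would be only one weak signal among many, combined with lighting, head-pose, and compression-artifact analysis.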

Blockchain has been applied in many new areas of technology, so it is no wonder that researchers are also investigating the possibility of using it to verify the authenticity of media. While this method does not involve recognizing deepfake patterns or detecting inconsistencies in videos, it may help minimize the spread of fake content: content is verified against the ledger before being uploaded to social media platforms. With this technology, only videos from trusted sources would be approved, decreasing the spread of potentially harmful media.
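The ledger idea boils down to registering a cryptographic fingerprint of the original footage and checking uploads against it. Here is a minimal sketch under simplifying assumptions: a plain in-memory dictionary stands in for the distributed ledger, and all class and source names are hypothetical.

```python
import hashlib

# Sketch of hash-based media verification. A dict stands in for the
# distributed ledger; names like MediaLedger are illustrative only.

class MediaLedger:
    def __init__(self):
        self._registered = {}  # content hash -> source name

    @staticmethod
    def fingerprint(content: bytes) -> str:
        """SHA-256 digest of the raw media bytes."""
        return hashlib.sha256(content).hexdigest()

    def register(self, content: bytes, source: str) -> str:
        """A trusted source records a video's hash before publication."""
        digest = self.fingerprint(content)
        self._registered[digest] = source
        return digest

    def verify(self, content: bytes):
        """A platform checks an upload against the ledger.

        Returns the registered source, or None if the content is
        unknown (possibly tampered with or fabricated)."""
        return self._registered.get(self.fingerprint(content))

ledger = MediaLedger()
original = b"...raw video bytes from a trusted newsroom..."
ledger.register(original, "trusted-newsroom")

print(ledger.verify(original))                 # → trusted-newsroom
print(ledger.verify(original + b" tampered"))  # → None
```

Because any single-byte change produces a completely different hash, even subtle manipulation breaks verification; the hard part in reality is getting sources to register content and platforms to check it.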

Legislation against deepfakes

Legislators around the world are considering new laws to tame audio and visual disinformation. New York’s lawmakers have debated a bill that prohibits certain uses of deepfakes on the basis that a “living or deceased individual’s persona is personal property” and cannot be manipulated. A similar bill is being discussed in the US Congress – the proposed sanctions for people producing deepfakes include jail time.

While some argue that deepfakes should be recognized as a separate kind of crime, in the UK creating and disseminating them can already be prosecuted as harassment.

United States

Certain states have updated their revenge-pornography laws to include deepfakes (Virginia was among the first), and Texas has a law regulating their use to manipulate elections. In 2018, the Malicious Deep Fake Prohibition Act was introduced in the US Senate, and in 2019 the DEEPFAKES Accountability Act was introduced in the House of Representatives. Several other states, including California and New York, have also introduced relevant legislation.

China

In China, as of January 2020, deepfakes are required to bear a clear notice, and failure to comply is prosecuted. Importantly, this applies both to users and to online video platforms that fail to abide by the rules. Such an approach places greater responsibility for the spread of fake content on platforms like Facebook.

Germany

In 2017, the German government introduced fines for tech companies that don’t take down racist or threatening content within 24 hours of it being reported – the law has since been updated to include deepfakes.

Sceptics believe that using legislation to control deepfakes simply does not cut it, as it might prove even harder than combating online piracy. And even if automatically detecting deepfakes one day becomes relatively easy, a few problems remain:

  • It is impossible to stop deepfakes from cropping up again
  • It is difficult to identify the makers of deepfakes

This is why fighting deepfakes must involve the collaboration of the world’s major tech brands. In the past, mobilization against online piracy was stronger and more effective because piracy involves the loss of dollars, so more parties essentially fought in their own interest.

Other implications

In the United States, where many new types of crime like identity theft and revenge porn are already being pursued, laws are being actively adjusted to help combat deepfakes.

In addition to updating or creating laws, there is a consensus that big tech companies must step in to support governments in the fight. For example, it is worth considering shifting the burden of policing such content onto tech companies like Facebook.

For instance, Facebook decided not to remove the doctored video of Nancy Pelosi even though it was widely known to be fake; all Facebook did was alert users that the video was manipulated. But it’s hard to blame anyone – weighing in on such issues often means balancing on the brink of censorship and may violate someone’s freedom of speech.

Ethical aspects

The emergence of deepfake technology has disrupted many areas of today’s world, but it has primarily affected journalism. The bleak vision of living in a world where almost anything we see could be fabricated to manipulate our sentiment is becoming real.

Deepfakes, while an impressive feat of today’s technology, are on the verge of becoming the number one threat to truth and – by extension – democracy. Because deepfakes will make it easier to manipulate facts, they can further radicalize political views as opposing camps can fabricate fake news and manipulate the public sentiment. Without proper measures to identify and curtail deepfakes, audiovisual disinformation will spread, putting societies at risk of devaluation of trust and truth.