Deepfake: A new formula for Phishing?

by Ashwin Anupam Dalela

Phishing is the practice of one site masquerading as another, trying to deceive users into mistaking the attacker’s site for the one they intend to use. This has enabled countless fraudulent transactions and other criminal activities.

Now imagine that the person you think you are looking at in an online video is not that person at all. It is a digitally rendered copy, and this time it’s not just a still image: it’s a moving, talking video with features almost indistinguishable from the person it is supposed to be. Sounds scary, right? Read along to find out more about what I’m talking about.

What is Deepfake?

Deepfake is a branch of Artificial Intelligence technology used to create convincing, near-indistinguishable fake videos, usually involving a person of considerable public stature and influence. The word is a portmanteau of “deep learning” & “fake”, which gives us a clue that creating the hoax involves deep learning algorithms.

The term originated around 2017 on Reddit, where a user named “Deepfakes” posted morphed images and videos of public celebrities. The Reddit community r/deepfakes shared deepfakes that usually involved celebrities’ faces swapped onto the bodies of actresses in pornographic videos, although a significant number of Nicolas Cage deepfakes were thrown around as well. Other communities soon popped up with safe(r)-for-work content featuring politicians and other celebrities in various scenarios.

Companies and organizations soon realized the potential of the idea, and apps appeared that allowed users to easily swap their faces with each other. Larger companies with the resources to advance the technology, such as Momo, came up with algorithms that let users merge their faces with celebrities in popular movie clips and videos, sometimes from just a single picture. DataGrid, a Japanese AI company, came up with its own version, able to create full-body deepfakes that it “intended” to use for fashion & apparel.

Some have gone as far as to resurrect their dead loved ones (at least on screen). Kim Kardashian, in October 2020, posted a video of her late father, Robert Kardashian, which was created by a company called Kaleida, using a combination of deep learning, motion tracking, SFX & VFX.

How does it work?

Let’s say that you want to create a video of Beyoncé’s “Single Ladies”, but PSYCH, it’s you performing in it & not Beyoncé. In deepfake terms, your face is the target while the original video is the source.

Source: https://www.alanzucconi.com/2018/03/14/understanding-the-technology-behind-deepfakes/

How deepfakes worked during their inception was that they relied on a neural network (to know more about neural networks, click here) called an autoencoder, which is a fancy word for an encoder & a decoder working in tandem. The encoder compresses a frame (from the original video) into a lower-dimensional representation of its key features. The decoder then reconstructs an image from this representation, but using a decoder trained on the target (you), so that bits & chunks of the target’s appearance are incorporated back into the frame. Repeat this process, maybe a million times, and the target’s detailed features and contours will be superimposed on the frames of the original video.
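The shared-encoder, per-identity-decoder idea can be sketched in a toy form. This is only an illustration in plain NumPy with linear maps; real deepfake tools use deep convolutional networks trained on thousands of frames, and the class and dimension names here are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

class LinearAutoencoder:
    """Toy face-swap autoencoder: one shared encoder, one decoder per identity.

    Real deepfake pipelines use deep convolutional networks; this linear
    version only illustrates the encode -> swap-decoder -> decode idea.
    """

    def __init__(self, frame_dim: int, latent_dim: int):
        # Shared encoder compresses a flattened frame to a latent vector.
        self.enc = rng.normal(scale=0.1, size=(latent_dim, frame_dim))
        # One decoder per identity maps the latent vector back to a frame.
        self.dec = {
            "source": rng.normal(scale=0.1, size=(frame_dim, latent_dim)),
            "target": rng.normal(scale=0.1, size=(frame_dim, latent_dim)),
        }

    def encode(self, frame: np.ndarray) -> np.ndarray:
        return self.enc @ frame

    def decode(self, latent: np.ndarray, identity: str) -> np.ndarray:
        return self.dec[identity] @ latent

    def swap(self, source_frame: np.ndarray) -> np.ndarray:
        # Encode the source frame, then decode with the *target's* decoder:
        # the result keeps the pose but wears the target's face.
        return self.decode(self.encode(source_frame), "target")

ae = LinearAutoencoder(frame_dim=64 * 64, latent_dim=128)
swapped = ae.swap(rng.normal(size=64 * 64))
```

The trick is that both identities share one encoder, so the latent vector captures pose and expression; swapping decoders swaps whose face gets painted back on.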

This process, however, is becoming obsolete due to the extensive resources and time required to create a video that is somewhat similar to the target.

The new runners in town do things using two complementary AI models: the generator & the discriminator. The generator creates morphed, fake content and relays it to the discriminator, which gives feedback on whether the content looks real or morphed. Together, these models form a Generative Adversarial Network (GAN). In every cycle, the discriminator relays back the crucial cues it used to classify the content as real or fake, and the generator uses this information to make incremental improvements to the content it produces.

Without going into many technicalities (because I don’t understand them), picture an amplified feedback loop. Once the generator model is trained sufficiently to generate an acceptable level of deepfakes, the discriminator is fed original videos and content so that it becomes better at identifying inaccuracies. This information is continuously being fed to the generator, which becomes better & better at generating fake images. So, as the discriminator becomes better at identifying inaccuracies, the generator becomes better at creating imperceptible deepfakes.
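That feedback loop can be shown with a deliberately tiny example: a one-dimensional “GAN” in plain NumPy, where the real data is just numbers drawn from N(4, 1), the generator only learns a shift, and the discriminator is a logistic classifier. All names and learning rates here are invented for the sketch; real GANs generate images with deep networks:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

theta = 0.0            # generator parameter: shifts noise z ~ N(0, 1)
w, b = 0.1, 0.0        # discriminator: D(x) = sigmoid(w*x + b)
lr, batch = 0.05, 64
history = []

for _ in range(3000):
    # Discriminator step: push D(real) towards 1 and D(fake) towards 0.
    xr = rng.normal(4.0, 1.0, batch)            # "real" samples
    xf = rng.normal(0.0, 1.0, batch) + theta    # generator's fakes
    dr, df = sigmoid(w * xr + b), sigmoid(w * xf + b)
    w += lr * np.mean((1 - dr) * xr - df * xf)
    b += lr * np.mean((1 - dr) - df)
    # Generator step: use the discriminator's feedback to make the
    # fakes look more real (non-saturating generator objective).
    xf = rng.normal(0.0, 1.0, batch) + theta
    df = sigmoid(w * xf + b)
    theta += lr * np.mean((1 - df) * w)
    history.append(theta)

# The generator's learned shift should hover near the real mean, 4.
theta_avg = float(np.mean(history[-500:]))
```

Each round, the discriminator gets better at separating real from fake, and its feedback pushes the generator's output distribution toward the real one, exactly the amplified feedback loop described above.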

Now the curious ones among us (or thieves with a code) might have their minds running off, wondering if this technology can be used to create faces that do not exist. ThisPersonDoesNotExist.com is a website that does exactly that. If you want to know more about how it works, here is an excellent article featured on The Verge.

Applications

Aside from the obvious uses that those with vices are quick to jump at, you will be surprised at the range of applications of this technology. The following are just a few:

Blackmail

Deepfakes can and have been used to generate incriminating material to blackmail a person. However, the technology cuts both ways: guilty suspects may claim that genuine evidence has been falsified using the technology and thereby assert plausible deniability.

Politics

Misrepresentation of well-known politicians and public figures dates back a long way. Now, however, it’s harder than ever to distinguish the real Trump from the fake.

  • In the 2020 Delhi Legislative Assembly election campaign, the BJP distributed a Haryanvi version of a speech by Manoj Tiwari, who in the original video spoke in English. Although the voiceover was done by someone else, AI was used to lip-sync the video, and the result was pretty convincing. One party member described it as a positive use of the technology. Click here to watch The Quint’s coverage of the incident. (I am neither an affiliate nor a promoter of any political party and have made no political endorsements in this blog.)
  • Bruno Sartori, a popular content creator, has published numerous parody videos of politicians such as Donald Trump & Jair Bolsonaro.
  • In 2018, Jordan Peele in collaboration with Buzzfeed created a video where Barack Obama is seen hurling slurs and vulgarities, at the same time pointing out that the real Obama would never say these things. This was to create awareness, informing the viewers of the potential pitfalls of the technology. Yes, Obama wasn’t exactly amused. Watch it here.
  • YouTube creator Ctrl Shift Face used DeepFaceLab and StableVoices, both AI models trained on real speech and video samples, to create a parody of Donald Trump in the popular show Better Call Saul. You can watch the video for yourself here.

In an effort to prove that deepfakes are no more effective than regular run-of-the-mill fake photos & videos, a group of researchers set out to create deepfaked videos of politician Elizabeth Warren, in which she can be seen calling Biden & Trump paedophiles and exhibiting racist and transphobic behaviour, all of which could be potential controversies for the politician. The result: numerous videos of an almost indistinguishable fake Elizabeth Warren saying some really nasty things. The complete research paper can be downloaded here.

Source: https://www.niemanlab.org/2021/01/yes-deepfakes-can-make-people-believe-in-misinformation-but-no-more-than-less-hyped-ways-of-lying/

Sock puppets

Imagine making a friend online and gradually building a rapport with them over an extended period. Everything about the person points towards them being real: Facebook, Instagram, LinkedIn, and every other conceivable social media site. Then one day you learn that the profiles have been suspended because the person they represent does not exist. And I don’t mean a fake profile: the profiles were not stolen or created using stolen images; the person is simply imaginary.

In 2019, an account by the name of Oliver Taylor popped up, appearing to belong to a university student in the United Kingdom. This person, or rather profile, regularly attacked a British academic & his wife, branding them terrorist sympathizers. Fed up with the regular threats and slurs thrown at him, the academic one day decided to file an official complaint, only to find that the person, who had published a large number of opinion pieces in online media, was nonexistent. The academic himself had earlier helped bring a lawsuit in Israel against NSO, a surveillance company, on behalf of Mexican claimants who alleged that NSO’s phone-hacking technology had been used to wiretap them.

Internet memes & social media

The most popular meme utilizing the technology is of people singing the chorus of a song from the popular video game series “Yakuza”. The original uploader of the meme, Dobbsyrules, uses the song as a template for others to morph their faces and voices over. I just hope he’d be as good at spelling characters from Harry Potter as he is at singing.

Social media was soon flooded with deepfakes based on users’ faces morphed over iconic scenes from films. A notable contributor in this domain is the Chinese app Zao, which allows its users to do just that.

Art

The most notable instance of AI being used to create art is Joseph Ayerle’s Un’emozione per sempre 2.0, which features a deepfaked version of 80s movie star Ornella Muti travelling in time from three decades back to 2018. MIT referred to this artwork as “creative wisdom”.

Acting

Deepfake has already established a place among niche artists in the film industry: fans used the technology to superimpose a young Harrison Ford’s face onto Han Solo in Solo: A Star Wars Story, and Princess Leia was digitally recreated in Rogue One. Bet you didn’t know that Leia in Rogue One was fake.

Legalities

In India

As of now, no law in India bans deepfakes. Some sections that come close to dealing with such issues are Sections 67 & 67A of the IT Act, 2000 (punishment for publishing sexually explicit material in electronic form) & Section 500 of the IPC, 1860 (punishment for defamation).

Needless to say, these sections are currently inept at tackling infringements inflicted on a citizen using deepfakes. Although the right to privacy is considered a fundamental right, the proposed data protection bill places restrictions on the processing of data of a person who is directly or indirectly identifiable, but only for unlawful acts. The bill requires entities using the data to ensure that it is accurate and not misleading, and specifically emphasizes that ignorance is no excuse. The ramifications of not adhering to these provisions are that entities may face defamation cases and be forced to take down the falsified material. However, common sense dictates that in today’s digital world, information, especially information in demand and published online, is immortal.

These laws, however, only pertain to living people and do not apply to the deceased. Yet content featuring political leaders, spiritual leaders, and other public figures can be circulated even after their death to aggravate and agitate groups of people with malicious intent.

Around the World

In the USA, the FTC strictly enforces truth-in-advertising laws but has taken few initiatives against photo retouching or video doctoring. The UK is considering legislation under which social media platforms could be fined for hosting doctored content with malicious intent. However, this is easier said than done: the pace at which the technology is evolving far outstrips the slow evolution of the laws governing it.

The state of California has already tried to implement a law making it illegal to create or distribute deepfakes of politicians within 60 days of an election. But expanding this law is very difficult, in part due to the First Amendment, which protects citizens’ right to freedom of expression.

Only a few progressive, developed nations have had even meager success in implementing such laws. You may be wondering why this section is so short, but the truth of the matter is that, apart from these and a handful of other countries’ specialized laws, there is little in place to prevent or contain the ramifications of deepfakes.

How to spot Deepfakes

So now that you know there may be a video circulating of you riding a horse buck naked, what can you do to identify such videos? Hopefully, the following pointers will help you do just that.

  • Lack of emotions

Morphing is often unable to accommodate natural facial expressions and substitutes them with image stitches & preloaded facial templates.

  • Awkward body or posture

Deepfake technology of today has been engineered to focus more on facial features and hence it is common to see misaligned and unnatural body posture and features.

  • Out of place facial expressions

Again, facial stitching & morphing often renders facial expressions unnatural and the human brain does a splendid job of detecting such anomalies.

  • Unnatural movement of eyes

Unnatural or absent eye movements are almost always a sign of doctoring of video footage, aside from the fact that it is very difficult to replicate natural eye movements.

  • Unnatural facial feature positions

So remember next time you see footage of Joe Biden with his eyes pointing one way and his nose the other: the video could be deepfaked.

  • Unreal teeth

An absence of the outline of individual teeth, or very white or unnatural-looking teeth could be a sign that the video is morphed. Algorithms haven’t gotten a knack for creating a perfect set of natural-looking dentures yet.

  • Hash discrepancies

Creators worried about the misuse of their content may use a cryptographic algorithm to embed hashes at different points in the content. If the video is morphed or altered in any way, these hashes change, revealing the content as doctored.
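As a rough sketch of that hashing idea, using Python’s standard hashlib over raw bytes (real schemes work on individual frames and publish signed fingerprints; the function names and segment size here are invented for the example):

```python
import hashlib

def segment_hashes(video_bytes: bytes, segment_size: int = 4096) -> list:
    """Hash a video in fixed-size segments so any edit can be localised."""
    return [
        hashlib.sha256(video_bytes[i:i + segment_size]).hexdigest()
        for i in range(0, len(video_bytes), segment_size)
    ]

def is_tampered(video_bytes: bytes, published_hashes: list) -> bool:
    """Any altered byte changes its segment's hash, flagging the video."""
    return segment_hashes(video_bytes) != published_hashes

original = bytes(range(256)) * 64          # stand-in for real video data
fingerprint = segment_hashes(original)     # what the creator would publish

doctored = bytearray(original)
doctored[5000] ^= 0xFF                     # flip one byte, as a morph would
```

Comparing a downloaded copy’s hashes against the creator’s published fingerprint then reveals even a single-byte alteration, while an untouched copy matches exactly.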

  • Lower FPS videos

Since deepfaking is a process that works frame by frame, people with little patience or fewer resources often produce a video with a significantly lower FPS count than the original.

  • Unnatural skin tones

Discoloration, lighting, and shadows, along with unnatural skin tones are indicative of alteration of the footage.

  • Unreal hair

Who wouldn’t want perfect hair, right? It is not easy, however, to render someone with hair different from what they actually have. Such inconsistencies are relatively easy to spot in an altered video and are something you should be mindful of.

So now that you hopefully know a lot more about the deepfake technology we started with, I hope you will be more mindful of what you find online and give a second thought to that extremist, controversial video of your favorite political figure. This is not to say that deepfakes are a bad thing; almost every technology today is a double-edged sword. So go ahead, have fun. Download Reface and morph your friend’s face onto yours doing some really stupid stuff. But do understand that this is just the tip of the iceberg, and the technology and its implications go deep down.

Thanks for reading.
