Deepfakes: A Modern Day Nuclear Weapon or a Passing IT Prank?

Updated on Apr 20th, 2021

In the era of AI and ML, filtering truth from lies is a difficult task. As technology matures, so do the ways in which it can be misused. Over the past couple of years, we have all heard of something called a deepfake, and there is hardly anyone who hasn't seen the video of Mark Zuckerberg in which he admits to stealing the data of millions of users and to his world-domination ambitions. At first, it dropped like a bombshell. Everyone who saw it was left speechless, because nobody at the time could tell that this wasn't really Mark Zuckerberg talking, but a voice impersonator whose performance had been digitally grafted onto Zuckerberg's likeness. It was later revealed that the deepfake, posted originally on Instagram, was created by two artists, Bill Posters and Daniel Howe, along with an advertising company, as part of a series of AI-generated fake videos for a project called Spectre.

What was really shocking about this video was how authentic it seemed at the time. To a person watching it for the first time, there was no hint that this wasn't the real Mark Zuckerberg talking; from the voice acting to the hand gestures to his face digitally placed on top of the actor's, all of it was very convincing. No one had seen anything like it before. Once the dust had settled and people figured out what the real deal behind the video was, they started to find it very amusing; some went as far as calling it a commendable piece of art, and Facebook even decided to leave it up on the platform so more people could see it.

The fact that such high-quality fake videos could be created amused some and scared the life out of the rest. And while the whole affair was fun while it lasted, we shouldn't forget that this was not the first fake video to go viral. Shortly before the Mark Zuckerberg video, a doctored clip of House Speaker Nancy Pelosi was uploaded by a conservative Facebook page; the footage had simply been slowed down to make her sound drunk, a crude edit rather than a true deepfake, but alarming all the same.

Some found that one amusing too, but the fact that it came from a conservative political group was terrifying, because it implied that this technology could very easily be used for something far more sinister than swapping your face with a celebrity's: it could be used to spread fake news, sway elections, even stoke civil unrest in which lives could be lost.

Apart from amusing us, the Zuckerberg video accomplished three things:

1. It made deepfake technology mainstream
2. It showed how sophisticated the technique has become
3. It demonstrated how easily it could be used to sow chaos

Since June 2019, when the Zuckerberg video was uploaded, countless other deepfake videos have circulated on the internet. Nobody can forget the legendary and insanely funny Obama deepfake from 2018, voiced and acted by Jordan Peele, in which "Obama" warns us against trusting everything we see on the internet and calls President Trump a d*****t. What was mind-boggling about it was that even though this fake Obama was telling us he wasn't Obama and would never say the things he was saying, it was hard to see anyone on the screen but Obama.

And that creates a serious cause for concern among many political leaders and activists. We humans have a terrible record of taking something meant to be fun and entertaining and turning it into a weapon. Look what happened to Facebook.

So authorities are trying to nip deepfakes in the bud: to put a stop to these videos outright, or at least find a way to moderate such content properly, while the technology is still young. The need is more pressing than ever because lately this technology has made its way into smartphone apps that let you swap faces with celebrities and animals and make deepfake videos of your own.

Take Doublicat, for instance: an app that lets you swap your face with the subjects of popular GIFs. At the moment it only allows face swaps in pre-recorded clips, but the makers say they will soon let users do it with any footage they like. The list goes on: FaceSwap, from a couple of months earlier, was in the same league as Doublicat, and Snapchat has its own face-swapping feature, Cameos, built into its app.

The way things are going, it is clear that this technology will only get more sophisticated, and once whatever minor giveaways today's AI and ML algorithms still leave behind disappear, there won't be an easy way to distinguish fact from fiction.

That is why Facebook and Reddit both recently released policies on how they will moderate AI-generated fake videos on their platforms, and nobody is particularly impressed by them, but we'll get to that later. So, in light of its recent stardom, today we want to talk about what deepfakes are, how they work, and what their possible implications might be.

What is a Deepfake?

The simplest definition of a deepfake is a fake video or voice recording, generated by AI or ML algorithms, made to look or sound like a real person saying or doing something they never actually said or did. But that doesn't really make things easier, because by this definition every AI-doctored video or voice recording counts as a deepfake, and it is hard to imagine a Snapchat Cameo having world-shattering consequences.

So we are back to square one: what does deepfake actually mean?

Due to the vast swath of content that can fall under the deepfake category, which also includes harmless content made for fun, it is not possible to moderate this type of content with just one set of rules. 

So how do you differentiate fun from danger? According to Tim Hwang, former director of the Harvard-MIT Ethics and Governance of AI Initiative, "the way we should think about deepfakes is as a matter of intent. What are you trying to accomplish in the sort of media that you are creating?"

What Hwang says lands things in grey rather than black and white. If we follow his suggestion, we fall into the familiar debate over freedom of speech and the right to expression, which usually leads nowhere.

A lot of people think that any deepfake media spreading misleading or deceptive information should be banned from social media platforms. But unfortunately, there is no simple algorithm that can filter through all the deepfakes posted on the internet and pick out the ones meant to deceive.

So for now, the best possible answer we can give is that a deepfake is any piece of AI- or ML-generated media that seeks to mislead and manipulate a group of people.

How Are Deepfakes Created?

Humans have a terrible tendency to see what they believe and ignore everything else. There is no shortage of evidence for this: fake news and viral conspiracy theories are a testament that humans crave far-fetched lies told as truths. Deepfake technology exploits this human weakness with the help of GANs, or generative adversarial networks. Creating a deepfake requires three things: two ML models and a massive amount of data. One model, the generator, constantly produces fake frames, learning from the footage it has been fed, while the other model, the discriminator, keeps testing their authenticity until it can no longer call them out as fakes. The sophistication of a deepfake depends on how well the forging model does its job, and that in turn depends on how much input it has to learn from. That is why most of the deepfake videos we have seen, at least the best ones, are of famous celebrities: there is a treasure chest of their footage on the internet.
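To make that generator-versus-discriminator loop concrete, here is a heavily simplified, hypothetical sketch in PyTorch. The flattened random tensors stand in for face crops, and the two tiny networks stand in for the far larger models real deepfake pipelines use, so treat it as an illustration of the adversarial loop rather than a working deepfake tool.

```python
# Toy GAN sketch (PyTorch), illustrative only: random tensors stand in for real face crops.
import torch
import torch.nn as nn

IMG = 64 * 64 * 3      # flattened "face crop"
NOISE = 128            # size of the random latent vector

generator = nn.Sequential(        # the forger: noise -> fake image
    nn.Linear(NOISE, 512), nn.ReLU(),
    nn.Linear(512, IMG), nn.Tanh(),
)
discriminator = nn.Sequential(    # the critic: image -> probability it is real
    nn.Linear(IMG, 512), nn.LeakyReLU(0.2),
    nn.Linear(512, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(1000):
    real = torch.rand(32, IMG) * 2 - 1     # stand-in for a batch of genuine footage
    fake = generator(torch.randn(32, NOISE))

    # 1) Train the critic to label real footage 1 and forgeries 0.
    d_loss = loss_fn(discriminator(real), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the forger so the critic mistakes its output for the real thing.
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

The more genuine footage the forger has to learn from, the more convincing its output, which is exactly why celebrities with hours of public video make the easiest targets.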

But quality is hardly a concern when intentions are evil and the audience is willing to believe. Even a poorly built deepfake can cause a lot of trouble if it gets the subject, the timing and the content right. For instance, if a deepfake video of a presidential candidate is released a few days before polling day, it can spread misinformation like wildfire and could very well sway the election.

How Dangerous Are Deepfakes, Really?

In today's world we are all thoroughly connected, all the time. Every new development in sports, politics or movies is pushed to our screens mere minutes after it happens, and most of us have Google notifications turned on for one topic or person or another. In a social order like that, deepfakes can work as a weapon of mass chaos with maddening potency.

A single deepfake video can go around the world in minutes; millions, even billions, of people can see it and read it in countless different contexts. That is the perfect recipe for mass hysteria. But how much damage can it really cause? It's not as if only the threat is getting more sophisticated while the counter-measures stand still; people are hard at work trying to find ways to identify deepfakes and moderate them properly.

People have different opinions on that, though. Republican senator from Florida and 2016 presidential candidate Marco Rubio calls deepfakes modern-day nuclear weapons.

"In the old days," he said to an audience in Washington, "if you wanted to threaten the United States, you needed 10 aircraft carriers, and nuclear weapons, and long-range missiles. Today, you just need access to our internet system, to our banking system, to our electrical grid and infrastructure, and increasingly, all you need is the ability to produce a very realistic fake video that could undermine our elections, that could throw our country into tremendous crisis internally and weaken us deeply."

Now it is hard not to chuckle at this, because it sounds more like a conspiracy theory than the words of a senator. But there is still a threat; it may not be as massive as Rubio seems to think, but it is a threat nonetheless.

When asked about this, Hwang said: "As dangerous as nuclear bombs? I don't think so. I think that certainly the demonstrations that we've seen are disturbing. I think they're concerning and they raise a lot of questions, but I'm skeptical they change the game in a way that a lot of people are suggesting."

But even if Hwang is right that deepfakes won't have monumental political implications just yet, given the current state of the technology, there remains the possibility of them being used for crimes directed at individuals or groups.

These videos can very well be used to spread malicious and explicit content about people, which can wreck relationships and even marriages, and they can be used to destroy someone's reputation. There are deepfake videos of a tremendous number of celebrities online which are, let's just say, not something you would let your kids watch. And celebrities are not the only ones under threat; any ordinary man or woman can fall prey to this.

And it doesn't stop there. Deepfakes can also be used to fabricate speeches, announce nationwide emergencies, declare war, and much more.

How to Detect Deepfakes?

This is one problem we have yet to find an answer to, at least a concrete one. Amateur videos, and anything churned out by the countless face-swapping apps on the market, can easily be spotted with the naked eye. But videos that are meant to do harm, videos made with effort and intent, are very hard to spot, sometimes even impossible.

And the way this technology is improving, it won't be long before fakes reach a level of sophistication that forces us to rely on digital forensics to identify them. Efforts to devise counter-measures are under way; DARPA (the Defense Advanced Research Projects Agency) is investing heavily in finding ways to authenticate videos, but the chances of a silver bullet turning up are pretty slim.
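For a sense of what that forensic approach can look like at its most basic, here is a hypothetical sketch of a frame-level classifier: a small CNN trained to score each frame of a video as real or fake. The architecture, data shapes and labels are placeholders of our own; production detectors, DARPA-funded or otherwise, lean on far subtler cues such as blending artefacts, lighting inconsistencies and blink rates.

```python
# Hypothetical frame-level deepfake detector: a small CNN that scores each
# video frame as real (1) or fake (0). Data shapes and labels are placeholders.
import torch
import torch.nn as nn

detector = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),   # 224x224 -> 112x112
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 112x112 -> 56x56
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 1),                                       # one logit: real vs fake
)

loss_fn = nn.BCEWithLogitsLoss()
opt = torch.optim.Adam(detector.parameters(), lr=1e-4)

def train_step(frames, labels):
    """frames: (N, 3, 224, 224) float tensor; labels: (N, 1), 1 = real, 0 = fake."""
    loss = loss_fn(detector(frames), labels)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

def authenticity_score(frames):
    """Average probability that the sampled frames are genuine."""
    with torch.no_grad():
        return torch.sigmoid(detector(frames)).mean().item()
```

A whole clip is then judged by sampling frames and averaging the scores. The trouble, as the next paragraph explains, is that any detector you publish can itself become training signal for the forger.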

The prime reason is that GANs can train themselves to circumvent any such mechanism. Fold a new detection technique into the discriminator, and it will keep rejecting the generator's output until the generator learns to cheat its way past that technique as well.

"Theoretically, if you gave a GAN all the techniques we know to detect it, it could pass all of those techniques. We don't know if there's a limit. It's unclear," said David Gunning, the DARPA program manager in charge of the project.
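Gunning's point can be illustrated in a few lines. In the hypothetical sketch below, the forensic model is frozen and the generator is trained directly against it, so gradient descent pushes the fakes toward whatever that detector considers "real". The networks are toy stand-ins; the arms-race mechanism is what matters.

```python
# Hypothetical sketch: a generator fine-tuned to evade a fixed detector.
# Both networks are toy stand-ins; only the direction of optimisation matters.
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(128, 64 * 64 * 3), nn.Tanh())
detector = nn.Sequential(nn.Linear(64 * 64 * 3, 1))    # the defender's published model

for p in detector.parameters():
    p.requires_grad_(False)                             # the defender's model stays fixed

opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(500):
    fakes = generator(torch.randn(32, 128))
    # Push the detector's verdict on the forgeries toward "real" (label 1).
    loss = loss_fn(detector(fakes), torch.ones(32, 1))
    opt.zero_grad(); loss.backward(); opt.step()
```

Publish a better detection technique and the same loop simply runs again with that technique folded in, which is why Gunning is unsure there is any hard limit.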

Now, when the people tasked with finding a cure for a disease seem unsure there can ever be a cure, chances are we are in trouble. If no way to authenticate videos surfaces soon, this could have lasting effects on our internet-based society. When there is no telling whether what we are seeing online is true, we cannot trust anything on the internet anymore; from news to podcasts, everything can be tampered with, and the circulation of truth grinds to a halt.

But Hwang is skeptical of that doomsday scenario too:

“This is one of my biggest critiques. I don’t see us crossing some mystical threshold after which we’re not going to know what’s real and what’s not.” 

Can We Moderate Them and If We Can, Should We?

At the moment, there is no single answer to either question. Social media giants like Facebook and Reddit have recently tried to moderate AI-generated content on their platforms. Facebook's policy says it will remove videos or voice recordings edited or created using AI or ML in a way that would mislead someone into thinking the subject of the video said words they did not actually say.

Doesn't sound quite right, does it? Because it isn't. Facebook confirmed that satire and parody will be allowed to stay up, as will videos edited by traditional means, like the slowed-down Nancy Pelosi clip.

So the problem is still there: if a video was not edited using AI or ML, it stays up, no matter what is happening in it. Furthermore, the policy only covers fabricated speech; if a deepfake arrives in which the subject isn't saying anything but is shown doing something, it too will be left up.

Reddit's policy doesn't single out AI at all. It says the platform will remove media that impersonates individuals or entities in a misleading or deceptive manner, and that it will take context into account. That doesn't sound strict enough either, because context is a broad spectrum and moderators are left with a great deal of room to operate.

What we need are guidelines that clearly lay out what is okay to share and what is not. But you can't get a lot of people to agree to that, because humans love to challenge authority; the moment you try to tell them what's right and what's not, they ask who you are and why they should listen to you.

There is some hope, though. Deepfake's notoriety is its own deterrent, the thing that keeps it in check. Because everybody knows what the technology is capable of, they are on the lookout. The moment something major drops, like a video declaring a nationwide emergency out of the blue, people are bound to get skeptical, hold off on reacting, and question its authenticity. So in the end, the solution to a problem created by man turns out to be man himself: a slightly more skeptical and cynical man, but man nonetheless.

Conclusion

The fact of the matter is that there is currently no way to properly moderate AI- and ML-generated videos on these social media platforms. While a lot of people think a deepfake specifically means media that spreads false political news, others see it as an evil with far greater reach. And because the technology is so accessible, everyone is using it: to have fun, to be satirical, to harass, to take revenge. So how do you take down the harmful content and leave the rest up? And how do you even authenticate a piece of media once the fake becomes as good as the real thing? These are questions we need answers to soon, because while a global catastrophe set in motion by deepfakes is not a very likely scenario, the technology can still breed a great deal of chaos.
