“Deepfake” is seen as a deeper threat than fake news. But there’s a silver lining.
A computer-generated video of former President Barack Obama, created by researchers at the University of Washington, made headlines in 2017: it showed him apparently speaking, synthesized from old audio and video clips. The following year, a fake video of Donald Trump offering advice to Belgian citizens on climate change attracted the wrath of millions. Is the next wave of misinformation and disinformation warfare approaching soon?
Watch: Trump Deepfake video offering advice to Belgium on Climate Change
The lightning speed at which social media circulates news has produced both dangerous and productive precedents. News that is often assumed to be authentic has been used for nefarious purposes, maligning reputations and even livelihoods – better known as fake news.
Watch: Trump responds to question on Fake News
As if the damage arising from fake news were not enough, there is more trouble brewing in the form of a technology known as “deepfakes”. The harm done by fake news can seem mild when compared with deepfakes. A recent pre-approved ad campaign using deepfake technology suffered a last-minute rejection by multiple news agencies.
Any sufficiently advanced technology is indistinguishable from magic.
Arthur C. Clarke
In two deepfake videos, representations of Russian President Vladimir Putin and North Korean dictator Kim Jong-un are seen telling Americans that they are ruining their democracy themselves. The advertisements were initially aimed at encouraging voters to check their polling station and registration details. They were created by the creative agency Mischief @No Fixed Address, which also campaigned to raise awareness of Americans’ voting rights. Both advertisements included a disclaimer at the end stating, “This footage is not real, but the threat is”.
Is the next wave of misinformation and disinformation warfare approaching soon?
The technology and rise of deepfakes
Deepfakes use artificial intelligence to produce videos impersonating an individual, depicting actions or statements that he or she never performed or said. The technology as it exists today may be just the beginning. Doctoring a video used to be a herculean task; now it is child’s play, achievable in a few user-friendly steps. While it is tough to pinpoint who invented the deepfake, such videos can now be created using a machine learning technique called a “generative adversarial network”, or GAN, a technique that was limited to the AI research community until 2017.
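To make the mechanism concrete, here is a minimal sketch of the adversarial training loop at the heart of a GAN, written in Python with PyTorch. It is illustrative only: the tiny fully connected generator and discriminator, the toy dimensions and the random stand-in data are placeholder assumptions, not the pipeline of any real deepfake tool.

```python
# Minimal GAN training loop (illustrative sketch; toy networks and random data,
# not the architecture of any actual deepfake software).
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64  # assumed toy dimensions

# Generator: maps random noise to a fake sample (a flat vector standing in for an image frame)
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim), nn.Tanh())
# Discriminator: scores how "real" a sample looks (1 = real, 0 = fake)
D = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

for step in range(1000):
    real = torch.randn(32, data_dim)    # stand-in for real training frames
    noise = torch.randn(32, latent_dim)
    fake = G(noise)

    # Discriminator step: learn to tell real from fake
    opt_d.zero_grad()
    d_loss = bce(D(real), torch.ones(32, 1)) + bce(D(fake.detach()), torch.zeros(32, 1))
    d_loss.backward()
    opt_d.step()

    # Generator step: learn to fool the discriminator
    opt_g.zero_grad()
    g_loss = bce(D(fake), torch.ones(32, 1))
    g_loss.backward()
    opt_g.step()
```

The two networks compete: as the discriminator gets better at spotting forgeries, the generator gets better at producing them, which is precisely what makes the resulting fakes so convincing.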
The “deepfake” problem started to attract eyeballs around the end of 2017, after Vice’s Samantha Cole reported on a deepfake porn video that featured “Wonder Woman” actress Gal Gadot. We all know how false claims, even utterly absurd ones, can be smoothly peddled with unprecedented success today thanks to social media’s virality and ubiquity.
Fake videos are now created using a “generative adversarial network”, a technology that was limited to the AI research community until 2017.

Doctored videos depicting actress Emma Watson were among the most popular in deepfake communities. Actress Kristen Bell’s face was used in a pornographic deepfake video without her consent. “I was just shocked, because this is my face… belongs to me,” she said. “It’s hard to think about that I’m being exploited.”
Michelle Obama, Ivanka Trump and Kate Middleton have all been targeted. It is undeniable that deepfakes can inflict deep psychological damage.
Michelle Obama, Ivanka Trump and Kate Middleton have all been deepfake targets.
A threat no one knows how to stop
In recent years, deepfakes have disrupted political and social structures, maligned the reputations of celebrities and sown doubt in the minds of their followers and supporters. The growing sophistication of the technology has had serious repercussions. The harms from these fake replicas range from sabotaging and exploiting individuals to social harms such as eroding trust in institutions, undermining the credibility of reputable media houses, endangering public safety and manipulating elections. Deepfakes and their not-too-distant cousins, such as fake news, have the power to do all of this. They betray two of our most cherished and natural senses: sight and sound.
Watch: Deepfake technology used to lip-sync fake content to Obama’s speech style
Deepfakes and their not-too-distant cousins, such as fake news, have the power to betray two of our most cherished and natural senses: sight and sound.
It Is Not All Bad
Even beyond beneficial uses in education, research and the like, the problems brewing from deepfake technology have a silver lining – an upside. An effective way to respond to the scourge of deepfakes might be to strengthen the social and political institutions they aim to disrupt, rather than targeting their creation itself. The deepfake, in a way, is a symptom of a deeper problem – and isn’t it ironic that the symptom is scarier than the disease itself?
For example, over the 20th century, with the exponential advancement of technology and the internet, audiences learnt to distinguish the made-up from the real and fiction from fact. People eventually learnt that People’s Court, The Apprentice, Bigg Boss and The Bachelor were scripted performances designed to entertain them. Technology, in other words, could be used to evaluate facts and to challenge accounts against alternative perspectives. Stories, fake or real, will never be eliminated; what we need to learn is how to get the narrative right.
Watch: Top 10 Deepfake Videos on the Internet
It is widely known that news organizations are strapped for funds and need eyeballs to cover their costs. News agencies should not be driven to depend on bottom-line-driven platforms like Facebook. Advocacy organizations such as WITNESS, which aims to “help people use video and technology to protect and defend human rights”, ought to be empowered. Investing in trusted journalism ought to be part of the solution, as not every paper can have a benefactor like Jeff Bezos to ensure that “Democracy (doesn’t) die in darkness”.
News agencies should not be driven to depend on bottom-line-driven platforms like Facebook.
Coming back to democratic institutions: if electoral integrity can be restored in every possible sense, then the problematic technology of deepfakes could even serve to foster credible change in those institutions. After all, the aim of the aforementioned deepfake videos was to promote electoral integrity.
If electoral integrity can be restored in every possible sense, then deepfakes could serve to foster credible change in those institutions.
As Claire Wardle writes, “When humans are angry and fearful, their critical thinking skills diminish”. A sustainable vaccine against the problem of deepfakes can only begin with authentication, achieved through a process sustained by resilient and inclusive social and political institutions.
