Deepfakes: No Longer a Laughing Matter
October 30, 2019
Deepfakes, a portmanteau of “deep learning” and “fake,” are videos that use deep learning tools to superimpose someone’s face over another’s in a hyper-realistic way. From videos of Bill Hader’s face morphing into those of the celebrities he impersonates, Tom Cruise among them, to Jordan Peele’s comedic rendition of an eerily realistic Barack Obama, deepfakes originally made waves as funny pieces of digital entertainment. But in the wrong hands, they are no laughing matter.
The implications of deepfake usage stretch far beyond the realm of comedy and eye-catching viral videos. As the technology used to make increasingly realistic deepfakes progresses, so too does the need for regulations and assurances that deepfake content cannot be used for malicious ends such as blackmail and identity theft.
Without regulations keeping deepfakes in check, how do we keep people safe from the dark side of this increasingly popular type of technology? The answer: robust identity verification. Flagging potential deepfakes will become increasingly important as facial recognition becomes more normalized in daily activity—from unlocking your phone to boarding planes. Furthermore, being able to distinguish what is and isn’t a deepfake will help curb the harassment and coercion that are already happening thanks to this growing digital trend.
A core issue in the deepfake conundrum is the dividing line between entertainment and personal infringement. The general public sees deepfake technology as another means of entertainment, while those who are more tech-savvy understand the darker implications. Usually, a person does not know a deepfake is being weaponized against them until it’s too late. That is why, if deepfakes are going to continue as a mode of entertainment, steps must be taken to protect everyone against the harassment and defamation that can result from the technology.
Being able to suss out a deepfake has become more important in our ever-modernizing world. Beyond harassment and shaming, deepfakes can also be used for various forms of fraud, including financial fraud and identity theft. Thanks to technological advancements, our faces have become passwords in and of themselves, doubling as “keys” to unlock our cell phones, smart homes, and more. The ethics of facial scanning is another matter entirely, but if deepfakes start being used more often by cybercriminals to defeat facial scanning systems, facial recognition technology will quickly come under fire despite its obvious benefits. Until both technologies are regulated, cybersecurity measures will have to stay ahead of them and prepare for potential deepfake threats.
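To make the “face as password” idea concrete, here is a minimal sketch of how a verification system might accept or reject a face. The embedding model, the vector sizes, and the 0.6 threshold are all illustrative assumptions, not any vendor’s actual implementation; real systems also layer on liveness checks precisely to resist deepfake-style spoofing.

```python
import numpy as np

def verify_face(probe: np.ndarray, enrolled: np.ndarray,
                threshold: float = 0.6) -> bool:
    """Accept a probe face embedding if it is close enough to the
    enrolled template.

    Both inputs are assumed to be fixed-length float vectors produced
    by some upstream face-recognition model (hypothetical here). The
    probe passes only if its cosine similarity to the template meets
    the threshold -- 0.6 is an arbitrary illustrative cutoff.
    """
    probe = probe / np.linalg.norm(probe)        # unit-normalize probe
    enrolled = enrolled / np.linalg.norm(enrolled)  # unit-normalize template
    similarity = float(np.dot(probe, enrolled))  # cosine similarity
    return similarity >= threshold
```

A convincing deepfake aims to produce a probe that lands inside that similarity threshold without the real person present, which is why matching alone is not enough for high-stakes verification.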
So, how do we combat deepfake technology that’s being used maliciously? For starters, lawmakers are starting to take the threat quite seriously. The House Intelligence Committee met in June to discuss the national security challenges posed by deepfakes, A.I., and manipulated media. Representative Yvette Clarke of New York introduced the Deepfakes Accountability Act, the first attempt by Congress to criminalize fake media used to defraud, undermine, and lie to the public. State lawmakers in Texas and Virginia have also enacted their own legislation to get ahead of the threat of deepfakes, but a clear solution has yet to emerge. Also in June, Jack Clark, the Policy Director at OpenAI, testified on Capitol Hill about the deepfakes problem. The threat is very real and is starting to be taken a lot more seriously.
It’s not just government entities that are fighting back against deepfakes—the tech giants are getting involved, too. Google created over 3,000 deepfakes in an attempt to get ahead of the problem. In creating such a large and accessible database, Google hopes to give cybersecurity companies the research material necessary to combat, categorize, and label deepfakes. And as the 2020 election season rolls around, such security measures will be critical. This past August, during the Black Hat cybersecurity conference in Las Vegas, NV, the Democratic National Committee raised awareness of the dangers of deepfakes in the political sphere. In bringing attention to the matter, they also flagged ways to identify deepfakes—from inconsistent blinking to pixel disparities in the video image, primarily at the corners of objects in the frame and around the edges of the frame itself.
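The blinking tell can be made concrete. A common approach in the detection literature is to compute an eye aspect ratio (EAR) from six eye landmarks per frame and count blinks; the sketch below assumes landmarks are supplied by some upstream facial-landmark detector, and the 0.2 closed-eye threshold is an illustrative value, not a calibrated one.

```python
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """Eye aspect ratio (EAR) from six eye landmarks.

    `eye` is a (6, 2) array of (x, y) points ordered around the eye,
    as produced by common facial-landmark detectors (assumed upstream).
    EAR drops sharply when the eye closes, so an EAR time series that
    never dips signals the unnaturally rare blinking seen in many
    early deepfakes.
    """
    a = np.linalg.norm(eye[1] - eye[5])  # first vertical distance
    b = np.linalg.norm(eye[2] - eye[4])  # second vertical distance
    c = np.linalg.norm(eye[0] - eye[3])  # horizontal distance
    return (a + b) / (2.0 * c)

def blink_count(ear_series, closed_thresh: float = 0.2) -> int:
    """Count blinks as downward crossings of the EAR threshold."""
    blinks, closed = 0, False
    for ear in ear_series:
        if ear < closed_thresh and not closed:
            blinks, closed = blinks + 1, True
        elif ear >= closed_thresh:
            closed = False
    return blinks
```

A clip whose blink count over a minute of footage is far below the human norm would be flagged for closer inspection, alongside the pixel-level artifact checks mentioned above.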
Deepfakes are fantastic for entertainment, but it’s of utmost importance that government and cybersecurity professionals educate the public about the tampered elections, identity theft, and harassment that can result from deepfake technology. That state and federal governing bodies are starting to take deepfakes seriously is great progress; yet the technology is so prevalent and so easy to access that it will take more than a few laws and regulations to offset its negative effects. As technology continues to pervade consumers’ everyday lives, it’s important that we have the right security processes in place, like identity verification, to help us retain our digital identities and our sense of self.