Deepfakes are one of the latest trends in artificial intelligence, and they have been spreading across social media like wildfire. Deepfake technology has become a tool for revenge porn and fake news, and more recently it has been used to create videos of people saying things they never said or doing things they never did. What are deepfakes? How do they work? And what can be done about them?
What are deepfakes, and how do they work?
Deepfakes, a portmanteau of "deep learning" and "fake", are synthetic media in which a face in an existing image or video is replaced with someone else's likeness. While faking content isn't new, deepfakes use powerful techniques from machine learning and artificial intelligence to generate deceptive visual and audio content. The primary methods of creating deepfakes involve training generative neural network architectures, such as autoencoders or generative adversarial networks (GANs).
The autoencoder approach trains a deep learning model on video clips of a person to learn their facial features from various angles and environments. The trained model then maps that person's face onto the individual in the target video by finding common features.
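The core trick behind autoencoder face swaps is a *shared* encoder paired with one decoder per identity: encode person A's frame, then decode it with person B's decoder. The sketch below is a minimal illustration of that wiring only; the linear maps, dimensions, and variable names are toy assumptions standing in for the deep convolutional networks a real system would train.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: flattened 8x8 grayscale "faces" and a small latent space.
FACE_DIM, LATENT_DIM = 64, 16

# One shared encoder, plus one decoder per identity (A and B).
# In a real system these are deep networks trained with a reconstruction
# loss on thousands of frames; random linear maps stand in for them here.
W_enc = rng.normal(0, 0.1, (LATENT_DIM, FACE_DIM))
W_dec_a = rng.normal(0, 0.1, (FACE_DIM, LATENT_DIM))
W_dec_b = rng.normal(0, 0.1, (FACE_DIM, LATENT_DIM))

def encode(face):
    """Shared encoder: compress a face into pose/expression features."""
    return W_enc @ face

def decode(latent, W_dec):
    """Identity-specific decoder: reconstruct a face from the latent code."""
    return W_dec @ latent

def face_swap(face_a):
    """The swap: encode person A's frame, decode with person B's decoder,
    yielding B's likeness with A's pose and expression."""
    return decode(encode(face_a), W_dec_b)

frame = rng.normal(size=FACE_DIM)  # stand-in for one video frame of person A
swapped = face_swap(frame)
print(swapped.shape)  # (64,) -- same shape as the input frame
```

Because the encoder is shared across both identities, it is forced to capture only what the faces have in common (pose, expression, lighting), which is exactly what lets the decoders exchange identities.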
Generative adversarial networks (GANs) are another popular method for creating deepfakes. A GAN pits two neural networks against each other: a generator that studies large amounts of data to "learn" how to produce new examples mimicking the real thing, and a discriminator that tries to find flaws in those forgeries. Training continues until the generator's output fools the discriminator, often with painfully accurate results. Researchers also turn the same adversarial idea around, training networks to identify deepfakes.
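The adversarial setup boils down to two opposing loss functions: the discriminator is penalized for misclassifying real and fake samples, and the generator is penalized when its fakes fail to fool the discriminator. The following sketch computes those two losses on toy 1-D data; the linear generator, the scalar discriminator, and the specific distributions are illustrative assumptions, not how production deepfake GANs are built.

```python
import numpy as np

rng = np.random.default_rng(1)

def generator(z, w):
    # Maps random noise z to a "fake" sample; a real generator is a deep net.
    return w * z

def discriminator(x, theta):
    # Outputs the probability that x is real (sigmoid of a linear score).
    return 1.0 / (1.0 + np.exp(-theta * x))

real = rng.normal(loc=4.0, size=256)           # "real" data: mean-4 Gaussian
fake = generator(rng.normal(size=256), w=0.5)  # the generator starts far off

theta = 1.0
# Discriminator loss: penalize calling real samples fake or fake samples real.
d_loss = (-np.mean(np.log(discriminator(real, theta)))
          - np.mean(np.log(1.0 - discriminator(fake, theta))))
# Generator loss: low only when the discriminator is fooled by the fakes.
g_loss = -np.mean(np.log(discriminator(fake, theta)))
print(f"d_loss={d_loss:.3f}, g_loss={g_loss:.3f}")
```

Training alternates gradient steps on these two losses; as the generator's samples drift toward the real distribution, the discriminator's accuracy falls toward chance, which is the signal that the fakes have become convincing.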
How are deepfakes used?
Several apps and software packages let beginners generate deepfakes with relative ease, such as the Chinese app Zao, DeepFaceLab, FaceApp (a photo editing app with built-in AI techniques), FaceSwap, and DeepNude.
While the ability to automatically swap faces to create a high-quality, realistic fake video of anyone has some interesting harmless uses (such as filmmaking and gaming), this is obviously a dangerous technology with some worrying implications. One of the first real-world applications for deepfakes was to create pornography.
In 2017, a Reddit user named "deepfakes" created a platform (a subreddit) for porn featuring face-swapped actors. Such porn (particularly revenge porn) has caused significant damage to celebrities and public figures. According to research from the company Deeptrace, pornography made up 96% of deepfake videos found online in 2019.
Deepfakes have transitioned from being a tool for porn to one for fraud and political propaganda. Imagine watching the evening news and seeing a press conference in which your Prime Minister urges people to rebel, when everything is a deepfake. Without any means of authentication, it can be nearly impossible to distinguish a legitimate video from an AI-generated false one.
In 2017, researchers from the University of Washington released a paper describing how they had created a fake video of President Barack Obama, to highlight research on generative technology and what may happen if users put it to nefarious use. Facebook's chief executive, Mark Zuckerberg, has also been the target of a deepfake video that appeared to show him crediting a secretive organization for the success of the social network.
One worrying application of AI-generated deepfake content is scams. It isn't easy to know whether an imposter has replaced the person on the other side of a phone call or video conferencing session, because a deepfake can mimic in great detail their voice and how they would respond to certain questions or statements. Scammers have used a deepfake of a tech CEO's voice to convince an employee at the company to transfer money to their account. And this was not the first time: last year, scammers managed to defraud another company using the same trick, netting $240,000.
Deepfake technology can revolutionize history lessons by making them interactive and much more engaging for students. It may also preserve historical stories since deepfakes could depict firsthand accounts from protagonists of large-scale events like World War II.
For the David Beckham malaria announcement, AI was used to create a YouTube video of him appearing to speak nine languages. Deepfakes make it possible to translate films while keeping the original actors: the voices sound like the originals and, crucially, the lip movements match the words spoken. In this way, deepfakes can make video and audio more accessible by bridging language barriers.
The technology can also synthesize realistic data to help researchers develop new ways of treating diseases without depending on actual patient data. Mayo Clinic, the MGH & BWH Center for Clinical Data Science, and NVIDIA collaborated to create synthetic brain MRI scans using generative adversarial networks (GANs).
The team trained its GAN on two brain MRI datasets: one containing approximately 200 brain MRIs showing tumors, the other containing thousands of brain MRIs of Alzheimer's patients. The researchers concluded that algorithms trained on the synthetic scans became just as proficient at tumor detection as algorithms trained only on real images.
How can we tell fake from real?
Deepfakes are becoming a common occurrence, so people will need to learn to spot them. Doing so will help prevent fake news from entering society and being believed as authentic.
Outlined below are a few indicators that give away deepfakes:
- Some deepfakes have trouble realistically animating faces, and the result is a video in which the subject never blinks or blinks far too often or unnaturally.
- Deepfakes often show blurred backgrounds while other parts of the photo or video seem sharp. The focus may also appear soft, with insufficient lighting in certain areas of the image.
- You can also tell a deepfake if someone looks like they’re not expressing the appropriate emotion for what they are talking about.
- If someone’s movements don’t look natural, or if they appear to shake and turn jerky between frames, you should suspect the video to be a deepfake.
- If you watch a video on a screen larger than your smartphone's, or use video editing software to slow the playback, you can perform a close-up analysis of the image. Zooming in on the lips, for example, can make it easier to see whether the person is really talking or the video is badly lip-synced.
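The first indicator above (unnatural blinking) can even be checked programmatically. A common heuristic tracks the eye aspect ratio (EAR) across frames: a blink makes the eye's height collapse relative to its width. The sketch below assumes six landmark points per eye have already been extracted (a real pipeline would use a face-landmark library for that); the 0.2 threshold and the simulated frame series are illustrative assumptions.

```python
import math

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmarks ordered around the eye contour.
    EAR = (sum of the two vertical distances) / (2 * horizontal distance)."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    vertical = dist(eye[1], eye[5]) + dist(eye[2], eye[4])
    horizontal = dist(eye[0], eye[3])
    return vertical / (2.0 * horizontal)

def count_blinks(ear_series, threshold=0.2):
    """Count drops of the EAR below the threshold (one drop = one blink)."""
    blinks, below = 0, False
    for ear in ear_series:
        if ear < threshold and not below:
            blinks += 1
        below = ear < threshold
    return blinks

# Simulated clip: EAR hovers near 0.3 (eyes open) with two brief dips.
series = [0.3] * 40 + [0.1] * 3 + [0.3] * 40 + [0.12] * 3 + [0.3] * 40
print(count_blinks(series))  # prints 2
```

Comparing the blink count per minute against the typical human rate (very roughly 15 to 20 blinks per minute at rest) flags clips where the subject never blinks or blinks far too often.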
What does the future of deepfakes look like, and will they threaten our trust in media?
The competition between the creation and detection of AI-faked videos and pictures is becoming increasingly fierce. Deepfake content is easy to create but hard to detect, not to mention difficult for us humans to distinguish from reality. The latest deepfake algorithms can deliver more realistic images and videos, and new techniques make it possible to produce them in much less time than before. A recent paper from researchers at McGill University showed that a specialized neural network could be trained in just two hours to generate smooth facial animations. Experts predict that these AI-faked videos and photos may become nearly impossible to distinguish from authentic content as time goes by.
Deepfakes can be used for many good things; for example, some people use them for art and education. But deepfakes also bring threats to public safety, such as crime, fake information, and election interference. Another concern is that deepfakes can make a person appear to do something they never did, abusing their identity. Once such a video goes online, you no longer have control over how it is used or interpreted.
Ironically, the software that can expose videos or pictures as forgeries is itself artificial intelligence. AI has already been used to catch fake content; however, such systems still perform better when authenticating famous people, since they are trained on hours and hours of footage readily available online.
Twitter, Google, Facebook, and an Amsterdam start-up are developing detection systems that aim to identify fakes wherever they appear. However, deepfakes are an adversarial technology and will continue to evolve, and no single organization can solve these challenges single-handedly. As a result, Facebook created the Deepfake Detection Challenge, an open initiative to advance AI detection systems and build a community that helps people verify the legitimacy of the content they view online.
Two other strategies exist for verifying the accuracy of media. One is to introduce digital ownership watermarks into video and other media, which are hard, though not impossible, to tamper with. The second is to record data (video, image, and text) on a blockchain ledger, which is tamper-resistant.
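What makes a ledger tamper-resistant is that each record commits to a hash of both the media and the previous record, so altering any earlier entry breaks every hash that follows it. The sketch below shows that chaining idea with SHA-256; it is a minimal toy (file names, record fields, and helper names are illustrative assumptions), not a real blockchain, which would add distributed consensus on top.

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder "previous hash" for the first record

def _record_hash(media_hash, prev):
    payload = json.dumps({"media": media_hash, "prev": prev}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def add_record(chain, media_hash):
    """Append a record that commits to the media hash AND the previous
    record's hash, chaining every entry to all entries before it."""
    prev = chain[-1]["hash"] if chain else GENESIS
    record = {"media": media_hash, "prev": prev,
              "hash": _record_hash(media_hash, prev)}
    chain.append(record)
    return record

def verify(chain):
    """Recompute every hash; any edit to an earlier record breaks the chain."""
    prev = GENESIS
    for rec in chain:
        if rec["prev"] != prev or rec["hash"] != _record_hash(rec["media"], prev):
            return False
        prev = rec["hash"]
    return True

chain = []
for clip in (b"press-conference.mp4", b"interview.mp4"):  # stand-in media bytes
    add_record(chain, hashlib.sha256(clip).hexdigest())

print(verify(chain))            # True: the ledger is intact
chain[0]["media"] = "tampered"  # try to rewrite history...
print(verify(chain))            # False: the tampering is detected
```

A publisher could register a video's hash at release time; anyone can later hash the clip they received and check it against the ledger, so a deepfaked re-edit would fail verification.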
If we take the time to research where our information comes from, we may find more beneficial applications for deepfakes than evil uses. Increased awareness of deepfakes could help us limit their negative impacts, find ways to coexist with them, and continue to reap the benefits from now into the future.