Could New DeepFake Technology Cause an Information Apocalypse?

Author: Michael Crenshaw

Published:

Imagine a video in which a Fortune 500 company's CEO confesses to fraud and the company's stock falls by 50%, only for the video to turn out to be fake. With the advance of deepfake technology, a scenario like this is not far from our reality, and its ramifications could be enormous: artificial intelligence and deep learning are making it easier than ever to produce fake videos that look realistic.

Deepfakes are videos created by a machine learning algorithm that can closely replicate a real person's speech and mannerisms. The algorithm trains on thousands of pictures and videos of the same person, learning to approximate that person's face and voice in new scenarios. After enough of this approximation, the results look eerily similar to the real person.
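To make the idea concrete, here is a minimal sketch of the shared-encoder, two-decoder autoencoder approach that many face-swap deepfake tools are built on. The layer sizes, training loop, and random stand-in "face" data are illustrative assumptions, not the code of any real deepfake system.

```python
# A minimal sketch (not a production system) of the shared-encoder /
# two-decoder autoencoder idea behind many face-swap deepfakes.
# All layer widths and the random "face" data are illustrative assumptions.
import torch
import torch.nn as nn

IMG = 64 * 64 * 3  # flattened 64x64 RGB face crops (assumed size)

def mlp(in_dim, out_dim):
    return nn.Sequential(nn.Linear(in_dim, 512), nn.ReLU(), nn.Linear(512, out_dim))

encoder   = mlp(IMG, 128)   # shared: learns pose and expression features
decoder_a = mlp(128, IMG)   # reconstructs person A's face
decoder_b = mlp(128, IMG)   # reconstructs person B's face

opt = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder_a.parameters()) + list(decoder_b.parameters()),
    lr=1e-3,
)
loss_fn = nn.MSELoss()

# Stand-ins for the "thousands of pictures" of each person.
faces_a = torch.rand(256, IMG)
faces_b = torch.rand(256, IMG)

for step in range(100):                    # real systems train far longer
    recon_a = decoder_a(encoder(faces_a))  # A reconstructed through A's decoder
    recon_b = decoder_b(encoder(faces_b))  # B reconstructed through B's decoder
    loss = loss_fn(recon_a, faces_a) + loss_fn(recon_b, faces_b)
    opt.zero_grad()
    loss.backward()
    opt.step()

# The "swap": encode a frame of person A, but decode it with B's decoder,
# so B's face appears with A's pose and expression.
with torch.no_grad():
    swapped = decoder_b(encoder(faces_a[:1]))
```

Because the encoder is shared between both people, it learns features like head pose and expression that transfer across identities, which is what lets one person's face be mapped convincingly onto another's movements.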

Researchers worry that there are no checks and balances ready to combat the potential for deception; once faked videos are released, there may be no way to differentiate them from real ones. Consider a widely circulated video of former President Barack Obama and other individuals: its developers were able to create a realistic replication of Obama in just three days, and making these videos is only getting easier.

Amid the spread of “fake news”, the very existence of this technology poses a threat to politics and elections around the world. It could also be used by dictators to deny things that are actually true, calling into question the legitimacy of all videos, real or fake.

In an article for the National Endowment for Democracy, Sam Gregory, a graduate professor at Harvard, described the danger: “The most serious ramification of deepfakes and other forms of synthetic media is that they further damage people’s trust in our shared information sphere and contribute to the move of our default response from trust to mistrust. This could result from either widespread actual usage of deepfakes or widespread rhetorical usage by public figures who call ‘deepfakes’ on news they don’t like or exercise ‘plausible deniability’ on compromising images and audio.”

Social media giants and governments have begun making efforts to combat this technology. Facebook has built a machine learning model that flags suspected fake images and videos and sends them to fact checkers for verification. Canada already has laws against posting deepfakes of a person without their knowledge, and Australia similarly imposes fines of up to $105,000 for sharing deepfakes. But because deepfakes are so new, many other social media giants and governments are not yet prepared to deal with the potential fallout of this technology.
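For a sense of what automated detection involves, here is a minimal sketch of the kind of classifier a platform might use to flag suspected deepfakes before routing them to human fact checkers. The network layout, threshold, and random stand-in frame are assumptions for illustration; Facebook's actual model is not public in this form.

```python
# A minimal sketch of a deepfake detector: a small CNN that scores a video
# frame and flags high-scoring frames for human review. The architecture,
# threshold, and random input are illustrative assumptions only.
import torch
import torch.nn as nn

detector = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1),   # 64x64 -> 32x32
    nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),  # 32x32 -> 16x16
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 1),  # single logit: higher = more likely synthetic
)

frame = torch.rand(1, 3, 64, 64)            # one video frame (stand-in data)
prob_fake = torch.sigmoid(detector(frame))  # probability the frame is synthetic

if prob_fake.item() > 0.9:                  # the cutoff is an assumed policy choice
    print("Flag this video for human fact-check review")
```

In practice such a model would be trained on large labeled sets of real and synthetic footage, and its output would be one signal among several that human reviewers weigh before labeling or removing a video.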