Deep Fake Videos on the Rise as Technology Improves

But technology to combat these doctored videos is also growing in sophistication.

In May 2019, in the midst of a feud with Congresswoman Nancy Pelosi, President Trump tweeted a video of the Speaker of the House that made her look like she was stammering.

However, as it turns out, the realistic-looking video was a fake. And not just any fake, but a “deep fake.”

In recent years, artificial intelligence and deep learning tools have improved considerably in quality, and the technology can now be used to doctor legitimate audio or video recordings into false ones that look authentic enough to pass for real, even among some of the most discerning experts. Deep fake recordings have been released that made it appear as though famous celebrities were in porn videos, and that politicians, including Pelosi, Democratic National Committee Chair Tom Perez, and even former President Barack Obama, said or did things that never occurred. Understandably, as believable as these deep fake recordings appear, they can do considerable damage to the subject in question before they are ever determined to be bogus.

And deep fakes are also increasingly appearing in cybercrime scams. In at least one recent wire fraud scam, a deep fake audio message was used to help convince the employee-victim to put through an ACH transfer to crooks, according to a recent blog post from Symantec. At last month’s Black Hat USA conference, a computer security symposium for hackers, corporations, and government agencies from around the world, two sessions covered deep fake concerns: “Detecting deepfakes with mice” and “Playing offense and defense with deepfakes.”

However, even after deep fakes are exposed as doctored, the stigma of these fake videos often lingers and can affect public discourse, attitudes, and even elections and stock prices. Indeed, in August, Rep. Adam Schiff, chairman of the House Intelligence Committee, expressed concern that leading online companies Google, Facebook, and Twitter currently have no clear plan for dealing with such videos or files when they appear online.

As good as the technology is, there are a few giveaways in deep fake videos. For example, while neural networks do a good job of mapping the features of a person’s face, a fake can be betrayed by subjects who never blink. Since most pictures used to train artificial intelligence systems show open eyes, the neural network tends to create deep fakes that don’t blink, or that blink in unnatural ways. When it comes to “face-swapped” videos, researchers at UC Berkeley say that head and face gestures can be more difficult to imitate, since every person has unique head movements, such as nodding when stating a fact, and face gestures, like smirking when making a point.
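To make the blink giveaway concrete, here is a minimal Python sketch of one standard blink-detection heuristic, the eye aspect ratio (EAR). It assumes eye landmarks have already been extracted per frame by an upstream detector (e.g., dlib or MediaPipe); the function names and thresholds below are illustrative assumptions, not the Berkeley researchers’ method.

```python
# Sketch: flag clips whose blink rate is implausibly low, using the
# eye aspect ratio (EAR) computed from six landmarks around one eye.
# Landmark extraction is assumed to have happened upstream; the 6-point
# eye layout follows the common 68-point facial-landmark convention.
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: (6, 2) array of (x, y) landmark points around one eye."""
    # Vertical distances between upper and lower eyelid landmarks.
    v1 = np.linalg.norm(eye[1] - eye[5])
    v2 = np.linalg.norm(eye[2] - eye[4])
    # Horizontal distance between the eye corners.
    h = np.linalg.norm(eye[0] - eye[3])
    return (v1 + v2) / (2.0 * h)

def blink_rate(ear_series, fps, closed_thresh=0.2):
    """Blinks per minute: count open-to-closed transitions in the EAR series."""
    closed = [ear < closed_thresh for ear in ear_series]
    blinks = sum(1 for a, b in zip(closed, closed[1:]) if not a and b)
    minutes = len(ear_series) / fps / 60.0
    return blinks / minutes if minutes > 0 else 0.0

def looks_suspicious(ear_series, fps, min_rate=5.0):
    # People blink roughly 15-20 times a minute at rest, so a talking
    # head with a near-zero blink rate is a (weak) deep fake signal.
    return blink_rate(ear_series, fps) < min_rate
```

A low blink rate alone isn’t proof of fakery, which is why researchers combine it with other signals such as the head- and face-gesture patterns described above.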

But just as AI facilitates making better deep fakes, it can also support efforts to find them. Using real videos of world leaders, the UC Berkeley researchers were able to train a neural network to suss out deep fake videos with 92% accuracy. And at the UK’s University of Surrey, a project dubbed Archangel is launching a smart archive for storing videos that can be used to authenticate video, still, and audio files. Archangel trains a neural network on various formats of a video; the researchers say the network will then be able to tell whether a new video is the same as the original or a tampered version.
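The details of Archangel’s network aren’t spelled out here, but the general archive-verification workflow can be sketched simply: fingerprint a trusted original, then compare new footage against the stored fingerprint. The sketch below substitutes a basic per-frame average hash for Archangel’s actual neural network, purely to illustrate the idea; the function names and the bit-difference threshold are assumptions for illustration.

```python
# Sketch: verify footage against a trusted original's fingerprint.
# Uses a simple per-frame average hash in place of a learned model.
import numpy as np

def frame_hash(frame: np.ndarray, size: int = 8) -> np.ndarray:
    """Average-hash one grayscale frame into a size*size bit fingerprint."""
    h, w = frame.shape
    bh, bw = h // size, w // size
    # Block-average down to a size x size thumbnail, then threshold
    # each cell against the thumbnail's mean brightness.
    small = frame[: bh * size, : bw * size].reshape(size, bh, size, bw)
    small = small.mean(axis=(1, 3))
    return (small > small.mean()).flatten()

def video_fingerprint(frames) -> np.ndarray:
    """Stack per-frame hashes into an (n_frames, size*size) bit matrix."""
    return np.stack([frame_hash(f) for f in frames])

def tampered(original_fp: np.ndarray, candidate_frames,
             max_bit_diff: int = 10) -> bool:
    """True if any candidate frame drifts too far from the archived original."""
    cand_fp = video_fingerprint(candidate_frames)
    if cand_fp.shape != original_fp.shape:
        return True  # different length or layout: not the archived cut
    # Hamming distance per frame; large distances flag edited regions.
    diffs = (original_fp != cand_fp).sum(axis=1)
    return bool((diffs > max_bit_diff).any())
```

One appeal of this design is that the archive only needs to store the compact fingerprint, not the footage itself, so verification doesn’t require redistributing the original video.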

Related Stories:

Fintech Can Improve Returns, But Asset Owners Must Also Protect Data Privacy and Cybersecurity

Cyber-Insurance Sees an Uptick as Businesses Prepare for Potential Breaches

Organizations Relying on Ethical Hackers to Improve Security Postures

 
