The CEOs of Twitter and Facebook got dragged before Congress this week to be shouted at, accused and interrogated. It’s called “techlashing,” and it’s becoming a ritual in Washington.
Members of the Senate Judiciary Committee are concerned about political bias, addiction, hate speech and, above all, the disinformation crisis, where foreign state actors, domestic conspiracy theorists and propagandists easily game social algorithms to spread fake news from fake people with fake user profiles.
A year ago, many expected deepfake video to be used to fabricate a political scandal in the run-up to the most recent US election. That didn’t happen. One reason is that deepfake video is still detectable by the human eye.
Deepfake pictures, however, have been perfected to the point where people can’t tell the difference between a photo of a real person and a fake photo of a fake person.
And that’s what did happen during the election.
A “dossier” on Hunter Biden, president-elect Joe Biden’s son, compiled in the months before the election and alleging wrongdoing by the businessman, turned out to be fake. The information was fake. The company behind it, Typhoon Investigations, was fake. The Swiss security analyst who supposedly led it, Martin Aspen, was fake. And even his profile picture was generated by deepfake technology.
Essentially, it was the kind of AI-augmented political hit job everyone feared would take place with deepfake video, but it was mostly text information that tried to appear legitimate with the help of one deepfake photo. Ironically, it was a flaw in the photo that led journalists to look into the whole dossier more thoroughly.
Separately, a pro-China disinformation campaign was recently uncovered by the research company Graphika, which they called “Spamouflage Dragon,” in which fake users with AI-generated deepfake profile pictures on Twitter and YouTube sought to influence public opinion about threatened bans on TikTok. The fake users promoted propaganda videos.
Part of the problem is that deepfake technology is getting easier for creators to build and easier for users to find. Consumer-grade, easily created deepfakes are called “cheapfakes.”
A deepfake bot on the Telegram message app was discovered recently by the visual threat intelligence company Sensity. The company claims that the bot is responsible for a “deepfake ecosystem” that has created more than 100,000 images that “digitally undress” people based on normal photos as part of a series of extortion-based attacks.
A kind of arms race is taking place on YouTube, where deepfake creators try to outdo each other by swapping one celebrity’s face onto another’s. The latest puts actor Jim Carrey’s face on Joaquin Phoenix’s character in “Joker.”
The creators of “South Park” have even launched a comedy channel on YouTube called Sassy Justice, where they use deepfake videos of famous people, especially President Donald Trump, in satire.
They’re even using deepfake technology to create fake songs in the sound and style of real performers, such as Frank Sinatra. The sound is eerie and the lyrics are crazy, but you can tell they’re getting there. It’s only a matter of time before deepfake technology will be able to churn out an infinite number of never-before-heard songs from any famous singer.
Right now, we think about deepfakes in terms of social issues. But as I’ve argued in this space before, AI-generated fakeness is becoming a growing business problem.
The use of deepfake technology in social engineering attacks is already becoming well established. Deepfake audio is already being used in phone calls that involve the use of a voice designed to sound like the boss requesting money transfers and the like. And that’s just the beginning.
The solution to this technology is more technology
Researchers at universities and technology companies are working hard to keep up with emerging deepfake tools with deepfake detection tools.
Binghamton University’s “FakeCatcher” tool monitors “bloodflow data” in the faces in videos to determine if the video is real or fake.
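The intuition behind blood-flow detection can be illustrated with a heavily simplified, hypothetical sketch (this is not FakeCatcher’s actual algorithm): a living face shows a faint periodic color change with each heartbeat, so the pixel signal over a real face concentrates spectral power in the human heart-rate band, while many synthetic faces show no such rhythm. The function below, with illustrative thresholds and made-up demo data, checks for that band of power:

```python
import numpy as np

def pulse_band_ratio(green_means, fps=30.0, band=(0.7, 4.0)):
    """Fraction of spectral power inside the human heart-rate band.

    green_means: 1-D array of per-frame mean green-channel intensity
    sampled over a face region. A real face tends to show a periodic
    blood-flow signal (a heartbeat) in this band; many synthetic faces
    do not. The band and the approach here are illustrative only.
    """
    signal = np.asarray(green_means, dtype=float)
    signal = signal - signal.mean()            # remove the DC offset
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    total = spectrum[1:].sum()                 # skip the zero-frequency bin
    return spectrum[in_band].sum() / total if total > 0 else 0.0

# Synthetic demo: a 1.2 Hz "heartbeat" plus noise vs. pure noise.
rng = np.random.default_rng(0)
t = np.arange(300) / 30.0                      # 10 seconds at 30 fps
real_face = np.sin(2 * np.pi * 1.2 * t) + 0.3 * rng.standard_normal(300)
fake_face = rng.standard_normal(300)

print(pulse_band_ratio(real_face) > pulse_band_ratio(fake_face))  # True
```

A real detector works on raw video, compensates for head motion and lighting, and feeds richer features to a trained classifier; the point here is only why a heartbeat signal is a useful liveness cue.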
The University of Missouri and the University of North Carolina at Charlotte are working on real-time deepfake picture and video detection as well.
The Korea Advanced Institute of Science and Technology created an AI-based tool called “Kaicatch” to detect deepfake photos.
But more importantly, technology companies are working on it.
Microsoft recently rolled out a new tool that tries to detect deepfake pictures and videos. It’s not perfect. Rather than labeling media as real or fake, Microsoft’s Video Authenticator tool gives an estimate, with a confidence score for each artifact.
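The design choice of reporting a score rather than a verdict can be sketched generically. The tiny wrapper below is hypothetical (it is not Microsoft’s API); it shows why a confidence score is more useful downstream than a hard label, since consumers can set their own thresholds and route borderline cases to human review:

```python
# Hypothetical reporting layer over any detector that returns a
# probability that a piece of media is synthetic. A hard real/fake
# label bakes one threshold in; a score lets each consumer choose.

def report(score, review_band=(0.35, 0.65)):
    """Turn a raw detector score into a human-readable report."""
    if score >= review_band[1]:
        verdict = "likely manipulated"
    elif score <= review_band[0]:
        verdict = "likely authentic"
    else:
        verdict = "uncertain - send for human review"
    return f"{score:.0%} chance of manipulation ({verdict})"

print(report(0.92))  # 92% chance of manipulation (likely manipulated)
print(report(0.10))  # 10% chance of manipulation (likely authentic)
```

A newsroom might tighten the review band to catch more borderline media, while a consumer app might widen it to reduce noise; the score makes both possible from one detector.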
Last year, Facebook launched what it called its “Deepfake Detection Challenge” to attract researchers to the problem of detecting deepfake videos.
With every advance in the creation of more convincing deepfakes, new technology is being developed to make them less convincing — to computers.
Why social networks will become a safe space for reality
Today, the problem of AI-generated fakes is associated with pranks, comedy, political disinformation and satire on social networks.
We’re currently living in the final months of an era in which computer-generated pictures and videos are detectable by the human eye.
In the future, there will be literally no way for people to tell the difference between faked media and real media. The progress in AI will make sure of that. Only AI itself will be able to detect AI-generated content.
So the problem is: Once deepfake-detection technology exists in theory, when and where will it be applied in practice?
The first and best place will be on social media itself. The social networks are eager to develop, apply and evolve such technology for the real-time detection of fake media. Similar or related technology will be able to do fact-checking in real time.
That means: It’s inevitable that at some point in the near future, the information you see on social networks like Twitter, Instagram and Facebook will be the most reliable, because AI will be applied to everything uploaded. Other media won’t necessarily have this technology.
And so the crooks and propagandists will turn to other media. Fake sources will socially engineer journalists with fake content to trick them into printing lies. Crooks will increasingly fool business people with deepfake calls, photos and videos in social engineering attacks.
Sometime soon, the new normal will involve trusting the information you read on social networks more than any other place. And won’t that be something?