Elgan on Tech

Only AI can save us from a world of fakes (a world AI is also creating)

Deepfake video and audio. AI-generated texts, poetry and lyrics. Fake sites. Fake influencers. Fake news. Will life ever be real again?

The truth is out there.

When I was a kid, my brother and I made a UFO out of paper plates, tin foil and marbles. Then we got on the roof with our spaceship dangling from a fishing line and, using our mom's plastic 110 camera, captured incontrovertible proof that we are not alone in the universe.

Fakery has always existed. But when everything went digital, both creation and distribution became easier and faster. And artificial intelligence (AI) will take fakery to a whole new level, enabling anyone to create perfect fakes of media that used to be difficult to fake, like video and audio.

How will we function in a world full of fakery?

Fake news (or, news about fakes)

Media reports obsess over the fear that deepfake videos will be used in politics. And just this week, it's happened. Bharatiya Janata Party President Manoj Tiwari used deepfake videos to speak to supporters in different languages. It was not used to falsify the comments of an opponent. Still, it's a milestone that deepfakes are now officially being used in politics.

Another demonstration of authorized AI fakery is a new song written for, and in the style of, rapper and songwriter Travis Scott. The song, called Jack Park Canny Dope Man, has AI-generated music and lyrics and was performed by Scott himself in a music video. The project was headed by the Los Angeles-based agency space150.

An unrelated website lets you create your own AI-generated song lyrics.

OpenAI's GPT-2 is a language-generation AI that can write English in just about any style. It can imitate writing styles, like that of The New Yorker magazine, and even write poetry. OpenAI is a non-profit research organization founded by Tesla CEO Elon Musk and others.

One real fear is that AI-generated fakery will be used for crime. And that's already happening, too.

Symantec has verified three separate cases of audio deepfakes being used to socially engineer employees by impersonating the voice of a CEO. Harvesting samples from YouTube speeches and earnings calls, the crooks simulated CEO voices in calls urging finance employees to urgently wire them money.

How AI will make perfect fakes

There are two general ways to create deepfake videos. The first is a face-swap, which uses an encoder and decoder to match and replace just the face of a person in a video, frame by frame.
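The encoder/decoder idea can be sketched in a few lines. In this toy version the "encoder" and two per-identity "decoders" are just random, untrained matrices, so the output is meaningless noise; the point is only the structure: one shared encoder compresses a face into a small latent vector, and swapping which decoder reconstructs it determines whose face comes out. All names and dimensions here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared encoder: compresses any 64-pixel "face" into an 8-dim latent vector.
W_enc = rng.normal(size=(8, 64))

# One decoder per identity (in practice, each trained on one person's footage).
W_dec_a = rng.normal(size=(64, 8))   # reconstructs faces as person A
W_dec_b = rng.normal(size=(64, 8))   # reconstructs faces as person B

def encode(face):
    return W_enc @ face

def decode(latent, W_dec):
    return W_dec @ latent

face_a = rng.normal(size=64)         # one frame of person A's face

# The swap: encode A's expression and pose, then decode with B's decoder,
# so the output carries A's expression rendered with B's likeness.
swapped = decode(encode(face_a), W_dec_b)
print(swapped.shape)  # (64,)
```

In a real face-swap system, the shared encoder forces both identities into the same latent space during training, which is what makes crossing the decoders produce a convincing swap.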

The second way is called a generative adversarial network, or GAN. This method, which can be used to create all kinds of convincing but fake data, uses two AI algorithms. One creates fake data and the other judges it, providing feedback to the creator algorithm. This is repeated on a huge scale, with both algorithms improving. Eventually, the creator algorithm gets so good it can churn out fake data -- video, audio, text, fingerprints -- you name it.
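That feedback loop can be illustrated with a deliberately tiny toy, not a real GAN: here the "real data" is just numbers near 4.0, the "discriminator" scores how real a number looks, and the "generator" is a single parameter that keeps whichever small adjustment earns a better score. Real GANs use neural networks and gradient descent for both sides; everything below is a made-up stand-in for the shape of the loop.

```python
import random

random.seed(0)

# Toy "real" data: values clustered around 4.0 (stand-in for real images).
def real_sample():
    return random.gauss(4.0, 0.5)

# The "discriminator": scores how real a value looks. Here it cheats by
# estimating the real data's center from samples; a real GAN learns this.
real_center = sum(real_sample() for _ in range(200)) / 200

def realness(x):
    return -abs(x - real_center)

# The "generator": one parameter, starting nowhere near the real data.
gen_mean = 0.0

# Adversarial feedback loop: the generator tries a small step in each
# direction and keeps whichever one the discriminator scores as more real.
for step in range(300):
    up = random.gauss(gen_mean + 0.1, 0.2)
    down = random.gauss(gen_mean - 0.1, 0.2)
    if realness(up) > realness(down):
        gen_mean += 0.1
    else:
        gen_mean -= 0.1

print(round(gen_mean, 1))  # drifts toward the real data's center (near 4)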

The creation of fake video is getting super sophisticated.

A Hong Kong-based startup called SenseTime, working with Nanyang Technological University and the Chinese Academy of Sciences' Institute of Automation, developed a framework that automatically edits video to match given spoken audio.

The Korean company Hyperconnect built a tool called MarioNETte, which can map the face movement of one person to the face of any other person -- say, a politician or celebrity -- in real time.

Deepfake technology is simultaneously becoming more capable and more accessible, now showing up in consumer apps. It's only a matter of time before anyone can create perfectly convincing deepfake video and audio.

And that's why it's imperative that we figure out how to detect fakes.

Fake-finding tech is coming online

Technology incubator Jigsaw (which started out as a division of Google, was moved to parent company Alphabet, and this month moved back to Google) created a platform called Assembler to help fact-checkers verify images. The tool combines detection techniques and algorithms developed at seven universities, plus one created by Jigsaw itself, to scan images for evidence left behind by known methods of faking photos.

Google is also using a combination of AI and humans to find and remove fake, unethical or malicious reviews on Google Maps. The company said this month that it has removed more than 75 million policy-violating reviews and 4 million fake business profiles. The AI takes a first pass at every review and business profile, auto-deleting clear violators. The iffy ones are then passed along to human moderators to decide their fates.
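The two-stage triage described above is easy to sketch: a model assigns each item a violation score, clear violators are removed automatically, and borderline cases go to a human queue. The function name and thresholds below are invented for illustration, not Google's actual system.

```python
# Hypothetical moderation triage: route one review or business profile
# based on a model's violation score between 0 (clean) and 1 (violating).
# Threshold values are made up for the example.
def triage(violation_score, auto_remove_at=0.95, review_at=0.60):
    if violation_score >= auto_remove_at:
        return "auto-remove"     # clear violator: delete without a human
    if violation_score >= review_at:
        return "human-review"    # iffy: queue for a human moderator
    return "keep"

print(triage(0.99))  # auto-remove
print(triage(0.75))  # human-review
print(triage(0.10))  # keep
```

The design choice matters at scale: the AI's thresholds decide how much lands in the human queue, trading moderator workload against the risk of auto-deleting legitimate reviews.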

Researchers at the University of Washington and the Allen Institute for Artificial Intelligence invented an algorithm called Grover, which they say can identify AI-generated fake writing with 92% accuracy.

Facebook recently agreed to let independent researchers access some of its information about user activity in a program called Social Science One.

Scientists at Harvard and the MIT-IBM Watson AI Lab released the Giant Language Model Test Room, a web-based tool for figuring out whether any given text was written by AI.

A Canadian startup called DarwinAI, founded by researchers at the University of Waterloo, created deep-learning technology to detect fake news. The technology currently compares the content of a headline with the article beneath it. In future iterations, it will also compare the text of an article against other articles.
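The general idea of checking a headline against its article can be illustrated crudely. DarwinAI uses deep learning; the bag-of-words cosine similarity below is only a stand-in that flags headlines sharing few words with the story they top, and all the example strings are invented.

```python
from collections import Counter
import math

# Crude headline/article consistency check: cosine similarity between
# bag-of-words vectors. A real system would use learned text embeddings.
def similarity(headline, body):
    a, b = Counter(headline.lower().split()), Counter(body.lower().split())
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values())) *
            math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

body = "The city council voted to approve the new budget today."
honest = similarity("City council approves new budget", body)
clickbait = similarity("You won't believe this one weird trick", body)
print(honest > clickbait)  # True: the matching headline scores higher
```

A low score doesn't prove a story is fake, but it's a cheap first signal that a headline may be misrepresenting the article underneath it.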

One of the most bonkers ideas for challenging our divisive, fake-news-fueled politics is to "resurrect" founding fathers Adams, Hamilton, Jay, Jefferson, Madison and Washington by simulating their thinking with AI trained on their voluminous writings and speeches.

Some have proposed blockchain as the solution to verifying content. But that will require an unlikely degree of buy-in by just about everybody.

No, the future of fake detection is clearly AI, as these early products, technologies and trials demonstrate.

The arms race between fake-creation and fake-detection AI

Some estimates claim that one-quarter of all new social media accounts are fake or fraudulent. Fake accounts sound harmless, but they tend to be closely correlated with fake news, spam and cybercrime. Current systems for detecting fake accounts don't work well, according to reports. A NATO trial found that 95 percent of the fake profiles its researchers created were still online weeks after they published their report.

This is an astonishing fact, when you consider that Twitter, Facebook and other social sites have invested billions and worked for many years on systems for detecting fake accounts. Each time they make another advancement in detection, they toss out millions of fake accounts -- which means the fakesters are staying ahead of detection.

In a way, the cat-and-mouse contest between creation and detection mirrors GANs. The fakers create fakes. The detectors detect, which provides information to the fakers about what's being detected. The fakers then adjust their methods and come back with better fakes.

It's very likely that this contest will continue indefinitely. Human perception will be completely left behind, and the competition will exist almost entirely between AI systems.

Yes, the truth is out there. But now we'll need machines to find it.