Deepfakes get real (and real easy)

As if deepfake technology wasn’t scary enough. Expect the panic to reach new heights now that it’s easily accessible to everyone. Prepare for deepfakes to revolutionize social engineering.

Call it "deepfake panic." The world is waking up to the fact that artificial intelligence (AI) will soon enable anyone to produce fake photographs, videos and audio recordings that look and sound real.

The panic is misplaced.

Deepfake panic centers on the fear that some famous person, such as a politician, will be blamed for saying or doing things they never said or did.

A bigger risk is that notable people actually caught transgressing on video or audio recordings will be able to convince the public that the authentic media is actually a deepfake. In other words, where pictures and videos and audio recordings once served as proof, deepfake technology will enable people to believe or not believe in the authenticity of any media based on their biases or preferences.

Deepfakes represent a further slipping away of the world of shared truths and toward a world where everyone has their own truth and all sources of information are suspect.

The biggest risk of all, however, is not in deepfake media that's published or mass-distributed, but in the one-on-one use of deepfake fraud in social engineering attacks.

What is a deepfake, anyway?

Deepfake is a portmanteau of "deep learning" and "fake." The creation of deepfakes uses a system called a generative adversarial network (GAN). The product of a GAN process is an algorithm that's very good at producing fake data.

Just about any data can be faked with GAN technology. But the urgent concern is with three types of data: fake video, fake photography and fake audio.

GANs start with a dataset -- say, a large number of face photos. Then, one neural network process attempts to use that data to create a fake face photo. The fake photo is presented to a second neural network process, which judges the quality of the fake against the database of actual photos.

The two neural nets go back and forth thousands of times, creating or judging, with each improving its ability.

At the end, the judge algorithm ends up being very good at judging fakes. And the creation algorithm ends up being very good at creating fakes.

One important quality of this process is rapid and constant improvement. That's why it's safe to say that the arrival of perfect deepfakes is only a matter of time.
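The adversarial back-and-forth described above can be sketched in a few dozen lines. Here's a minimal, illustrative toy -- a one-dimensional "dataset" stands in for face photos, a linear generator stands in for a deep neural net, and a logistic-regression discriminator stands in for the judge (all simplifying assumptions; real GANs use deep networks):

```python
import numpy as np

# Toy GAN: the "real" dataset is samples from a normal distribution N(4, 1.25).
# Generator:      G(z) = a*z + b           (tries to mimic the real data)
# Discriminator:  D(x) = sigmoid(w*x + c)  (tries to tell real from fake)
rng = np.random.default_rng(0)
real_mu, real_sigma = 4.0, 1.25

a, b = 1.0, 0.0   # generator parameters (starts producing samples near 0)
w, c = 0.0, 0.0   # discriminator parameters
lr, batch = 0.01, 64

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

for step in range(5000):
    # --- discriminator update: push D(real) toward 1 and D(fake) toward 0 ---
    real = rng.normal(real_mu, real_sigma, batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b
    d_real = sigmoid(w * real + c)
    d_fake = sigmoid(w * fake + c)
    # gradients of the cross-entropy loss w.r.t. the pre-sigmoid scores
    gs_real = d_real - 1.0   # from -log D(real)
    gs_fake = d_fake         # from -log(1 - D(fake))
    w -= lr * np.mean(gs_real * real + gs_fake * fake)
    c -= lr * np.mean(gs_real + gs_fake)

    # --- generator update: push D(fake) toward 1 ---
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b
    d_fake = sigmoid(w * fake + c)
    gs = (d_fake - 1.0) * w  # chain rule back through the discriminator
    a -= lr * np.mean(gs * z)
    b -= lr * np.mean(gs)

samples = a * rng.normal(0.0, 1.0, 10000) + b
print(round(float(np.mean(samples)), 2))  # drifts toward the real mean of 4
```

Note that the generator never sees the real data directly -- it improves only by fooling the discriminator's judgments, which is the essence of the GAN process.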

Fakes are getting real

The most sophisticated technology for making convincing fakes is being used by Hollywood. The movie industry can now de-age actors, re-create deceased actors and invent actors from scratch.

A new dystopian sci-fi thriller called "Gemini Man," starring 51-year-old actor Will Smith, pits a 23-year-old version of Smith against the middle-aged one. It's like the real Smith is being hunted by the Fresh Prince of Bel-Air. Dystopian indeed.

Reportedly, the movie was under consideration for two decades, as studios waited for technology to advance to the point where they could realistically de-age an actor on screen. What's interesting is that they did not de-age Smith. Instead, they created a fully digital young Smith from scratch. The younger Smith is essentially a cartoon, but one that looks like a human actor.

Another new movie, "The Irishman," features a digitally de-aged Robert De Niro -- which is to say that De Niro played his younger self, and special effects technology made him look younger after the fact.

"The Irishman" points to a future where actors can play any age, gender or even play historical figures with the actual person superimposed on the actor's performance.

"Gemini Man" points to a future in which movies don't need actors at all, but will be essentially CGI animations that look perfectly real.

You can see the capability to create convincing fakes showing up everywhere -- on YouTube, for example. The "Derpfakes" YouTube channel offers up deepfakes that map the faces of famous people onto actors in iconic scenes. One video drops multiple celebrities into the role of James Bond and other characters in that series.

Why are deepfakes suddenly a big problem?  

What's new is not the existence of deepfakes or even the prediction that deepfake technology will soon enable perfect fakes.

What's new is the existence of easy-to-use tools for creating deepfakes. The tools are out there, available to everybody. So a tsunami of deepfakes is coming our way.

Google this week released roughly 3,000 deepfakes to the public and promised more in the future. Google created them using publicly available creation methods. The purpose of the database is to give test examples to anyone trying to create deepfake-detection tools.

Facebook is a major platform for fake news in all its variety. So the company is partnering with Microsoft and a few universities on an initiative called the "Deepfake Detection Challenge" that should result in a database of deepfakes to be used for detection research.

The need is urgent, because in the last few weeks, new tools have emerged that point to a radical democratization of deepfake creation.

A company called Icons8 this week announced that it had uploaded 100,000 deepfake photographs of people who do not exist. Anyone can download and use any of these pictures free. The company gets to provide photos of people without the complications of actual people (model release forms, payments and so on).

Interestingly, the site's ultimate goal is to use its deepfake algorithm to make fake faces on the fly, after users input criteria like age, gender, hair color and other options. Users will then drop their creations into any pose or position, superimposed on an environment, such as an office or park or nightclub. Icons8's algorithm was trained with 29,000 pictures of 69 models over three years.

Previously, a similar database was posted on another site that serves more as a demo of what's possible in fabricating human face photos.

I'm not worried about Icons8 itself. But its dream tool, which enables the custom creation of humans on the fly, will be replicated at some point and made available on the dark web.

Deepfake pioneer Hao Li, who works as a professor of computer science at the University of Southern California, predicts that we're less than a year away from functionally perfect deepfakes -- deepfakes that cannot be identified as fake by humans -- and that these will be creatable by anyone.

His prediction was based partly on the existence of Zao, a deepfake smartphone app that emerged last month and lets anyone convincingly put their own face into a pre-selected, pre-processed assortment of TV shows and movies. (A nice demo of Zao made the rounds on Twitter.)

Another interesting consumer application of deepfake technology is an avatar generator iPhone app called Pinscreen. The resulting avatars don't look convincingly real, but they do look extremely identifiable.

Consumer tools for deepfake audio are coming online, too. A startup called Descript last week announced new features for its podcast production software that let you record your voice, then edit the automatically generated transcript.

But here's the amazing part: You can also type words you never said into a podcast segment, and the software will create a deepfake of your voice speaking the typed words. (The Overdub feature was made possible by the company's acquisition of Lyrebird AI.)

To prevent Descript from being used for fake news snippets, the company lets you dub in only your own fake voice, not the fake voices of others.

The technology promises a world in which writers can become podcasters by simply deepfaking their typed words into words spoken in their own voices.

All these examples show the mainstreaming of deepfake creation tools. It's all fun and games until cyber criminals start using them.

Here come the deepfake social engineering attacks

Of course, deepfake technology will be used for fake news. The public will adjust by growing as skeptical of visual and audio media as it already is of the written word.

The real risk is the perfection of social engineering attacks. In fact, deepfake attacks have already begun.

We learned late last month that cybercriminals have used deepfake audio to impersonate a CEO's voice on the phone, ordering the CEO of a subsidiary company to transfer $243,000 to the crook's bank account.

It's only a matter of time before algorithms enable real-time impersonation, and before such fraud expands into video deepfakes, limited only by the creativity of scammers.

Already, more than 99 percent of all cyberattacks involve "human interaction," according to a report by the cybersecurity and compliance company Proofpoint.

Deepfakes will only increase the types and effectiveness of these "human interactions," thus accelerating social engineering attacks on enterprises.

This new threat comes not from the perfection of deepfakes -- something we've expected for years. What's new is the democratization of deepfakes. Now anybody can do it.

So brace yourself for a new world of deepfake social engineering attacks.

Until now, we've been living in a world where we can trust requests because we recognize the face or the voice of the person making those requests.

Say good-bye to that world. It's been real.