https://www.inverse.com/article/53414-this-person-does-not-exist-creator-interview

‘This Person Does Not Exist’ Creator Reveals His Site’s Creepy Origin Story

“I’m basically at the point in my life where I’m going to concede that super-intelligence will be real and I need to devote my remaining life to [it].”

Phillip Wang is the 33-year-old software engineer behind This Person Does Not Exist, the artificial-intelligence-powered website that recently went viral. Each time the page is refreshed, an algorithm known as a generative adversarial network (GAN), originally developed by Nvidia, renders a hyper-realistic portrait of a completely fake person.

The stunt was designed to call attention to A.I.’s ever-increasing power to pass off completely artificial images as real. But as Wang tells Inverse, its ramifications spread far beyond “hey, look at this real-looking fake person.” In a society where pictures and images are the standard surrogates for “proof,” GANs — by automating the work that once required painstaking labor on the part of imaging experts — will soon allow anyone to furnish “proof” that any imaginable person did any imaginable thing.

“I’m basically at the point in my life where I’m going to concede that super-intelligence will be real and I need to devote my remaining life to [it],” he explains. “The reaction speaks to how much people are in the dark about A.I. and its potential.”

The site struck a chord. The former Uber software engineer says that since its launch, This Person Does Not Exist has been visited about 4.2 million times, not bad for a one-off site originally posted to a closed Facebook group. Wang initially used it as a way to convince a few friends to join the independent A.I. research he’s currently working on, but within a day he decided a wider audience could benefit from learning about the potential of GANs. He said the reaction echoes how important it is to inform people about how this type of technology could be both revolutionary and dangerous.

[Image: sample faces produced by Nvidia’s StyleGAN. Nvidia’s caption: “These people are not real – they were produced by our generator that allows control over different aspects of the image.”]

Why Fake Faces Represent a Scary Breakthrough

Wang’s site makes use of Nvidia’s StyleGAN algorithm, which was published in December of last year. The potential, in his view, ranges from the helpful but mundane (think: streamlining dental crown implantation) to the more far-out, such as imagining entirely new molecules to serve in future drugs. But this revolutionary technology will also make deception and misinformation easier than ever before.

The reason the use cases are so multi-faceted is that there are many, many ways to apply GANs, which are trained by pitting two networks against each other: a generator and a discriminator. The generator turns random noise into candidate images, while the discriminator, which does see the real training images, learns to differentiate the generated fakes from the originals. Each network’s mistakes become the other’s training signal, and after millions upon millions of training iterations the generator develops a superhuman ability to produce new images in the style of the data it was trained on.
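That adversarial loop can be sketched in a few lines of NumPy. This is a hypothetical toy, not Nvidia’s StyleGAN: the “images” are single numbers drawn from a bell curve, and both networks are one-parameter linear models, but the training dynamic is the same one described above — the discriminator learns to separate real from fake, and the generator learns to fool it without ever seeing the real data directly.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy "real data": samples from N(4, 1). Only the discriminator sees these;
# the generator learns solely from the discriminator's feedback.
TARGET_MEAN, N = 4.0, 32

# Generator: g(z) = w_g * z + b_g, fed random noise z.
w_g, b_g = rng.normal(), rng.normal()
# Discriminator: d(x) = sigmoid(w_d * x + b_d), scores "probability real".
w_d, b_d = rng.normal(), rng.normal()

lr = 0.05
for step in range(5000):
    real = rng.normal(TARGET_MEAN, 1.0, size=N)
    z = rng.normal(size=N)
    fake = w_g * z + b_g

    # Discriminator update: push d(real) toward 1 and d(fake) toward 0
    # (gradient of the binary cross-entropy loss w.r.t. the logit).
    e_real = sigmoid(w_d * real + b_d) - 1.0
    e_fake = sigmoid(w_d * fake + b_d)
    w_d -= lr * (e_real @ real + e_fake @ fake) / N
    b_d -= lr * (e_real.sum() + e_fake.sum()) / N

    # Generator update: push d(fake) toward 1, i.e. fool the discriminator,
    # back-propagating through the discriminator's weights.
    e_gen = (sigmoid(w_d * (w_g * z + b_g) + b_d) - 1.0) * w_d
    w_g -= lr * (e_gen @ z) / N
    b_g -= lr * e_gen.sum() / N

print(f"generated mean {b_g:.2f} vs real mean {TARGET_MEAN}")
```

After training, the generator’s output mean (`b_g`) has drifted from its random starting point toward the real data’s mean of 4.0, even though the generator only ever received gradients, never samples. Scaling the same loop up to deep convolutional networks and millions of face photos is, in essence, what StyleGAN does.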

It’s the same method used to create deepfakes: computer-generated images superimposed on existing pictures or videos, often deployed to push fake-news narratives or other hoaxes.

And while Wang is fascinated by the innovation it will bring across many businesses, he also wants people to be more aware of the potential damage it could cause.

As an example, a nefarious actor could spread a GAN-generated video or image depicting a bogus event to incite riots, protests, or other potentially violent reactions online.

Since the process is fully automated, all someone needs is access to an array of graphics processing units (GPUs), the graphics cards that power machine learning, and a data set of images to begin cranking out fakes like clockwork.