The interviewer asked, “Is that image real?” There’s no way to tell. “What about this video?” It’s hard to say. “You can’t tell if that’s real or fake either.” No. Not without analyzing it. “Well, what about the prose? Did a human write this sentence?” It’s possible, but it would not surprise me to learn that it was machine generated. “What about the voice in the video?” It’s sonically perfect. I’d say 60/40 it’s a machine, but I’d need to hear a few more sentences to be sure. “So, what you’re saying is that I can’t trust my eyes or my ears?” Yes. That’s what I’m saying. “Is this the end of truth?”
This is an excerpt from a conversation I had with a trade publication columnist last week about deepfakes. We looked at random samples of still images and video clips. We looked at a few paragraphs of text written in various literary styles. And finally, we listened to some spoken-word audio files and a couple of fully mixed soundtracks. The samples were not labeled, and neither of us had seen them prior to the interview. In every case, it was extremely difficult to tell (with certainty) whether the material was manipulated or computer generated, or whether we were seeing (and hearing) unaltered recordings of human beings.
Deepfakes Are Getting Easier to Make
Like most digital technologies, deepfakes are improving at an alarming rate, and it is clear that even the most sophisticated deepfake tools will be as easy to use as Instagram filters in the very near future. To see for yourself, here are a few apps that will give you a deeper understanding of deepfakes.
Zao – the best of the lot, but you need a Chinese phone number to use it. (Android / iOS)
Deepfakes Web – a browser-based deepfake service; the name is self-explanatory.
And while we’re at it, go visit thispersondoesnotexist.com for some GAN (generative adversarial network) fun. Every time you refresh the page, you’ll see a new image of a person who does not exist. The images are computer generated, and they do not repeat.
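Under the hood, sites like this pit two networks against each other: a generator that fabricates samples and a discriminator that tries to tell real from fake. A full image GAN is far too big to sketch here, but the adversarial value function at the heart of the idea fits in a few lines of NumPy. This is a minimal one-dimensional toy of my own construction – the Gaussian “data,” the affine “generator,” and the logistic “discriminator” are illustrative stand-ins, not anything the site actually uses:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data: samples from a 1-D Gaussian (a stand-in for real photos).
real = rng.normal(loc=4.0, scale=1.0, size=1000)

# Toy generator: an affine map from noise z to samples (a stand-in for a deep net).
def generator(z, a, b):
    return a * z + b

# Toy discriminator: logistic regression on a single feature.
def discriminator(x, w, c):
    return sigmoid(w * x + c)

def gan_value(real, fake, w, c):
    # V(D, G) = E[log D(x_real)] + E[log(1 - D(G(z)))]
    # D is trained to push V up; G is trained to push V down (to fool D).
    eps = 1e-9
    return (np.mean(np.log(discriminator(real, w, c) + eps))
            + np.mean(np.log(1.0 - discriminator(fake, w, c) + eps)))

z = rng.normal(size=1000)

# A "bad" generator (samples centered at 0) vs. a "good" one (centered at 4,
# matching the real data):
fake_bad = generator(z, 1.0, 0.0)
fake_good = generator(z, 1.0, 4.0)

# A discriminator that thresholds near x = 2 separates the bad fakes easily,
# but the good fakes drive the value down - it can no longer tell them apart.
w, c = 2.0, -4.0
print(gan_value(real, fake_bad, w, c))   # near zero: D wins
print(gan_value(real, fake_good, w, c))  # strongly negative: D is fooled
```

Training alternates gradient steps on both players until the generator’s samples are, to the discriminator, indistinguishable from the real thing – which is exactly why the faces on that site look so convincing.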
How far has Deepfake tech come? Read Deepfakes 2.0: The Sequel Is Even Scarier
While there are all kinds of great film and video production uses for this technology, there are also some exceptionally bad uses. Revenge porn is at or near the top of the list. It has been outlawed in many states (as it should be). But as I often say, “Technology is neither good nor bad – it’s how you use it.”
The Tech Is Not the Problem
There is a famous video clip of Nancy Pelosi from May 2019 that was manipulated to give the impression that the Speaker was drunk or worse. The viral clip is not a deepfake; it was simply slowed down with common video editing software.
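Just how low-tech is that kind of edit? Slowing a clip down is nothing more than rescaling each frame’s presentation timestamp – the same pictures, stretched over more time. Here is a minimal sketch; the 75 percent speed and the 25 fps frame rate are illustrative assumptions, not measurements of the actual clip:

```python
def slow_down(timestamps_s, speed=0.75):
    """Rescale presentation timestamps to play a clip at `speed` (< 1 = slower)."""
    return [t / speed for t in timestamps_s]

# Four frames of a 25 fps clip, originally 40 ms apart:
frames = [0.00, 0.04, 0.08, 0.12]
print(slow_down(frames))  # now ~53 ms apart: identical frames, a ~33% longer clip
```

Any consumer video editor performs this retiming (plus a matching audio stretch) with a single slider, which is the point: no AI was required to make the clip go viral.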
At the time, Facebook refused to delete the clip – even though it was shown to be edited – while YouTube did remove it. You can Google the incident if you want to go down the terms-and-conditions rabbit hole. That said, you will have no trouble finding a copy of the clip on any number of websites today. This raises many questions. Is the tech to blame?
This fake news was created with the same tech you would use to edit your kid’s birthday video. We are not going to regulate video-editing software.
How about the person or persons who created the fake news? Are they to blame? Yes. But they didn’t break any laws. This type of political speech is 100 percent protected by the First Amendment. So why bring this up?
People Believe What They Want to Believe
This is an example of an extremely low-tech use of video manipulation. It required almost no skill to accomplish. It looks and sounds real unless you are shown the original video. Then it is clear – beyond a shadow of a doubt – that the viral video was doctored and is fake.
When presented with the facts, generally speaking, those who wanted to think ill of Nancy Pelosi were unmoved. Excuses like “Yeah, but she’s got Alzheimer’s” or “This might be fake, but you know she drinks” and similar whataboutisms were used to ignore the truth. Unsurprisingly, supporters of the Speaker were happy to learn that the clip was fake, and then angered by the political attack.
This test/focus group/poll was performed numerous times by countless pollsters, YouTubers, and TikTokers, and even a few reputable journalists. The most important point is that the fake news didn’t change any minds. And, worse, the facts didn’t change minds either.
In very short order, this technology is going to get so good you really won’t be able to tell what is real and what is fake. We are about to enter a world of metaverses, mixed, virtual, augmented, and extended reality. But technology is not going to be the problem. Dogma is the deepest deepfake of all, and facts are no match for dogma.
Have some suggestions for Deepfake apps or want to chat about the tech?
Author’s note: This is not a sponsored post. I am the author of this article and it expresses my own opinions. I am not, nor is my company, receiving compensation for it.