AI Deepfakes: Have We Entered The Matrix?

By Alan Stafford, SAP Insights

In 1999’s The Matrix, no one knows what’s real and what’s a computer-generated simulation. A quarter-century later, a flood of sophisticated deepfakes created with powerful generative AI applications is making it increasingly difficult—in real life—to tell what is genuine and what is phony. But tools do exist to help exert some control over AI.

Deepfakes—images, video, or audio created or edited to convey false or misleading information—have been around for a long time. But the older the fakery, the more viewers had to suspend their perception of reality to give an altered creation any credence.

For example, in George Méliès's 1902 movie A Trip to the Moon, you’d have to be pretty gullible to believe that a rocket had lodged itself in the moon’s eye. But now, gen AI deepfakes are so much more convincing that the opposite is often true: People’s perception of reality is easily compromised, especially if a deepfake combines elements that are true with others that are fake.

Put your trust in AI?

That distinction is important, because when people can’t tell what’s real and what’s fake they lose the ability to trust much of the information they consume. And when they lose trust, they can be manipulated.

A deepfake video of Ukrainian President Zelensky urging Ukrainians to surrender appeared in 2022, and a fake robocall using an AI-generated voice resembling President Joe Biden urged people to skip voting in the January 2024 New Hampshire Democratic primary. In October 2024, deepfake images of victims of Hurricane Helene were used in phishing and other scams. Deepfake videos of Elon Musk continue to be used to scam viewers out of vast sums of money.

But are they really so slick that they’ll convince anyone? The New York Times periodically posts online tests that ask readers whether they can tell which content is AI-generated and which is not. These tests, which present doctored and undoctored portraits, photographs, and video, show how far AI has come: If you study the images or video carefully, you might be able to tell what’s fake and what’s not, but it can be difficult.

These tests also tell you up front that some of the content is AI-generated; without that heads-up, it’s much easier to be fooled by a well-crafted fake. Viewing the content on a tiny phone screen makes it easier still. And that’s not even considering that some people don’t care that content is faked, as long as it jibes with their point of view.

Many existing AI-enabled applications can create convincing fakes, and new ones arriving soon will make it even easier. These upcoming apps include Meta Movie Gen, from Facebook parent Meta, which will let you create video from just text (OpenAI’s Sora will do this too), and Google’s upcoming NotebookLM, which will allow you to use AI to create a podcast from mere text.

New controls to rein in AI

Many of these applications encourage users to “create responsibly,” and some platforms have implemented new rules in an attempt to control AI content. For example, OpenAI’s lengthy content policy prohibits “generating or promoting disinformation, misinformation or false online engagement.” A new YouTube policy says, “To help keep viewers informed about the content they’re viewing, we require creators to disclose content that is meaningfully altered or synthetically generated when it seems realistic.”

When users upload content to YouTube, they will have to confirm or deny the use of altered content. Many platforms use AI to police AI: In October 2024, Google’s Gemini AI tool told me, “I can't help with responses on elections and political figures right now” when I tried to create a deepfake image of a politician.

But none of these measures has slowed the deluge of deepfakes.

In response, some state and federal lawmakers have passed legislation to exert some control over deepfakes and other AI-generated or -altered content. California passed a law in September 2024 requiring online platforms to “remove or label deceptive and digitally altered or created content related to elections during specified periods” and to provide mechanisms for reporting such content. (A federal judge subsequently blocked the law, however.)

An Alabama law that took effect in October 2023 makes it a crime to distribute election-related “materially deceptive media.” Laws recently passed in Wisconsin, Florida, and Arizona mandate disclaimers on AI-generated content. And the Federal Communications Commission is proposing a rule that would require political advertisers to disclose when their content is AI-generated, though the rule would cover only television and radio, not online outlets such as social media sites, which frequently traffic in unverified, false information.

Many of these laws focus on political content, but malicious AI content extends far beyond that. The Department of Homeland Security’s analysis of deepfake threats describes corporate sabotage scenarios in which companies use AI-generated content to spread misinformation about competitors’ products, executives, and business activities. A low-level example: fake, AI-generated negative reviews on Amazon.com.

Businesses also need to be aware of deepfaked audio: scammers can use real-time voice alteration on phone calls to fool employees into turning over confidential information or transferring funds.

Using AI fire to fight AI fire

Countering these risks requires a combination of standard business best practices and, possibly, some more innovative approaches. Bank of America says, “Deepfakes rank as one of the most dangerous AI crimes of the future,” and it recommends the usual tactics, like educating employees and partners about AI-related threats and maintaining cybersecurity best practices, such as strengthening identity verification and validation protocols.

Businesses can experiment with reputation-defense products, such as Norton Reputation Defender. These products use tactics like generating lots of positive reviews with AI tools to drown out negative reviews, or suggesting—or even pursuing on your behalf—legal action. Deepfake detection software or services, such as Sensity.AI or McAfee Deepfake Detector, might help you find faked content related to your brand. But in a 2020 competition sponsored by Meta, the best-performing entry flagged only 65 percent of fake videos.

Deepfakes aren’t harmless fun

Meta’s challenge took place four years ago, and AI-powered detection software has certainly improved since then. Nevertheless, deepfakes pose an ever-increasing risk to people and businesses alike. Putting all your faith in AI to protect you from AI may be risky.