It's now easier to spread election misinformation in California

Android Headlines

We’re only one month away from the U.S. presidential election, and anticipation is at an all-time high (and so is misinformation). Several politicians are trying to mitigate misinformation in light of AI-generated and AI-manipulated content, and one of them is California Governor Gavin Newsom. He signed a law that would combat AI deepfakes, but a California judge has blocked it.

Newsom was in the news recently for vetoing SB 1047, the bill that would have brought some major regulation to major AI companies. He rejected the bill for a handful of reasons, including a lack of specificity regarding AI applications and a disproportionate focus on larger companies. There’s no telling whether a similar bill will come along in the future, but hopefully one will. As Newsom himself has noted, the majority of major AI companies reside in California, so a California-based law would be welcome.

A California judge blocked a law against AI deepfakes

The fact that Newsom vetoed that bill doesn’t mean he isn’t concerned about the negative effects of AI. Earlier, he signed AB 2839 into law. This law targeted people who make deepfakes of political candidates: if someone knowingly makes a deepfake of a candidate within 120 days of an election in an election state, they’re liable for civil action. The distributor could face a civil suit from anyone who sees the content, and the law could force them to take the post down or risk monetary penalties.

This law seems firm but fair as we get closer to the election. Anti-left and anti-right propaganda is being peppered all through the internet and social media, and we can bet that AI had a hand in some of it. Be that as it may, Judge John Mendez doesn’t seem to think that AI deepfakes are much of an issue.

A while back, there was a bit of drama involving a person named Christopher Kohls. He distributed a deepfaked video of Vice President Kamala Harris that painted her as an incompetent candidate. It gained the internet’s attention after a retweet by Elon Musk, and this prompted Newsom to sign the bill into law.

However, Kohls filed a lawsuit challenging the law, claiming that the video was satire and that the law violates his freedom of speech. Well, Judge Mendez sided with Kohls and issued an injunction temporarily blocking the law. The judge gave a lengthy statement explaining why he came to this decision. In the statement, he said “[W]hile a well-founded fear of a digitally manipulated media landscape may be justified, this fear does not give legislators unbridled license to bulldoze over the longstanding tradition of critique, parody, and satire protected by the First Amendment.”

Too late?

So, the judge sided with Kohls on this matter, but this is only a preliminary injunction, and we don’t know how long the block will remain in effect. Since the election is only a month away, there’s not much reason to hope the law comes back in time. This piece of California legislation could have had an effect on how far AI deepfakes spread.

At this point, much of the U.S. population is already set on who they’re voting for. However, the law didn’t only target the quadrennial presidential election. It seems to cover pretty much any election, as long as you’re in an election state and within 120 days of the big day. So it could apply to any of the smaller elections, which should still be shielded from AI misinformation.

As such, we should still hold out hope that it, or a similar law, comes into existence. We’re still at the very beginning of this AI era. While deepfakes have existed for years, the current wave of AI tools makes them much easier to create, and the tools themselves are much more accessible. These are the kinds of laws that need to be in place.

So, what now?

This law would have been a major win for people who oppose misinformation. While Mendez acknowledges that “California has a strong interest in preserving election integrity,” that interest doesn’t show in his ruling. The law would have at least helped reduce the misinformation floating around on the internet. The amount of false information being spread increases each year, and AI only makes it that much easier to fool the masses.

The judge says that the law is “a blunt tool that hinders humorous expression and unconstitutionally stifles the free and unfettered exchange of ideas which is so vital to American democratic debate.” But, we’re talking about deepfakes here, not political cartoons or comedy sketches. If someone links an SNL sketch painting a candidate in a bad light, it’s obviously satire. However, deepfakes are entirely different. The core point of a deepfake is to make it look like a person is doing or saying something that they’re not. Sure, there are deepfakes out there that are meant to be comedic, but there’s a fine line between political satire and misinformation.

Deepfakes shouldn’t be treated with the same set of rules because they’re an entirely different beast. They have the potential to do some major damage.

It’s not too broad

Also, Mendez states that the law aims to stifle satire and humorous expression, but that’s not the case. The law doesn’t target everyone who creates a deepfake. It targets people who create deepfakes of candidates within 120 days of an election in an election state. So, if a person just makes a deepfake of a celebrity quoting a video game character, that won’t be squashed under the law.

However, if someone makes a video of one of the candidates saying they’re going to bomb a country within weeks of an election, and it’s not marked to show that it’s fake, then that’s a different story. That’s not satire, that’s a piece of media made to mislead people. The law would target that, which is completely justified.

What now?

Now, with a solid piece of California legislation in purgatory, what’s to stop people with malicious intentions from posting AI deepfakes in the 11th hour of the election? Judge Mendez’s ruling has a strong focus on the First Amendment, but it fails to take into account the elephant in the room. With this law blocked, what’s the next move? Do we just let misinformation continue to spread? What’s the battle plan?

This whole AI boom started about two years ago, and as the technology advanced, we all knew this election year would be tricky. With AI making deepfakes so easy to produce, we knew people would take advantage and pump out deceptive content. Yet, this late in the game, we’re still making moves like these. We’re well past the point where we should have some commonsense laws against spreading AI-generated misinformation. Now that there’s nothing to fill the void, who’s to say what sort of wild deepfakes will come about as we inch closer to the election?