In September, Texas became the first state in the country to criminalize "deepfakes"—video clips created with artificial intelligence that make people appear to say or do things they never said or did. But legal experts questioned the new law's constitutionality and said the rapidly evolving technology behind deepfakes has the potential to wreak havoc on the legal system, particularly when it comes to authenticating evidence in litigation.

Texas Senate Bill 751 (SB 751) amended the state's election code to criminalize deepfake videos created "with intent to injure a candidate or influence the result of an election" and "published and distributed within 30 days of an election." Doing so is now a Class A misdemeanor, punishable by up to a year in county jail and a fine of up to $4,000.

Politicians are calling deepfakes the newest threat to the country's democracy, a dangerous tool that can be used to sway voters in the weeks leading up to elections, and warning that very soon members of the public will not be able to believe their own eyes. But experts said SB 751 might conflict with the First Amendment right to freedom of speech.

It's not clear whether the Constitution would allow deepfakes to be banned outright without the ban being challenged as an infringement on creators' First Amendment rights. But because SB 751 targets deepfakes specifically as a form of election interference, it might survive a legal challenge, according to the Texas Senate Research Center.

Although the platforms where deepfakes are published could be held liable under SB 751, catching the creators could be a thorny issue, since so many deepfakes are produced overseas, making it difficult to trace a viral video back to its source.

The Technology Is Ahead of the Law

The ability to distort reality has taken an exponential leap forward with deepfake technology, said Robert M. "Bobby" Chesney, a professor at The University of Texas School of Law and co-author, with Boston University School of Law professor Danielle Citron, of "Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security," forthcoming in the California Law Review.

"Fraudulent audio and video are not new," Chesney said in an email interview. "What is new is the invention of a capacity for creating synthetic media that is not only highly realistic but also (relatively) easily spread. With more players, we get more bad actors."

As with most issues where technology outpaces the law, deepfake creators currently have an advantage over lawmakers, Chesney said.

"A lot of money is being spent (government, private sector, academia, etc.) attempting to establish reliable and scalable detection technologies," Chesney said. "Technologists currently are divided over whether the defense (detection) has or ever will catch up with offense (creation)."

The real danger is that deepfakes could be used to impugn someone's reputation or harm them in their professional life, said Chesney. They might even be used as "evidence" to deliver the wrong verdict in a trial.

"As access to the technology for creating deepfakes spreads, the chances of all kinds of malicious uses increase," Chesney said. "Certainly it's possible fraudulent content of this kind might one day have an unrecognized, but serious impact on a jury."

Those in the legal profession need to be especially vigilant, said Chesney.

"Lawyers and law firms will face all the same risks⁠—reputational sabotage, unfair competition, harassment, deepfake-enabled phishing, etc.⁠—as other individuals and organizations," Chesney explained.

"Beyond that, litigation, arbitration and other dispute resolution systems will face increasing challenges with respect to the authentication of evidence, including a growing need for forensic experts," he continued.

There are, however, steps law firms can take to harden their defenses against deepfakes, said Chesney.

"All organizations with reputations to protect already face some degree of fraud risk targeting key personnel (and, hence, the organization itself), and law firms are no different," Chesney said. "The spread of the ability to create deepfakes will accentuate that existing risk, calling for more resources to be committed to keeping an eye out for such gambits, for rapid response and the like."

How Can Deepfakes Be Stopped?

The law surrounding freedom of expression in the United States is simply unable to handle something like deepfakes, said Jared Schroeder, an assistant professor of journalism at Southern Methodist University who specializes in First Amendment law, particularly freedom of expression in virtual spaces, how information flows and how individuals reach shared understandings in a democratic society.

"The First Amendment precedential record we have right now just won't allow us to limit people from creating and posting deepfakes," Schroeder said.

But there is still the chance for redress through the courts if someone harms you, said Schroeder.

"For instance, if someone is defamed by these deepfakes—say one portrays you as a politician doing or saying something you never said or did, you could sue them for defamation," Schroeder explained.

The first part of a law like the one in Texas criminalizes creating a deepfake for political purposes, said Schroeder. The second part puts obligations on publishers, meaning a plaintiff can go after the platform that published or hosted the deepfake that harmed them.

"The first part of this law will get into trouble because political speech is one of the highest forms of protected speech we have," Schroeder said.

"The Supreme Court has been very loathe to allow what many would consider reasonable limitations on this," he continued. "An example would be the Stolen Valor Act—which seems pretty reasonable, and Congress thought so too—a law that considers lying about military honor. But in U.S. v. Alvarez, the U.S. Supreme Court struck it down, saying, 'We can't allow this kind of law to limit what people can say.'"

Deepfakes are not inherently bad, said Schroeder. The Texas law doesn't make allowances for satire, but a creator could try to argue on those grounds, because deepfakes are, at times, akin to a new generation of memes.

Schroeder noted, by way of example, the proliferation of deepfakes superimposing the likeness of actor Nicolas Cage onto a host of characters.

Those videos are distorting the truth, "but it's a lie with a wink because, in this case, we're in on the joke," Schroeder said.

So far, there are few examples of someone being harmed by a deepfake, but that's coming, said Schroeder, and deepfakes can easily damage a person's reputation.

"Lawmakers are trying head this off because this is a threat," he said. "Lawmakers tried, but this law is their first attempt, and as it comes to be tested, the courts will not allow this type of limitation on speech. Plus, it really doesn't address really bad actors, such as the ones from China, Russia and Ukraine, which have been in the news of late."

"They're trying to address a problem but this law won't have any teeth because there is no such thing as a false idea," Chesney said.

"Our theory of freedom of speech has always been market-driven, where we put all the ideas out and you can weigh them and some of them will fail," he continued. "But this law takes the opposite approach, with the government believing people are not rational or no longer able to discern the truth for themselves, so we're going to put it in the government's hands in order to stop it before it comes out. I appreciate their effort, but the law doesn't allow this. I'm concerned deepfakes are a real threat but I don't know how we'll stop them with our current understanding of free expression."