Deepfakes may not presently be a top cybersecurity concern for many lawyers. But as believable fraudulent or doctored videos and audio become more prevalent, they could inflict serious financial and reputational harm on clients.

Deepfakes are created with machine learning techniques: a computer is fed real image or audio data and learns to generate believable fake video or audio. In one widely publicized instance, a video disseminated by the Trump administration of a journalist interacting with the president's staff was found to have been intentionally doctored, according to The Associated Press.
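
For readers curious about the underlying mechanics, the following is a minimal sketch of the adversarial-training idea behind many deepfake generators, written in Python with the PyTorch library. The toy random "data," network sizes and training settings are illustrative assumptions, not any real deepfake pipeline; actual systems train far larger networks on real image or audio footage.

    # Minimal generative adversarial network (GAN) sketch: a generator learns
    # to produce samples a discriminator cannot tell apart from real ones.
    # All dimensions and data here are toy placeholders (assumptions).
    import torch
    import torch.nn as nn

    latent_dim, data_dim, batch = 16, 64, 32

    generator = nn.Sequential(
        nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
    discriminator = nn.Sequential(
        nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1))

    g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
    d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
    loss_fn = nn.BCEWithLogitsLoss()

    for step in range(1000):
        real = torch.randn(batch, data_dim)  # stand-in for real media samples
        fake = generator(torch.randn(batch, latent_dim))

        # Discriminator step: label real samples 1 and generated samples 0.
        d_loss = (loss_fn(discriminator(real), torch.ones(batch, 1)) +
                  loss_fn(discriminator(fake.detach()), torch.zeros(batch, 1)))
        d_opt.zero_grad()
        d_loss.backward()
        d_opt.step()

        # Generator step: try to make the discriminator label fakes as real.
        g_loss = loss_fn(discriminator(fake), torch.ones(batch, 1))
        g_opt.zero_grad()
        g_loss.backward()
        g_opt.step()

Trained at scale on footage of a real person, the generator half of such a system is what produces the convincing fake video or audio the attorneys describe.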

But while deepfakes can be extremely realistic, the technology has limits. For now, deepfakes can't be performed live; they have to be prerecorded.

“These technologies can't produce live fakes,” explained Robert Chesney, who teaches U.S. national security at the University of Texas at Austin School of Law. “I can't Skype with you [through a fake video], that is adapting and speaking on the fly. It has to be a prerecording.”

Still, many in the business world rely on voicemail, which offers a possible opening for a deepfake to disseminate false information.

“By definition, deepfake is a cybersecurity threat because what deepfake represents is a spoof or fake publication of a video or audio recording typically associated to a business leader or political leader, statements that the actual individual didn't make,” explained Fox Rothschild partner Scott Vernick.

“What people are concerned about, given how easy it is to replicate in ways that look quite genuine a business leader or political leader saying or doing something that's not true … [is that] it can move markets and shape or not shape the fortunes of companies,” Philadelphia-based Vernick added. 

Audio deepfakes may pose a more immediate risk because producing a convincing fake requires less sophisticated technology than video does, noted Ice Miller partner Guillermo Christensen. What's more, a deepfake deployed alongside a successful hack of widely used infrastructure could spread misinformation broadly before the government can issue a countermessage.

Attorneys' public-figure clients can also be targeted with deepfakes, suffering reputational damage or worse.

“Suppose you have someone like Beyonce, Jay-Z, Elon Musk, Mark Zuckerberg, Kamala Harris, [and someone] managed to put something out and it went viral and took positions that are quite contrary to what their actual position is. [It] could destroy someone's brand, political future, organizations,” Fox Rothschild's Vernick said.

Likewise, divisive statements from a deepfake attributed to a publicly traded company could also be detrimental financially. “You would be sure the stock would take a nosedive, and it would be highly distracting to the company,” Vernick said.

Fighting Against Deepfakes  

Training specifically aimed at spotting deepfakes isn't offered in many industries, according to lawyers contacted by Legaltech News. The Wall Street Journal, however, does train its editorial staff to quickly detect deepfakes, possibly signaling an industry-specific need to keep staff from disseminating false information that harms their customers.

“In the media business, this is a subset of a broader issue of how we verify sources,” explained Reed Smith partner Gerard Stegmaier. For most information-centric businesses, such as news organizations, repeating a lie would have an immediate detrimental public consequence, Stegmaier said.

When an entity or individual does fall for a deepfake, it may be difficult to identify the creator or have the offending content removed because of the Communications Decency Act, Stegmaier explained. Section 230 of the act holds that internet service providers aren't responsible for content created by others.

But when a deepfake or any other cybersecurity event occurs, Ice Miller's Christensen said, a plan should already be in place.

“Any unexpected threat or challenge to business, it's much easier to respond to that event when you have responses in place,” Christensen said. “You need to have something, a strong framework of who would get called on in the company if something strange happened.”

How large or small a threat deepfakes pose remains to be seen, but they should be acknowledged as a cybersecurity hazard.

“Deepfakes are real and emerging as an issue but they, like certain types of technology, could emerge very quickly; we talk about this today and it could be a very big deal in six months or it could be nothing,” Reed Smith's Stegmaier cautioned. “We simply don't know.”