Artificial intelligence-generated images, videos and voices distributed via the internet are known as deepfakes (a portmanteau of "deep learning" and "fake"). Most agree that internet deepfake content is widespread and may be used to manipulate the public, attack personal rights, infringe intellectual property and compromise personal data. However, little agreement exists as to who is legally liable for internet AI deepfake content.

Since 2017, software has been available that combines AI deep learning capabilities with internet content to create hyper-realistic but entirely fake content, using algorithms that require as little as a single photograph or sound bite of a source. While some uses of such AI deepfake software are relatively harmless, such as fake images of a person posing with a celebrity, other AI deepfakes, involving pornography for example, may be defamatory or criminal. The problem is exacerbated by the speed and low cost of internet distribution.
