By now most of us are used to interacting with synthetic or artificial voices. Just call a customer service help line or summon a digital personal assistant (like Alexa or Siri) and you would expect to hear a computer-generated voice. But what if the synthetic voice sounded exactly like you? Or worse, was then used to say things you would never say?

Several tech companies are making strides in training speech synthesis tools to mimic a speaker's voice. And while this can improve user clarity and accessibility for those with physical limitations, there is another, more troubling trend: the rise of "voice deepfakes," synthetic voices created from unknowing (or unwilling) participants using generative artificial intelligence.
