In January 2023, the AI speech synthesis company ElevenLabs, Inc. released a beta platform for its natural-sounding vocal cloning tool. With only a brief snippet of a person's voice, the platform could generate audio of the target saying anything the uploader desired. The release triggered a wave of misappropriated vocal clones, from viral rap songs to parodies of political figures. Recognizing that its software was being widely misused, ElevenLabs added safeguards so the company could trace generated audio back to its creator. But it was too late; Pandora's box was already open.

Since then, a wide range of similar vocal cloning tools has emerged, and vocal deepfakes have become a common vehicle for scams and misinformation. These harms have only been compounded by the absence of laws and regulations adequate to rein in the use of AI and protect an individual's right to their own voice.