AI voice tool used to create deepfake celebrity audio clips


Just days after ElevenLabs, a speech AI startup, launched a beta version of its platform that lets users create synthetic voices for text-to-speech audio, people have already misused the technology to generate deepfake celebrity audio clips. The company has acknowledged a growing number of voice cloning misuse cases and is contemplating additional safeguards to address the issue.

Reports have emerged of 4chan posts containing clips in which generated voices that sound like celebrities say or read controversial material, including violent, racist, homophobic, and transphobic content. It remains unclear whether all of these clips were created using ElevenLabs' technology, but one post included a link to the company's platform.

ElevenLabs is now collecting feedback on how to prevent users from abusing its technology. Possible measures include adding more layers to its account verification process and requiring users to verify copyright ownership of the voice they want to clone. The company is even considering withdrawing the tool from public use entirely and instead having users submit voice cloning requests for manual verification.

While this may be the first time deepfake audio clips have become such a prevalent issue, similar incidents occurred with the rise of deepfake video, particularly in pornography, where celebrities' faces were superimposed onto existing pornographic material.