OpenAI claims it can clone a voice from 15 seconds of audio


OpenAI's latest trick needs just 15 seconds of audio of someone talking to clone that person's voice. But don't worry, the biz wants everyone to know it isn't going to release this Voice Engine until it can be sure the potential for mischief has been managed.

Described as a "small model" that uses a 15-second clip and a text prompt to generate natural-sounding speech resembling the original speaker, OpenAI said it has already been testing the system with a number of "trusted partners." It provided purported samples of Voice Engine's capabilities in marketing bumf emitted at the end of last month.

According to OpenAI, Voice Engine can be used to do things like provide reading assistance, translate content, support non-verbal people, help medical patients who have lost their voices regain the ability to speak in their own voice, and improve access to services in remote settings. All these use cases are demoed and have been part of the work OpenAI has been doing with early partners.

News of the existence of Voice Engine, which OpenAI said was developed in late 2022 to serve as the tech behind ChatGPT Voice, Read Aloud, and its text-to-speech API, comes as concerns over voice cloning have reached a fever pitch of late.

One of the most headline-grabbing voice cloning stories of the year came from the New Hampshire presidential primary in the US, during which AI-generated robocalls of President Biden went out urging voters not to take part in the day's voting.

Since then the FCC has formally declared AI-generated robocalls to be illegal, and the FTC has issued a $25,000 bounty to solicit ideas on how to combat the growing threat of AI voice cloning.

Most recently, former US Secretary of State, senator, and First Lady Hillary Clinton has warned that the 2024 election cycle will be "ground zero" for AI-driven election manipulation. So why come forward with another potentially trust-shattering technology in the midst of such a debate?

"We hope to start a dialogue on the responsible deployment of synthetic voices, and how society can adapt to these new capabilities," OpenAI said.

"Based on these conversations and the results of these small scale tests, we will make a more informed decision about whether and how to deploy this technology at scale," the lab added. "We hope this preview of Voice Engine both underscores its potential and also motivates the need to bolster societal resilience against the challenges brought by ever more convincing generative models."

To help prevent voice-based fraud, OpenAI said it is encouraging others to phase out voice-based authentication, explore what can be done to protect individuals against such capabilities, and accelerate tech to track the origin of audiovisual content "so it's always clear when you're interacting with a real person or with an AI."

That said, OpenAI also seems to accept that, even if it doesn't end up deploying Voice Engine, someone else will likely create and release a similar product, and it might not be someone as trustworthy as them, you know.

"It's important that people around the world understand where this technology is headed, whether we ultimately deploy it widely ourselves or not," OpenAI said.

So consider this an oh-so friendly warning that, even if OpenAI isn't the reason, you can't trust everything you hear on the internet these days. ®
