OpenAI has begun rolling out Advanced Voice Mode, the voice interface for ChatGPT – a small number of ChatGPT Plus subscribers have received access to hyper-realistic conversations with GPT-4o. The company has promised that all paid subscribers will be able to use the new feature by fall.


OpenAI first demonstrated GPT-4o's voice capabilities in May, when the feature impressed audiences not only with its quick answers but also with one voice's resemblance to Scarlett Johansson's. The actress said she had refused company head Sam Altman permission to use her voice for this purpose and later turned to lawyers to protect her interests; OpenAI withdrew the voice to avoid escalating the matter into a conflict. In June, the company said it would delay the release of the voice interface to finalize its safety measures.

Previously announced AI assistant features, such as video support and screen sharing, will not be available during the alpha testing phase and will appear "later." For now, users will have to limit themselves to voice interaction. Previously, OpenAI used three AI models to implement this feature: one to convert speech to text, a second (GPT-4) to process the request itself, and a third to convert ChatGPT's text response back into speech. The updated GPT-4o is multimodal – it handles all of these tasks on its own, keeping latency to a minimum. The model can also recognize emotional intonation in the user's voice, detecting sadness or excitement, for example, and can tell when a person is singing.
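For illustration, here is a minimal sketch of what such a three-model pipeline looks like when assembled from OpenAI's public API (Whisper for speech-to-text, a chat model for the reply, and a TTS model for the response audio). The specific model names, voice, and file paths are assumptions for the example, not a description of how ChatGPT's voice mode is built internally:

```python
# Illustrative sketch of a three-model voice pipeline assembled from
# OpenAI's public API. Model names and file paths are assumptions for
# this example; GPT-4o's native voice mode replaces all three steps
# with a single audio-in/audio-out model.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# 1. Speech-to-text: transcribe the user's spoken request.
with open("user_question.wav", "rb") as audio_in:
    transcript = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio_in,
    )

# 2. Text processing: generate the assistant's reply from the transcript.
completion = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": transcript.text}],
)
reply_text = completion.choices[0].message.content

# 3. Text-to-speech: synthesize the reply as audio.
speech = client.audio.speech.create(
    model="tts-1",
    voice="alloy",
    input=reply_text,
)
with open("assistant_reply.mp3", "wb") as audio_out:
    audio_out.write(speech.content)
```

Because each stage has to wait for the previous one to finish, the delays of all three models add up; collapsing them into a single multimodal model is what removes that overhead and allows near-real-time dialogue.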


OpenAI will roll out the ChatGPT voice interface gradually so it can closely monitor how the feature is actually used. Users included in the alpha testing group will receive a notification in the ChatGPT app, followed by an email with instructions on how to use the new features. To date, GPT-4o's voice capabilities have been tested by more than a hundred members of a closed red team speaking 45 languages.

ChatGPT's voice mode will be limited to four voices – Juniper, Breeze, Cove and Ember – created with the participation of voice actors. The company dropped the Sky voice that had been compared to Scarlett Johansson's. OpenAI also said it has added filters to block requests to generate music and other potentially copyrighted material – the kind of requests that have already drawn lawsuits from major music publishers against the startups Suno and Udio.
