OpenAI is concerned about people’s relationship with ChatGPT in the new voice mode

Communicating with the ChatGPT AI chatbot in the new voice mode, whose voice is nearly indistinguishable from a human one, can lead users to become dependent on interacting with AI, CNN reports, citing an OpenAI report.

Image source: Andrew Neel/unsplash.com

According to a safety report OpenAI published on Thursday about Advanced Voice Mode (AVM), which has become available to a small number of ChatGPT Plus subscribers, the ChatGPT chatbot's voice in AVM sounds very realistic. The chatbot responds in real time, can adapt to interruptions, and reproduces the sounds people make during a conversation, such as chuckles or grunts. It can also judge the emotional state of its interlocutor from the tone of their voice.

After OpenAI announced this feature in its multimodal generative AI model GPT-4o, comparisons were quickly drawn to the AI digital assistant from the 2013 film "Her," with whom the main character falls in love.

OpenAI appears concerned that the fictional story has come close to reality after observing users speaking to ChatGPT in voice mode in language that "expresses common connections" with the communication tool.

As a result, "users can form social relationships with AI, reducing their need for human interaction—potentially benefiting lonely people, but possibly affecting healthy relationships," OpenAI noted. The report also found that receiving information from a bot in a human-sounding voice may lead users to trust it more than they should, given the AI's fallibility.

OpenAI said that the way users interact with ChatGPT in voice mode may also, over time, influence what is considered normal in social interactions. "Our models are respectful, allowing users to interrupt and 'take the mic' at any time, which, while expected for AI, is not the norm in human interactions," the company said in the report.

At the same time, OpenAI emphasized that it strives to build AI "safely" and plans to continue studying the potential for users' "emotional reliance" on its AI products.
