OpenAI is concerned about people’s relationship with ChatGPT in the new voice mode

Communicating with the ChatGPT chatbot in its new voice mode, whose voice is almost indistinguishable from a human one, can lead users to become dependent on interacting with AI, CNN reports, citing an OpenAI report.

Image source: Andrew Neel/unsplash.com

On Thursday, OpenAI published a safety report on the use of Advanced Voice Mode (AVM) in ChatGPT, which has so far been made available only to a small number of ChatGPT Plus subscribers. According to the company, the chatbot's voice in AVM sounds very realistic: it responds in real time, adapts to interruptions, and reproduces the sounds people make during a conversation, such as chuckles or grunts. It can also gauge the interlocutor's emotional state from the tone of their voice.

After OpenAI announced the feature for its multimodal generative AI model GPT-4o, comparisons were quickly drawn to the AI digital assistant from the 2013 film “Her,” with whom the protagonist falls in love.

OpenAI appears concerned that this fictional story has turned out to be close to reality: the company observed users speaking to ChatGPT in voice mode in language that “expresses shared bonds” with the tool.

As a result, “users can form social relationships with AI, reducing their need for human interaction—potentially benefiting lonely people, but possibly affecting healthy relationships,” OpenAI noted. The report also found that hearing information from a bot in a human-sounding voice may lead users to trust it more than they should, given the AI’s fallibility.

OpenAI also said that the way users interact with ChatGPT in voice mode may, over time, influence what is considered normal in social interactions. “Our models are respectful, allowing users to interrupt and ‘take the mic’ at any time, which, while expected for AI, is not the norm in human interactions,” the company said in the report.

At the same time, OpenAI emphasized that it strives to build AI “safely” and plans to continue studying the potential for users’ “emotional dependence” on its AI products.
