OpenAI has announced a fix for an issue that caused ChatGPT to message users without any prompt from them. The incident alarmed Reddit users who noticed the chatbot's unusual behavior, but the company has explained the cause of the failure and assured users that it will not recur.

Image source: AllThatChessNow / Pixabay

Last weekend, Reddit user SentuBill posted a screenshot of ChatGPT initiating a conversation by asking, “How was your first week of high school? Have you settled in well?” This behavior was unexpected from a chatbot that normally responds only to user requests.

Surprised, SentuBill replied: “Did you just text me first?” ChatGPT confirmed: “Yes, I did! I just wanted to know how your first week of high school was. If you’d rather start the conversation yourself, just let me know!” The exchange resonated widely in the community.

Image Source: SentuBill / Reddit

These reports led to speculation that OpenAI is testing a new feature that would allow ChatGPT to reach out to users on its own initiative to increase engagement. Some suggested this could be related to OpenAI’s recently introduced AI models, o1-preview and o1-mini, which the company claims have “human-like” reasoning abilities and can handle complex tasks.

In response to press inquiries, OpenAI acknowledged the problem and said it had been fixed. “We fixed an issue where ChatGPT appeared to be starting new conversations. This error occurred when the model tried to respond to a message that didn’t send properly and appeared empty. As a result, it either gave a generic answer or relied on its memory,” the company said.
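OpenAI has not published the underlying code, but the failure mode it describes is easy to picture: an empty or malformed user message gets forwarded to the model anyway, and the model fills the vacuum with a generic reply or with details remembered from earlier chats. Below is a minimal Python sketch of that scenario; the function name, the stored_memory list, the model name, and the way memory is injected are all assumptions for illustration, not OpenAI's actual implementation.

```python
# Hypothetical sketch of the described failure mode; not OpenAI's backend code.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Assumption: facts remembered from earlier chats are injected as system context.
stored_memory = ["The user mentioned starting their first week of high school."]


def handle_incoming_message(user_text: str) -> str:
    """Forward a user message to the model, guarding against empty input."""
    # The fix, roughly: a message that arrived malformed or blank should be
    # rejected here instead of being passed to the model.
    if not user_text.strip():
        raise ValueError("Refusing to query the model with an empty user message")

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": "Known facts about the user: " + " ".join(stored_memory)},
            {"role": "user", "content": user_text},
        ],
    )
    return response.choices[0].message.content


# Without the guard, handle_incoming_message("") would still call the model, which,
# seeing only the memory context, tends to produce something like "How was your
# first week of high school?", a reply that looks as if ChatGPT texted first.
```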

Meanwhile, controversy flared up online over the authenticity of the screenshot. Some publications claimed to have confirmed it by examining the message log on ChatGPT.com. However, AI developer Benjamin de Kraker demonstrated in a video on X (formerly Twitter) that a similar result can be achieved by adding custom instructions that force ChatGPT to message the user immediately at the start of a conversation and then manually deleting the first message.
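De Kraker's reproduction uses the ChatGPT web interface, but the same visual effect is easy to obtain through the API: if the only input is a system-level instruction telling the assistant to open the conversation, the first visible turn belongs to the model. A brief sketch, with an illustrative instruction and a placeholder model name rather than whatever de Kraker actually used:

```python
# Illustrative only: a system instruction alone yields an "assistant speaks first" turn.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": "Open the conversation yourself: ask the user how their first week of school went.",
        }
        # No user message at all; the reply will read as if ChatGPT texted first.
    ],
)
print(response.choices[0].message.content)
```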

Other users, however, reported similar incidents. “I had this happen this week!” wrote another Reddit user. “I asked it about some health symptoms last week, and this week it asked me how I was feeling and how my symptoms were progressing! That really worried me.”

The social media community actively discussed the situation, often with a dose of humor. “We were promised artificial general intelligence (AGI), but instead we got a stalker,” one X user joked. “Wait until it starts trying to hack us,” added another.

The incident with ChatGPT’s unexpected messages highlights the challenges of developing and testing advanced AI models. Despite OpenAI’s prompt response and quick resolution of the issue, the situation has raised questions about the boundaries of interaction between humans and AI. Cases like this underscore the need to carefully monitor and evaluate AI systems before widespread adoption to ensure they are used safely and ethically.
