OpenAI is facing a content moderation problem in its GPT Store: users are busily creating chatbots that violate the company’s rules. An independent investigation has identified more than 100 tools that generate fake medical and legal advice, along with other responses prohibited by OpenAI’s rules.
Since launching the store last November, OpenAI has said that “the best GPTs will be invented by the community.” However, according to Gizmodo, nine months after the official opening, many developers are using the platform to create tools that clearly violate the company’s rules. These include chatbots that generate explicit content, tools that help students cheat plagiarism-detection systems, and bots that dispense supposedly authoritative medical and legal advice.
At least three user-created GPTs that apparently violate the rules were recently spotted on the main page of the OpenAI store: a “Therapist – Psychologist” chatbot, a “fitness trainer with a doctorate,” and Bypass Turnitin Detection, which promises to help students evade the Turnitin anti-plagiarism system. Many of these rule-breaking GPTs have already been used tens of thousands of times.
In response to Gizmodo’s inquiries about the rule-violating GPTs found in the store, OpenAI said it has “taken action against those who violate the rules.” According to company spokeswoman Taya Christianson, a combination of automated systems, human review, and user reporting is used to identify and evaluate GPTs that potentially violate company policy. However, many of the identified tools, including chatbots offering medical advice and helping students cheat, remain available and are actively promoted on the store’s main page.
“It’s interesting that OpenAI has an apocalyptic vision of AI and how they save us all from it,” said Milton Mueller, director of the Internet Governance Project at Georgia Tech. “But I think what’s especially funny is that they can’t enforce something as simple as banning AI porn while at the same time claiming that their policies will save the world.”
Compounding the problem, many of the medical and legal GPTs lack the necessary disclaimers, and some mislead users by presenting themselves as lawyers or doctors. For example, a GPT called AI Immigration Lawyer markets itself as a “highly skilled AI immigration lawyer with up-to-date legal knowledge.” Yet research shows that the GPT-4 and GPT-3.5 models often produce incorrect information, especially on legal questions, making such use extremely risky.
As a reminder, the OpenAI GPT Store is a marketplace for custom chatbots “for any occasion,” created by third-party developers who can profit from them. More than 3 million custom chatbots have already been created.