Multiple violations of OpenAI rules found in GPT Store

OpenAI is facing a content-moderation problem in its GPT Store: users are busily creating chatbots that violate the company’s rules. An independent investigation has identified more than 100 tools that generate fake medical or legal advice and other responses prohibited by OpenAI’s policies.

Image source: OpenAI

Since launching the store last November, OpenAI has said that “the best GPTs will be invented by the community.” However, according to Gizmodo, nine months after the official opening, many developers are using the platform to create tools that clearly violate the company’s rules. These include chatbots that generate explicit content, tools that help students fool plagiarism-detection systems, and bots that dispense supposedly authoritative medical and legal advice.

Image source: Gizmodo

On the main page of the OpenAI store, at least three user-made GPTs were recently spotted that apparently violate the rules: a “Therapist – Psychologist” chatbot, a “fitness trainer with a doctorate”, and Bypass Turnitin Detection, which promises to help students evade the Turnitin anti-plagiarism system. Many of these rule-breaking GPTs have already been used tens of thousands of times.

In response to Gizmodo’s inquiries about the rule-violating GPTs found in the store, OpenAI said it has “taken action against those who violate the rules.” According to company spokeswoman Taya Christianson, a combination of automated systems, human review, and user reports is used to identify and evaluate GPTs that potentially violate company policy. Nevertheless, many of the identified tools, including chatbots offering medical advice or assistance with cheating, remain available and are actively promoted on the main page.

“It’s interesting that OpenAI has an apocalyptic vision of AI and how they save us all from it,” said Milton Mueller, director of the Internet Governance Project at Georgia Tech. “But I think what’s especially funny is that they can’t enforce something as simple as banning AI porn while at the same time claiming that their policies will save the world.”

Compounding the problem, many of the medical and legal GPTs lack the necessary disclaimers, and some mislead users by presenting themselves as lawyers or doctors. For example, a GPT called AI Immigration Lawyer markets itself as a “highly skilled AI immigration lawyer with up-to-date legal knowledge.” Yet research shows that the GPT-4 and GPT-3.5 models frequently produce incorrect information, especially on legal matters, making such use extremely risky.

As a reminder, the OpenAI GPT Store is a marketplace for custom chatbots “for any occasion”, created by third-party developers who can earn revenue from them. More than 3 million custom chatbots have already been created.
