OpenAI is widely regarded as a global leader in the race to create artificial general intelligence (AGI), AI as capable as a human. Yet the company's employees regularly appear in the press and on podcasts to voice serious concerns about the safety of the systems it develops.

Image source: Growtika / unsplash.com

OpenAI recently released a product whose safety testing was rushed, even as the company celebrated the launch on a grand scale. “They planned the launch party before they knew if the product was safe. We basically failed the process,” an anonymous source told the Washington Post. This is not the first report of its kind: earlier, current and former OpenAI employees signed an open letter calling for better safety and transparency practices at the company, shortly after the departures of Jan Leike and Ilya Sutskever, who had been responsible for that work.

On paper, however, everything looks quite different. OpenAI’s charter states that if another player achieves AGI first, the company will assist in making it safe rather than compete with that rival. The closed nature of OpenAI’s models is likewise attributed to safety concerns. “We are proud of our track record of delivering the most capable and safest AI systems and believe in our scientific approach to addressing risk. Rigorous debate is critical given the significance of this technology, and we will continue to engage with governments, civil society and other communities around the world in service of our mission,” OpenAI spokesperson Taya Christianson told The Verge.

The company “didn’t cut any corners” on safety when launching its flagship GPT-4o model, another OpenAI spokesperson, Lindsey Held, assured the Washington Post. An anonymous source, however, said the testing period for the product was compressed to just one week. Earlier, the company announced a joint project with Los Alamos National Laboratory to assess the potential risks of using AI in scientific research, and it has also disclosed an internal scale for tracking progress toward AGI.

The public and regulators are concerned that the development of AI technology is controlled by a small handful of companies over which the average person has no leverage. If the claims numerous anonymous sources have made about OpenAI’s safety practices are true, such a company’s control over potentially life-changing technology is indeed cause for alarm.
