OpenAI is considered a global leader in the race to create strong artificial intelligence — systems as capable as humans. Meanwhile, the company's own employees regularly appear in the press and on podcasts to voice serious concerns about the safety of the systems it develops.

Image source: Growtika / unsplash.com

Recently, OpenAI released a product whose safety testing was carried out in haste, even as the company celebrated the launch on a grand scale. “They planned the launch party before they knew if the product was safe. We basically failed the process,” an anonymous source told the Washington Post. This is not the first report of its kind: earlier, current and former OpenAI employees signed an open letter calling for improvements to the company’s safety and transparency practices, shortly after the departure of the two executives responsible for these issues, Jan Leike and Ilya Sutskever.

On paper, however, everything looks quite different. One provision of OpenAI’s charter states that if another player achieves strong AI, the company will assist in ensuring its safety rather than compete with that rival. The closed nature of OpenAI’s models is likewise attributed to safety concerns. “We are proud of our track record of delivering the most capable and safest AI systems, and we believe in our scientific approach to addressing risk. A robust debate is critical given the importance of this technology, and we will continue to engage with governments, civil society and other communities around the world on behalf of our mission,” OpenAI spokesperson Taya Christianson told The Verge.

The company “didn’t cut any corners” on safety when launching its flagship GPT-4o model, Lindsey Held, another OpenAI spokesperson, assured the Washington Post. An anonymous source, however, said the testing period for the product was compressed to just one week. Earlier, the company announced a joint project with Los Alamos National Laboratory to identify the potential risks of using AI in scientific work, and it has also disclosed an internal system for assessing progress toward strong AI.

The public and regulators are concerned that the development of AI is controlled by a small handful of companies over which the average person has no leverage. If the claims made by numerous anonymous sources about OpenAI’s safety practices are true, then that company’s control over a potentially life-changing technology is indeed cause for alarm.
