OpenAI is considered a global leader in the race to create strong artificial intelligence as capable as humans. At the same time, company employees regularly appear in the press and on podcasts to voice serious concerns about the safety of the systems it develops.

Image source: Growtika / unsplash.com

Recently, OpenAI released a product whose safety testing had been carried out hastily, even as the company celebrated the launch on a grand scale. “They planned the launch party before they knew if the product was safe. We basically failed the process,” an anonymous source told the Washington Post. This is not the first report of its kind: earlier, former and current OpenAI employees signed an open letter calling for improvements to the company’s safety and transparency practices, shortly after the departure of Jan Leike and Ilya Sutskever, who had been responsible for these very issues.

On paper, however, everything looks quite different. One provision of OpenAI’s charter states that if another player comes close to achieving strong AI, the company will assist in ensuring its safety rather than race against that competitor. The closed nature of OpenAI’s models is likewise explained by safety concerns. “We are proud of our track record of delivering the most capable and safest AI systems, and we believe in our scientific approach to addressing risks. Rigorous debate is critical given the significance of this technology, and we will continue to engage with governments, civil society, and other communities around the world in service of our mission,” OpenAI spokesperson Taya Christianson told The Verge.

The company “didn’t cut corners” on safety when launching its cutting-edge GPT-4o model, another OpenAI spokesperson, Lindsey Held, assured the Washington Post. An anonymous source, however, said that testing of the product was compressed into a single week. Earlier, the company had announced a joint project with Los Alamos National Laboratory to identify the potential risks of using AI in scientific work, and it also emerged that OpenAI maintains an internal system for assessing its models’ progress toward strong AI.

The public and authorities are concerned that the development of AI technologies is controlled by a small handful of companies over which the average person has no leverage. If the claims made by numerous anonymous sources about OpenAI’s approach to safety are true, such a company’s control over life-changing technology is indeed alarming.
