OpenAI announced the reorganization of its Safety and Security Committee into an independent oversight body of the board of directors. The new body has been granted broad powers, including the authority to suspend the release of AI models over safety concerns. The decision follows a 90-day review of the company's safety and security processes and reflects growing attention to the ethical aspects of AI development.
The committee will be briefed by company leadership on safety evaluations for major AI model releases and, together with the full board of directors, will oversee their launch, including the authority to delay a release until safety concerns are addressed. The full OpenAI board will also receive periodic briefings on safety and security matters.
The composition of the new committee raises questions about its independence, since all of its members also sit on OpenAI's board of directors. This distinguishes it from Meta's Oversight Board, whose members are fully independent of the corporation's board. The Meta Oversight Board can review content policy decisions and issue binding rulings, whereas the OpenAI committee focuses solely on assessing the safety of AI models before release.
The 90-day review of OpenAI's security processes also identified further opportunities for collaboration and information sharing across the AI industry. The company said it intends to share more about its safety work and to create more opportunities for independent testing of its systems. However, specific mechanisms for implementing these intentions have not yet been disclosed.