OpenAI and Anthropic will give the US government access to their new large language models before public release "to improve their safety," The Verge reports. In other words, US authorities will check whether new AI models are safe to release to the world.

Image source: Andrew Neel/unsplash.com

Both companies have signed memorandums of understanding with the US AI Safety Institute, committing to make their AI models available both before and after public release. The arrangement will allow joint security risk assessments and work to address potential problems, the agency said. It added that the UK's AI Safety Institute will also be involved in the effort.

According to The Verge, California lawmakers this week passed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047). The bill requires AI companies in California to take certain precautions before training advanced AI models, including the ability to quickly and completely shut a model down, protecting models from "unsafe modifications after training," and assessing the potential risk of an AI model and its components "causing critical harm."

The bill has not won support from AI companies. OpenAI and Anthropic opposed its passage, arguing that the new requirements could hurt small companies building open-source AI. The bill has not yet become law and awaits the signature of California Governor Gavin Newsom.

Meanwhile, the White House has been working to get companies to voluntarily adopt and expand AI safety measures. According to The Verge, several leading firms have already agreed to invest in research on cybersecurity and discrimination and to develop watermarking systems for labeling AI-generated content.
