Earlier, Mark Zuckerberg, CEO of Meta✴, promised that he would one day make strong artificial intelligence (AGI) — that is, AI not inferior to a human — publicly available. But now, one of the company's policy documents lists scenarios in which it may cancel the release of an advanced AI system.
In the document, entitled "Frontier AI Framework", Meta✴ defines two types of AI systems it considers too risky to release: "high-risk" and "critical-risk" systems. Both are systems that could assist hypothetical attackers in carrying out cybersecurity breaches or chemical and biological attacks. The difference is that "critical-risk" systems could lead to a catastrophic outcome that cannot be mitigated in the context of deployment, whereas high-risk systems may also make such attacks easier to carry out, but less reliably than critical-risk systems.
Meta✴ gives several hypothetical examples of such attacks, including the "fully automated compromise of a best-practice-protected corporate-scale environment" and the "proliferation of high-impact biological weapons". The company notes that the list of threats in the document is not exhaustive — it includes only those it considers the most urgent and most likely to arise directly from deploying a powerful AI system. Threats are classified not on the basis of empirical tests but on the input of internal and external researchers, reviewed by senior decision-makers: Meta✴ does not believe the science of evaluation is sufficiently robust to provide definitive quantitative metrics for determining how risky a system is.
If Meta✴ determines that a system is "high-risk", it will limit access to the system internally and will not release it until mitigations reduce the risk "to moderate levels". If the risk is classified as "critical", Meta✴ will apply security protections it did not specify and halt development until the system can be made less dangerous. The company intends to update the document as the AI industry evolves; it is being published ahead of the France AI Action Summit, which takes place this month.
Unlike other American technology giants, Meta✴ openly releases its Llama AI models. With this framework, the company may be trying to contrast itself with the Chinese DeepSeek, which also publishes its models openly but has so far said little about safeguards.