Meta promises to halt the release of AI models it deems too dangerous for humanity

Meta CEO Mark Zuckerberg has previously promised to make artificial general intelligence (AGI), that is, AI no less capable than a human, publicly available. Now, however, a company policy document lists scenarios in which Meta may withhold the release of an advanced AI system.

Image source: Christopher Burns / Unsplash.com

The document, titled the Frontier AI Framework, defines two types of AI systems that Meta considers too risky to release: "high-risk" and "critical-risk" systems. Both could assist attackers in breaching cybersecurity or carrying out chemical and biological attacks. The difference is that critical-risk systems could lead to a "catastrophic outcome that cannot be mitigated in the context of deployment", whereas high-risk systems may make such attacks easier but do so less reliably than critical-risk systems.

Meta gives several hypothetical examples of such attacks, including the "automated end-to-end compromise of a best-practice-protected corporate-scale environment" and the "proliferation of high-impact biological weapons". The list of threats in the document is not exhaustive, the company notes: it covers only those threats Meta considers the most urgent and the most plausible consequences of releasing a powerful AI system. Threats are classified not on the basis of empirical tests but on the input of internal and external researchers, reviewed by senior decision-makers: Meta does not believe the science of evaluation is "sufficiently robust as to provide definitive quantitative metrics" for deciding how dangerous a system is.

If Meta decides that a system falls into the high-risk category, access to it within the company will be limited, and its release postponed until mitigations reduce the risk "to moderate levels". If the risk is classified as critical, Meta will deploy security protections it does not specify and halt development until the system can be made less dangerous. The company intends to update the document as the AI industry evolves and is publishing it ahead of the AI Action Summit in France, which takes place this month.

Unlike other American tech giants, Meta openly releases its Llama AI models. The company may be trying to contrast its approach with that of China's DeepSeek, which also publishes its models openly but has so far said little about safeguards.
