The EU has published draft rules for general-purpose AI models

EU authorities yesterday published the first draft of a Code of Practice for general-purpose AI (GPAI) models. The document, which is to be finalized by May next year, outlines risk-management guidelines, a compliance plan, and ways for companies to avoid large fines.

Image source: Tung Nguyen / pixabay.com

Europe’s AI Act came into force on August 1, leaving room for rules on general-purpose AI to be set out later. This draft document is a first attempt to clarify what is expected of advanced models; stakeholders have been given time to provide feedback and suggest improvements before the regulations take effect.

General-purpose AI models covered by the rules include systems trained with a total computing power of more than 10²⁵ FLOPs. The companies expected to fall under the document are OpenAI, Google, Meta, Anthropic and Mistral, and this list may grow.
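To get a feel for the 10²⁵ FLOPs threshold, a rough sketch using the common ~6 × parameters × tokens approximation for dense transformer training compute (the heuristic and the model sizes below are illustrative assumptions, not part of the regulation):

```python
# EU draft Code threshold for general-purpose AI models.
THRESHOLD_FLOPS = 1e25

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training compute for a dense transformer
    using the common ~6 * N * D heuristic."""
    return 6 * params * tokens

# Hypothetical model configurations, for illustration only.
for params, tokens in [(7e9, 2e12), (70e9, 15e12), (400e9, 15e12)]:
    flops = training_flops(params, tokens)
    covered = flops > THRESHOLD_FLOPS
    print(f"{params:.0e} params on {tokens:.0e} tokens "
          f"-> {flops:.1e} FLOPs, over threshold: {covered}")
```

Under this heuristic, only very large training runs (here, the hypothetical 400-billion-parameter model) cross the threshold.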

The Code covers several obligations for general-purpose AI developers: transparency, copyright compliance, risk assessment, and technical and governance risk mitigation. Developers should disclose information about the web crawlers used to gather training data, which matters to copyright holders and content creators. Risk assessment includes measures to prevent cybercrime, the inclusion of discriminatory material in training sets, and loss of control over AI.

AI developers are expected to draw up a Safety and Security Framework (SSF). This document will help structure policies to contain threats and mitigate them in proportion to each systemic risk. The proposed regulations also cover technical areas, including protecting the models themselves, ensuring fault-tolerant access control, and continually re-evaluating the effectiveness of these measures.

The section describing interaction with authorities provides for accountability on the developers' side: continuous risk assessment is required and, where necessary, the involvement of third-party experts. Violating the AI Act carries a serious fine of up to €35 million or 7% of global annual turnover, whichever is greater. Affected parties may submit comments until November 28; the document is to be finalized by May 1, 2025.
