The European Union’s AI Act, a risk-based framework for regulating artificial intelligence, came into force on Thursday, August 1, 2024. Its provisions will be phased in through mid-2026. Within just six months, bans take effect on certain uses of AI in specific scenarios, such as law enforcement’s use of remote biometric identification in public places.

The AI Act grades AI applications by potential risk. Under this approach, most applications of AI are considered “low risk” and are not subject to regulation at all.

The “limited risk” tier covers AI technologies such as chatbots and tools that could be used to create deepfakes. These must meet transparency requirements so that users are not misled.

“High-risk” AI applications include biometric processing and facial recognition, AI-powered medical software, and the use of AI in areas such as education and employment. Such systems must be registered in an EU database, and their developers must comply with risk- and quality-management requirements.

The AI Act provides a tiered system of penalties: fines of up to 7% of global annual turnover for the use of prohibited AI applications, up to 3% for violations of other obligations, and up to 1.5% for supplying false information to regulators.

A separate section of the new law concerns developers of so-called general-purpose AI (GPAI). The EU has adopted a risk-based approach to GPAI as well, with transparency as the key requirement for developers of such systems. Only a subset of the most powerful models is expected to be required to carry out risk assessments and mitigation measures.

Specific guidance for GPAI developers has not yet been drawn up, since there is as yet no enforcement experience under the new law. The AI Office, the strategic oversight body for the AI ecosystem, has launched a consultation and encouraged developers to participate in the process. The Code of Practice for GPAI is due to be finalized by April 2025.

OpenAI’s guide to the AI Act, released late last month, states that the company expects to “work closely with the EU AI Office and other relevant authorities as the new law is implemented in the coming months,” including by producing technical documentation and other guidance for GPAI model providers and developers.

