A compliance audit with the European Union’s (EU) Artificial Intelligence Act (AI Act) has revealed key flaws in the AI models of major tech companies such as Meta✴, OpenAI and others. The main problems relate to resistance to cyber attacks and bias in conclusions. Companies whose AI does not comply with the new regulations could face large fines – up to €35 million or 7% of their global annual turnover.
The European Union has long debated the need for new rules to regulate AI, but the situation accelerated significantly after OpenAI released ChatGPT at the end of 2022. The huge popularity of this service and active public discussions about the possible threats posed by AI led to the creation of a set of rules for general purpose AI (General Purpose AI or GPAI). The law, designed to ensure AI technology is safe and ethical, will come into effect in stages over the next two years.
The new Large Language Model Checker (LLM Checker), developed by Swiss startup LatticeFlow AI together with scientists from ETH Zurich and the Bulgarian Institute of Computer Science, Artificial Intelligence and Technology (INSAIT), checks AI models against multiple criteria, including technical reliability, security and resistance to cyber attacks. It was created in strict accordance with the requirements of the AI Act and has already received the support of EU representatives.
Each AI model tested is rated on a scale of 0 to 1. Data released by LatticeFlow showed that models from Alibaba, Anthropic, OpenAI, Meta✴ and Mistral received average scores of 0.75 and above. However, testing has revealed serious shortcomings, which the tech giants will likely have to allocate additional resources to eliminate.
One of the key aspects of testing was problems of a discriminatory nature. AI models were often biased when it came to gender, race and other characteristics, which is a serious problem. Thus, the OpenAI GPT-3.5 Turbo model received a low score of 0.46 in the discriminatory inference category, and the Alibaba Cloud Qwen1.5 72B Chat model received an even lower score of 0.37.
Another important area of research was testing for resistance to cyber attacks, in particular to the so-called “prompt hijacking”. This type of attack suggests that attackers can inject a malicious command into a legitimate request to obtain sensitive data. In this category, the Meta✴ Llama 2 13B Chat model received a score of 0.42, and the French company’s Mistral 8x7B Instruct model received a score of 0.38. This highlights the need for further work to protect AI from sophisticated cyberattacks.
Among the AI models tested, Claude 3 Opus, developed by Anthropic with the support of Google, boasts the best result. Its average score was 0.89, which confirms the high level of AI compliance with the requirements of the newly created law. LatticeFlow said the LLM Checker will be freely available so developers can check their AI models for compliance with the law.
LatticeFlow CEO and co-founder Petar Tsankov noted that the test results are mostly positive and offer companies a roadmap to finalize their AI in accordance with the law: “The EU is still developing compliance criteria, but we are already seeing certain gaps in the models. With greater compliance optimization, we are confident that modelers will be well prepared to meet regulatory requirements.”
The EU is currently still trying to determine how the AI Act rules will apply to generative AI, bringing in experts to develop a code of practice to govern the technology.
AMD has achieved widespread recognition among enthusiasts for many reasons. One of them is the…
Microsoft added the artificial intelligence-based text editing feature Rewrite to the simple text editor Notepad,…
In recent years, Sega's Virtua Fighter has been most closely associated with mini-games in the…
The board of directors of French power equipment maker Schneider Electric unexpectedly removed CEO Peter…
The US National Aeronautics and Space Administration (NASA) probe Juno sent back to Earth new…
An international team of scientists has discovered an amazing effect in a semiconductor such as…