The National Institute of Standards and Technology (NIST), an agency of the U.S. Department of Commerce that develops and tests technologies for the U.S. government, businesses, and the public, has unveiled an updated version of its Dioptra test bed. The tool is designed to evaluate how malicious attacks, including those that poison the data used to train large language models, degrade the performance of AI systems.

Image source: Copilot

The first version of Dioptra, a modular, open-source web tool, was introduced in 2022. The updated software is meant to help AI model developers, and the people who use those models, assess, analyze, and track AI-related risks. NIST says Dioptra can be used to benchmark and research AI models, and can also serve as a common platform for simulating how various types of threats affect a model.

"Testing the effects of adversarial attacks on machine learning models is one of Dioptra's goals. The open-source software is available for free download and can help the community, including government agencies and small and medium-sized companies, conduct evaluations to test AI developers' claims about their systems' performance," NIST said in a press release.
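To make the kind of experiment such a test bed automates more concrete, here is a minimal sketch. It does not use Dioptra's actual API (which the announcement does not detail); it only illustrates the underlying measurement: train one classifier on clean data and another on data where a simulated attacker has flipped a fraction of the training labels, then compare their accuracy on the same held-out test set.

```python
# Hypothetical data-poisoning evaluation sketch (NOT Dioptra's API).
# Idea: train on clean vs. poisoned labels, compare test accuracy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for a real training corpus.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

def poison_labels(labels, fraction, rng):
    """Simulate a label-flipping attack on a fraction of training labels."""
    poisoned = labels.copy()
    n_flip = int(fraction * len(poisoned))
    idx = rng.choice(len(poisoned), size=n_flip, replace=False)
    poisoned[idx] = 1 - poisoned[idx]  # flip binary labels 0 <-> 1
    return poisoned

clean_model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
clean_acc = accuracy_score(y_test, clean_model.predict(X_test))

for fraction in (0.05, 0.15, 0.30):
    y_poisoned = poison_labels(y_train, fraction, rng)
    model = RandomForestClassifier(random_state=0).fit(X_train, y_poisoned)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"poison={fraction:.0%}  accuracy: {clean_acc:.3f} -> {acc:.3f}")
```

The accuracy drop printed for each poisoning rate is the kind of quantitative signal NIST says the tool is meant to surface when vetting developers' performance claims.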

Alongside Dioptra, the developers have published several documents outlining ways to reduce the risks associated with AI models. The software was created as part of an initiative backed by US President Joe Biden that directs NIST, among other things, to support the testing of AI systems. The initiative also establishes a series of AI safety standards, including a requirement that companies developing AI algorithms notify the federal government and share safety risk assessment data before their AI is made available to the public.

Benchmarking AI is difficult, not least because today's most complex models are "black boxes" whose infrastructure, training data, and other key details are kept secret by their developers. Some experts argue that assessments alone cannot establish how safe an AI system is in the real world, partly because developers get to choose which tests their models are evaluated against.

NIST does not claim that Dioptra can eliminate every risk to AI systems. Its developers are confident, however, that the tool can shed light on which kinds of attacks make a particular AI system less effective and can quantify the resulting hit to the model's performance. Note that Dioptra only works with models that can be downloaded and run locally.
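Because the model must be available locally, this kind of evaluation can be fully white-box: the tester holds the weights and can compute gradients directly. As another hedged illustration (again, not Dioptra's interface), the sketch below trains a tiny logistic-regression model in plain NumPy and then measures how a classic gradient-sign (FGSM-style) perturbation of the inputs degrades its accuracy.

```python
# Hypothetical white-box evasion test in plain NumPy (NOT Dioptra's API):
# train a small logistic-regression model locally, then measure accuracy
# on inputs nudged along the sign of the loss gradient (FGSM-style).
import numpy as np

rng = np.random.default_rng(1)

# Two Gaussian blobs as a stand-in dataset.
n, d = 1000, 10
X = np.vstack([rng.normal(-1, 1, (n // 2, d)), rng.normal(1, 1, (n // 2, d))])
y = np.concatenate([np.zeros(n // 2), np.ones(n // 2)])

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Train logistic regression with plain gradient descent.
w, b = np.zeros(d), 0.0
for _ in range(500):
    p = sigmoid(X @ w + b)
    w -= 0.5 * (X.T @ (p - y) / n)
    b -= 0.5 * np.mean(p - y)

def accuracy(X_eval, y_eval):
    return np.mean((sigmoid(X_eval @ w + b) > 0.5) == y_eval)

# For logistic loss, the gradient of the loss w.r.t. each input is
# (p - y) * w, so stepping along its sign increases the loss.
p = sigmoid(X @ w + b)
grad_x = (p - y)[:, None] * w[None, :]
for eps in (0.0, 0.1, 0.3, 0.5):
    X_adv = X + eps * np.sign(grad_x)
    print(f"eps={eps:.1f}  accuracy={accuracy(X_adv, y):.3f}")
```

For brevity the sketch evaluates on its own training data; the point is the shape of the output, accuracy as a function of attack strength, which matches the "how much less effective does the attack make the system" question the article describes.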
