The National Institute of Standards and Technology (NIST), an agency of the U.S. Department of Commerce that develops and tests technologies for the American government, businesses, and the public, has unveiled an updated version of its Dioptra test bed. The tool is designed to evaluate how malicious attacks, including those that poison the data used to train large language models, affect the performance of AI systems.

Image source: Copilot

The first version of Dioptra, a modular open-source web tool, was introduced in 2022. The updated software is intended to help developers of AI models, and the people who deploy them, assess, analyze, and monitor AI-related risks. NIST says Dioptra can be used to benchmark and research AI models, and as a general platform for simulating the impact of various types of threats on them.

"Testing the impact of adversarial attacks on machine learning models is one of Dioptra's goals. The open-source software is available for free download and can help the community, including government agencies and small and medium-sized companies, conduct assessments to test AI developers' claims about the performance of their systems," NIST said in a press release.

Alongside Dioptra, NIST has published a number of documents outlining ways to reduce the risks associated with AI models. The software was created as part of an initiative backed by US President Joe Biden that, among other things, directs NIST to support the testing of AI systems. The initiative also establishes a series of AI safety standards, including a requirement that companies developing AI models notify the federal government and share safety risk assessment data before their AI is made available to the public.

Benchmarking AI is challenging, not least because today's most complex models are "black boxes": their infrastructure, training data, and other key details are kept secret by their developers. Some experts argue that evaluations alone are not enough to determine how safe an AI is in the real world, partly because developers can choose which tests are run to evaluate their models.

NIST does not claim that Dioptra can eliminate all risks to AI systems. Its developers are confident, however, that the tool can shed light on which kinds of attacks make a particular AI system less effective, and can quantify the resulting impact on the model's performance. Note that Dioptra works only with models that can be downloaded and run locally.
