The National Institute of Standards and Technology (NIST), an agency of the U.S. Department of Commerce that develops and tests technologies for the American government, businesses, and the public, has unveiled an updated version of its Dioptra test bed. The tool is designed to evaluate how malicious attacks, including those that poison the data used to train large language models, degrade the performance of AI systems.
The first version of Dioptra, a modular open-source web tool, was introduced in 2022. The updated software should help AI model developers and the people who use those models assess, analyze, and track AI-related risks. NIST said Dioptra can be used to benchmark and research AI models, and can also serve as a general platform for simulating how various types of threats affect them.
"Testing the impact of adversarial attacks on machine learning models is one of Dioptra's goals. The open-source software is available for free download and can help the community, including government agencies and small and medium-sized companies, conduct assessments to test AI developers' claims about the performance of their systems," NIST said in a press release.
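To make the data-poisoning scenario mentioned above concrete, here is a minimal sketch, not Dioptra itself, of what such an assessment measures: a classifier is trained once on clean labels and once on a training set where a fraction of the labels has been flipped, and the two models' test accuracies are compared. The dataset, model, and poisoning rate below are all illustrative stand-ins.

```python
# A hedged sketch (not Dioptra's API) of a label-flipping data-poisoning
# attack: train on clean vs. partially mislabeled data and compare accuracy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in binary classification task in place of a real training corpus.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def train_and_score(labels):
    """Fit a simple classifier on the given labels; score on clean test data."""
    model = LogisticRegression(max_iter=1000).fit(X_train, labels)
    return model.score(X_test, y_test)

# Poison 30% of the training set by flipping its binary labels.
poisoned = y_train.copy()
idx = rng.choice(len(poisoned), size=int(0.3 * len(poisoned)), replace=False)
poisoned[idx] = 1 - poisoned[idx]

print(f"clean-trained accuracy:  {train_and_score(y_train):.2%}")
print(f"poison-trained accuracy: {train_and_score(poisoned):.2%}")
```

The gap between the two scores is the kind of quantitative signal a test bed like Dioptra is meant to surface across many attack types and models.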
Alongside Dioptra, the developers have published a number of documents outlining ways to reduce the risks associated with AI models. The software was created as part of an initiative backed by US President Joe Biden that, among other things, directs NIST to support the testing of AI systems. The initiative also establishes a series of AI safety standards, including a requirement that companies developing AI models notify the federal government and share the results of security risk assessments before releasing their AI to the public.
Benchmarking AI is challenging, not least because today's complex models are "black boxes" whose infrastructure, training data, and other key details are kept secret by their developers. Some experts argue that assessments alone cannot determine how safe an AI system is in the real world, in part because developers get to choose which tests are run on their models.
NIST does not claim that Dioptra can eliminate all risks to AI systems. However, the tool's developers are confident it can shed light on which types of attacks make a particular AI system less effective, and quantify the resulting hit to the model's performance. Note that Dioptra only works with models that can be downloaded and run locally.
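As an illustration of the kind of local evaluation described above, again a hedged sketch rather than Dioptra's actual API, the following compares a locally loaded model's accuracy on clean inputs against its accuracy on inputs perturbed by the well-known Fast Gradient Sign Method (FGSM). The tiny model and random batch are placeholders for a real downloaded checkpoint and test set.

```python
# A minimal sketch (not Dioptra itself) of measuring an evasion attack's
# impact on a locally run model: clean accuracy vs. FGSM accuracy.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Stand-in "local model"; in practice you would load a downloaded checkpoint.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
model.eval()

# Stand-in evaluation batch (random tensors in place of a real test set).
images = torch.rand(64, 1, 28, 28)
labels = torch.randint(0, 10, (64,))

def fgsm(model, x, y, eps):
    """Fast Gradient Sign Method: one gradient-sign step on the input."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Perturb each pixel by eps in the direction that increases the loss.
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

def accuracy(model, x, y):
    with torch.no_grad():
        return (model(x).argmax(dim=1) == y).float().mean().item()

clean_acc = accuracy(model, images, labels)
adv_acc = accuracy(model, fgsm(model, images, labels, eps=0.1), labels)
print(f"clean accuracy: {clean_acc:.2%}, adversarial accuracy: {adv_acc:.2%}")
```

Because the attack needs gradient access to the model's weights, this style of evaluation only works when the model runs locally, which matches the constraint noted above.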