At its re:Invent 2024 event, the cloud division of Amazon Web Services (AWS) presented the Automated Reasoning checks system. It is designed to combat “hallucinations” – failures in which artificial intelligence models give answers that do not correspond to reality.

Image source: BoliviaInteligente / unsplash.com

The Automated Reasoning checks system aims to reduce the number of potentially dangerous errors caused by “hallucinations” – failures that threaten the safety of AI services and can lead to financial losses for companies. They can arise from errors in the data sets on which AI models were trained. AWS describes Automated Reasoning as “the first and only safeguard for generative AI that helps prevent factual errors due to model hallucinations.” The system tackles the problem by cross-checking the responses a model generates against information provided by the client. If it cannot determine whether an answer matches that information exactly, the answer is sent back to the same model for another check.
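AWS has not published the internals of this cross-check loop, but its described behavior can be sketched roughly as follows. Everything here – `CLIENT_FACTS`, `extract_claims`, `verified_answer` – is an illustrative placeholder, not part of any AWS API:

```python
# Hypothetical sketch of the cross-check loop described above.
# All names are illustrative; AWS has not published this algorithm.

# Client-provided facts the generated answer is checked against.
CLIENT_FACTS = {
    "refund_window_days": "30",
    "support_hours": "9am-5pm",
}

def extract_claims(answer: str) -> dict:
    """Toy claim extractor: pull 'key=value' tokens out of an answer."""
    claims = {}
    for token in answer.split():
        if "=" in token:
            key, _, value = token.partition("=")
            claims[key] = value
    return claims

def cross_check(answer: str, facts: dict) -> bool:
    """Return True only if every claim matches the client-provided facts."""
    claims = extract_claims(answer)
    return all(facts.get(key) == value for key, value in claims.items())

def verified_answer(model, prompt: str, facts: dict, max_retries: int = 2) -> str:
    """Ask the model, cross-check the reply, and re-ask the same model on a mismatch."""
    answer = model(prompt)
    for _ in range(max_retries):
        if cross_check(answer, facts):
            return answer
        # The answer failed verification: send it back to the same model.
        answer = model(f"{prompt}\nYour previous answer failed verification: {answer}")
    return answer
```

The key design point in AWS's description is that the re-check is delegated back to the same model rather than to a separate verifier, which is what the retry loop mimics here.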

Automated Reasoning is part of the Amazon Bedrock Guardrails suite of tools. The new verification system tries to trace how the model arrived at a given answer; if an error is found in the reasoning chain, the answer is checked against the data set provided by the client. The system then offers its own response to the request, letting the client see the potential factual gap and, if necessary, make additional adjustments to the model. As an example, AWS cited a healthcare provider that uses the tool to ensure that customer inquiries about company policies are answered in a way that is not misleading.
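The chain-checking behavior described above can likewise be sketched in miniature. Again, `CLIENT_DATASET`, `verify_chain`, and `suggest_response` are hypothetical names invented for this illustration; the real Bedrock Guardrails implementation is not public:

```python
# Hypothetical sketch of chain verification: each step in the model's
# reasoning is checked against the client's data set, and a corrected
# response is offered when a step fails. Names are illustrative only.

# Client-provided data set (e.g. a company policy, as in AWS's
# healthcare-provider example).
CLIENT_DATASET = {
    "annual_leave_days": 25,
    "parental_leave_weeks": 12,
}

def verify_chain(steps):
    """Return (index, claimed, expected) for the first failing step, else None.

    Each step is a (key, claimed_value) pair representing one link
    in the model's reasoning chain.
    """
    for i, (key, claimed) in enumerate(steps):
        expected = CLIENT_DATASET.get(key)
        if expected != claimed:
            return (i, claimed, expected)
    return None

def suggest_response(steps):
    """Offer the system's own response so the client can see the factual gap."""
    failure = verify_chain(steps)
    if failure is None:
        return "verified"
    i, claimed, expected = failure
    return f"step {i}: model claimed {claimed}, dataset says {expected}"
```

The point of surfacing the failing step, rather than just a pass/fail verdict, is that it gives the client something concrete to act on when adjusting the model.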
