Microsoft has introduced a new Content Safety feature in its Azure cloud platform aimed at combating failures in generative artificial intelligence. The feature automatically detects, and can even correct, errors in the responses of AI models.
Image source: youtube.com/@MicrosoftAzure
Content Safety is available in preview in Azure AI Studio, a suite of safety tools designed to detect vulnerabilities, spot hallucinations in AI systems, and block inappropriate user prompts. Content Safety scans AI responses and flags inaccuracies by comparing the model's output against the client's source material.
When an error is detected, the system highlights it, explains why the information is incorrect, and rewrites the problematic content, all before the user sees the inaccuracy. However, the feature does not guarantee reliability. Google's enterprise platform Vertex AI offers a similar "grounding" capability that checks answers against Google's search engine, the customer's own data, and, in the future, third-party data sets.
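As a rough sketch, a groundedness check of this kind typically takes the model's answer together with the source material it should be grounded in, and asks the service to flag and optionally correct unsupported claims. The snippet below only illustrates what such a request payload might look like; the field names and task label are assumptions for illustration, not Microsoft's documented API.

```python
import json

# Hypothetical request body for a groundedness-detection call:
# the AI's answer ("text") is checked against the client's own
# material ("groundingSources"); "correction" asks the service
# to also rewrite any claims the sources do not support.
def build_groundedness_request(answer: str, sources: list[str],
                               want_correction: bool = True) -> str:
    payload = {
        "task": "Summarization",        # what the model was asked to do (assumed label)
        "text": answer,                 # model output to verify
        "groundingSources": sources,    # client input to compare against
        "correction": want_correction,  # request a rewritten answer
    }
    return json.dumps(payload)

# Example: the answer overstates what the source actually says,
# which is exactly the kind of mismatch the check should flag.
body = build_groundedness_request(
    "The report says revenue doubled.",
    ["Q3 report: revenue grew 12% year over year."],
)
```

In a real integration this JSON body would be sent to the service endpoint, and the response would indicate whether the answer is grounded and, if correction was requested, include the rewritten text.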