Microsoft has introduced a new Content Safety feature in its Azure cloud platform, aimed at combating failures in generative artificial intelligence. The feature automatically detects and even corrects errors in the responses of AI models.

Image source: youtube.com/@MicrosoftAzure

Content Safety is available in preview in Azure AI Studio, a suite of security tools designed to detect vulnerabilities and hallucinations in AI systems and to block inappropriate user requests. Content Safety scans AI responses and identifies inaccuracies by comparing the model's output against the client's input.

When an error is detected, the system highlights it, explains why the information is incorrect, and rewrites the problematic content, all before the user sees the inaccuracy. However, the feature does not guarantee reliability. Google's Vertex AI enterprise platform offers a similar "grounding" capability that checks model answers against Google's search engine, the customer's own data, and, in the future, third-party data sets.
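The grounding idea described above can be illustrated with a toy sketch. This is not Microsoft's or Google's actual algorithm (those rely on trained classifiers), just a hypothetical word-overlap heuristic: any sentence in the AI response that shares too little vocabulary with the grounding source gets flagged as potentially ungrounded.

```python
# Toy illustration of "groundedness" checking, NOT the real Content Safety
# algorithm: flag response sentences whose vocabulary overlaps too little
# with the grounding source supplied by the client.
def flag_ungrounded(response: str, source: str, threshold: float = 0.5) -> list[str]:
    source_words = set(source.lower().split())
    flagged = []
    for sentence in response.split(". "):
        words = set(sentence.lower().split())
        if not words:
            continue
        # Fraction of the sentence's words that appear in the source.
        overlap = len(words & source_words) / len(words)
        if overlap < threshold:
            flagged.append(sentence)
    return flagged

source = "The meeting is on Tuesday at 3 pm in room 4."
response = "The meeting is on Tuesday at 3 pm. It will be held on the moon."
print(flag_ungrounded(response, source))  # flags the unsupported sentence
```

A production system would go further: rather than merely flagging the sentence, a correction step (as in Azure's feature) would rewrite it so that it is supported by the source.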
