OpenAI will improve the safety of its AI models using a “hierarchy of instructions”

OpenAI has developed a new technique called Instruction Hierarchy to improve the security of its large language models (LLMs). This method, first used in the new GPT-4o Mini, aims to prevent unwanted AI behavior caused by unscrupulous users manipulating certain commands.

Image source: Copilot

OpenAI API platform lead Olivier Godement explained that the “hierarchy of instructions” will prevent dangerous injections of prompts using hidden hints that users use to bypass the limitations and initial settings of the model, and block “ignore all previous instructions” attacks.

The new method, according to The Verge, gives priority to the developer’s original instructions, making the model less susceptible to end-user attempts to force it to perform unwanted actions. In the event of a conflict between system instructions and user commands, the model will give highest priority to system instructions, refusing to perform injections.

OpenAI researchers believe that other, more sophisticated protections will be developed in the future, especially for agent-based use cases in which AI agents are created by developers for their own applications. Given that OpenAI faces ongoing security challenges, the new method applied to the GPT-4o Mini has significant implications for its subsequent approach to AI model development.

admin

Share
Published by
admin

Recent Posts

Atomfall – Roadside Tea Party Review

Played on PC Sometimes you come across games whose essence is impossible to grasp before…

3 hours ago

Zephyr Unveils Compact GeForce RTX 4070 Sakura Snow X Graphics Card in CNC-Cut Case

The Chinese company Zephyr, which is famous for its non-standard approach to design, presented the…

9 hours ago

Kawasaki unveiled a real iron horse – a motorcycle with legs instead of wheels that jumps over ravines

Japanese company Kawasaki presented a new type of personal transport — literally an iron horse…

13 hours ago