At the Black Hat USA conference, cybersecurity researcher Michael Bargury demonstrated the vulnerabilities of Microsoft’s artificial intelligence assistant Copilot – potential attackers could use it for cyberattacks. His project indicates that organizations should review security policies when using AI technologies, including Copilot.
Bargouri identified several methods by which attackers can use Microsoft Copilot to carry out cyberattacks. Copilot plugins, in particular, allow you to install backdoors that can be exploited when other users work with AI, and also serve as an aid in carrying out attacks that involve social engineering methods.
Using query injection, a hacker modifies the AI’s responses to suit his goals, allowing him to secretly search and extract data, bypassing standard file and information security measures. AI has also proven to be an effective weapon in social engineering attacks – Copilot can be used to create convincing phishing emails and prepare other methods of interacting with potential victims from whom a cybercriminal is trying to obtain sensitive information.
To demonstrate these vulnerabilities, Bargouri created LOLCopilot, a tool designed for ethical hackers that runs on any Microsoft 365 Copilot-enabled client using default configurations. Cybersecurity specialists can use it to explore scenarios for exploiting Copilot vulnerabilities to steal data and launch phishing attacks.
The developer points out that Microsoft Copilot’s existing default security settings are not effective enough to prevent these attacks. The availability of a tool to access and process large amounts of data demonstrates the risk involved in operating AI systems. The researcher recommends implementing additional security measures, including multi-factor authentication and strict data access control. Additionally, employees need to be educated about the risks associated with AI and have comprehensive incident response protocols in place.