In October, Anthropic introduced an artificial intelligence model, Claude Computer Use, which allows the Claude neural network to independently control a computer based on user requests. An AI security researcher has found a way to turn this feature into an evil one.

Image source: anthropic.com

Cybersecurity expert Johann Rehnberger published a report on how he managed to abuse the Computer Use function: at his request, the AI ​​downloaded and launched a malicious application, after which it contacted the server controlling the malware.

It should be noted that Claude Computer Use remains in beta, and the company that developed it, Anthropic, has warned that the feature may not work as intended: “We recommend taking precautions to isolate Claude from sensitive data and activities to avoid risks related to query injections.” These types of attack patterns against AI remain common.

Renberger called his exploit ZombAIs – with his help, the expert forced the system to load the Sliver remote control environment, which was developed for penetration testing, but cybercriminals adapted it for their needs. He also noted that this is not the only way to use AI for criminal activities – for example, you could force Claude to write a virus from scratch and compile it, because he can write C code.

It has also been established that the Chinese chatbot DeepSeek AI is vulnerable to queries via injection attacks. And large language models were able to output code with ANSI control characters to hack system terminals – this subtype of attack was called Terminal DiLLMa.

Leave a Reply

Your email address will not be published. Required fields are marked *