A new IEEE study has found that hacking artificially intelligent robots is as easy as fooling chatbots. Scientists have been able to force robots to perform dangerous actions using simple text commands.
According to HotHardware, while hacking devices like the iPhone or game consoles requires special tools and technical skills, hacking large language models (LLMs) such as ChatGPT is much easier. To do this, simply create a script that will trick the AI into believing that the request is within the limits of what is allowed or that the prohibitions can be temporarily ignored. For example, a user only needs to present a forbidden topic as part of a supposedly harmless bedtime story for the model to produce an unexpected response, including instructions for creating dangerous substances or devices that should be immediately blocked by the system.
It turned out that hacking LLM is so simple that even ordinary users, not just cybersecurity specialists, can handle it. That is why the US engineering association, the Institute of Electrical and Electronics Engineers (IEEE), expressed serious concerns after the publication of new research that showed that robots controlled by artificial intelligence could be hacked in a similar way. Scientists have proven that cyberattacks of this kind can, for example, cause self-driving vehicles to deliberately hit pedestrians.
Among the vulnerable devices were not only conceptual designs, but also widely known ones. For example, the Figure robots, recently demonstrated at the BMW plant, or the Spot robot dogs from Boston Dynamics. These devices use technologies similar to ChatGPT and can be tricked through certain requests, leading to actions completely contrary to their original purpose.
During the experiment, the researchers attacked three systems: the Unitree Go2 robot, the Clearpath Robotics Jackal autonomous vehicle, and the NVIDIA Dolphins LLM self-driving car simulator. The hack used a tool that automated the process of creating malicious text requests. The result was frightening – all three systems were successfully hacked within a few days with 100% efficiency.
In its study, IEEE also quotes scientists from the University of Pennsylvania, who noted that AI in some cases not only executed malicious commands, but also made additional recommendations. For example, robots programmed to find weapons have also been proposed to use furniture as improvised means to harm people. Experts emphasize that despite the impressive capabilities of modern AI models, they remain only predictive mechanisms without the ability to understand the context or consequences of their actions. That is why control and responsibility for their use must remain in human hands.
There is very little time left until January 19, when the short video service TikTok…
Microsoft has announced the start of testing Edge Game Assist, a new in-game overlay based…
Developers from the Cypriot studio Unfrozen talked about “Arena” - an additional mode for their…
LG shut down its smartphone division in 2021, but that doesn't mean the company is…
[2.21] Played on PC Up to achievements! ⇡#All in one On paper, Little Big Adventure…
Microsoft announced the suspension of the release of the Windows 11 24H2 update on PCs…