AI taught to generate thousands of malware variants that easily bypass antivirus software

Security researchers from Palo Alto Networks Unit 42 have discovered that large language models (LLMs) can generate modifications of malicious JavaScript code in virtually unlimited volume, making the code harder for antivirus programs to recognize. In one example, the AI created more than 10,000 copies of a piece of code that evaded detection 88% of the time. However, this coin has a flip side.

Image source: AI generation Kandinsky 3.1/3DNews

"While LLMs struggle to create malware from scratch, criminals can easily use them to rewrite or disguise existing malware, making it harder to detect," researchers from Palo Alto Networks Unit 42 said in a new report. "Criminals can prompt LLMs to perform transformations that look much more natural, making such software more difficult to detect."

This is not to say the problem has been ignored: LLM developers build in safeguards to prevent their models from being used for destructive purposes. However, attackers have already adopted tools such as WormGPT to automate the creation of convincing phishing emails and the development of new malware.

Researchers from Palo Alto Networks Unit 42 (whose name, incidentally, comes from Douglas Adams's "The Hitchhiker's Guide to the Galaxy," where 42 is the answer to "the ultimate question of life, the Universe, and everything") tested LLMs as rewriters of malicious code. They found that the AI created 10,000 variations of code that tools such as Innocent Until Proven Guilty (IUPG) and PhishingJS could not detect. Moreover, the rewritten code looked more natural and seemingly harmless than code transformed with the standard obfuscator.io tool, while the AI-obfuscated code lost none of its ability to cause harm.

The AI is not original in its obfuscation methods. It uses many standard techniques: renaming variables, splitting string literals, inserting dead code, removing unnecessary whitespace, and even rewriting the code entirely. Repeated 10,000 times in a row, this forces antivirus software to treat each new version as harmless. This is truly alarming, the researchers note. However, the same mechanism can be useful for training detection algorithms capable of recognizing previously unseen variants of malicious code: strengthening the attack stimulates the development of the defense. One AI creates viruses, another catches them. In this scheme, it seems, all that is left for humans is to "supply the shells," first to one side, then to the other.
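To make the listed techniques concrete, here is a minimal, entirely harmless sketch of what such transformations look like in JavaScript. The function names and transformations below are illustrative assumptions, not code from the Unit 42 report: the same trivial function is shown before and after variable renaming, string splitting, and dead-code insertion, the behavior being unchanged in both versions.

```javascript
// Original: readable, descriptively named.
function buildGreeting(name) {
  return "Hello, " + name + "!";
}

// "Obfuscated" variant of the same function:
// variables renamed to meaningless hex-style identifiers,
// the string literal split and reassembled,
// and an unreachable branch inserted as dead code.
function _0x3fa(_0x1b2) {
  var _0x9c4 = ["He", "llo", ", "].join(""); // split string reassembled to "Hello, "
  var _0x7d1 = 0;                            // dead code: value never meaningfully used
  if (_0x7d1 > 1) { _0x7d1 = -1; }           // unreachable branch (inserted extra code)
  return _0x9c4 + _0x1b2 + "!";
}

console.log(buildGreeting("world")); // Hello, world!
console.log(_0x3fa("world"));        // Hello, world!
```

A signature-based scanner comparing byte patterns sees two very different source texts here, even though both functions compute the identical result, which is precisely why such rewrites, repeated at scale, defeat static detection.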
