AI taught to generate thousands of malware variants that easily evade antivirus software

Security researchers from Palo Alto Networks Unit 42 have found that large language models (LLMs) can generate modifications of malicious JavaScript code in virtually unlimited volume while making the code less recognizable to antivirus programs. In one experiment, the AI produced more than 10,000 variants of a piece of code, and they evaded detection 88% of the time. However, this coin has a flip side.

Image source: AI-generated with Kandinsky 3.1 / 3DNews

"While LLMs struggle to create malware from scratch, criminals can easily use them to rewrite or obfuscate existing malware, making it harder to detect," researchers from Palo Alto Networks Unit 42 said in a new report. "Criminals can prompt LLMs to perform transformations that look much more natural, which makes such software harder to detect."

This is not to say the problem has gone unnoticed. LLM developers are introducing safeguards to prevent their models from being used for destructive purposes. Attackers, however, have already built tools such as WormGPT to automate the creation of convincing phishing emails and the development of new malware.

Researchers from Palo Alto Networks Unit 42, whose name, incidentally, comes from Douglas Adams's The Hitchhiker's Guide to the Galaxy, where 42 is the answer to "the ultimate question of life, the universe, and everything," tested LLMs as rewriters of malicious code. They found that the AI created 10,000 variants of code that detection tools such as Innocent Until Proven Guilty (IUPG) and PhishingJS could not flag. Moreover, the rewritten code looked more natural and seemingly harmless than code processed with the standard obfuscator.io tool, yet AI obfuscation did not reduce the code's ability to cause harm.

The AI is not original in how it covers its tracks: it relies on standard techniques such as renaming variables, splitting string literals, inserting junk code, stripping unnecessary whitespace, and even rewriting the code entirely. Applied 10,000 times in a row, these transformations lead antivirus software to classify each new variant as harmless, which the researchers find genuinely alarming. The same mechanism, however, can be useful for training algorithms to spot previously unknown variants of malicious code: stronger attacks stimulate stronger defenses. One AI creates viruses, another catches them, and a human, it seems, is left only to carry the shells first for one side, then the other. The sketch below illustrates the kinds of transformations involved.
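As a rough illustration only (this is not code from the Unit 42 report; the snippet and all identifiers are invented, and the example is deliberately harmless), here is what the listed transformations do to a small TypeScript function while leaving its behavior unchanged:

```typescript
// Original, easily readable snippet (harmless by design).
function greetUser(userName: string): string {
  return "Hello, " + userName + "!";
}

// The same logic after the transformations described above:
// variables renamed to meaningless identifiers, the string literal
// split into concatenated fragments, and junk code inserted that
// never affects the result.
function a0(b1: string): string {
  const c2 = ["He", "llo", ", "].join(""); // string splitting: rebuilds "Hello, "
  let d3 = 0;                              // junk variable, never used meaningfully
  if (d3 > 1) { d3 = -1; }                 // dead branch, never taken
  return c2 + b1 + "!";
}

// Both calls print "Hello, Ada!" — identical behavior,
// but the token sequence and byte patterns differ.
console.log(greetUser("Ada"));
console.log(a0("Ada"));
```

Each such rewrite changes the surface form that signature-based or pattern-based scanners key on, which is why thousands of behaviorally identical variants can slip past detection.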
