AI taught to generate thousands of malware variants that easily bypass antivirus software

Security researchers from Palo Alto Networks Unit 42 have discovered that large language models (LLMs) can generate modifications of malicious JavaScript code in virtually unlimited volumes, making the code less recognizable to antivirus programs. In one example, the AI created more than 10,000 variants of a piece of code that evaded detection 88% of the time. However, this coin has a flip side.

Image source: AI generation Kandinsky 3.1/3DNews

"While LLMs struggle to create malware from scratch, criminals can easily use them to rewrite or obfuscate existing malware, making it harder to detect," researchers from Palo Alto Networks Unit 42 said in a new report. "Criminals can prompt LLMs to perform transformations that look much more natural, making such software more difficult to detect."

This is not to say that the problem has been ignored. LLM developers are introducing safeguards to prevent their models from being used for destructive purposes. Nevertheless, attackers have already adopted tools such as WormGPT to automate the creation of convincing phishing emails and the development of new malware.

The researchers from Unit 42 of Palo Alto Networks, whose name, incidentally, comes from Douglas Adams's "The Hitchhiker's Guide to the Galaxy," where 42 is the answer to "the ultimate question of life, the Universe, and everything," tested LLMs as malicious code modifiers. They found that the AI created 10,000 variants of code that tools such as Innocent Until Proven Guilty (IUPG) and PhishingJS could not detect. Moreover, the malicious code turned out to be more natural-looking and seemingly harmless than code modified with the standard obfuscator.io tool. Yet the code's ability to cause harm was not diminished by the AI-driven obfuscation.

The AI is not original in its methods of covering its tracks. It uses many standard techniques: renaming variables, splitting string literals, inserting junk code, removing unnecessary whitespace, and even completely rewriting the code. Repeated 10,000 times in a row, this forces antivirus software to treat each new version as harmless. This is genuinely alarming, the researchers note. On the other hand, the same mechanism can be useful for training algorithms capable of spotting previously unknown modifications of malicious code. Stronger attacks stimulate stronger defenses: one AI creates viruses, another catches them. In this scheme, it seems, humans are left only to "carry the shells," first to one side and then to the other.
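To make the techniques above concrete, here is a minimal sketch of two of them, variable renaming and string splitting, applied to a toy JavaScript snippet. This is purely illustrative: the function names, the chunk size, and the snippet itself are invented for this example and do not come from the Unit 42 report, and real obfuscators work on a parsed syntax tree rather than with regular expressions. The point is that each pass produces a byte-level different file with identical behavior, which is exactly what defeats signature-based scanners.

```python
import re

def rename_identifiers(code, names):
    """Replace each listed identifier with an opaque name (_a0, _a1, ...)."""
    for i, name in enumerate(names):
        code = re.sub(rf"\b{re.escape(name)}\b", f"_a{i}", code)
    return code

def split_strings(code, chunk=3):
    """Rewrite every double-quoted string literal as a concatenation
    of short chunks, e.g. "abcdef" -> "abc" + "def"."""
    def repl(match):
        s = match.group(1)
        parts = [s[i:i + chunk] for i in range(0, len(s), chunk)]
        return " + ".join(f'"{p}"' for p in parts)
    # Only touch literals of 4+ characters without embedded quotes.
    return re.sub(r'"([^"]{4,})"', repl, code)

snippet = 'var payloadUrl = "https://example.test/stage2.js";'
obfuscated = split_strings(rename_identifiers(snippet, ["payloadUrl"]))
print(obfuscated)
```

Varying the chunk size or the generated names yields a fresh, functionally identical variant on every run, which is why a naive hash- or pattern-based detector sees each one as new code.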
