The OpenAI ChatGPT search service has been found to be open to manipulation using hidden content, and can return malicious code from searched sites as responses to queries. The Guardian writes about this with reference to the results of its own investigation.

Image source: Growtika / unsplash.com

The reliability of ChatGPT’s web search function is questionable. The Guardian decided to test how artificial intelligence reacts to a request to compile a summary of the contents of a web page that contains hidden text. It may contain additional instructions that can change ChatGPT responses – unscrupulous resource owners can use this method maliciously, for example, force the system to give a positive review about a product despite negative reviews posted on the same page. The chatbot can also respond with malicious code from search sites.

During testing, ChatGPT was offered a fake resource address that looked like a page describing a camera, after which the AI ​​was asked whether the device was worth buying. The system returned a positive but balanced rating, highlighting points that people might not like. When the hidden text included a direct instruction to ChatGPT to give positive feedback, the chatbot did just that. Hidden text without instructions can also provide a positive rating – just add fake positive reviews about the product.

No less dangerous is the malicious code placed in the hidden text of the page. Microsoft cybersecurity expert Thomas Roccia recounted an incident where a cryptocurrency enthusiast asked ChatGPT for programming help, and some of the code the chatbot provided turned out to be malicious, resulting in the programmer’s credentials being stolen and him losing $2,500 . Today’s large language models that chatbots are built on are extremely trusting; Possessing large amounts of information, they are “very limited in terms of their ability to make decisions,” experts note.

It’s important to answer the question of how these vulnerabilities could change the web landscape, and how the threat landscape for users might change if publicly available large language models start working with search. Traditional search engines, including Google, lower the rankings of sites that use hidden text, so resources counting on high positions in the results are unlikely to actively use this technique. On the other hand, there is such a thing as “SEO poisoning” – powerful search engine optimization of sites with malicious code. OpenAI has fairly strong AI security experts, so it is likely that the likelihood of such incidents will decrease in the future.

admin

Share
Published by
admin

Recent Posts

GTX 750 Ti is no longer enough for the game: Ubisoft announced the system requirements of Rainbow Six Siege X

Publisher and developer Ubisoft has revealed the system requirements for Tom Clancy's Rainbow Six Siege…

23 hours ago

Asus Unveils ProArt GeForce RTX 5080 Graphics Cards with Wood Finish, USB-C, and M.2 Slot

Asus has brought wood grain textures to its graphics card lineup. The company has unveiled…

23 hours ago

AI agents: fas, profile, passwords, appearances

The perception of AI agents, their plans and hopes – as well as their impact…

23 hours ago

Asus Unveils GeForce RTX 5060 Gaming Laptops — $300-400 Cheaper Than RTX 5070 Models

Asus has refreshed its lineup of gaming laptops, introducing new configurations with Nvidia GeForce RTX…

23 hours ago

Microsoft Open Sources WSL, a Subsystem for Running Linux Applications on Windows

Microsoft has opened the source code of a set of tools that provide the Windows…

23 hours ago

Cisco’s quarterly results and outlook beat Wall Street expectations

Cisco Systems, an American supplier of enterprise networking equipment, announced results for the third quarter…

2 days ago