The OpenAI ChatGPT search service has been found to be open to manipulation using hidden content, and can return malicious code from searched sites as responses to queries. The Guardian writes about this with reference to the results of its own investigation.
The reliability of ChatGPT’s web search function is questionable. The Guardian decided to test how artificial intelligence reacts to a request to compile a summary of the contents of a web page that contains hidden text. It may contain additional instructions that can change ChatGPT responses – unscrupulous resource owners can use this method maliciously, for example, force the system to give a positive review about a product despite negative reviews posted on the same page. The chatbot can also respond with malicious code from search sites.
During testing, ChatGPT was offered a fake resource address that looked like a page describing a camera, after which the AI was asked whether the device was worth buying. The system returned a positive but balanced rating, highlighting points that people might not like. When the hidden text included a direct instruction to ChatGPT to give positive feedback, the chatbot did just that. Hidden text without instructions can also provide a positive rating – just add fake positive reviews about the product.
No less dangerous is the malicious code placed in the hidden text of the page. Microsoft cybersecurity expert Thomas Roccia recounted an incident where a cryptocurrency enthusiast asked ChatGPT for programming help, and some of the code the chatbot provided turned out to be malicious, resulting in the programmer’s credentials being stolen and him losing $2,500 . Today’s large language models that chatbots are built on are extremely trusting; Possessing large amounts of information, they are “very limited in terms of their ability to make decisions,” experts note.
It’s important to answer the question of how these vulnerabilities could change the web landscape, and how the threat landscape for users might change if publicly available large language models start working with search. Traditional search engines, including Google, lower the rankings of sites that use hidden text, so resources counting on high positions in the results are unlikely to actively use this technique. On the other hand, there is such a thing as “SEO poisoning” – powerful search engine optimization of sites with malicious code. OpenAI has fairly strong AI security experts, so it is likely that the likelihood of such incidents will decrease in the future.