The OpenAI ChatGPT search service has been found to be open to manipulation using hidden content, and can return malicious code from searched sites as responses to queries. The Guardian writes about this with reference to the results of its own investigation.

Image source: Growtika / unsplash.com

The reliability of ChatGPT’s web search function is questionable. The Guardian decided to test how artificial intelligence reacts to a request to compile a summary of the contents of a web page that contains hidden text. It may contain additional instructions that can change ChatGPT responses – unscrupulous resource owners can use this method maliciously, for example, force the system to give a positive review about a product despite negative reviews posted on the same page. The chatbot can also respond with malicious code from search sites.

During testing, ChatGPT was offered a fake resource address that looked like a page describing a camera, after which the AI ​​was asked whether the device was worth buying. The system returned a positive but balanced rating, highlighting points that people might not like. When the hidden text included a direct instruction to ChatGPT to give positive feedback, the chatbot did just that. Hidden text without instructions can also provide a positive rating – just add fake positive reviews about the product.

No less dangerous is the malicious code placed in the hidden text of the page. Microsoft cybersecurity expert Thomas Roccia recounted an incident where a cryptocurrency enthusiast asked ChatGPT for programming help, and some of the code the chatbot provided turned out to be malicious, resulting in the programmer’s credentials being stolen and him losing $2,500 . Today’s large language models that chatbots are built on are extremely trusting; Possessing large amounts of information, they are “very limited in terms of their ability to make decisions,” experts note.

It’s important to answer the question of how these vulnerabilities could change the web landscape, and how the threat landscape for users might change if publicly available large language models start working with search. Traditional search engines, including Google, lower the rankings of sites that use hidden text, so resources counting on high positions in the results are unlikely to actively use this technique. On the other hand, there is such a thing as “SEO poisoning” – powerful search engine optimization of sites with malicious code. OpenAI has fairly strong AI security experts, so it is likely that the likelihood of such incidents will decrease in the future.

admin

Share
Published by
admin

Recent Posts

The crypto industry is frozen in anticipation of Trump fulfilling his election promises

The crypto industry is waiting for Donald Trump to fulfill his election promises and sign…

7 hours ago

Koreans will set AI on pirate cinemas around the world

The South Korean government, as part of the promotion of Korean content in the international…

8 hours ago

A metastable material for future data storage systems has been discovered – it changes its magnetic properties when exposed to light

Scientists from the Massachusetts Institute of Technology have discovered a material that, when exposed to…

8 hours ago

New reality: the success of S.T.A.L.K.E.R. 2: Heart of Chornobyl will allow GSC to add cut content to the game

Head of development S.T.A.L.K.E.R. 2: Heart of Chornobyl Evgeny Grigorovich and lead producer Maria Grigorovich…

9 hours ago

AI has been taught to generate thousands of modifications of viruses that easily bypass antiviruses

Security researchers from Palo Alto Networks Unit 42 have discovered that large language models (LLMs)…

10 hours ago