Researchers at Columbia University’s Tow Center for Digital Journalism have tested the search feature built into OpenAI’s popular AI chatbot ChatGPT. It turned out that the tool is poor at identifying news sources and often gives inaccurate answers.

Image source: Mariia Shalabaieva / unsplash.com

OpenAI opened ChatGPT’s search feature to users in October this year, saying it was capable of providing “fast and relevant answers with links to relevant web sources.” In testing, however, the researchers found that the tool had difficulty attributing quotes from articles, even when those articles were published by outlets that allow OpenAI to use their content to train large language models (LLMs).

The study’s authors asked ChatGPT to name the sources of “two hundred quotes from twenty sources.” Forty of these quotes were taken from publishers that had blocked OpenAI’s web crawler from accessing their sites. Yet even in these cases the chatbot answered confidently, supplying false information and only in some cases admitting that it was not sure of the accuracy of its answers.
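
For context, a test of this kind can also be scripted. Below is a minimal sketch of how such a quote-attribution check might look; it is illustrative only, since the researchers queried ChatGPT’s search interface directly, and the model name, prompt wording, and sample data here are assumptions rather than details from the study.

# Hypothetical sketch of a quote-attribution test (the actual study was
# run against ChatGPT's search interface, not the developer API).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical ground truth: each quote paired with the outlet that ran it.
test_cases = [
    {"quote": "An illustrative quote...", "publisher": "Orlando Sentinel"},
    {"quote": "Another illustrative quote...", "publisher": "The New York Times"},
]

for case in test_cases:
    response = client.chat.completions.create(
        model="gpt-4o",  # stand-in model name, not the setup used in the study
        messages=[{
            "role": "user",
            "content": f'Which publication originally ran this quote? "{case["quote"]}"',
        }],
    )
    answer = response.choices[0].message.content
    # Count an answer as correct only if it names the right publisher.
    hit = case["publisher"].lower() in answer.lower()
    print(f'{case["publisher"]}: {"correct" if hit else "incorrect"}')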

The tests also turned up cases of misattribution: in one, ChatGPT’s search tool credited quotes from a letter to the editor published in the Orlando Sentinel to Time magazine. In another, when the chatbot was asked for the source of a quote from a New York Times article about endangered whales, it returned a link to a website that had simply copied and republished the original article.
