The High Court of England and Wales has warned lawyers that using AI-generated information in court filings could result in criminal liability. The warning comes after lawyers were found to have cited non-existent court decisions and fictitious rulings, The New York Times reports.


“There can be serious consequences for justice and trust in the system if AI is used incorrectly,” Justice Sharp said. She stressed that lawyers could face prosecution or disbarment for submitting false AI-generated material, and noted that tools like ChatGPT “are not capable of conducting reliable legal research” and can produce confident but entirely false claims or cite non-existent legal sources.

In one case, a man sought millions in damages from banks over alleged breaches of contract. He later admitted that he had generated his case citations using AI tools and online resources, believing the material to be authentic. His lawyer, in turn, said he had relied on the client’s research and had not independently verified it.

In another case, a lawyer representing a man who had been evicted from his London home and needed accommodation also submitted citations whose origin she could not explain. The court suspected the use of AI because of the American spellings and the formulaic style of the text. The lawyer denied using AI in that case, though she admitted to including similar false material in another one. She also said she had used Google and Safari, which display AI-generated summaries above search results.

Interestingly, Vectara, a Silicon Valley company that has been studying AI chatbot responses since 2023, found that even the best chatbots hallucinate in 0.7–2.2% of cases, and that the rate of “hallucinations” rises sharply when systems are asked to generate long texts from scratch. OpenAI also recently reported that its newest models err in 51–79% of cases when answering general questions.

While Justice Sharp acknowledged that AI is a powerful tool, she said its use comes with risks, citing examples from the US, Australia, Canada and New Zealand where AI had misinterpreted laws or produced fictitious citations.
