Apple researchers have found that large language models such as ChatGPT are incapable of logical thinking and are easily confused by adding irrelevant details to the task at hand, TechCrunch reports.


The published paper, “GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in Large Language Models,” raises questions about the logical reasoning capabilities of artificial intelligence. The study found that large language models (LLMs) can solve simple math problems, but that adding irrelevant information leads to errors.

For example, a model may well solve the following problem: “Oliver picked 44 kiwis on Friday. He then picked 58 kiwis on Saturday. On Sunday he collected twice as many kiwis as on Friday. How many kiwis does Oliver have?” However, if you add the phrase “On Sunday, 5 of these kiwis were slightly smaller than average,” the model will likely subtract those 5 kiwis from the total, even though the size of the kiwis has no bearing on their number.
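The arithmetic in the example is simple enough to check by hand; a quick sketch of both the correct calculation and the error pattern the researchers describe:

```python
# The kiwi problem from the paper's example, worked out directly.
friday = 44
saturday = 58
sunday = 2 * friday  # twice as many as on Friday

correct_total = friday + saturday + sunday  # 44 + 58 + 88 = 190

# The observed failure mode: the model subtracts the 5 "smaller"
# kiwis from the distractor sentence, though size is irrelevant.
mistaken_total = correct_total - 5  # 185

print(correct_total, mistaken_total)  # 190 185
```

The distractor sentence adds no quantity to track, yet the reported behavior is that models treat it as one.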


Mehrdad Farajtabar, one of the study’s co-authors, explains that such errors indicate that LLMs do not understand the essence of the task and are simply reproducing patterns from the training data. “We hypothesize that this decline [in efficiency] is due to the fact that modern LLMs are incapable of true logical reasoning; instead, they try to reproduce the reasoning steps observed in their training data,” the paper states.

An OpenAI researcher countered that correct results can be obtained through prompt engineering. Farajtabar responded that complex tasks may require exponentially more contextual data to neutralize distractions that, for example, a child would easily ignore.

Does this mean that LLMs cannot reason? Possibly, but no one has yet given a definitive answer, since there is no clear understanding of what is happening inside these models. LLMs may be “reasoning,” but in a way we don’t yet recognize or can’t control. In any case, the topic opens up intriguing avenues for further research.
