Google DeepMind, the London-based artificial intelligence (AI) research subsidiary of Google, has introduced AlphaProof and AlphaGeometry 2, two AI models that can solve complex mathematical problems beyond the reach of existing systems.
Solving mathematical problems that demand advanced reasoning remains out of reach for most AI systems, for several reasons. Such problems require forming and using abstractions, as well as complex hierarchical planning: setting subgoals, backtracking, and finding new paths, all of which are difficult for AI.
Both new models can perform advanced mathematical reasoning to solve complex problems. AlphaProof was trained with reinforcement learning to prove mathematical statements in the formal language Lean; to build it, DeepMind coupled a pre-trained language model with AlphaZero, the reinforcement learning algorithm that previously taught itself to play chess, shogi, and Go. AlphaGeometry 2, in turn, is an improved version of AlphaGeometry, the geometry-solving AI system introduced in January.
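To give a sense of what "proving statements in Lean" means, here is a minimal illustrative sketch in Lean 4. These toy statements are assumptions for illustration only, not actual AlphaProof outputs; the point is that a proof in Lean is a machine-checkable artifact the kernel can verify, which is what lets AlphaProof's reinforcement learning confirm whether a candidate proof is actually correct.

```lean
-- A toy theorem: commutativity of addition on natural numbers,
-- closed by applying a library lemma. The kernel checks the proof.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b

-- A small arithmetic fact, verified by computation.
example : 2 ^ 10 = 1024 := rfl
```

If either proof were wrong, Lean would reject it at compile time; there is no way to "bluff" a proof past the checker, which makes formal languages like Lean a natural fit for training and evaluating proof-generating systems.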
While AlphaProof was trained to solve problems across a wide range of mathematical topics, AlphaGeometry 2 is optimized for problems involving the movement of objects and equations about angles, ratios, and distances. Because AlphaGeometry 2 was trained on significantly more synthetic data than its predecessor, it can handle considerably harder geometry problems.
To test the capabilities of the new AI systems, Google DeepMind researchers tasked them with solving six problems from this year’s International Mathematical Olympiad (IMO) and proving the answers were correct. AlphaProof solved two algebra problems and one number theory problem, one of which was the hardest in the Olympiad, while AlphaGeometry 2 solved a geometry problem. Two problems in combinatorics remained unsolved.
Two renowned mathematicians, Tim Gowers and Joseph Myers, graded the systems' solutions. They awarded each of the four correct answers full marks (seven out of seven), giving the systems a total of 28 points out of a possible 42. A human contestant with the same score would have earned a silver medal, falling just one point short of the gold threshold of 29.
It is the first time an AI system has achieved medal-level results on IMO problems. “As a mathematician, I find this very impressive and a significant leap over what was previously possible,” Gowers said during a press conference.
Creating AI systems that can solve complex mathematical problems could pave the way for exciting human-AI collaborations, says Katie Collins, a researcher at the University of Cambridge. This, in turn, can help us learn more about how we humans do math. “There’s still a lot we don’t know about how people solve complex math problems,” she says.