Researchers create an analogue of OpenAI's AI model in 26 minutes using distillation

Researchers from Stanford and the University of Washington have created an AI model that surpasses an OpenAI model at solving mathematical problems. The model, called s1, was trained by distillation on a limited dataset of 1,000 questions. This achieved high efficiency with minimal resources and suggests that large companies such as OpenAI, Microsoft, Meta and Google may not need to build huge data centers filled with thousands of NVIDIA graphics processors.

Image source: Growtika / Unsplash

The distillation method the scientists applied was key to the experiment. This approach lets a small model learn from answers produced by a larger AI model. In this case, as The Verge writes, s1 quickly improved its abilities using answers from Google's Gemini 2.0 Flash Thinking Experimental model.
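At its simplest, this kind of distillation means collecting the teacher model's reasoning traces and answers and turning them into supervised fine-tuning examples for the student. The sketch below illustrates only that data-formatting step; the function names and prompt template are assumptions for illustration, not the researchers' actual code.

```python
# Minimal sketch of building a distillation dataset: each teacher
# response (reasoning trace + final answer) becomes one supervised
# fine-tuning example for the smaller student model.

def build_distillation_example(question: str, teacher_trace: str,
                               teacher_answer: str) -> dict:
    """Package one teacher response as a student training example."""
    return {
        "prompt": f"Question: {question}\nThink step by step.",
        "completion": f"{teacher_trace}\nFinal answer: {teacher_answer}",
    }

def build_dataset(qa_pairs):
    """qa_pairs: iterable of (question, trace, answer) from the teacher."""
    return [build_distillation_example(q, t, a) for q, t, a in qa_pairs]
```

The student is then fine-tuned on these prompt/completion pairs with any standard training loop, which is why the approach needs so little compute compared with training from scratch.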

The s1 model was built on Alibaba Cloud's open-source Qwen2.5 project. Initially, the researchers assembled a dataset of 59,000 questions, but during their experiments they concluded that more data did not yield significant improvements, so for the final training they used only a small set of 1,000 questions. Just 16 NVIDIA H100 GPUs were used.
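Trimming 59,000 questions down to 1,000 implies some selection procedure that keeps the most valuable examples. The sketch below shows one plausible scheme; the criteria used here (trace length as a difficulty proxy, a per-topic cap for diversity) are assumptions for illustration, not the researchers' exact method.

```python
# Illustrative subset selection: keep the hardest examples (longest
# reasoning traces) while capping how many come from any one topic,
# up to a fixed total budget. These heuristics are assumed, not taken
# from the s1 paper.
from collections import defaultdict

def select_subset(examples, per_topic_cap: int, budget: int):
    """examples: dicts with 'topic' and 'trace_len' keys."""
    by_topic = defaultdict(int)
    chosen = []
    for ex in sorted(examples, key=lambda e: e["trace_len"], reverse=True):
        if by_topic[ex["topic"]] < per_topic_cap and len(chosen) < budget:
            by_topic[ex["topic"]] += 1
            chosen.append(ex)
    return chosen
```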

s1 also used a technique called test-time scaling, which lets the model "reflect" before generating an answer. The researchers additionally pushed the model to double-check its conclusions by appending the word "wait" to its output, which forced the AI to continue reasoning and correct errors in its answers.
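The "wait" trick can be pictured as a decoding loop that suppresses the model's attempt to stop reasoning too early and splices in "Wait" instead, so generation continues and the model re-checks its work. The sketch below is a toy version of that loop; `generate_step` stands in for a real model call, and the end-of-thinking marker is an assumed placeholder.

```python
# Toy sketch of forcing extra reasoning: when the model emits its
# end-of-thinking marker before a minimum number of steps, replace
# the marker with "Wait" so decoding keeps going.

END = "<end_of_thinking>"  # assumed placeholder for the stop marker

def generate_with_wait(generate_step, prompt: str,
                       min_steps: int, max_steps: int) -> str:
    """generate_step(text) -> next token; forces >= min_steps steps."""
    text = prompt
    for step in range(max_steps):
        token = generate_step(text)
        if token == END:
            if step < min_steps:
                token = "Wait"  # suppress the early stop, keep reasoning
            else:
                break
        text += " " + token
    return text
```

With a real model, each "Wait" nudges it to re-examine its previous steps, which is how longer "thinking" at inference time can buy accuracy without any retraining.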

The s1 model reportedly showed impressive results, outperforming OpenAI's o1-preview by up to 27% on mathematical problems. The recently sensational R1 model from DeepSeek used a similar approach at relatively low cost. However, OpenAI now accuses DeepSeek of extracting information from its models in violation of its terms of service. Notably, Google's Gemini terms of use prohibit using its API to build competing chatbots.

According to experts, the proliferation of smaller, cheaper models could upend the entire industry, proving there is no need to invest billions of dollars in AI training, build huge data centers, and buy vast quantities of GPUs.
