Now anyone can train their own reasoning AI for just $450 – Sky-T1 is open source

This week, researchers from the Sky Computing Lab at the University of California, Berkeley released the Sky-T1-32B-Preview artificial intelligence model, a reasoning model that competes with OpenAI's o1 on a number of key benchmarks.

Image Source: Lee Campbell/Unsplash

Sky-T1 appears to be the first truly open-source reasoning model, which allows it to be replicated from scratch. The developers published the dataset used to train the algorithm, as well as the other data needed to run the AI model.

One of the model's main distinctions is that training it does not require significant expense. “Remarkably, Sky-T1-32B-Preview was trained for less than $450,” the developers wrote on their blog. In doing so, they demonstrated that an AI model with high-level reasoning abilities can be created without major financial investment.

Until recently, the cost of training a large language model with comparable capabilities was measured in millions of dollars. Costs were reduced dramatically through the use of synthetic data, i.e. data generated by other neural networks. For example, the Palmyra X 004 model recently released by Writer was trained on synthetic data and cost the developers about $700,000.

Unlike many AI algorithms, reasoning models effectively check their own work, which allows them to provide more accurate answers and makes them less likely to produce mistakes that mislead users. On the other hand, reasoning models typically take longer to generate an answer to a query than conventional AI algorithms. Overall, they are generally more reliable, especially in domains such as physics and mathematics.

According to the developers, Alibaba’s QwQ-32B-Preview reasoning model was used to generate the initial Sky-T1 training dataset. The data was then rewritten into a cleaner, more consistent format using OpenAI’s GPT-4o-mini. Training the 32-billion-parameter Sky-T1 took about 19 hours on eight Nvidia H100 GPUs.
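A key step in building training sets from another model's outputs is filtering: generated reasoning traces are kept only if their final answer matches a known ground truth (rejection sampling). The sketch below illustrates that idea in a minimal, self-contained form; the function names and the `Answer:` trace format are illustrative assumptions, not Sky-T1's actual code.

```python
# Minimal sketch of rejection sampling for synthetic reasoning traces.
# Assumption: each trace ends with an "Answer:" marker followed by the result.

def extract_final_answer(trace: str) -> str:
    """Return the text after the last 'Answer:' marker, or '' if absent."""
    marker = "Answer:"
    idx = trace.rfind(marker)
    return trace[idx + len(marker):].strip() if idx != -1 else ""

def reject_sample(traces: list[str], ground_truth: str) -> list[str]:
    """Keep only traces whose final answer matches the known ground truth."""
    return [t for t in traces if extract_final_answer(t) == ground_truth]

if __name__ == "__main__":
    traces = [
        "Let me check: 2 + 2 = 4. Answer: 4",
        "Hmm, maybe 2 + 2 = 5. Answer: 5",
    ]
    kept = reject_sample(traces, "4")
    print(len(kept))  # only the trace with the correct answer survives
```

In a real pipeline this filter would run over thousands of generated traces per problem, and the surviving traces would then be reformatted (here, reportedly with GPT-4o-mini) before fine-tuning.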

“Going forward, we will focus on developing more efficient models that maintain strong reasoning performance, as well as exploring best practices to improve the efficiency and accuracy of models during testing. Stay tuned as we make progress on these exciting initiatives,” the developers wrote in a blog post.
