The Chinese company Alibaba has introduced an open-source artificial intelligence model called QwQ-32B-Preview. The model is designed to solve complex logical and mathematical problems, contains 32.5 billion parameters, and can process prompts of up to 32,000 words. According to Alibaba's own tests, QwQ-32B-Preview outperforms OpenAI's o1-preview and o1-mini models in a number of areas.

Image source: Copilot

One of the key advantages of QwQ-32B-Preview, TechCrunch reports, is its performance on the AIME and MATH benchmarks: in AIME, other AI models are used to evaluate the model's output, while MATH is a collection of word-based math problems. These capabilities allow the model to tackle difficult problems and logic puzzles. QwQ-32B-Preview is not without drawbacks, however: it can unexpectedly switch the language of the conversation, get stuck on certain points, and perform worse on tasks that require common-sense reasoning.

An interesting feature of QwQ-32B-Preview is its ability to check its own answers. This helps it avoid pitfalls that trip up other AI systems, but it also slows down the search for a solution. Like OpenAI's o1 models, Alibaba's new model works through problems in stages, gradually analyzing the task and planning its next steps.

Although QwQ-32B-Preview is available for download on the Hugging Face platform under the Apache 2.0 license, which permits commercial use, Alibaba has disclosed only some of the model's components, making it impossible to study its internal workings in full. Despite the talk of "openness," the model thus remains an intermediate option between fully closed and fully open-source systems.
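For readers who want to try the model themselves, below is a minimal sketch of loading it with the Hugging Face transformers library. The repository id Qwen/QwQ-32B-Preview and the generation settings are assumptions based on standard Hugging Face conventions, not details confirmed by the article; the official model card should be consulted for recommended usage, and running a 32.5-billion-parameter model requires substantial GPU memory.

```python
# Minimal sketch (assumptions noted in comments): loading QwQ-32B-Preview
# from Hugging Face with transformers and asking it a math word problem.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/QwQ-32B-Preview"  # assumed repository id; check Hugging Face

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # use the dtype stored in the checkpoint
    device_map="auto",    # spread the 32.5B parameters across available GPUs
)

# A word problem of the kind found in math benchmarks such as MATH.
messages = [
    {"role": "user",
     "content": "How many positive integers below 1000 are divisible by 7 but not by 11?"}
]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
# Generous token budget, since the model reasons step by step before answering.
outputs = model.generate(**inputs, max_new_tokens=2048)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))
```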

The Chinese origin of the model also imposes certain restrictions. Like other developments from China, QwQ-32B-Preview follows the requirements of a local regulator, which tests AI for compliance with “core socialist values.” The model will refuse to answer political questions related to China.

Interest in "reasoning" models is growing as the traditional approach to improving AI capabilities, based on simply scaling up data and computing power, delivers diminishing returns. Companies such as OpenAI, Google, and Anthropic are seeing the pace of their technology's progress slow, which is pushing developers toward new approaches such as "test-time compute," which gives models additional time to work through a task. Large laboratories, including Google, are investing heavily in such techniques, a sign of the key role they are expected to play in the future of artificial intelligence.
