Earlier this week, China’s Alibaba Group Holding unveiled a new family of flagship language models, Qwen3, which relies on a “mixture of experts” technique to achieve results comparable to those of hybrid reasoning systems.

Image source: Alibaba

According to Alibaba, the Qwen3 family of models matches, and in some cases surpasses in speed and efficiency, DeepSeek’s offerings across a number of applications, including solving mathematical problems and writing program code. Scaling these models is also said to be significantly cheaper than for most popular rivals. Models of this type try to imitate human reasoning when solving problems; similar systems have already been introduced by Anthropic and Alphabet (Google).

The efficiency gains come from splitting a problem into several parts, each handled by its own specialized sub-network, much as a group of experts would divide a task among themselves; a minimal illustration of this routing idea follows below. Alibaba introduced the Qwen 2.5 family of models back in March; those models can work with text, images, audio and video while staying within the hardware limits of a laptop or smartphone. The Qwen3 family of models is released as open source.
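To make the “group of experts” analogy concrete, here is a minimal sketch of generic mixture-of-experts routing. It is not Alibaba’s actual implementation; all names, sizes and the use of simple linear layers as experts are illustrative assumptions.

```python
# Illustrative mixture-of-experts routing (NOT Qwen3's actual code):
# a gating network scores each expert, only the top-k experts are run
# on the input, and their outputs are blended by normalized gate weights.
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 8, 4, 2

# Each "expert" is a simple linear map here; real models use larger FFN blocks.
experts = [rng.standard_normal((d_model, d_model)) for _ in range(n_experts)]
gate_w = rng.standard_normal((d_model, n_experts))

def moe_forward(x: np.ndarray) -> np.ndarray:
    """Route a single token vector through the top-k scoring experts."""
    scores = x @ gate_w                        # one score per expert
    top = np.argsort(scores)[-top_k:]          # indices of the best-scoring experts
    weights = np.exp(scores[top])
    weights /= weights.sum()                   # normalize gate weights over chosen experts
    # Blend the selected experts' outputs; unselected experts stay idle,
    # which is where the efficiency of the approach comes from.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.standard_normal(d_model)
print(moe_forward(token).shape)  # (8,)
```

Because only a small subset of experts is active for any given input, the model can carry a very large total parameter count while keeping the per-token compute cost low.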

Under pressure from DeepSeek, US startup OpenAI has also promised to release a more open model that mimics human reasoning. Alibaba’s push to strengthen its position in the AI market has helped the Chinese giant emerge from a crisis triggered by founder Jack Ma’s conflict with the Chinese authorities, who several years ago began to tightly regulate the businesses on which Alibaba Group’s fortunes were built.
