Microsoft has unveiled Phi-4, the latest in its line of generative AI language models. The model stands out for its relatively compact size combined with strong performance, and it is currently available in preview for research purposes only.
According to Microsoft, Phi-4 outperforms its predecessor on a number of benchmarks, including mathematical problem solving, a gain the company attributes in part to higher-quality training data. For now, Phi-4 is offered only with limited access on the Azure AI Foundry development platform, and under the terms of its license agreement it may be used solely for research.
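For researchers who do obtain access through Azure AI Foundry, models deployed there can typically be queried with the Azure AI Inference SDK. The sketch below is only an illustration of that general pattern, assuming a Python environment with the azure-ai-inference package installed; the endpoint URL, API key, and the "Phi-4" deployment name are placeholders that depend on your own Azure resource, not details confirmed by the article.

```python
# Hypothetical sketch: querying a Phi-4 deployment on Azure AI Foundry
# via the Azure AI Inference SDK. Endpoint, key, and model name below
# are placeholders for your own research-access deployment.
from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import SystemMessage, UserMessage
from azure.core.credentials import AzureKeyCredential

client = ChatCompletionsClient(
    endpoint="https://<your-resource>.services.ai.azure.com/models",  # placeholder
    credential=AzureKeyCredential("<your-api-key>"),                  # placeholder
)

# Ask the model a math question, the area Microsoft highlights for Phi-4.
response = client.complete(
    model="Phi-4",  # placeholder deployment name
    messages=[
        SystemMessage(content="You are a helpful assistant."),
        UserMessage(content="What is the derivative of x^2 * sin(x)?"),
    ],
)

print(response.choices[0].message.content)
```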
The new-generation small language model has 14 billion parameters and is positioned as a competitor to models such as GPT-4o mini, Gemini 2.0 Flash, and Claude 3.5 Haiku. Small models of this class are faster and cheaper to run, and their output quality has improved dramatically in recent months.
Microsoft attributes Phi-4’s performance to training on high-quality human-generated content alongside “high-quality synthetic datasets,” as well as to additional refinement of the model after training.
Notably, Phi-4 is the first model in the series released since the departure of Sebastien Bubeck, who served as vice president of artificial intelligence research at Microsoft and was a key figure in the development of the Phi models before leaving the company to join OpenAI in October.