OpenAI is having trouble developing its new flagship model, codenamed Orion. The model has shown strong results on natural language processing tasks, but its performance on programming remains weak. These limitations, together with a scarcity of training data and higher operating costs, call into question Orion's profitability and commercial appeal.
One challenge is the cost of running Orion in OpenAI's data centers, which is significantly higher than that of previous-generation models such as GPT-4 and GPT-4o. This sharp increase in cost undermines the model's value for money and could dampen interest in Orion among enterprise customers and subscribers focused on cost-effective AI solutions. The high operating cost also raises questions about the model's economic viability, especially given only a moderate gain in performance.
Expectations were high for the transition from GPT-4 to Orion, but the leap in quality has proved smaller than the jump from GPT-3 to GPT-4, which has somewhat disappointed the market. A similar trend is visible among other AI developers: Anthropic and Mistral are also recording only moderate improvements in their models. Benchmark results for Anthropic's Claude 3.5 Sonnet, for example, show that the quality gains delivered by each new foundation model are increasingly incremental. At the same time, these companies are trying to shift attention away from this limitation by promoting new features such as AI agents. This signals a shift in focus from raw model performance toward distinctive capabilities.
To compensate for the weaknesses of today's models, companies are refining outputs with additional filters. However, this remains a stopgap and does not address the underlying limitations of the models' architecture. The problem is compounded by restricted access to licensed and publicly available data, which has prompted OpenAI to form a dedicated team tasked with solving the shortage of training data. It is unclear whether that team will be able to gather enough data to improve Orion's performance and meet customer expectations.