The AI bubble bursts as OpenAI, Google and Anthropic try to create more advanced AI

Three leading artificial intelligence companies are seeing diminishing returns on their expensive efforts to develop new AI systems. OpenAI's new model, known as Orion, hasn't reached the performance the company hoped for, Google's upcoming Gemini iteration isn't meeting expectations, and Anthropic has delayed the schedule for its Claude model, called 3.5 Opus.


After years of rushing out increasingly complex AI products, the field's three leading companies are seeing diminishing returns on their costly efforts to create new models. It is becoming increasingly difficult to find fresh, untapped sources of high-quality data on which to train more advanced AI systems. And the current, very modest improvements are not enough to recoup the enormous costs of building and operating new models, or to meet the expectations raised by new product launches.

OpenAI, for instance, had claimed it was on the verge of an important milestone. In September, it completed the initial round of training for Orion, a new large-scale model intended to bring the company closer to powerful AI that could outperform humans. But according to informed sources, the company's expectations were not met: Orion failed to deliver the kind of leap that GPT-4 had previously shown over GPT-3.5.


Anthropic, like its competitors, ran into difficulties while developing and training 3.5 Opus. According to insiders, the 3.5 Opus model outperformed the previous version, but not by as much as expected given the model's size and the cost of building and launching it.

These issues challenge the prevailing wisdom in Silicon Valley about how AI scales. AI advocates must now reckon with the possibility that more computing power, more data, and larger models do not automatically pave the way to technological breakthroughs.


Experts have reasonable doubts about the return on investment in AI and the overarching goal that AI model developers are striving for: artificial general intelligence (AGI). The term is usually applied to hypothetical AI systems that can match or surpass humans in most intelligent tasks. Executives at OpenAI and Anthropic have previously said that AGI could be a few years away.

The technology behind ChatGPT and competing AI chatbots was built on data from social media, online comments, books and other internet sources. That data has been enough to create products that generate serviceable essays and poems, but developing AI systems that surpass the intelligence of Nobel Prize winners, as some companies hope, may require data sources beyond Wikipedia articles and YouTube subtitles.


OpenAI has been forced to strike agreements with publishers to meet at least part of its need for high-quality data, while also adapting to growing legal pressure from copyright holders whose content was used to train AI. There is also strong labor-market demand for specialists with advanced degrees who can label data in their area of competence, which helps trained AI systems answer queries more accurately.

Such efforts are more expensive and require an order of magnitude more time than simply indexing the Internet. So tech companies are turning to synthetic data, such as computer-generated images or text that mimic human-generated content. However, this approach has its limitations, as it is difficult to achieve qualitative improvement when using such data for AI training.

Nevertheless, AI companies continue to follow the principle of “more is better.” In an effort to create products that approach human intelligence, technology companies are increasing the amount of computing power, data and time spent training new models, driving up costs. Anthropic CEO Dario Amodei said companies will spend $100 million this year to train the latest models, a figure that could reach $100 billion in coming years.


There is certainly potential to improve AI models beyond scaling. For example, for its new Orion model, OpenAI uses a months-long post-training process. This procedure involves using feedback from people to improve responses and refine the “emotional tone” of interactions with users.
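The human-feedback step described above is commonly implemented by training a reward model on pairwise preferences: annotators pick the better of two responses, and the model learns to score the preferred one higher. The article does not specify OpenAI's exact method, so the following is a minimal, hypothetical sketch of the standard Bradley-Terry pairwise loss used in this kind of preference learning:

```python
import math

def preference_loss(reward_chosen: float, reward_rejected: float) -> float:
    """Bradley-Terry pairwise loss: -log sigmoid(r_chosen - r_rejected).

    Small when the reward model already scores the human-preferred
    response higher; large when it scores the rejected one higher.
    """
    margin = reward_chosen - reward_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# A reward model that agrees with human preferences incurs lower loss
assert preference_loss(2.0, 0.0) < preference_loss(0.0, 2.0)
```

Minimizing this loss over many labeled comparisons tunes the reward model, which can then guide further fine-tuning of the chatbot's responses.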

AI model developers now face a choice: keep offering older models with incremental improvements, or launch extremely expensive new versions that may not perform much better. As costs rise, so do expectations: the rapid advances of AI's early days created high hopes among practitioners and investors alike.
