The AI bubble bursts as OpenAI, Google and Anthropic try to create more advanced AI

Three leading artificial intelligence companies are seeing noticeably diminishing returns on their costly efforts to develop new AI systems. OpenAI's new model, known as Orion, has not reached the performance the company hoped for; Google's upcoming Gemini iteration is falling short of expectations; and Anthropic has pushed back the schedule for its Claude 3.5 Opus model.


After years of rushing increasingly complex AI products to market, the field's three leading companies are seeing diminishing returns on their costly efforts to build new models. Fresh, untapped sources of high-quality training data are becoming harder to find, and the current, rather modest improvements are not enough to recoup the enormous cost of building and operating new models, or to meet the expectations that surround each new release.

OpenAI, for one, believed it was on the verge of an important milestone. In September the company completed the initial round of training for Orion, a new large-scale model meant to bring it closer to powerful AI that could outperform humans. But according to people familiar with the matter, the results fell short of expectations: Orion's improvement over GPT-4 was nothing like the leap GPT-4 had shown over GPT-3.5.


Anthropic, like its competitors, has run into difficulties developing and training 3.5 Opus. According to insiders, the model performs better than the previous version, but not by as much as expected given its size and the cost of building and running it.

These setbacks challenge the conventional wisdom in Silicon Valley about AI scaling: proponents must now acknowledge that more computing power, more data, and larger models do not automatically pave the way to technological breakthroughs.


Experts have reasonable doubts both about the return on investment in AI and about the overarching goal AI developers are pursuing: artificial general intelligence (AGI). The term usually refers to hypothetical AI systems that can match or surpass humans at most intellectual tasks. Executives at OpenAI and Anthropic have previously said that AGI could be only a few years away.

The technology behind ChatGPT and competing AI chatbots was built on data from social media, online comments, books and other internet sources. That data has been enough to produce products that generate passable essays and poems, but building AI systems that surpass Nobel laureates in intelligence, as some companies hope, may require data sources beyond Wikipedia articles and YouTube subtitles.


OpenAI has been forced to strike deals with publishers to meet at least part of its need for high-quality data, while also adapting to growing legal pressure from the copyright holders of content used to train AI. At the same time, there is strong demand on the labor market for specialists with advanced degrees who can label data in their area of expertise; such expert labeling helps trained AI systems answer queries more accurately.
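As a loose illustration of what such expert labeling can look like, the hypothetical record below pairs a domain question with an expert-written answer and a quality rating. All field names here are invented for this sketch; the article does not describe any company's actual pipeline.

```python
# Hypothetical expert-labeled training record (field names are
# invented for illustration; real pipelines vary by company).
labeled_example = {
    "domain": "organic chemistry",
    "prompt": "Why does an SN1 reaction favor tertiary substrates?",
    "expert_answer": (
        "Tertiary carbocations are stabilized by hyperconjugation and "
        "the inductive effect of three alkyl groups, lowering the "
        "energy of the rate-determining ionization step."
    ),
    "annotator_credentials": "PhD, organic chemistry",
    "quality_rating": 5,  # e.g. a 1-5 expert score used as a training signal
}
```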

Such efforts are far more expensive and an order of magnitude more time-consuming than simply indexing the internet, so tech companies are also turning to synthetic data: computer-generated images or text that mimic human-made content. This approach has its own limits, though, because it is hard to achieve qualitative improvement when models are trained on data of this kind, as the sketch below suggests.
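To make the idea concrete, here is a minimal, self-contained sketch of the loop described above: an existing model generates text, the output is filtered, and the survivors are recycled as training examples. The `generate` and `quality_score` functions are stand-ins, since the article names no specific model or filter.

```python
import random

def generate(prompt: str) -> str:
    """Stand-in for an existing language model; a real pipeline
    would call a trained model here."""
    templates = [
        f"{prompt} In short, the answer depends on the context.",
        f"{prompt} One common explanation is the following.",
    ]
    return random.choice(templates)

def quality_score(text: str) -> float:
    """Stand-in quality filter; real pipelines use classifiers,
    heuristics, or human spot checks."""
    return min(len(text) / 100.0, 1.0)

def make_synthetic_corpus(prompts, threshold=0.5):
    """Generate candidate texts and keep those passing the filter.

    The limitation the article points to: every kept example is
    bounded by what the generating model already knows, so recycling
    such data tends to reinforce the model rather than improve it."""
    corpus = []
    for prompt in prompts:
        candidate = generate(prompt)
        if quality_score(candidate) >= threshold:
            corpus.append({"prompt": prompt, "text": candidate})
    return corpus

if __name__ == "__main__":
    seeds = ["Why is the sky blue?", "Explain photosynthesis."]
    print(make_synthetic_corpus(seeds))
```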

Nevertheless, AI companies continue to follow the principle that more is better. In pursuit of products that approach human intelligence, they keep increasing the computing power, data and time devoted to training new models, driving costs ever higher. Anthropic CEO Dario Amodei has said that companies will spend $100 million this year to train their latest models, a figure that could reach $100 billion in the coming years.


There is certainly potential to improve AI models beyond scaling. For its new Orion model, for example, OpenAI uses a months-long post-training process in which feedback from people is used to improve responses and refine the "emotional tone" of interactions with users.
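As a rough sketch of what such feedback-driven post-training can involve (the article gives no implementation details, so everything below is an assumption): annotators compare pairs of model responses, and a simple reward model is fit to those preferences with a Bradley-Terry-style loss. The fitted reward can then steer further fine-tuning; this sketch stops at the reward-fitting step and uses toy scalar features in place of a learned network.

```python
import math

# Toy preference data: (features_of_preferred, features_of_rejected).
# In a real pipeline the features would come from a learned network
# over full responses; scalars keep the sketch short.
preferences = [
    ([1.0, 0.2], [0.3, 0.9]),
    ([0.8, 0.1], [0.2, 0.7]),
    ([0.9, 0.3], [0.4, 0.8]),
]

def reward(w, x):
    """Linear reward model: higher means 'humans prefer this'."""
    return sum(wi * xi for wi, xi in zip(w, x))

def train_reward_model(data, lr=0.1, epochs=200):
    """Fit weights so preferred responses score above rejected ones,
    maximizing the Bradley-Terry log-likelihood of each comparison."""
    w = [0.0, 0.0]
    for _ in range(epochs):
        for preferred, rejected in data:
            # P(preferred beats rejected) = sigmoid(r_pref - r_rej)
            margin = reward(w, preferred) - reward(w, rejected)
            p = 1.0 / (1.0 + math.exp(-margin))
            # Gradient ascent on the log-likelihood.
            for i in range(len(w)):
                w[i] += lr * (1.0 - p) * (preferred[i] - rejected[i])
    return w

w = train_reward_model(preferences)
print("learned reward weights:", w)
print("score of a candidate response:", reward(w, [0.7, 0.2]))
```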

AI developers thus face a choice: keep offering older models with incremental improvements, or launch extremely expensive new versions that may not perform much better. And as costs rise, so do expectations: the rapid advances of AI's early years set a high bar for practitioners and investors alike.
