Amazon is investing billions in developing AI chips to reduce dependence on Nvidia

Amazon's AWS division has long been one of the largest players in the cloud services market. It remains heavily dependent on Nvidia hardware and software, but it is also building out its own infrastructure, drawing on the work of Annapurna Labs, which Amazon acquired in 2015 for $350 million.

Image source: Amazon

According to the Financial Times, the company is due to publicly unveil its Trainium 2 accelerators next month; they are designed to handle the training of large language models. Samples of these accelerators are already in use at the startup Anthropic, in which Amazon has invested $4 billion. Amazon's customers in this area also include Databricks, Deutsche Telekom, Ricoh and Stockmark.

AWS Vice President of Compute and Networking Services Dave Brown said: “We want to be the absolute best place to run Nvidia, but at the same time we think it’s okay to have an alternative.” Accelerators in the Inferentia family are already 40% cheaper than Nvidia solutions for inference, i.e. generating responses from AI models. When spending runs into the tens of millions of dollars, savings of that size can be decisive in the choice of computing platform.
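The scale of that saving is easy to illustrate. In the sketch below, only the 40% discount comes from the article; the dollar figure is a hypothetical annual inference budget chosen purely for illustration.

```python
# Rough illustration of the quoted 40% inference-cost saving.
# The 40% figure is from the article; the budget is hypothetical.
NVIDIA_COST = 10_000_000                 # assumed annual inference spend on Nvidia, USD
DISCOUNT = 0.40                          # Inferentia quoted as 40% cheaper

inferentia_cost = NVIDIA_COST * (1 - DISCOUNT)
savings = NVIDIA_COST - inferentia_cost

print(f"Inferentia cost: ${inferentia_cost:,.0f}")  # $6,000,000
print(f"Annual savings:  ${savings:,.0f}")          # $4,000,000
```

On a hypothetical $10 million budget, the quoted discount amounts to $4 million a year, which shows why the figure matters at hyperscaler spending levels.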

By the end of this year, Amazon’s capital expenditures could reach $75 billion, and next year they will be even higher. Last year they totaled $48.4 billion, and the size of the increase shows how much weight the company gives to infrastructure investment amid the rapid growth of the AI market. Analysts at Futurum Group explain that large cloud providers are striving to build vertically integrated, homogeneous chip stacks. Most of them aim to develop their own compute accelerator chips, which lets them cut costs, raise margins, and strengthen control over chip supply and over the business as a whole. “It’s not so much about the chip as about the system as a whole,” explains Rami Sinno, director of engineering at Annapurna Labs. Few companies, he said, can replicate what Amazon is doing at scale.

The proprietary chips allow Amazon to consume less power and improve the efficiency of its own data centers. TechInsights compares Nvidia’s chips to station wagons, while Amazon’s solutions resemble smaller hatchbacks built for a narrower range of tasks. Amazon is in no hurry to publish performance benchmarks for its accelerators, but according to available data the Trainium 2 chips should be four times faster than their predecessors. For AWS customers, the mere emergence of an alternative to Nvidia may itself be welcome.
