Nvidia CEO Jensen Huang said the performance of Nvidia chips has increased 1,000-fold over the past 10 years, defying Moore’s Law. And that progress isn’t slowing down anytime soon, with the company’s chips driving further cost reductions and accelerating artificial intelligence (AI) development, Huang told TechCrunch after his CES 2025 keynote.

Image source: NVIDIA

Moore’s Law, formulated in 1965 by Intel co-founder Gordon Moore, predicted that the number of transistors on a chip would double roughly every two years, doubling chip performance and driving down the cost of computing accordingly. For decades, this rule set the pace of computing technology, but in recent years its progress has begun to slow. Jensen Huang strongly disagrees that this applies to Nvidia, as he has stated repeatedly; in November, Huang said the AI world was on the verge of a “hyper Moore’s Law.”
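To put the headline claim in perspective, a quick back-of-the-envelope comparison (my own arithmetic, not from the article) shows how far a 1,000-fold gain outruns the classic two-year doubling cadence:

```python
# Illustrative comparison of Moore's Law against Huang's claimed figure.
# Moore's Law: transistor counts double roughly every two years,
# so over 10 years it predicts about 2**(10/2) = 32x growth.

years = 10
doubling_period = 2  # years per doubling under Moore's Law

moores_law_gain = 2 ** (years / doubling_period)  # 32x over a decade
claimed_gain = 1000                               # Huang's stated figure

print(f"Moore's Law projection over {years} years: {moores_law_gain:.0f}x")
print(f"Claimed Nvidia gain: {claimed_gain}x, "
      f"roughly {claimed_gain / moores_law_gain:.0f}x beyond Moore's Law")
```

By this rough measure, Nvidia's claimed decade of progress is about 31 times what Moore's Law alone would predict.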

This time, the Nvidia founder noted that the company’s AI chips are developing ahead of schedule and that its new superchip for data centers is 30 times faster than the previous generation at AI workloads. “Our systems are progressing much faster than Moore’s Law,” Huang told TechCrunch on Tuesday.

Huang’s remarks come as the industry faces questions about slowing progress in artificial intelligence. But Nvidia, which remains a key player in the market and supplies chips to leading AI labs such as Google, OpenAI, and Anthropic, says it can move faster than Moore’s Law because innovation is happening at every level, from chip architecture to software algorithms. “We can create architecture, chip, system, libraries and algorithms all at the same time,” Huang noted. “If you do that, you can move faster than Moore’s Law because you can innovate across the entire stack.”

At CES, the Nvidia chief showed off the data center superchip that powers the GB200 NVL72 system, which he says is 30 to 40 times faster at AI computing than the previous flagship, the H100. This significant increase in performance, according to Huang, will reduce the cost of running AI models that require large amounts of compute, such as OpenAI’s o3. He also emphasized that, in the long term, expensive reasoning models can be used to generate better data for the subsequent training of AI agents, which will drive costs down further.

Huang rejects the idea that AI progress is slowing down and argues that advances in hardware will directly shape the future development of AI capabilities. “Moore’s Law was so important in the history of computing because it brought down the cost of computing,” Huang told TechCrunch. “The same thing will happen with inference [running trained neural networks]: we will drive performance up, and as a result, the cost of inference will come down.”
