Nvidia CEO Jensen Huang said the performance of Nvidia chips has increased 1,000-fold over the past 10 years, defying Moore’s Law. And that progress isn’t slowing down anytime soon, with the company’s chips driving further cost reductions and accelerating artificial intelligence (AI) development, Huang told TechCrunch after his CES 2025 keynote.

Image source: NVIDIA

Moore’s Law, formulated in 1965 by Intel co-founder Gordon Moore, predicted that the number of transistors on a chip would double roughly every two years, doubling chip performance and driving the cost of computing down accordingly. For decades this rule set the pace of computing technology, but in recent years Moore’s Law has begun to slow. Huang strongly disagrees that this slowdown constrains Nvidia, as he has stated repeatedly; in November, Huang said the AI world was on the verge of a “hyper Moore’s Law.”
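For scale, a quick back-of-the-envelope calculation (ours, not Huang’s) shows what Moore’s Law alone would predict over a decade, compared with the 1,000-fold figure:

# Back-of-the-envelope comparison of Moore's Law doubling with the
# 1,000-fold gain Huang cites (illustrative, not from the article).
years = 10
doubling_period = 2  # Moore's Law: performance doubles every two years

moores_law_gain = 2 ** (years / doubling_period)  # 2**5 = 32x
claimed_gain = 1_000

print(f"Moore's Law over {years} years: ~{moores_law_gain:.0f}x")
print(f"Cited Nvidia gain: {claimed_gain}x, "
      f"about {claimed_gain / moores_law_gain:.0f}x beyond Moore's Law")

Doubling every two years compounds to roughly a 32-fold gain over ten years, so a 1,000-fold gain is about 31 times what Moore’s Law alone would deliver.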

This time, the Nvidia founder noted that the company’s AI chips are advancing ahead of that schedule, with its new data-center superchip performing AI workloads 30 times faster than the previous generation. “Our systems are progressing much faster than Moore’s Law,” Huang told TechCrunch on Tuesday.

Huang’s comments come as the industry faces questions about whether progress in artificial intelligence is slowing. Nvidia, which remains the key supplier of chips to leading AI labs such as Google, OpenAI and Anthropic, says it can move faster than Moore’s Law because it innovates at every level, from chip architecture to software algorithms. “We can create architecture, chip, system, libraries and algorithms all at the same time,” Huang noted. “If you do that, you can move faster than Moore’s Law because you can innovate across the entire stack.”

At CES, the Nvidia chief showed off the data-center superchip at the heart of the GB200 NVL72 system, which he says is 30 to 40 times faster at AI computing than the previous flagship H100 chip. That jump in performance, according to Huang, will reduce the cost of running AI models that demand large amounts of compute, such as OpenAI’s o3. He also emphasized that, over the long term, expensive reasoning models can be used to generate better data for training subsequent AI agents, which will drive costs down further.

Huang rejects the idea that AI progress is slowing and argues that advances in hardware directly shape the future development of AI capabilities. “Moore’s Law was so important in the history of computing because it brought down the cost of computing,” Huang told TechCrunch. “The same thing will happen with inference [running trained neural networks]: we will drive performance up, and as a result, the cost of inference will come down.”
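To see why faster inference translates into lower cost, consider a simple cost model. The numbers below are made up purely for illustration: if the hourly price of the hardware stays roughly flat while throughput rises, the cost per generated token falls in proportion to the speedup.

# Hypothetical cost model (illustrative only; prices and throughputs
# are assumptions, not figures from Nvidia or the article).
def cost_per_million_tokens(gpu_hour_price_usd: float,
                            tokens_per_second: float) -> float:
    """Cost to generate one million tokens at a given throughput."""
    seconds_needed = 1_000_000 / tokens_per_second
    return gpu_hour_price_usd * seconds_needed / 3600

baseline = cost_per_million_tokens(gpu_hour_price_usd=4.0, tokens_per_second=1_000)
faster = cost_per_million_tokens(gpu_hour_price_usd=4.0, tokens_per_second=30_000)

print(f"Baseline chip: ${baseline:.2f} per million tokens")    # ~$1.11
print(f"30x faster chip: ${faster:.4f} per million tokens")    # ~$0.0370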
