Nvidia CEO Jensen Huang said the performance of Nvidia chips has increased 1,000-fold over the past 10 years, defying Moore’s Law. And that progress isn’t slowing down anytime soon, with the company’s chips driving further cost reductions and accelerating artificial intelligence (AI) development, Huang told TechCrunch after his CES 2025 keynote.

Image source: NVIDIA

Moore’s Law, formulated in 1965 by Intel co-founder Gordon Moore, predicted that the number of transistors on a chip would double every two years, roughly doubling chip performance and driving down the cost of computing accordingly. For decades, this rule set the pace of computing technology, but in recent years its cadence has slowed. Jensen Huang, however, strongly disagrees that progress is slowing, as he has repeatedly stated before. In November, Huang said the AI world was on the verge of a “hyper Moore’s Law.”
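The gap between the two claims is easy to quantify. A back-of-the-envelope sketch (not from the article itself; the function name is illustrative) shows what doubling every two years predicts over a decade, versus the doubling cadence implied by Huang's 1,000-fold figure:

```python
import math

def moores_law_gain(years, doubling_period=2.0):
    """Predicted performance multiple after `years`,
    assuming performance doubles every `doubling_period` years."""
    return 2 ** (years / doubling_period)

# Classic Moore's Law over 10 years: 2^(10/2) = 32x.
print(moores_law_gain(10))  # 32.0

# Huang's claimed 1,000x over 10 years implies a doubling period of
# 10 / log2(1000) ≈ 1.0 years -- roughly twice the pace of Moore's Law.
implied_period = 10 / math.log2(1000)
print(round(implied_period, 2))  # 1.0
```

In other words, the 1,000-fold claim amounts to performance doubling about every twelve months rather than every two years.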

This time, the Nvidia founder noted that the company’s AI chips are advancing ahead of schedule, and that its new data-center superchip is 30 times faster than the previous generation at artificial intelligence workloads. “Our systems are progressing much faster than Moore’s Law,” Huang told TechCrunch on Tuesday.

Huang’s announcement comes as the industry faces questions about slowing progress in artificial intelligence. Nvidia, which remains the key supplier of chips to leading AI labs such as Google, OpenAI, and Anthropic, says it can move faster than Moore’s Law because it innovates at every level, from chip architecture to software algorithms. “We can create architecture, chip, system, libraries and algorithms all at the same time,” Huang noted. “If you do that, you can move faster than Moore’s Law because you can innovate across the entire stack.”

At CES, the Nvidia chief showed off the data-center superchip that powers the GB200 NVL72 system, which he says is 30 to 40 times faster at AI computing than the previous flagship H100 chip. This jump in performance, according to Huang, will reduce the cost of running compute-hungry AI models such as OpenAI’s o3. He also emphasized that, in the long term, expensive reasoning models can be used to generate better data for training subsequent AI agents, which will drive costs down further.

Huang rejects the idea that AI progress is slowing and argues that advances in hardware will directly shape the future development of AI capabilities. “Moore’s Law was so important in the history of computing because it brought down the cost of computing,” Huang told TechCrunch. “The same thing will happen with inference [running trained neural networks]: we will drive up performance, and as a result, the cost of inference will come down.”
