Nvidia's AI Chips Outpace Moore's Law, CEO Jensen Huang Claims

Taylor Brooks

January 08, 2025 · 3 min read

Nvidia CEO Jensen Huang claims that the performance of his company's AI chips is advancing faster than the historical pace set by Moore's Law. In an interview with TechCrunch, Huang attributed the rapid progress to Nvidia's ability to innovate across the entire technology stack, from architecture to algorithms.

Moore's Law, coined by Intel co-founder Gordon Moore in 1965, predicted that the number of transistors on computer chips would roughly double every year (a cadence Moore later revised to every two years), driving exponential increases in computing power and reductions in cost. While the prediction held true for decades, that pace of improvement has slowed in recent years. Huang, however, believes Nvidia's AI chips are advancing faster still, with the company's latest datacenter superchip, the GB200 NVL72, delivering a 30x performance boost on AI inference workloads over its previous generation.
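For a sense of scale, Moore's cadence compounds exponentially: a one-year doubling period yields roughly a 1,024x gain over a decade, while a two-year period yields about 32x. The sketch below (illustrative Python, not drawn from Nvidia or TechCrunch) makes that arithmetic explicit:

```python
# Illustrative sketch: how a fixed doubling period compounds over time.
def growth_factor(years: float, doubling_period_years: float) -> float:
    """Performance multiple after `years`, assuming performance
    doubles every `doubling_period_years`."""
    return 2 ** (years / doubling_period_years)

print(growth_factor(10, 1))  # Moore's original 1965 cadence: ~1024x per decade
print(growth_factor(10, 2))  # the revised two-year cadence: ~32x per decade
```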

The significance of this claim is hard to overstate: many leading AI labs, including Google, OpenAI, and Anthropic, rely on Nvidia's chips to train and run their AI models. Any advance in these chips would likely translate into further progress in AI model capabilities, at a moment when some observers have questioned whether that progress is stalling. Huang rejects the idea that AI progress is slowing, instead identifying three active AI scaling laws: pre-training, post-training, and test-time compute.

Huang's assertion is particularly noteworthy given the industry's current focus on inference, the stage at which a trained AI model is run to generate outputs. The cost of inference is a significant concern: AI models that lean on test-time compute, such as OpenAI's o3, are expensive to run. OpenAI reportedly spent nearly $20 per task using o3 to achieve human-level scores on ARC-AGI, a test of general intelligence, whereas a ChatGPT Plus subscription costs $20 for an entire month of usage.

However, Huang is confident that the performance jump offered by Nvidia's latest chip will drive down the cost of inference, making AI reasoning models like OpenAI's o3 accessible to a wider range of users. He emphasized that his focus is on building more performant chips, which he argues will translate into lower prices over time.

In the long term, Huang envisions AI reasoning models being used to create better data for the pre-training and post-training of AI models, further accelerating progress in the field. This vision is supported by the trend of plummeting AI model prices over the past year, driven in part by computing breakthroughs from hardware companies like Nvidia.

Huang's claim is all the more remarkable considering that Nvidia's AI chips have improved by a staggering 1,000x over the past decade, a pace that far outstrips the standard set by Moore's Law. As the AI landscape continues to evolve, it remains to be seen whether Nvidia can maintain its position at the forefront of innovation, but one thing is clear: the company's CEO is committed to pushing the boundaries of what is possible with AI.
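Taken at face value, that 1,000x-per-decade figure implies chip performance doubling roughly every year, against the ~32x per decade that a two-year Moore's Law cadence would deliver. A back-of-the-envelope check (using only the figures claimed above, not an official Nvidia calculation):

```python
import math

# Back-of-the-envelope: doubling period implied by a claimed total speedup.
def implied_doubling_period(total_speedup: float, years: float) -> float:
    """Years per doubling needed to reach `total_speedup` in `years`."""
    return years / math.log2(total_speedup)

# Huang's claim of ~1,000x in 10 years implies doubling about every year,
# versus the two-year doubling of the revised Moore's Law cadence.
print(round(implied_doubling_period(1000, 10), 2))  # ~1.0
```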
