Nvidia CEO: Someday we’ll have 1B robotic cars on the road

Source: Venture Beat
Nvidia CEO Jensen Huang predicted that someday we’ll have a billion cars on the road and they will all be robotic cars.
It sounds like science fiction, but as Huang has said before, “I am science fiction.” He made the comments in a conference call with analysts about Nvidia’s FYQ4 earnings for the quarter ended January 26, 2025. (Here’s our full report on the earnings.) Nvidia’s stock is currently down half a percent to $130.72 a share in after-hours trading.
Colette Kress, EVP and CFO, said in the conference call that the data center business was up 93% from a year ago and 16% sequentially as the Blackwell ramp commenced and the Hopper chip sales also grew. Blackwell sales exceeded Nvidia’s expectations, she said.
“This is the fastest product ramp in our company’s history, unprecedented in its speed and scale,” said Kress. “Blackwell production is in full gear across multiple configurations, and we are increasing supply and expanding customer adoption. Our Q4 data center compute revenue jumped 18% sequentially and over 2x year on year. Customers are racing to scale infrastructure to train the next generation of cutting-edge models and unlock the next level of AI capabilities.”
With Blackwell, it will be common for these clusters to start with 100,000 graphics processing units (GPUs) or more, Kress said. Shipments have already started for multiple infrastructures of this size. Post-training and model customization are fueling demand for Nvidia infrastructure and software as developers and enterprises leverage techniques such as fine-tuning, reinforcement learning and distillation to tailor models. Hugging Face alone hosts over 90,000 derivatives created from the Llama foundation model.
The scale of post-training and model customization is massive and can collectively demand orders of magnitude more compute than pre-training, Kress said. And inference demand is accelerating, driven by test-time scaling and new reasoning models like OpenAI o3, DeepSeek and more. Kress said she expected China sales to be up sequentially, and Huang said China sales are expected to remain at about the same percentage of revenue as in Q4. That is about half of what it was before export controls were introduced by the Biden administration.
Nvidia has driven to a 200 times reduction in inference costs in just the last two years, Kress said. She also said that as AI expands beyond the digital world, Nvidia infrastructure and software platforms are increasingly being adopted to power robotics and physical AI development. On top of that, Nvidia’s automotive vertical revenue is expected to grow as well.

Regarding CES, she noted the Nvidia Cosmos world foundation model platform was unveiled there, and robotics and automotive companies, including Uber, have been among the first to adopt it.
From a geographic perspective, growth of data center revenue was strongest in the U.S., driven by the initial Blackwell ramp. Countries across the globe are building their AI ecosystems, and demand for compute infrastructure is surging. France’s 200 billion euro AI investment and the EU’s 200 billion euro investment initiatives offer a glimpse into the build-out set to redefine global AI infrastructure in the coming years.
Kress said that as a percentage of total data center revenue, data center sales in China remained well below levels seen before the onset of export controls. Absent any change in regulations, Nvidia believes that China shipments for data center solutions will remain roughly at the same level.
“We will continue to comply with export controls while serving our customers,” Kress said.
Gaming and AI PCs

Kress noted that gaming revenue of $2.5 billion decreased 22% sequentially and 11% year on year. For the full year, revenue of $11.4 billion increased 9% year on year, and demand remained strong throughout the holidays. But Kress said Q4 shipments were impacted by supply constraints.
“We expect strong sequential growth in Q1 as supply increases. The new GeForce RTX 50 Series desktop and laptop GPUs are here, built for gamers, creators and developers,” Kress said.
The RTX 50 Series graphics cards use the Blackwell architecture, fifth-generation Tensor cores and fourth-generation RT cores. The DLSS 4 software boosts frame rates up to eight times over the previous generation by generating up to three additional frames for each rendered frame.
Automotive revenue was a record $570 million, up 27% sequentially and up 103% year on year. For the full year, revenue of $1.7 billion increased 55% year on year. Strong growth was driven by the continued ramp in autonomous vehicles, including cars and robotics.

At CES, Nvidia announced that Toyota, the world’s largest automaker, will build its next-generation vehicles on Nvidia Orin running the safety-certified Nvidia Drive. Kress said Nvidia saw higher engineering development costs in the quarter as more chips moved into production.
Nvidia expects FYQ1 revenue to be $43 billion, with sequential growth in data center revenue for both compute and networking.
Nvidia’s next big event is the annual GTC conference starting March 17 in San Jose, California, where Huang will deliver a keynote on March 18.
Asked about a blurring line between training and inference, Huang said there are “multiple scaling laws” now, including the pre-training scaling law, post-training scaling using reinforcement learning, and test-time compute or reasoning scaling. These methods are in their early stages and will evolve over time.
“We run every model. We are great at training. The vast majority of our compute today is actually inference. And Blackwell was designed with reasoning models in mind, and when you look at training, it is many times more performant,” he said. “But what’s really amazing is that for long-thinking, test-time scaling reasoning, AI models are tens of times faster, with 25 times higher throughput.”
He noted he is more enthusiastic today than he was at CES, and he said 1.5 million components will go into each of the Blackwell-based racks. He said the work hasn’t been easy, but all of the Blackwell partners are doing good work. During the Blackwell ramp, gross margins will be in the low-70% range.

“At this point, we are focusing on expediting our manufacturing to make sure that we can provide” Blackwell chips to customers as soon as possible, Kress said. There is an opportunity to improve gross margins over time to the mid-70s later this year.
Huang noted that the vast majority of software is going to be based on machine learning and accelerated computing. He said the number of AI startups is still quite vibrant, and that agentic AI for the enterprise is on the rise. He noted that physical AI for robotics and sovereign AI for different regions are also on the rise.
Blackwell Ultra is expected in the second half of the year as “the next train,” Huang said. He noted the first Blackwell had a “hiccup” that cost a couple of months and now it is fully recovered.
He turned to the core advantage that Nvidia has over rivals, saying that the software stack is “incredibly hard” and that the company builds its stack from end to end, including the architecture and the ecosystem that sits on top of it.
Asked about geographic differences, Huang answered, “The takeaway is that AI is software. It’s modern software. It’s incredibly modern software, and AI has gone mainstream. AI is used in delivery services everywhere, shopping services everywhere. And so I think it is fairly safe to say that AI has gone mainstream, that it’s being integrated into every application. This is now a software tool that can address a much larger part of the world’s GDP than any time in history. We’re just in the beginnings.”