Nvidia CEO Jensen Huang took the stage at CES 2026 with a message that could reshape industries: artificial intelligence is ready to move beyond screens and into the physical world. In a sweeping keynote presentation, Huang unveiled a new stack of robot foundation models, simulation tools, and edge hardware that positions Nvidia as the default platform for the coming wave of general-purpose robotics.
"The ChatGPT Moment for Robotics"
"The ChatGPT moment for robotics is here," Huang declared to the packed audience at the Fontainebleau Las Vegas. "Breakthroughs in physical AI—models that understand the real world, reason and plan actions—are unlocking entirely new applications."
The comparison to ChatGPT is deliberate and instructive. Just as large language models suddenly made AI accessible to anyone who could type a question, Huang argues that advances in vision-language-action models are about to make general-purpose robots feasible for the first time. These aren't the rigidly programmed industrial arms of the past, but machines that can understand their environment and adapt to new tasks without explicit instructions.
The Partnership Parade
Huang's presentation featured an impressive roster of companies already building on Nvidia's physical AI stack. Representatives from Caterpillar, Uber Eats, LG Electronics, and Boston Dynamics joined him on stage to demonstrate the range of applications—from autonomous mining equipment to delivery robots to humanoid workers.
The partnerships signal that Nvidia's robotics ambitions extend far beyond consumer gadgets. Caterpillar is integrating Nvidia's platforms into heavy equipment for mining and construction. Uber Eats is exploring autonomous delivery robots. LG unveiled a new home robot designed for household tasks. The common thread is Nvidia's computing infrastructure and AI models powering each application.
Jetson Thor: The Robot Brain
Central to Nvidia's physical AI push is the Jetson Thor computing module, a powerful edge AI processor designed specifically for robotics applications. Built on Nvidia's Blackwell architecture, Jetson Thor delivers four times the performance of the previous generation, priced at $1,999 per module in 1,000-unit quantities.
Boston Dynamics, Humanoid, and RLWRLD have already integrated Jetson Thor into their humanoid platforms to enhance navigation and manipulation capabilities. The module provides the onboard computing power needed for real-time perception, reasoning, and action—the trifecta required for autonomous operation in unstructured environments.
The Cosmos AI Foundation
Perhaps the most memorable moment of the keynote came when Huang was joined on stage by two small BDX droids—autonomous robots operating through Nvidia's Cosmos AI foundation models. The cute companions navigated the stage independently, demonstrating the kind of general-purpose autonomy that Huang argues is now achievable at scale.
Cosmos represents Nvidia's attempt to create foundation models specifically for physical AI—pre-trained systems that understand spatial relationships, physics, and object manipulation. Like GPT for text or DALL-E for images, Cosmos aims to provide a base layer that robotics companies can build upon without starting from scratch.
The Android Parallel
Industry analysts are drawing parallels to the smartphone revolution. Just as Android became the default operating system for mobile devices by providing a common platform for diverse hardware manufacturers, Nvidia is positioning its physical AI stack as the standard infrastructure for robotics.
The strategy makes sense given Nvidia's dominance in AI computing. Companies building robots need training infrastructure (where Nvidia's data center GPUs already dominate), simulation environments (where Nvidia's Omniverse platform leads), and edge computing for deployment (where Jetson is established). By controlling the full stack, Nvidia can capture value at every stage of the robotics development cycle.
Investment Implications
For investors, Huang's CES presentation reinforces the thesis that Nvidia's growth story extends beyond the current AI training boom. As AI models mature and the focus shifts from training to deployment, Nvidia is positioning itself to benefit from physical AI applications that could ultimately dwarf the software market.
The robotics total addressable market is difficult to size because many applications don't yet exist at commercial scale. But if Huang's "ChatGPT moment" analogy proves accurate—if physical AI is about to become as accessible and transformative as large language models—the opportunity could be enormous.
The Competitive Landscape
Nvidia isn't operating in a vacuum. Tesla is developing its own AI stack for Optimus, Google DeepMind is advancing robotics research, and Chinese competitors are investing heavily in autonomous systems. But Nvidia's ecosystem advantages—the partnerships, the software stack, the developer community—create significant switching costs for companies already building on its platform.
Whether the "ChatGPT moment for robotics" arrives in 2026 or later, Nvidia has clearly made its bet: the future is physical AI, and the company intends to be the infrastructure layer that makes it possible.