When Nvidia CEO Jensen Huang declared at CES 2026 that "Blackwell sales are off the charts," he wasn't exaggerating. The semiconductor giant enters February with a staggering $275 billion backlog of data center chip orders—a figure that underscores the insatiable demand for AI computing power and Nvidia's dominant position in supplying it.
"We've entered the virtuous cycle of AI," Huang explained during the company's recent earnings call. "Compute demand keeps accelerating and compounding across training and inference—each growing exponentially."
The Numbers Behind the Backlog
Nvidia's fiscal third-quarter 2026 results tell the story of a company operating at a scale few could have imagined just two years ago. Revenue reached $57.01 billion—beating estimates by 3.48%—while adjusted earnings per share of $1.30 exceeded analyst expectations by a similar margin.
But the backward-looking financials only hint at what's coming. The $275 billion order backlog represents approximately five quarters of revenue at current run rates, providing extraordinary visibility into Nvidia's near-term trajectory.
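The "five quarters of visibility" claim is simple arithmetic on the figures reported above; a quick sanity check (using the $57.01 billion quarterly revenue as the run rate):

```python
# Back-of-envelope check: how many quarters of revenue does the backlog cover?
backlog_billion = 275.0            # reported data center order backlog
quarterly_revenue_billion = 57.01  # fiscal Q3 2026 revenue

quarters_of_visibility = backlog_billion / quarterly_revenue_billion
print(f"{quarters_of_visibility:.1f} quarters")  # 4.8 quarters, i.e. roughly five
```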
During the first nine months of fiscal 2026, Nvidia returned $37 billion to shareholders through buybacks and dividends, with $62.2 billion remaining under its repurchase authorization. It's the kind of capital return program typically associated with mature blue chips, not a company still growing revenue at better than 50% a year.
The OpenAI Alliance
Among the most significant developments driving Nvidia's backlog is a strategic partnership with OpenAI. The deal commits at least 10 gigawatts of Nvidia systems—an almost incomprehensible amount of computing power—to support OpenAI's next-generation AI infrastructure.
To put that in perspective, 10 gigawatts is roughly equivalent to the power consumption of 7-8 million average American homes. The scale of investment required to support such deployments explains why hyperscalers are collectively planning over $470 billion in capital expenditures for 2026.
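The homes comparison follows from average residential consumption; a sketch, assuming roughly 10,800 kWh per year for a typical American home (a commonly cited EIA ballpark, not a figure from this article):

```python
# Rough conversion of 10 GW of continuous compute power into household equivalents.
# Assumes ~10,800 kWh/year average US residential consumption (EIA ballpark).
annual_kwh_per_home = 10_800
avg_draw_kw = annual_kwh_per_home / (365 * 24)  # ~1.23 kW continuous draw per home

ten_gigawatts_kw = 10_000_000                   # 10 GW expressed in kW
homes_equivalent = ten_gigawatts_kw / avg_draw_kw
print(f"{homes_equivalent / 1e6:.1f} million homes")  # ~8.1 million homes
```

Small changes to the assumed per-home consumption shift the answer within the article's 7-8 million range.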
Nvidia has also announced partnerships with Google Cloud, Microsoft, Oracle, and xAI to build what the company describes as "America's AI infrastructure."
Anthropic Joins the Nvidia Ecosystem
Perhaps most notably, Anthropic—the AI safety-focused company behind the Claude assistant—announced it will "for the first time run and scale on NVIDIA infrastructure." The deal includes an initial commitment of 1 gigawatt of compute capacity using Nvidia's Grace Blackwell and upcoming Vera Rubin systems.
Anthropic's embrace of Nvidia hardware represents a significant endorsement, given the company's technical sophistication and its history of training and serving its models on non-Nvidia infrastructure.
China: The Market Reopens
The Trump administration's decision to allow Nvidia to sell advanced chips into China has unlocked substantial new demand. Chinese tech firms have reportedly placed orders for over 2 million of Nvidia's H200 AI GPUs for 2026, with prices around $27,000 per chip.
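Taken at face value, the reported order volume and per-chip price imply a sizable revenue opportunity:

```python
# Implied value of the reported China H200 orders.
chips_ordered = 2_000_000   # reported orders for 2026
price_per_chip = 27_000     # reported price, ~$27,000 per H200

order_value = chips_ordered * price_per_chip
print(f"${order_value / 1e9:.0f} billion")  # $54 billion
```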
Nvidia informed Chinese customers it plans to begin shipping H200 processors before the mid-February Lunar New Year holiday, with initial shipments of 40,000 to 80,000 chips. The resumption of China sales removes what had been a significant headwind for the company's growth trajectory.
The Vera Rubin Surprise
At CES 2026, Nvidia surprised investors by announcing that its next-generation Vera Rubin chip is already in "full production." The earlier-than-expected timeline suggests Nvidia is maintaining its aggressive pace of innovation despite the extraordinary demand for current-generation products.
Analysts expect Nvidia's earnings growth to accelerate further in fiscal 2027 (which began this month), with projections calling for 61% growth compared to the 57% jump in the just-completed fiscal year.
The Supercomputer Buildout
Beyond chip sales, Nvidia is increasingly involved in full-system deployments. The company revealed plans to power seven new supercomputers, including a partnership with Oracle to build the U.S. Department of Energy's largest AI supercomputer, dubbed Solstice, which will feature 100,000 Blackwell GPUs.
These government and enterprise deployments provide another dimension to Nvidia's revenue beyond the hyperscaler demand that has dominated recent quarters.
Risks and Considerations
Despite the extraordinary momentum, investors should consider several factors:
Valuation: Nvidia trades at premium multiples that assume continued exceptional growth. Any deceleration could trigger significant multiple compression.
Customer concentration: A handful of hyperscalers account for a substantial portion of Nvidia's data center revenue. Any pullback in their spending plans would be felt immediately.
Competition: AMD, Intel, and custom chips from major tech companies are all targeting Nvidia's AI dominance. While Nvidia maintains a significant lead, the competitive landscape is intensifying.
Geopolitical uncertainty: The China market access could be reversed if trade tensions escalate.
The Investment Case
For investors, Nvidia presents a classic growth stock dilemma: the company's dominance and execution are undeniable, but so is its premium valuation. The $275 billion backlog provides unusual revenue visibility, while partnerships with every major AI player suggest the demand wave has years to run.
"Cloud GPUs are sold out. Compute demand keeps accelerating and compounding."
— Jensen Huang, Nvidia CEO
Whether that justifies current valuations depends on one's view of how long the AI infrastructure buildout will continue—and whether Nvidia can maintain its market position as the ecosystem matures. The backlog suggests that question won't be answered for several years at least.