The Rocket Engine for AI
Elon Musk does not hand out compliments. Ask the engineers at Waymo what he thinks of their lidar sensors. He once called them "a fool's errand," a solution in search of a problem, a vestigial organ bolted onto a car that doesn't need one. So when Musk looks at a chip and calls it "the rocket engine for AI," you pay attention. That's not marketing, and it isn't really about NVIDIA. It's a man who has spent twenty years building actual rocket engines telling you his calculation of what it will take to build intelligence at planetary scale.

The chip in question is Rubin, NVIDIA's next-generation GPU platform unveiled at CES 2026 in Las Vegas. Jensen Huang announced it the way he announces most things: quietly, methodically, and with the kind of confidence that comes from knowing something the rest of the room hasn't fully processed yet. Anyone standing close enough to the stage to feel the room's temperature shift could tell this wasn't just a product launch. It was a reframing of what computing means.
Here is the problem Rubin is trying to solve. For sixty years, the semiconductor industry ran on a simple promise. Gordon Moore, then director of research at Fairchild Semiconductor, observed in 1965 that the number of transistors on a chip was doubling every year; after co-founding Intel in 1968, he revised the pace to roughly every two years. The industry treated his observation not as a trend but as a contract. Build your roadmap around it and you'll be fine. Then the contract expired.
Rubin carries roughly 1.6 times the transistors of its predecessor, Blackwell. Not two times. Not four. The physical limits of silicon are asserting themselves, and no amount of engineering ambition can fully paper over them. The math on the other side of that constraint is less forgiving. AI models are growing ten times larger every year. The inference workloads, the "thinking" side of AI where a model reasons through a problem rather than simply pattern-matching, are generating five times more tokens annually. The industry wants token costs to fall by a factor of ten, every year. Moore's Law, even at its peak, could never have kept pace with that.
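The gap those numbers describe compounds quickly. A back-of-envelope sketch makes it concrete; the figures here are the ones cited above plus an assumed chip cadence of one platform generation every two years, so treat the output as illustrative, not as NVIDIA's own projection:

```python
# Back-of-envelope: silicon scaling vs. AI demand growth.
# Figures are from the article; the 2-year cadence is an assumption.

transistor_gain_per_gen = 1.6   # Rubin vs. Blackwell transistor count
years_per_generation = 2        # assumed: one platform generation every ~2 years

model_growth_per_year = 10      # AI models growing ~10x per year
token_growth_per_year = 5       # inference tokens growing ~5x per year
target_cost_drop_per_year = 10  # industry goal: token costs fall 10x per year

years = 4  # look two chip generations ahead

silicon_gain = transistor_gain_per_gen ** (years / years_per_generation)
demand_gain = model_growth_per_year ** years

print(f"Silicon alone after {years} years: ~{silicon_gain:.1f}x")
print(f"Model-size demand after {years} years: ~{demand_gain:,}x")
print(f"Gap to close by other means: ~{demand_gain / silicon_gain:,.0f}x")
```

Two generations of transistor scaling buy roughly 2.6x; model growth over the same window demands 10,000x. The three-orders-of-magnitude remainder is the space that system-level co-design, not the chip alone, has to cover.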
NVIDIA's answer was to stop optimizing the chip and start redesigning the system. Six new chips, reengineered together: the GPU, the CPU, the memory architecture, the networking fabric, the power delivery, the cooling. The company calls it extreme co-design. Nobody redesigns six chips at once unless they believe the old approach has stopped working.
Musk understood this before most. His enthusiasm for NVIDIA reads like a procurement decision, not a compliment. Running the numbers on what xAI needs to train its next generation of models, and then running the numbers on what exists to deliver that compute, leads to one conclusion: NVIDIA is not one option among several. It is, for now, the only option. "Gold standard" isn't flattery. It's a supplier assessment.
What makes this moment worth watching is what happened next. In February 2026, SpaceX acquired xAI. A rocket company and an AI company, merged. From the outside, it reads like an empire consolidation play. The internal logic runs deeper. xAI's stated mission is to understand the true nature of the universe. SpaceX exists to get humanity off this planet. In Musk's telling, these aren't parallel projects. They're sequential. You cannot understand the universe from a single vantage point. You have to go out there.
The consequences for computing are real, even if they take time to land. The constraints binding AI infrastructure today — power availability, land, cooling, regulatory approval — are all terrestrial problems. They are artifacts of building data centers on a planet with a finite grid and neighborhood opposition. In orbit, those problems look different. The sun doesn't set. There's no zoning board. The only variable that matters is the cost of getting hardware into space, and that cost has been falling for twenty years with no floor yet in sight.
Rubin, seen that way, is not just a faster chip. It is the last major milestone before the question changes from how do we build better data centers to what does a GPU look like when it's designed for orbit.
One company is built to make intelligence. The other is built to give that intelligence somewhere to go.