Open Source Is How Nvidia Sells More GPUs

Jensen Huang called OpenClaw "the new Linux" at GTC last week. He told 30,000 people that every company needs an OpenClaw strategy. He open-sourced Isaac Sim. He open-sourced Newton, a GPU-accelerated physics engine for robotic manipulation. He gave away the agent framework, the simulation stack, and the robotics toolkit.
Everyone heard generosity. They should have heard strategy.
This piece makes a single argument: Nvidia is not using openness as a moral stance or developer goodwill play. It is using openness as demand engineering. By giving away the layers above compute -- agents, simulators, physics, protocols -- Jensen maximizes the number, complexity, and frequency of workloads that must ultimately run on Nvidia hardware.
What follows is not a tour of open source idealism. It's a walkthrough of how Nvidia positions itself underneath openness, where volume beats margins and ubiquity beats ownership.
Open Source Is Demand Engineering
The pattern is old. Rockefeller gave away kerosene lamps to sell oil. Gillette gave away razors to sell blades. Jensen gives away agent frameworks, physics simulators, and robotics toolkits to sell compute.
The playbook hasn't changed. The scale has.
OpenClaw hit 332,000 GitHub stars -- the fastest-adopted open-source project in history. It gives AI agents a standard environment to navigate file systems, spawn sub-agents, run tasks, and operate autonomously. Nvidia built NemoClaw on top of it for enterprise: policy enforcement, guardrails, privacy routing, production deployment in under an hour.
None of this makes Nvidia money directly. All of it makes Nvidia money indirectly. Every agent running OpenClaw burns inference tokens. Every token burns compute. Every compute cycle runs on Nvidia silicon.
This is the core pattern: Nvidia opens the layer developers touch so it can monopolize the layer they cannot avoid.
The gift is the trap.
The Contrarian Read: It's Not About Robots
The GTC keynote was wall-to-wall robotics. More than 110 partners. Isaac GR00T humanoid models. Newton physics for contact-rich manipulation. ABB, KUKA, Figure, Agility, Medtronic, Toyota Research Institute -- all building on the Nvidia stack.
It's easy to watch that and think: Jensen is betting on robots.
He's not. He's betting on what training robots requires.
Robotics is the excuse. Compute is the objective.
Physical AI is the most compute-intensive workload humans have ever attempted. Training a robot to fold a towel requires:
- Photorealistic rendering of the environment
- High-fidelity contact physics for every fiber
- Domain randomization across millions of scenarios
- Massive parallel simulation environments
Jensen doesn't need robots to succeed. He needs robotics teams to try. Every failed simulation, every discarded policy, every retrained model -- it all burns compute. The attempts are the product.
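
To see why the attempts themselves are the product, it helps to put rough numbers on it. The sketch below is a toy estimate -- environment counts, per-step costs, and failure rates are all illustrative assumptions, not Nvidia or Isaac Sim figures. The point is only that the failure multiplier dominates the compute bill.

```python
# Back-of-envelope sketch: why "try and fail" in simulation is a compute sink.
# All figures below are illustrative assumptions, not published Nvidia numbers.

parallel_envs = 4096            # simulated environments running concurrently (assumed)
steps_per_env = 2_000_000       # simulation steps per environment, one training run (assumed)
gpu_seconds_per_1k_steps = 0.5  # physics + rendering cost per 1,000 steps (assumed)
failed_runs_per_success = 20    # discarded policies and retrains before one keeper (assumed)

steps_total = parallel_envs * steps_per_env
gpu_hours_per_run = steps_total / 1_000 * gpu_seconds_per_1k_steps / 3600
gpu_hours_per_shipped_policy = gpu_hours_per_run * (1 + failed_runs_per_success)

print(f"GPU-hours per training run:   {gpu_hours_per_run:,.0f}")
print(f"GPU-hours per shipped policy: {gpu_hours_per_shipped_policy:,.0f}")
# Every one of those hours is billable compute,
# whether or not the robot ever folds the towel.
```

Under these made-up assumptions, one shipped policy costs roughly twenty times what the final successful run does. The waste is not a bug in the business model. It is the business model.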
AGM Is Right. And Wrong.
Antonio Garcia Martinez wrote a sharp piece this week arguing open doesn't always win. His evidence lands:
- Google built open web ads. Facebook built a walled garden. Facebook won.
- Farcaster's open social protocol failed. Closed networks kept the users.
- Google never opened AdWords to its own programmatic ecosystem. Monopsony on demand meant it didn't have to.
He's correct -- at the application layer. If you own the user, you wall them in. That's how Facebook works. That's how Apple works. That's likely how OpenAI will work.
But the insight breaks down when you change layers.
The real framework isn't "open vs. closed." It's "open vs. closed at which layer."
If you own the consumer experience: close everything. Maximize margin.
If you own the compute underneath: open everything above you. Maximize volume.
Jensen sits at the bottom of the stack. He doesn't need to own the user. He doesn't need to own the protocol. He doesn't even need to own the agent. He needs all of them to exist -- in the largest possible numbers, running the most complex possible workloads, on Nvidia hardware.
Openness isn't his philosophy. It's his distribution strategy.
Jevons Paradox, Applied
Here's where it gets counterintuitive.
Nvidia's Vera Rubin architecture ships H2 2026: 5x inference performance over Blackwell Ultra, 10x lower token costs, 10x performance per watt.
Naive read: cheaper inference means less GPU revenue.
Actual read: cheaper inference means more things get automated. More automation means more total inference. More total inference means more GPUs sold.
This is Jevons paradox -- the observation that increasing efficiency in resource use tends to increase total consumption of that resource, not decrease it. It happened with coal. It happened with bandwidth. It's happening with compute.
Every time Jensen makes a token cheaper, the number of economically viable use cases expands. The price per token shrinks, but the volume of tokens explodes -- and compute revenue is price times volume.
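
A toy calculation makes the mechanism concrete. The elasticity and baseline figures below are assumptions chosen for illustration, not measured properties of the inference market; the argument only needs demand to grow faster than price falls.

```python
# Toy Jevons-paradox arithmetic. The elasticity figure is an assumption chosen
# to illustrate the mechanism, not a measured property of the inference market.

old_price_per_mtok = 10.00   # $ per million tokens (assumed baseline)
new_price_per_mtok = 1.00    # 10x cheaper, per the Vera Rubin claim above
demand_elasticity = 1.5      # tokens consumed scale ~ (price drop)^1.5 (assumed)

old_tokens_mtok = 1_000_000  # baseline monthly tokens, in millions (assumed)
price_drop = old_price_per_mtok / new_price_per_mtok
new_tokens_mtok = old_tokens_mtok * price_drop ** demand_elasticity

old_spend = old_tokens_mtok * old_price_per_mtok
new_spend = new_tokens_mtok * new_price_per_mtok

print(f"Old spend: ${old_spend:,.0f}   New spend: ${new_spend:,.0f}")
# With elasticity > 1, a 10x price cut more than 10x's consumption,
# and total spend on compute goes up, not down.
```

With any elasticity above 1, cheaper tokens mean a bigger compute bill in aggregate. That is the bet behind shipping a 10x cost reduction on purpose.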
Open standards accelerate this. When every robotics team uses the same simulation framework, the same physics engine, the same agent orchestrator -- iteration cycles shorten. More experiments happen. More experiments mean more failed attempts. Failed attempts still burn tokens.
The more open the ecosystem, the more tokens get consumed.
The Agentic Commerce Connection
The same demand-engineering logic shows up outside robotics -- most clearly in the agentic commerce wars. The surface debate is about protocols and payments. The underlying reality is about inference volume.
ACP (OpenAI + Stripe) is a walled garden -- centralized catalog, Stripe-only payments, ~$7.20 per $100 transaction. UCP (Google + Shopify) is open infrastructure -- merchant-hosted, any payment processor, ~$3.20 per $100.
AGM's point: if OpenAI captures the consumer, ACP wins and it doesn't matter if UCP is technically better. The aggregator dictates terms.
But here's what neither protocol's backers are saying out loud: every agentic transaction -- whether ACP or UCP -- requires inference. Product discovery, price comparison, negotiation, checkout, fulfillment tracking -- each step is a model call. Each model call is a token. Each token is compute.
Nvidia doesn't care which protocol wins. It cares that agentic commerce exists. More protocols, more agents, more transactions, more tokens, more GPUs.
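
Putting hypothetical numbers on those steps shows the scale. Everything in the sketch below is an assumption -- per-step token counts, transaction volume -- since neither ACP nor UCP publishes such figures. The point is that the protocol winner changes nothing about the inference bill.

```python
# Rough sketch of inference volume per agentic purchase. Per-step token counts
# and daily volume are assumptions for illustration only.

tokens_per_step = {
    "product_discovery":    8_000,
    "price_comparison":     4_000,
    "negotiation":          3_000,
    "checkout":             1_500,
    "fulfillment_tracking": 2_500,
}  # all assumed

transactions_per_day = 50_000_000  # assumed agentic-commerce volume at maturity

tokens_per_txn = sum(tokens_per_step.values())
tokens_per_day = tokens_per_txn * transactions_per_day

print(f"Tokens per transaction: {tokens_per_txn:,}")
print(f"Tokens per day:         {tokens_per_day:.2e}")
# On these assumptions, roughly 1e12 tokens of inference per day,
# whichever protocol carries the checkout.
```

Whether the take rate is $7.20 or $3.20 per $100, every one of those tokens runs on someone's silicon.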
Stripe figured this out too. Their new Machine Payments Protocol lets agents pay each other directly. It works with both ACP and UCP. Stripe positioned itself as the settlement layer regardless of which commerce model wins -- the same move Jensen made with compute.
The Question Nobody Is Asking
The discourse is stuck on: Will AI be open or closed?
Wrong question.
The right question: Which layers will be open, which will be closed, and who already positioned themselves underneath the open ones?
Open at the consumer layer means fragmented demand and protocol wars. Open at the compute layer means Nvidia sells to everyone. Open at the agent layer means more agents, more workloads, more tokens.
Jensen isn't predicting the future of open source. He's engineering the conditions where openness above him maximizes demand at his layer.
He doesn't sell the lamp. He doesn't sell the razor. He sells the oil. He sells the blades.
And he just gave away a lot of lamps.