Computing Capital Markets Are Here
On May 5, Larry Fink told an audience at the Milken Institute Global Conference that compute would become "a new asset class." Seven days later, CME Group and index provider Silicon Data announced they were building the infrastructure to make him right.
The product is futures contracts on GPU computing power, the first exchange-traded derivatives market for AI infrastructure. The contracts are pending regulatory review and do not yet exist. But the pricing architecture that will underlie them is already designed, the index is already publishing, and the participants who will populate the market are already waiting.
How these contracts will actually work, and whether the commodity underneath them qualifies as a commodity at all, is the harder question.
The Problem the Market Is Solving
Start with a number: $80.62 per hour.
That is what AWS charges for a single eight-chip NVIDIA H100 SXM instance on demand today. Strip it down to a single H100 PCIe chip from CoreWeave, the largest of the neo-cloud providers, and the rate drops to $4.25 per hour. Spot availability windows in peak US-East regions have narrowed to two to four hours, making continuous access unreliable for anything beyond a test workload.
The spread between those two prices is the cost of certainty. An AI lab burning through hundreds of millions of dollars of training compute does not want to discover at hour 11,000 of a 12,000-hour training run that spot capacity has collapsed because three frontier labs simultaneously launched new foundation model training jobs. It pays the premium for guaranteed access.
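Using the rates quoted above, the certainty premium can be made concrete. This is a rough sketch: it compares AWS's eight-chip SXM instance on a per-chip basis against CoreWeave's single PCIe chip, ignoring the SXM/PCIe performance gap, so treat the resulting percentage as illustrative.

```python
# Per-chip cost comparison using the rates quoted above (illustrative:
# SXM and PCIe are not performance-equivalent, so this overstates the
# pure "certainty" premium somewhat).
aws_instance_rate = 80.62   # USD/hr, 8x H100 SXM on-demand (AWS)
aws_chips = 8
coreweave_rate = 4.25       # USD/hr, single H100 PCIe (CoreWeave)

aws_per_chip = aws_instance_rate / aws_chips   # ~10.08 USD/hr per chip
premium = aws_per_chip / coreweave_rate - 1    # ~137% over neo-cloud spot

print(f"AWS per-chip rate: ${aws_per_chip:.2f}/hr")
print(f"Premium over neo-cloud spot: {premium:.0%}")
```

That roughly 137 percent gap is the price of guaranteed, continuous access versus a two-to-four-hour spot window.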
This is precisely the kind of availability risk and price volatility that futures markets were designed to manage. Airlines buy jet fuel forward. Natural gas utilities lock in winter supply in summer. An AI company that needs 20,000 H100-equivalents for a training run starting in Q1 2027 should, in principle, be able to lock in that capacity today at a known price, transferring the risk to a counterparty willing to hold it.
The reason it cannot do that today is that no standardized contract exists. No discoverable price. No clearing mechanism. No way to transfer the risk at all. Buyers and sellers negotiate bilaterally over email, pricing is whatever two parties agreed to before the contract expires, and risk management is largely a function of hoping nothing changes.
CME's announcement is an attempt to fix that structure. The harder question is what, exactly, they would be standardizing.
The Index Architecture
Silicon Data's H100 Index is the reference price that would underlie the CME contracts. It tracks rental rates across 95% of neo-cloud providers and 100% of major hyperscalers, published at financial-grade frequency, covering both NVIDIA H100 PCIe and H100 SXM chips.
This is the compute equivalent of Henry Hub, the Louisiana delivery point whose spot price anchors the entire US natural gas derivatives market. Henry Hub works because natural gas from Haynesville is chemically identical to natural gas from the Marcellus. The benchmark prices the molecule regardless of where it came from.
Computing power cannot be benchmarked the same way, and that is where the pricing complexity begins.
How the Contracts Would Actually Be Priced
Compute futures face at least five pricing layers that oil and natural gas have never had to deal with.
The hardware generation layer. An H100 contract written today settles against H100 pricing. But NVIDIA's Blackwell B200, roughly three to five times more efficient per training dollar than the H100, is already entering data centers. By the end of a twelve-month contract's term, the market's reference price for equivalent compute output may have shifted substantially. Unlike oil, where a barrel of WTI today represents the same unit of energy as a barrel of WTI next year, compute hardware depreciates technologically. A 2027 H100 purchases less effective compute per dollar than a 2025 H100, because newer hardware has arrived to do the same work more cheaply.
This creates a structural directional bias on the forward curve: the market's consensus view is that GPU prices in 2028 will be lower than today, because hardware efficiency per dollar improves roughly tenfold every two years. In commodity terms, compute futures are expected to exhibit persistent backwardation under normal conditions, a futures curve that slopes downward with spot prices higher than forward prices.
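A stylized forward curve makes the backwardation mechanics visible. The decay rate below is a made-up assumption, not a market quote; the point is only the shape, with each forward trading below spot because fixed hardware buys less effective compute per dollar over time.

```python
# Illustrative only: a stylized H100 rental forward curve under the
# assumption (per the text) that compute-per-dollar improves ~10x every
# two years, eroding rental rates on fixed hardware. The 30%/yr decay
# rate is a hypothetical placeholder, not observed data.
spot = 4.25          # USD/hr, H100 PCIe spot (CoreWeave, per the article)
annual_decay = 0.30  # hypothetical annual erosion in H100 rental rates

curve = {m: spot * (1 - annual_decay) ** (m / 12) for m in (3, 6, 12, 24)}
for months, px in curve.items():
    print(f"{months:>2}-month forward: ${px:.2f}/hr")
# Every forward sits below spot: persistent backwardation.
```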
The contango scenario. Backwardation breaks down under supply shocks. NVIDIA's current production is constrained by Taiwan Semiconductor's advanced node capacity. A Taiwan Strait escalation, an expansion of US semiconductor export controls, or a fabrication incident at TSMC's most advanced fabs would send spot prices sharply higher and pull the forward curve into contango, exactly what happened to natural gas after the 2005 Gulf Coast hurricanes disrupted production infrastructure. A compute futures market would, in this scenario, effectively be pricing geopolitical risk in the Taiwan Strait alongside GPU availability.
The location basis. Data center geography carries a measurable and persistent premium. US-East Virginia is the benchmark, offering the lowest latency to the East Coast, deepest capacity, and most competitive pricing. US-West runs approximately five to eight percent above that baseline. European data centers in Frankfurt and Amsterdam carry a fifteen to twenty-five percent premium, driven by electricity costs, GDPR compliance overhead, and lower competitive density among providers. Asia-Pacific locations in Tokyo and Singapore command twenty to thirty percent above the US-East benchmark.
Each of these creates a basis spread, the compute equivalent of the Brent-WTI differential or the difference between PJM electricity in Pennsylvania and ERCOT electricity in Texas. The natural architecture that emerges is regional contracts: a Virginia H100 contract, a Frankfurt H100 contract, an Asia-Pacific H100 contract, each with its own price and basis against the Silicon Data benchmark.
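A sketch of what that regional basis grid might look like, using midpoints of the premium ranges quoted above. The benchmark price and the exact basis values are illustrative assumptions, not contract terms.

```python
# Hypothetical regional basis grid against a US-East benchmark, using
# midpoints of the premium ranges quoted in the text. All numbers are
# illustrative assumptions, not actual contract specifications.
benchmark = 4.25  # USD/hr, US-East H100 benchmark (assumed)

basis = {
    "US-East (Virginia)": 0.000,  # the benchmark itself
    "US-West":            0.065,  # midpoint of 5-8% premium
    "EU (FRA/AMS)":       0.200,  # midpoint of 15-25% premium
    "APAC (TYO/SIN)":     0.250,  # midpoint of 20-30% premium
}

regional_price = {r: benchmark * (1 + b) for r, b in basis.items()}
for region, px in regional_price.items():
    print(f"{region:<20} ${px:.2f}/hr  (basis {basis[region]:+.1%})")
```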
The GPU model differentiation. H100 SXM and H100 PCIe are not the same chip. SXM is the server interconnect form factor, delivering higher memory bandwidth and significantly better performance on large-model training workloads. PCIe is the standard slot version, approximately 20 to 30 percent slower on the workloads that matter most to frontier AI labs. A single "H100 contract" that conflates both would be analogous to a single crude contract that conflates light sweet and heavy sour without a differential, practically useless for anyone who actually cares about the delivery.
The solution that researchers at 252.capital have proposed is a standardized unit called the Effective Compute Hour, or ECH, a hardware-neutral reference that expresses computing capacity in normalized terms. One H100 SXM equals X ECH. One B200 equals Y ECH. A contract written against ECH survives hardware transitions without amendment. Whether CME adopts ECH or settles directly against the Silicon Data H100 Index is an open design question pending regulatory review.
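The ECH idea can be sketched in a few lines. The conversion factors below are hypothetical: the proposal, as described, gives only the form "1 H100 SXM = X ECH, 1 B200 = Y ECH," not actual values, so the numbers here are placeholders loosely consistent with the performance gaps quoted in the text.

```python
# A sketch of the Effective Compute Hour (ECH) normalization described
# above. Conversion factors are hypothetical placeholders.
ECH_PER_GPU_HOUR = {
    "H100_SXM": 1.00,    # define SXM as the reference unit (assumption)
    "H100_PCIe": 0.75,   # ~20-30% slower on training workloads (per text)
    "B200": 4.00,        # ~3-5x H100 training efficiency (per text)
}

def ech(gpu: str, hours: float) -> float:
    """Normalize raw GPU-hours into effective compute hours."""
    return ECH_PER_GPU_HOUR[gpu] * hours

# A contract written for 1,000 ECH survives a hardware transition:
# either fleet can deliver it.
print(ech("H100_SXM", 1_000))  # 1000.0 ECH
print(ech("B200", 250))        # 1000.0 ECH
```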
The term premium. A monthly contract for reserved GPU capacity prices differently from an annual contract, and the gap goes beyond interest rate carry. The term premium on compute futures is a bet on where hardware prices land at expiration relative to spot today, adjusted for the probability that a generational chip transition reprices the entire market mid-contract. A twelve-month forward that straddles a major Blackwell deployment is a fundamentally different instrument from one written entirely within a stable hardware cycle.
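One way to see why the straddling contract is a different instrument: price it as a probability-weighted expectation over the two hardware-cycle outcomes. Every number below, including the transition probability, is an assumption chosen only to show the mechanics.

```python
# Toy expectation for a 12-month forward that straddles a possible
# Blackwell deployment. All inputs are illustrative assumptions.
p_transition = 0.6   # assumed chance the generational transition lands in-term
px_stable = 4.00     # USD/hr if the hardware cycle stays stable (assumed)
px_repriced = 2.50   # USD/hr if Blackwell reprices H100 rentals (assumed)

fair_fwd = p_transition * px_repriced + (1 - p_transition) * px_stable
print(f"Probability-weighted 12-month forward: ${fair_fwd:.2f}/hr")
```

A forward written entirely inside a stable cycle would price near the $4.00 scenario; the straddling contract trades well below it, and that gap is the term premium the text describes.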
Who Uses These Contracts
AI developers and frontier labs are the natural hedgers. OpenAI, Anthropic, Meta AI, Google DeepMind, and the dozens of well-capitalized labs below them are spending hundreds of millions per training run. Their financial planning currently depends on opaque bilateral contracts or spot market exposure that can double in price during high-demand windows. A liquid futures market lets them lock in compute twelve to eighteen months ahead, converting a massive variable cost into a fixed one and enabling accurate budget forecasting for a line item that represents thirty to sixty percent of total R&D spend.
AWS, Microsoft Azure, and Google Cloud are simultaneously the largest suppliers of compute and the entities most exposed to demand volatility. A futures market lets them sell forward capacity they have not yet built, financing data center construction against pre-committed revenue. This is the same structure LNG project developers use: long-term offtake agreements backstop capital raises for liquefaction facilities. If Azure can pre-sell 2028 Virginia H100 capacity at today's futures price, it can justify the capex today without carrying the full demand risk on its balance sheet.
CoreWeave, Lambda Labs, Crusoe, and the other neo-cloud providers are the market's most natural short sellers. They own physical GPUs, earn revenue from rental pricing, and are acutely exposed to the risk that Blackwell reprices H100 rates below their acquisition cost. A short position in compute futures is a direct hedge against the technological obsolescence risk embedded in their asset base. Without it, a neo-cloud buying H100s at today's prices is making an unhedged bet that the hardware will not be commoditized before the investment pays off.
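The hedge mechanics are standard short-futures arithmetic. A stylized example, with all figures assumed: if Blackwell reprices rentals downward, the neo-cloud's revenue shortfall on its physical fleet is offset one-for-one by gains on the short futures position.

```python
# Stylized neo-cloud hedge; all numbers are assumptions for illustration.
# Short compute futures against a physical H100 fleet: if rental rates
# fall, lost revenue is offset by gains on the short position.
fleet_hours = 100_000   # hours of capacity to be sold over the period
entry_fwd = 4.00        # USD/hr, futures price when the hedge is opened
settle = 3.00           # USD/hr, settlement after a Blackwell repricing

revenue_shortfall = (entry_fwd - settle) * fleet_hours  # vs. plan
short_futures_pnl = (entry_fwd - settle) * fleet_hours  # gain on short

print(f"Revenue shortfall:  ${revenue_shortfall:,.0f}")
print(f"Short futures P&L:  ${short_futures_pnl:,.0f}  (offsets the loss)")
```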
Financial speculators and macro funds are the participants who make the market liquid enough for everyone else. Fink's observation, that compute would become a new asset class, was aimed precisely at this audience: the capital allocators who turned natural gas from a bilateral OTC market into the world's most actively traded energy derivative by simply showing up and making prices. Compute futures give macro investors direct exposure to AI infrastructure investment cycles without the equity risk of buying NVDA or the credit risk of lending to a hyperscaler.
What This Market Gets Wrong About Compute
The most serious objection is physical, not financial.
Oil futures work because oil is fungible and physically deliverable. When a trader takes delivery on a WTI contract, they receive specific grades of crude at Cushing, Oklahoma. The physical delivery mechanism disciplines futures prices against spot, preventing the two from diverging indefinitely.
Compute works differently. A GPU-hour in US-East cannot be delivered in AP-Southeast. A training workload that requires 10,000 H100s in a specific cluster topology cannot be split across geographically dispersed data centers without incurring latency and communication overhead that degrades performance non-linearly. The commodity being contracted is heterogeneous in ways that matter enormously to the buyer.
Electricity futures faced exactly this problem. The solution was regional market segmentation, separate contracts for separate hubs: PJM for Mid-Atlantic power, ERCOT for Texas, just as Henry Hub anchors gas. The same architecture is the likely outcome for compute: regional contracts with basis differentials rather than a single global H100 price.
The deeper challenge is obsolescence velocity. A unit of Henry Hub gas delivered in 2019 is identical to one delivered today. An H100 from 2024 delivers far less compute per dollar by 2027, because training efficiency per dollar improves roughly tenfold every two years across hardware generations. A three-year compute futures contract is, at its core, a bet on the rate of GPU commoditization. No major commodity futures market has had to price that before.
The Precedent That Actually Matters
The natural gas analogy is instructive but imperfect. The more precise precedent is electricity deregulation in the late 1990s.
Before deregulation, electric utilities operated as vertically integrated monopolies: they owned generation, transmission, and distribution. Prices were set by regulators. Enron, whatever its ultimate fate, correctly identified that deregulation would create price volatility requiring hedging infrastructure, built the first liquid electricity trading desk, and was for a time celebrated as the most innovative company in America for doing so.
Compute in 2026 occupies the same structural position electricity occupied in 1997. The underlying asset is essential infrastructure, priced bilaterally through monopolistic relationships with a small number of dominant suppliers, AWS, Azure, and Google Cloud, with demand growing faster than the market's ability to price it efficiently. CME's announcement is an attempt to build the Henry Hub benchmark before the Enrons arrive.
Whether it succeeds depends on one variable that no exchange can manufacture: enough physical participants on both sides of the market to generate genuine price discovery. If hyperscalers refuse to participate, fearing that a transparent forward curve would expose the premium they charge for guaranteed capacity, the market will be illiquid and disconnected from the real economy it is supposed to serve. If they participate, they surrender pricing power they have held for the last five years.
The regulatory process is a formality. The real question is whether hyperscalers will show up.
The Asset Class That Already Existed
Compute has been an investable asset class for years without the infrastructure to trade it as one. NVIDIA's stock is a leveraged bet on GPU demand. CoreWeave's IPO in early 2025 was a direct bet on compute rental margins. The AI infrastructure debt market, spanning data center REITs, hyperscaler bonds, and colocation facility financing, has grown to hundreds of billions of dollars. Polymarket has active prediction markets on GPU availability and chip export restrictions.
What CME and Silicon Data are proposing is the formalization of an asset class that already exists, priced badly, without term structure, without transparency, and without the hedging mechanisms that allow large capital to allocate at scale.
Natural gas traded as a utility service under long-term bilateral contracts, with no market-determined forward curve, before NYMEX standardized the contract. Then pipelines got built in the places the forward curve said gas would be valuable, rather than the places the monopoly utilities found convenient.
If CME's compute contracts work, data centers get built where the forward curve says compute will be valuable. The next generation of foundation models gets trained on infrastructure financed by markets. And the price of intelligence, which is what a GPU-hour ultimately is, gets discovered in public, for the first time, by anyone with a terminal.
CME Group and Silicon Data declined to comment on contract specifications and the regulatory timeline. CoreWeave pricing as of May 12, 2026. GPU spot market data from Digiteria Labs cloud pricing tracker. Larry Fink's remarks reported by Bloomberg, May 5, 2026. CME-Silicon Data partnership announced May 12, 2026.