The Levelized Cost of Compute

A Framework for Comparing Orbital vs. Terrestrial Data Centers
Pranav Myana

You can't master what you can't measure. And everyone has been measuring the cost of compute wrong, which has led to the premature dismissal of data centers in space. In this whitepaper, I break down how orbital compute stacks up against ground data centers using a new standard: the Levelized Cost of Compute.

[Figure: LCOC crossover projection (baseline scenario), plotting orbital LCOC against the ground market price over time.]

First Principles: What Does a Data Center Need?

But first, let's take a step back. "Data centers in space" sounds jarring, so let's break it down into first principles. What are the inputs for a data center on the ground?

Ground Inputs

  • Power: Grid interconnection queues are 3-5 years long and only getting longer. Behind-the-meter natural gas and solar generation are a great lever, but they require fuel, permitting, and grid-backup contracts. You outsource risk to utilities, regulators, and commodity markets.
  • Land: To get enough land for data centers, you negotiate with landowners, municipalities, NIMBYs, and sometimes even federal agencies. You fight zoning, environmental review, and community pushback.
  • Water: The most efficient cooling on the ground relies on water, but water rights are fiercely contested and tightening.
  • Labor: You need skilled technicians and engineers willing to relocate to remote sites; you train them, house them, build transportation, and absorb rising labor costs.

Orbital Inputs

  • Launch: If you're vertically integrated, you have complete control over launch. You don't outsource the risk to anybody else, and you ride an enviable cost curve: roughly $60,000/kg on the Space Shuttle in the past, about $1,500/kg on Falcon today, and a projected $10-20/kg on Starship.
  • Manufacturing: You set up your own production lines and factories. You ride learning curves, where volume drives cost down. You build where logistics and labor already work best.
  • Power: You have access to nearly 24/7 sunlight in a dawn-dusk sun-synchronous orbit. The permissions problem is almost entirely gone: you don't interconnect with the grid, and you don't rely on fuel. (A rough array-sizing sketch follows this list.)
  • Cooling: You radiate heat directly into space. You avoid the energy overhead of evaporative cooling and the fight over water rights. The engineering is challenging, but solvable.
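
As a sanity check on the power input, here's a rough array-sizing sketch in Python. The solar constant is physics; the ~30% cell efficiency and the 1 MW load are illustrative assumptions, not design values.

```python
# Rough solar array sizing in dawn-dusk SSO, where panels see near-continuous
# sunlight. The solar constant is physics; the efficiency is an assumption.

SOLAR_CONSTANT = 1361.0   # W/m^2, solar irradiance above the atmosphere
CELL_EFFICIENCY = 0.30    # assumed, in line with modern multi-junction cells

def array_area_m2(load_watts: float) -> float:
    """Panel area needed to supply a continuous electrical load in full sun."""
    return load_watts / (SOLAR_CONSTANT * CELL_EFFICIENCY)

print(f"{array_area_m2(1e6):,.0f} m^2 of array per MW of IT load")  # ~2,450 m^2
```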

The LCOC Framework

To deal with this massive rift in constraints, we need a new standard to measure data center deployments against: the Levelized Cost of Compute (LCOC). It's inspired by a metric from the energy industry, the Levelized Cost of Energy (LCOE), the standard for comparing generation and storage assets with different cost structures. It lets you measure a natural gas plant (lower capex, volatile fuel) against a solar-plus-battery farm (higher upfront investment, lower marginal cost) against a nuclear plant (huge capex but a potentially much longer operating life). LCOE accounts for construction, financing, fuel, O&M, capacity factor, and the required rate of return.

The LCOC applies the same discipline to compute, factoring in the time value of money. Current metrics ($/GPU-hour, $/rack/month, $/W) capture cost at one moment for one project structure. The LCOC measures $/delivered GPU-hour under a standard SLA (service-level agreement) of 99.9% uptime. That means we factor in capital expenditure (servers, power systems, cooling infrastructure, land or launch costs), operating expenses (power, maintenance, staffing, replacement cycles), financing costs (cost of capital, debt service), and depreciation schedules. Time is money, and LCOC factors that in. The denominator is actual useful output after accounting for utilization rates, cooling overhead, and downtime. LCOC is workload-agnostic, just like LCOE: you can apply it to inference or training, the same way you can use energy for lights or a heat pump. With LCOC, we can compare apples to apples across radically different architectures.

Naive metrics favor ground infrastructure. Terrestrial data centers look cheap if you ignore the interconnection queues and permitting that delay revenue, the PUE overhead that eats your power budget, and the fact that land and power costs escalate while your hardware depreciates. Conversely, orbital looks expensive if you ignore collapsing launch costs, effectively free power after deployment, and essentially zero permitting or interconnection delay. LCOC forces you to account for cost trajectories, the time value of money, and the full lifecycle of the asset.

Core Equations

Delivered GPU-hours per year:
$$\text{DeliveredGPUh}_y = \text{GPUeq}_y \cdot 8760 \cdot \text{SLA} \cdot \min\left(1,\frac{BW_{\text{avail},y}}{BW_{\text{need},y}}\right) \cdot \min\left(1,\frac{D_y}{C_y}\right)$$
Effective LCOC:
$$\text{LCOC}_{\text{effective},y} = \frac{\text{LCOC}_{\text{base},y}}{u_{\text{sell},y}}$$
Base LCOC:
$$\text{LCOC}_{\text{base}} = \frac{\text{CAPEX} \cdot \text{CRF}(r,n) + \text{OPEX}}{\text{GPUeq} \cdot 8760 \cdot \text{SLA}}$$
Capital Recovery Factor:
$$\text{CRF}(r,n) = \frac{r(1+r)^n}{(1+r)^n - 1}$$

Where:

  • GPUeq_y: GPU-equivalents online in year y
  • 8760: hours in a year
  • SLA: contracted uptime fraction (0.999)
  • BW_avail,y / BW_need,y: downlink bandwidth available versus required in year y
  • D_y / C_y: demand versus installed capacity in year y
  • u_sell,y: fraction of delivered GPU-hours actually sold
  • CAPEX, OPEX: upfront capital cost and annual operating cost
  • CRF(r, n): capital recovery factor at discount rate r over an asset life of n years
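
Here's a minimal sketch of these equations in Python. The cluster parameters (CAPEX, OPEX, fleet size, discount rate, asset life) are illustrative assumptions, not outputs of the simulator.

```python
# Minimal implementation of the LCOC equations above.

def crf(r: float, n: int) -> float:
    """Capital Recovery Factor: annualizes upfront CAPEX over n years at rate r."""
    return r * (1 + r) ** n / ((1 + r) ** n - 1)

def lcoc_base(capex: float, opex: float, gpu_eq: float, sla: float,
              r: float, n: int) -> float:
    """Base LCOC in $/GPU-hour: annualized CAPEX plus OPEX over ideal output."""
    return (capex * crf(r, n) + opex) / (gpu_eq * 8760 * sla)

def delivered_gpu_hours(gpu_eq: float, sla: float, bw_avail: float,
                        bw_need: float, demand: float, capacity: float) -> float:
    """Delivered GPU-hours per year, haircut by bandwidth and demand constraints."""
    return (gpu_eq * 8760 * sla
            * min(1.0, bw_avail / bw_need)
            * min(1.0, demand / capacity))

def lcoc_effective(base: float, u_sell: float) -> float:
    """Effective LCOC: base cost spread over the fraction of hours actually sold."""
    return base / u_sell

# Hypothetical cluster: 1,000 GPU-equivalents, $40M CAPEX, $2M/yr OPEX,
# 12% cost of capital, 5-year life, 99.9% SLA, 80% of delivered hours sold.
base = lcoc_base(capex=40e6, opex=2e6, gpu_eq=1000, sla=0.999, r=0.12, n=5)
hours = delivered_gpu_hours(gpu_eq=1000, sla=0.999, bw_avail=80.0,
                            bw_need=100.0, demand=1.2e6, capacity=1.5e6)
print(f"Base LCOC:      ${base:.2f}/GPU-hour")                              # ~$1.50
print(f"Effective LCOC: ${lcoc_effective(base, u_sell=0.8):.2f}/GPU-hour")  # ~$1.87
print(f"Delivered:      {hours:,.0f} GPU-hours/year")                       # ~5.6M
```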

Why Now?

Ground compute costs are flattening, and most of the easy sites are already taken. The remaining capacity requires longer interconnection timelines, more expensive land, and tighter water constraints. At the same time, orbital costs are declining rapidly, driven by falling launch costs and manufacturing learning curves.

I predict crossover sometime in the late 2020s or early 2030s. Reaching this conclusion requires acknowledging that compute demand is insatiable. If you believe compute will grow at anywhere near the scale the hyperscalers are projecting, all you can do is bear witness to the havoc scarcity is going to wreak on ground data centers.

If you don't believe in space data centers, you don't believe in compute growth.

Risks

Thermal

This is the largest engineering challenge for data centers in space. In space you can only reject heat by radiating it away; you don't have the terrestrial luxury of dumping it into a medium and letting convection carry it off. Every satellite will need ISS-level, and eventually better, thermal management. But there are very promising technologies, such as deployable structures and droplet radiators, that can solve this problem.
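
To put numbers on the challenge, here's a back-of-envelope radiator sizing using the Stefan-Boltzmann law. The heat load, emissivity, and temperatures are assumed values for illustration.

```python
# Back-of-envelope radiator sizing via the Stefan-Boltzmann law.
# All inputs (heat load, emissivity, temperatures) are illustrative assumptions.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 * K^4)

def radiator_area(q_watts: float, emissivity: float,
                  t_radiator_k: float, t_sink_k: float) -> float:
    """Two-sided radiator area (m^2) needed to reject q_watts to a cold sink."""
    flux = emissivity * SIGMA * (t_radiator_k**4 - t_sink_k**4)  # W/m^2 per face
    return q_watts / (2 * flux)  # both faces radiate

# Hypothetical 1 MW compute module: emissivity 0.9, radiator at 300 K,
# deep-space sink near 4 K (ignoring Earth IR and albedo loading).
area = radiator_area(1e6, emissivity=0.9, t_radiator_k=300.0, t_sink_k=4.0)
print(f"~{area:,.0f} m^2 of deployed radiator per MW")  # roughly 1,200 m^2
```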

Radiation

This one is still up in the air. The results from Starcloud's deployment will be incredibly important; the entire industry is watching. Radiation engineering in space means dealing with two things: SEUs (single-event upsets) and TID (total ionizing dose). Single-event upsets are probabilistic; you can only protect against them with architecture, redundancy, and error-correcting codes. Shielding doesn't work here. For TID, shielding does help, but, to heavily simplify the physics, shielding acts like armor: it protects you until a high-energy particle breaks through it and showers you with secondaries that can deposit more dose than the primary would have. Shielding can sometimes hurt a lot more than it helps.
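
To make the architectural mitigation concrete, here's a toy sketch of triple modular redundancy (TMR), one classic technique in the redundancy family: run the computation three times and majority-vote, so a single upset in one replica gets outvoted. This is a sketch of the general idea, not any specific flight design.

```python
# Toy illustration of architectural SEU mitigation: triple modular redundancy.
# Execute the same computation three times (ideally on independent hardware)
# and majority-vote the results, so one corrupted replica is outvoted.

from collections import Counter

def tmr(compute, *args):
    """Run `compute` three times and return the majority result."""
    results = [compute(*args) for _ in range(3)]
    winner, votes = Counter(results).most_common(1)[0]
    if votes < 2:
        raise RuntimeError("No majority: multiple upsets, retry or fail over")
    return winner

# Usage: wrap any deterministic computation with a hashable result.
print(tmr(lambda x: x * x, 12))  # 144
```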

Bandwidth

You can have a lot of compute going on in space, but beaming that information down to the ground is really hard.
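
A back-of-envelope makes the constraint concrete. Both numbers below are assumptions for illustration: roughly 100 Gbps per optical link, in line with figures quoted for current laser crosslinks, and a 2 TB model checkpoint (roughly a 1T-parameter model at FP16).

```python
# Back-of-envelope on the downlink constraint. Both inputs are assumptions.

LINK_GBPS = 100        # assumed capacity of one optical downlink
CHECKPOINT_TB = 2.0    # assumed model checkpoint size

seconds = CHECKPOINT_TB * 8e12 / (LINK_GBPS * 1e9)
print(f"One checkpoint per link: {seconds / 60:.1f} minutes")  # ~2.7 min

# Fine for periodic checkpoints; prohibitive for workloads that stream
# terabytes per second, which is why the Delivered GPU-hours equation
# haircuts output by min(1, BW_avail / BW_need).
```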

Maintenance

You'll need autonomous servicing vehicles that can clear debris, swap out racks, and do other jobs that are prohibitively expensive with current technology.

Latency

Unless you're near DFW, NOVA, or Memphis (where the hyperscalers are building their campuses), a satellite in LEO may actually give you lower latency than a distant ground campus. However, as we expand into higher shells, latency grows, which restricts certain applications to certain shells.
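
The physics here is easy to check. Below is a sketch comparing one-way propagation delay to an overhead LEO satellite against a long terrestrial fiber route; the distances are illustrative.

```python
# One-way propagation delay: LEO satellite directly overhead vs. long-haul
# terrestrial fiber (light travels ~1/1.47 of c in glass). Pure physics;
# the specific distances are illustrative.

C = 299_792_458      # speed of light in vacuum, m/s
FIBER_INDEX = 1.47   # typical refractive index of optical fiber

def leo_one_way_ms(altitude_km: float) -> float:
    """Vacuum propagation delay straight up to a satellite, in ms."""
    return altitude_km * 1e3 / C * 1e3

def fiber_one_way_ms(route_km: float) -> float:
    """Propagation delay over a terrestrial fiber route, in ms."""
    return route_km * 1e3 * FIBER_INDEX / C * 1e3

print(f"550 km LEO shell, overhead pass:  {leo_one_way_ms(550):.2f} ms")    # ~1.8 ms
print(f"2,000 km fiber route to a campus: {fiber_one_way_ms(2000):.2f} ms") # ~9.8 ms
```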

Conclusion

Space data centers are the obvious next step. If you don't believe me, go to the interactive simulator and play with the sliders yourself. The source code is available on GitHub.