TL;DR: Musk wants data centers in orbit. I want one in every basement — heating your home and running decentralized AI on solar surplus.
Elon Musk just announced TERAFAB — a fab targeting over 1 terawatt of new AI compute capacity per year. In his own framing, that rivals the entire US power grid. Roughly 80% of it is meant for space, because Earth-side electricity simply can’t keep up.
Impressive. And it’s the wrong approach.
Here’s a product that should exist: a compact unit that combines AI inference chips with a heat pump. The compute hardware produces a continuous stream of high-grade waste heat at 60–80°C. That’s a gift for a heat pump — instead of struggling to extract energy from freezing outdoor air, it gets a hot source delivered for free.
The result: one machine delivers both useful compute and serious residential heating from the same electrical input. A unit dissipating 5 kW of compute power can feed a heat pump that, topped up from outdoor air, delivers 20–25 kW of heating — enough for an entire house. (Energy balance: delivered heat = GPU waste heat + heat pumped in from outside + the pump’s own electrical work, which is far less than resistive heating from scratch.)
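A back-of-envelope energy balance makes this concrete. The 5 kW of waste heat and the 20–25 kW heating target are the figures above; the COP of 5 and the outdoor-air top-up are my assumptions — a minimal sketch, not a system model:

```python
# Back-of-envelope energy balance for a HeatMine-style unit.
# All figures are illustrative assumptions, not measured data.

Q_gpu = 5.0      # kW of GPU waste heat (the article's figure)
COP = 5.0        # assumed coefficient of performance with a warm source
Q_target = 22.0  # kW heating demand (midpoint of the 20-25 kW range)

# First law: delivered heat = source heat + electrical work of the pump.
# With COP = Q_out / W, the pump's own electricity draw is:
W_pump = Q_target / COP          # 4.4 kW of extra electricity
Q_source = Q_target - W_pump     # 17.6 kW must come in as source heat
Q_from_air = Q_source - Q_gpu    # 12.6 kW topped up from outdoor air

# Compare against resistive heating of the same 22 kW:
W_resistive = Q_target           # 22 kW if you heat "from scratch"
savings = W_resistive - W_pump   # electricity the heat pump saves
print(W_pump, Q_from_air, savings)
```

Under these assumptions the pump itself draws only ~4.4 kW to deliver 22 kW — the "far less than heating from scratch" in the text.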
In summer, reverse the cycle: the heat pump becomes AC, the GPU heat goes outside, your house stays cool. You contribute compute, cool your home, and the physics works for you instead of against you.
One device replaces your gas furnace and your air conditioning, and throws in a side income. Three functions in one box, self-financing. (Heatbit already proved the concept for Bitcoin mining — now imagine the same thing for useful AI compute.)
We call it HeatMine.
Germany alone has 42 million households. At 5 kW per node, even 10% adoption is 21 gigawatts of distributed compute; full adoption would top 200 gigawatts.
Globally? Two billion households need heating. If 10% participate — 200 million HeatMine nodes at 5 kW — that’s 1 terawatt. Musk’s entire TERAFAB output, matched by people heating their homes. No rockets, no orbital solar farms, no trillion-dollar fabs. Just rooftop solar and a box in the basement.
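The arithmetic behind that claim, using only the figures in the text (household count, 10% participation, 5 kW per node):

```python
# Scale check for the article's own numbers.
households = 2_000_000_000   # households worldwide that need heating
adoption = 0.10              # assumed participation rate (article's 10%)
node_power_kw = 5            # per-node compute draw

nodes = int(households * adoption)       # 200 million nodes
total_tw = nodes * node_power_kw / 1e9   # kW -> TW
print(nodes, total_tw)                   # 200000000 1.0
```

200 million nodes at 5 kW each is exactly the 1 terawatt the text compares to TERAFAB's output.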
And it grows every year as people upgrade hardware. Not because a company decides to scale — because individuals decide the ROI works.
When the sun shines, Germany already produces more electricity than the grid can absorb. Negative prices, plants get curtailed, energy gets wasted.
In a decentralized AI network: sun shines → electricity is essentially free → every HeatMine node cranks to maximum → the network floods with cheap compute. The energy isn’t stored — it’s instantly converted into useful work.
In winter: no solar surplus, but heating demand → nodes run at full blast to heat homes and compute simultaneously.
Most of the year, either the electricity is cheap enough or the heat is useful enough — often both. The network stays productive year-round.
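The seasonal logic above reduces to a trivial control policy. The function name, the price threshold, and the idle floor are illustrative assumptions, not a real tariff model:

```python
def node_utilization(price_eur_per_kwh: float, heating_demand_kw: float,
                     cheap_price: float = 0.05) -> float:
    """Return target compute utilization in [0, 1].

    Run flat out when electricity is cheap (solar surplus) OR when the
    home needs heat anyway; otherwise idle down. Thresholds are
    illustrative assumptions.
    """
    if price_eur_per_kwh <= cheap_price:  # summer: near-free solar power
        return 1.0
    if heating_demand_kw > 0:             # winter: the heat is useful anyway
        return 1.0
    return 0.2                            # expensive power, no heat demand

# Sunny noon with negative prices, no heating needed:
print(node_utilization(-0.01, 0.0))  # 1.0
# Cold winter evening, expensive grid power, house needs 8 kW:
print(node_utilization(0.35, 8.0))   # 1.0
# Mild autumn night, normal prices, no demand:
print(node_utilization(0.25, 0.0))   # 0.2
```

The "productive year-round" claim is exactly this OR-condition: only the mild-weather, expensive-power case throttles the node.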
Every HeatMine hosts specialized expert models — not one monolith, but many small models optimized for specific domains. What size? That’s determined by hardware: a lightweight node runs 7B models, a beefy one hosts 70B. The network is agnostic.
Routing works like BitTorrent: no central server, no central authority. Demand determines which experts get loaded. Routing is stateless — full context in, answer out, no long-lived session required.
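What stateless, demand-driven routing means can be sketched in a few lines. There is no real P2P layer here; the node names, expert names, and the demand counter are hypothetical illustrations of the idea, not a protocol spec:

```python
import random

# Toy "network": which node hosts which expert models.
network = {
    "node-a": {"experts": {"law-7b", "tax-7b"}},
    "node-b": {"experts": {"medicine-70b"}},
    "node-c": {"experts": {"law-7b"}},
}
demand = {}  # queries per expert -- the signal that decides what gets loaded

def route(expert: str, context: str) -> str:
    """Stateless routing: full context in, answer out, no session kept."""
    demand[expert] = demand.get(expert, 0) + 1
    candidates = [n for n, info in network.items() if expert in info["experts"]]
    node = random.choice(candidates)  # any capable node will do
    return f"{node} answered ({len(context)} chars of context, no history)"

print(route("law-7b", "Is this contract clause enforceable? ..."))
print(route("law-7b", "Same question, fresh context."))  # may hit another node
print(demand["law-7b"])  # 2 -- rising demand would justify loading more law experts
```

Because the full context travels with every query, any node hosting the expert can answer, and the demand counter is all the "central planning" the network needs.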
And every query is simultaneously reasoning and mining. Useful compute becomes a metered commodity — node operators get paid for supplying it. Your HeatMine earns while you sleep, work, or vacation.
The buying decision changes: you buy the beefiest hardware not because you need it, but because the ROI works. The price difference isn’t consumption, it’s investment. The machine pays for itself.
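A sketch of the payback arithmetic behind that buying decision. Every number below is an assumption invented for illustration; the text gives no prices or earnings:

```python
# Hypothetical payback calculation for buying the beefier node.
# All inputs are made-up illustrative assumptions.
hardware_premium_eur = 4000     # extra cost of the beefy node vs. the light one
compute_income_eur_day = 6.0    # assumed net earnings from sold inference
heating_savings_eur_day = 3.0   # assumed savings vs. gas, averaged over the year

payback_days = hardware_premium_eur / (compute_income_eur_day + heating_savings_eur_day)
print(round(payback_days))  # ~444 days under these assumptions
```

The point is the structure, not the numbers: heating savings and compute income stack, so the premium amortizes on two revenue streams at once.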
Today’s frontier models get trimmed to the same consensus through RLHF and safety filters. What comes out is the average of all acceptable opinions. Useful, but rarely surprising. Regression to the mean.
A decentralized swarm doesn’t have this problem. Every expert model can think radically differently — axiomatic, contrarian, specialized in niches no committee would ever approve. It doesn’t need to please everyone. It just needs enough demand to survive. Natural selection, not editorial curation.
Blaise Agüera y Arcas has shown experimentally: complexity doesn’t emerge from mutating a single entity, but from merging different ones — symbiogenesis. Large models are Darwinism: one company incrementally mutating a monolith. A swarm of experts is evolution through combination.
Today you send your most intimate questions to OpenAI’s servers, linked to your name and credit card. One company, one server, full history.
In the swarm: your query goes to some node. It doesn’t know who you are, has no history, no account. Next query, different node. No single participant ever has the full picture. Not perfect anonymity — IP metadata and timing analysis exist — but structurally far more private than anything centralized. And you decide what to send, just like you decide what to type into a search engine.
Karpathy’s autoresearch: 630 lines of Python, AI agents autonomously running ML experiments. Hyperspace distributed the loop — 35 agents across a P2P network, 333 experiments in one night, zero human intervention. Different hardware led to different strategies — diversity as a feature, not a bug.
Steinberger’s OpenClaw: launched without perfect security, the world came running anyway. Ship first, iterate later.
Bittensor, Render, Golem: decentralized compute markets already exist. What’s missing is the synthesis — specialized experts instead of generic compute, demand-driven routing instead of static allocation, evolution instead of optimization.
Bitcoin separated money from banks. HeatMine separates intelligence from cloud monopolies — and heats your home while it’s at it.