Everyone said AI would “never be profitable” because compute and energy would eat every dollar of revenue. Then the margins showed up anyway.
December 21, 2025.
Why AI Was Supposed To Be an Economic Black Hole
For the latest wave of AI doomers, the story was seductively simple: large‑scale AI would collapse under its own weight. Every new model would demand more GPUs, more electricity, and more data centers, until the economics snapped.
The arguments came in three familiar flavors. First, the “infinite compute, zero profit” mantra: the idea that each improvement in capability would push costs up faster than revenue, trapping AI companies in a permanent loss machine. Second, the energy apocalypse: AI framed as a planetary leech, destined to wreck grids, spike prices, and die when the power bill came due. Third, the capex panic: trillion‑dollar build‑out numbers repeated as if “large investment” and “unsustainable bubble” were interchangeable.
It sounded like hardheaded realism. It was actually a moral panic in economic drag.
The Numbers That Quietly Killed the Doomer Script
While everyone argued on social media, the spreadsheets were doing something far less dramatic and far more important. Analyses of OpenAI’s business show compute margins for paying customers climbing from the mid‑30% range in early 2024 toward roughly 70% by late 2025, as models, hardware, and routing became more efficient.
Independent breakdowns of GPT‑4 and GPT‑4o API usage told a similar story. Once you strip away the rhetoric and look at cost per token versus price per token, the APIs are not loss‑leaders: they are surprisingly profitable even before every optimization trick has been deployed.
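To see why, run the arithmetic yourself. Here is a minimal sketch of the unit economics; the price and cost figures are invented for illustration and are not any provider's disclosed numbers:

```python
# Back-of-the-envelope API unit economics. Both figures are
# illustrative assumptions, not disclosed numbers from any provider.

PRICE_PER_M_TOKENS = 10.00  # hypothetical price charged per million tokens ($)
COST_PER_M_TOKENS = 3.00    # hypothetical fully loaded serving cost per million tokens ($)

gross_profit = PRICE_PER_M_TOKENS - COST_PER_M_TOKENS
gross_margin = gross_profit / PRICE_PER_M_TOKENS

print(f"Gross profit per million tokens: ${gross_profit:.2f}")  # $7.00
print(f"Gross margin: {gross_margin:.0%}")                      # 70% under these assumptions
```

The specific numbers matter less than the shape of the math: once serving cost per token sits below price per token, every efficiency gain drops straight into margin.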
Meanwhile, revenue moved out of the realm of hypotheticals. Public estimates now place OpenAI’s annual revenue firmly in the multi‑billion‑dollar range, with an increasing share coming from enterprise and API usage rather than just individual subscriptions. The much‑promised “inevitable economic collapse of AI” did not happen in a televised debate; it simply failed to materialize in the ledger.
How the Doomers Misread the Curve
The mistake was not in noticing the costs: it was in misreading where we were on the curve. Massive upfront infrastructure spend is not evidence of failure; it is the admission ticket to every major platform shift of the last 150 years, from railroads to telecom to cloud. The live question is payback period and durability of demand, not whether big numbers exist.
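“Payback period” sounds abstract, so make it concrete. The sketch below uses hypothetical inputs, chosen only to show the shape of the calculation:

```python
# Toy payback-period calculation for an infrastructure build-out.
# Every input here is hypothetical, not an estimate of any real company.

capex = 10_000_000_000          # upfront data-center investment ($10B, illustrative)
annual_revenue = 6_000_000_000  # revenue attributable to that capacity ($/yr, illustrative)
gross_margin = 0.50             # fraction of revenue left after compute and energy

annual_gross_profit = annual_revenue * gross_margin
payback_years = capex / annual_gross_profit

print(f"Payback period: {payback_years:.1f} years")  # ~3.3 years under these assumptions
```

A ten‑figure capex line looks terrifying in a headline; what matters is whether payback lands inside the useful life of the hardware, and whether demand holds that long.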
Doomers also treated volatility as impossibility. Margins dip when a new model rolls out, then recover as engineers optimize kernels, caching, data routing, and pricing. That sawtooth pattern is exactly what progress looks like in a capital‑intensive industry: launch, absorb the hit, tune, harvest efficiency. Declaring doom at the bottom of each trough is not analysis; it is impatience.
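You can generate that sawtooth with a few lines of toy code; the margins and cycle length here are invented purely to illustrate the pattern:

```python
# Purely illustrative "sawtooth": each model launch resets margins low,
# then a generation of optimization climbs them back. Invented numbers.

launch_margin = 0.35    # assumed margin right after a new model ships
peak_margin = 0.70      # assumed margin after a full optimization cycle
steps_per_cycle = 4     # quarters between launches, for illustration

for cycle in range(3):  # three model generations
    for q in range(steps_per_cycle):
        t = q / (steps_per_cycle - 1)  # 0 -> 1 across the cycle
        margin = launch_margin + t * (peak_margin - launch_margin)
        print(f"gen {cycle}, quarter {q}: {margin:.0%}")
```

Sample the series at each launch and you “prove” a crisis; sample it at each peak and you “prove” a boom. Neither snapshot is the trend.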
Most critically, their narrative pretended learning curves did not exist. Each generation of models, compilers, networking, and silicon squeezes more capability out of each joule and each dollar of capex. Treating costs as static while capabilities climb is how you end up surprised when margins improve.
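Here is what a learning curve does to a static‑cost forecast. The decline rate below is an assumption, not a measured figure, but the logic holds for any sustained rate:

```python
# Learning-curve sketch: serving cost falls a fixed fraction per
# hardware/model generation. The 50% rate is an assumption.

cost_per_m_tokens = 30.00  # illustrative starting cost ($/M tokens)
decline_per_gen = 0.50     # hypothetical: cost halves each generation

for gen in range(5):
    print(f"generation {gen}: ${cost_per_m_tokens:.2f} per million tokens")
    cost_per_m_tokens *= 1 - decline_per_gen  # 30.00, 15.00, 7.50, 3.75, 1.88
```

Forecast revenue against generation‑zero costs and you predict bankruptcy; forecast against the curve and you predict exactly the margin expansion that showed up.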
The Energy “Gotcha” That Wasn’t
Energy became the rhetorical trump card: whatever you said about revenue, someone would reply with grid graphs and horror about data center demand. Training and inference are absolutely energy‑hungry; no serious person disputes that.
But new industries reshape infrastructure instead of politely staying within yesterday’s capacity. Utilities, grid operators, and hyperscalers are already re‑tooling around AI‑driven demand, from new transmission investments to power‑purchase agreements inked specifically for AI workloads. The same build‑out that was presented as a crisis is, in reality, an investment program. Analyses of macro impact increasingly show that the secondary economic growth driven by AI (new products, higher productivity, entirely new categories of work) dwarfs the direct energy cost of the underlying compute.
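A rough order‑of‑magnitude check makes the comparison vivid. Both inputs below are ballpark assumptions, not measurements:

```python
# Order-of-magnitude check on the direct energy cost of one query.
# Both inputs are rough ballpark assumptions, not measured values.

wh_per_query = 0.3   # assumed energy per inference request (Wh)
usd_per_kwh = 0.10   # assumed industrial electricity price ($/kWh)

energy_cost = (wh_per_query / 1000) * usd_per_kwh
print(f"Energy cost per query: ${energy_cost:.6f}")  # ~$0.00003
```

Even if that figure is off by a factor of a hundred, the energy bill for a useful answer is a rounding error next to what the answer is worth.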
Doomers treated today’s constraints as if they were laws of physics. They are not. They are snapshots of a system mid‑upgrade.
What the New AI Economics Actually Mean
Once you accept that the unit economics work, the conversation has to change. Strong compute margins and profitable APIs mean AI companies can fund their own research instead of living on pure hype and cheap money. That translates directly into better tools, faster iteration, and more powerful systems for everyone building on top of these platforms.
It also shifts the real risk. The worry is no longer “AI will implode because the power bill is too high.” The worry is “who controls the systems that clearly can make money,” and what they choose to optimize for: open tools versus closed platforms, augmentation versus lock‑in, autonomy versus surveillance.
The doomer script promised that AI would drown in its own compute and energy costs. The balance sheets now say something else entirely: the economics are not failing; the analysis is.