Something unprecedented is happening in America's AI race with China. Unlike the Apollo program, which cost taxpayers $260 billion in today's dollars, our nation's AI efforts have been privatized.
The race for artificial general intelligence, potentially the most consequential technology since nuclear weapons, isn't being led by DARPA or government labs. Microsoft alone spent more on R&D last year ($29.5 billion) than the entire budget of the U.S. National Science Foundation ($9.8 billion). The most important technological race since the Cold War is being bankrolled not by Congress, but by Excel subscriptions and search ads.
This isn't a theoretical concern or future trend. Public research has collapsed. Universities have become paper mills. Government labs can't compete for talent. The transition happened gradually, but the end result is clear: private companies now drive America’s most important research.
This forces an uncomfortable truth: The same tech giants that Congress investigates for antitrust violations are the ones outspending our government on frontier research by orders of magnitude. Breaking up Google might get you better ad rates, but it would also destroy the research capacity that's advancing AI faster than any government lab.
In our technological competition with China, America's edge depends on recognizing this reality: Sometimes market dominance serves the public interest—not despite concentrated power, but because of it.
The Golden Age of Government Innovation: When America Dreamed Big
In 1962, President Kennedy stood at Rice University and declared that America would put a man on the moon "not because it is easy, but because it is hard." The audience didn't laugh. They didn't tweet skeptical memes. They believed him.
What followed was the greatest government-funded technological achievement in human history. We didn't just reach the moon—we revolutionized human capability in the process.
From 1940 to 1970, government research spending grew nearly twentyfold. DARPA, NASA, and federally funded university labs weren't just agencies—they were cathedrals of innovation, places where decade-long research horizons weren't just tolerated but expected.
The Space Race was the ultimate expression of government-driven innovation. When Sputnik beeped overhead, Americans understood the stakes viscerally. This wasn't about profits or patents—it was about survival. Congress opened the checkbook. Scientists got blank checks. A nation unified behind a single technological mission.
The technological spillovers were staggering: GPS navigation? Created to guide spacecraft. Solar panels? Developed to power satellites. Modern semiconductors? Miniaturized for rocket guidance systems. The early internet? Born from DARPA's need for resilient military communications.
Each breakthrough sparked cascade effects that transformed the economy. GPS enabled everything from precision agriculture to Uber. DARPA's network experiments evolved into the trillion-dollar internet economy. Government-funded semiconductor research at Stanford seeded Silicon Valley itself.
The golden age of government innovation didn't end because the model failed. It ended because we stopped believing in shared national purpose. Federal R&D spending peaked at 1.9% of GDP in 1964, falling to just 0.6% by 2019. As public funding retreated, something had to fill the void.
The Corporate Research Cycle: From Bell Labs to Google Brain
What's fascinating is that while government innovation was booming in the mid-20th century, another powerful innovation model was quietly developing in parallel—one that would eventually take center stage. This wasn't the startup ecosystem we celebrate today. It was something much more surprising: research funded by market-dominant corporations.
The story of America's innovation evolution isn't just about government retreat. It's about a torch being passed—first from government labs to corporate research giants, then from old-school monopolies to today's tech platforms. This cycle reveals an uncomfortable truth: breakthrough innovation has always depended on concentrated power and resources, whether wielded by nations or corporations.
Bell Labs: When Monopolies Invented the Future
The year was 1947, and Bell Labs had just invented the transistor, the foundation of all modern computing. The team celebrated by... returning to their other world-changing projects. They were also busy inventing cellular technology, solar cells, and information theory.
This wasn't a startup racing to an IPO. It wasn't a government moon-shot. It was AT&T's research arm, funded by telephone monopoly profits.
Bell Labs wasn't just a research center—it was a monument to what market dominance enables. AT&T's telephone monopoly generated such massive profits that they could employ 1,200 PhDs to explore whatever interested them. No quarterly earnings pressure. No venture capital timelines. Just pure research.
The results were staggering:
The transistor (basis of all computing)
Unix operating system
C programming language
Information theory
Fiber optics
Solar cells
Cell phone technology
Nine Nobel Prizes
Meanwhile, across the country, Xerox's PARC facility was performing similar magic. Protected by Xerox's dominant position in copying, PARC invented:
The graphical user interface (GUI)
The mouse
Ethernet networking
Object-oriented programming
Laser printing
In 1956, regulators struck a historic bargain with AT&T: Keep your monopoly, but run Bell Labs for the public good. They understood something we've forgotten: Sometimes consumer benefit comes from innovation, not just lower prices.
The 1984 breakup of AT&T is usually celebrated as a win for competition. Phone calls got cheaper. Long-distance rates plummeted. But Bell Labs died. The new AT&T couldn't afford to fund basic research. The monopoly profits that powered innovation vanished—and with them, breakthroughs we'll never know we missed.
Fast Forward: When Tech Giants Became Nation-States
Now fast forward to 2024. We face another existential technology race, this time against China's AI capabilities. The stakes are just as high as the Space Race. The technological gap is just as daunting.
But here's the plot twist: This time, it's not NASA or DARPA leading the charge. It's OpenAI, Anthropic, xAI and Google DeepMind. The most important technological race since the Cold War is being bankrolled not by Congress, but by Microsoft, Amazon, Sequoia and Nvidia.
This isn't how it worked last time. Imagine if in 1962, Kennedy had announced: "We choose to go to the moon... and Boeing will handle everything." The nation would have been baffled.
Yet today, we take it for granted that private companies are driving humanity's next great leap. The government's role? Mostly writing stern letters about AI safety.
Let's talk about scale:
Microsoft will spend more on AI research and infrastructure in 2024 than NASA's entire budget
Google maintains quantum computing facilities that make government labs look obsolete
Meta's AI research team is larger than most university computer science departments
Apple spent more developing its own chips than the entire Apollo 11 mission cost (adjusted for inflation)
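The "adjusted for inflation" comparisons running through this essay all reduce to a single CPI ratio. A minimal sketch of that arithmetic (the CPI-U annual averages below are approximate illustrative values; the BLS publishes the exact series):

```python
# Approximate CPI-U annual averages (illustrative values, not official figures)
CPI = {1969: 36.7, 1980: 82.4, 2000: 172.2, 2023: 304.7}

def to_real(nominal_billions: float, year: int, base_year: int = 2023) -> float:
    """Convert nominal dollars from `year` into base-year dollars via the CPI ratio."""
    return nominal_billions * CPI[base_year] / CPI[year]

# Example: $5B of 1980 spending expressed in 2023 dollars
print(f"$5B in 1980 ~= ${to_real(5, 1980):.0f}B in 2023 dollars")
print(f"$38B in 2000 ~= ${to_real(38, 2000):.0f}B in 2023 dollars")
```

The same ratio, applied to Apollo-era budgets, is how figures like "$260 billion in today's dollars" are produced.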
This isn't corporate R&D as we once understood it. This is nation-state scale investment by companies with GDP-sized revenues.
The Return of the Monopoly Innovation Machine
We're witnessing history repeat itself, but with a critical twist. Like Bell Labs before them, today's tech giants have created the conditions for breakthrough innovation through three key elements:
1. Enormous, sustained profits
Google's search dominance funds DeepMind and quantum computing labs
Microsoft's cloud and software revenue bankrolls OpenAI and massive AI infrastructure
Apple's premium hardware margins enabled them to spend billions designing their own chips
2. Freedom from short-term market pressures
These companies can pursue research agendas that span presidential administrations
They can build infrastructure anticipating needs 5-10 years in the future
They can withstand years of losses on moonshot projects that might never pay off
3. The ability to compete with nations
We're watching something unprecedented in human history: A handful of private companies are racing against an entire superpower to develop artificial general intelligence—potentially the last invention humanity ever needs to make.
China has bet the entire might of its authoritarian state on winning the AI race:
Data from 1.4 billion citizens
Mandatory corporate cooperation
State-directed university research
The full power of a techno-authoritarian regime
By every conventional measure of state power, China should be dominating this race. They're not. Not even close.
A handful of American companies—funded by cloud computing profits and advertising revenue—are out-innovating a superpower. This isn't just surprising. It breaks everything we thought we knew about technological innovation.
History tells a consistent story: breakthrough research is brutally hard to commercialize. Bell Labs invented the transistor but struggled to turn it into products. Xerox PARC created the graphical interface but couldn't bring it to market. Even Google, which invented the transformer architecture powering modern AI, watched OpenAI turn the technology into ChatGPT.
This "commercialization gap" isn't a bug—it's a feature. When Bell Labs struggled to commercialize the transistor, they created billions in surplus value for Intel and countless others. When Xerox fumbled the graphical interface, they gave birth to Apple and Microsoft. Google's transformer architecture spawned dozens of AI startups. Great research creates opportunities that extend far beyond its creators.
Yet there's a crucial difference between then and now: Bell Labs existed because regulators allowed AT&T to maintain its monopoly in exchange for public-interest research. Today's tech giants face aggressive antitrust scrutiny even as they fund America's most important innovation.
America's Innovation Vacuum: How Our Institutions Failed
With private companies now leading America's most crucial technological race, we have to ask: How did we get here? The answer isn't that tech giants seized power. It's that our traditional innovation institutions imploded, creating a vacuum that market-dominant companies were perfectly positioned to fill.
The Academic Meltdown: When Universities Chose Prestige Over Truth
In 2006, Duke University announced a medical breakthrough that made headlines worldwide: They had discovered how to match cancer patients to the most effective chemotherapy drugs using genetic testing. Prestigious journals published it. Clinical trials began. Thousands of cancer patients received hope.
There was just one problem: It was all fake.
Dr. Anil Potti had manipulated data, fabricated results, and lied about his credentials. But here's the truly horrifying part: Despite immediate red flags from other scientists, Duke let the clinical trials continue for four years. Journals kept publishing his papers. The NIH kept funding him. Cancer patients kept receiving treatments based on fraudulent research.
The system didn't just fail to catch fraud—it actively resisted discovering it.
This wasn't an isolated incident. When scientists at Amgen tried to reproduce 53 "landmark" cancer studies, only 11% (6 of 53) could be replicated. Bayer reported similar numbers: just 25% of published academic findings were reproducible. We waste an estimated $28 billion annually on irreproducible preclinical research.
Academia has optimized for the wrong thing. Modern universities operate on a brutal equation: publish or perish. What advances careers isn't being right—it's being prolific. A scientist who publishes 30 flashy-but-wrong papers outperforms the one who publishes 10 rock-solid studies.
The result: modern academic science has become performance art. Papers written to be published, not to be true. Methods designed to sound rigorous, not be reproducible. Results framed for impact factors, not accuracy.
Even worse, pharmaceutical researchers have uncovered widespread "p-hacking" in academic studies—the practice of manipulating statistical analyses until insignificant results magically become significant. The issue is so pervasive that drug companies now employ specialized teams just to untangle the statistical manipulation in academic papers before deciding if a research direction is worth pursuing.
This isn't just a theoretical concern. In clinical trials investigating new treatments, a telling pattern emerges: industry-sponsored phase III trials show distinctly different statistical signatures than academic trials, suggesting systematic manipulation of results in academia. When billions of dollars and patients' lives are at stake, pharmaceutical companies can't afford the academic luxury of p-hacking.
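The mechanics of p-hacking are easy to demonstrate. The sketch below (purely illustrative, not drawn from any study cited here) simulates a researcher who slices pure-noise data twenty different ways and reports only the best p-value; with that many looks, most "studies" of nothing find something:

```python
import math
import random

def two_sample_p(a, b):
    """Two-sided p-value for a difference in means (normal approximation)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    z = (ma - mb) / math.sqrt(va / na + vb / nb)
    return math.erfc(abs(z) / math.sqrt(2))

def hacked_study(n_tests=20, n=50, seed=0):
    """Run n_tests comparisons on pure noise; keep only the smallest p-value."""
    rng = random.Random(seed)
    best = 1.0
    for _ in range(n_tests):
        a = [rng.gauss(0, 1) for _ in range(n)]
        b = [rng.gauss(0, 1) for _ in range(n)]  # same distribution: no real effect
        best = min(best, two_sample_p(a, b))
    return best

# With 20 looks at noise, the chance of at least one p < 0.05
# is roughly 1 - 0.95**20, about 64% of null "studies".
hits = sum(hacked_study(seed=s) < 0.05 for s in range(200))
print(f"{hits}/200 null studies reported a 'significant' effect")
```

This is exactly the multiple-comparisons problem that pharma review teams look for when vetting academic results before committing to a research direction.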
Even Stanford's President Marc Tessier-Lavigne, the very top of the academic pyramid, resigned after investigations revealed data manipulation in his lab's papers. The rot goes all the way to the top.
Yet universities haven't lost any prestige. The public still views them as unimpeachable sources of knowledge while they've become paper mills optimized for publication metrics. They maintain their reputations while the reliability of their research collapses.
The implications are profound. As universities chase metrics over truth, the private sector has been forced to build its own research capacity. Companies can't rely on academic papers anymore—they need their own labs, their own experiments, their own verification processes. What started as a credibility crisis in academia has driven the largest privatization of research in history.
The Government Surrender: Abandoning America's Innovation Infrastructure
Our national labs, once the crown jewels of American science, are also running on fumes. Los Alamos uses computers generations behind those at Microsoft. Fermilab watches private companies build more advanced quantum systems. Lawrence Berkeley Lab hemorrhages talent to tech companies offering triple the salary.
America's best scientists are voting with their feet, and for good reason. Why would a brilliant physicist accept $150,000 at a national lab when Google offers $500,000 with cutting-edge equipment? Why commit to a government mission that might vanish after the next election?
During the Space Race, NASA had a clear purpose: beat the Soviets to the moon. DARPA had a mission: maintain America's technological military edge. Today? Government priorities reset with each administration. You can't achieve breakthroughs on a four-year election cycle.
America hasn't just cut research funding—it's abandoned the very idea of national purpose. As our public research infrastructure crumbles, we've created the perfect conditions for private companies to fill the void.
The Innovation Baton Pass: From Public to Private
With universities producing unreliable research and government labs losing their edge, a technological leadership vacuum formed at precisely the moment when America faced its greatest technological challenge since the Space Race.
Something had to fill this void. The future doesn't wait for broken institutions to fix themselves.
Enter the tech giants—flush with unprecedented profits, free from quarterly pressures, and able to think in decades rather than funding cycles. Where government retreated, they advanced. Where universities failed, they succeeded. Where national labs lost talent, they attracted it.
This wasn't a power grab. It was a response to institutional collapse. The market-dominant tech companies didn't steal America's innovation engine—they rebuilt it after we abandoned it.
The Regulation Paradox: America's Dangerous Gamble
Here's the dangerous paradox we now face: America's technological leadership depends on the very companies we're trying hardest to constrain.
Market dominance enables breakthrough innovation. But it doesn't guarantee it. Look at Epic Systems. They control 78% of U.S. patient records, a near-monopoly in healthcare data. Yet instead of transforming medicine, they've given us clunky interfaces and siloed systems. Market dominance provided the resources for innovation. Leadership chose stagnation.
This reveals a crucial truth: Market dominance is necessary but not sufficient. You need both the capacity and the courage to reinvest profits into ambitious research. What makes America's current situation unique is that our tech giants are doing exactly that—choosing moonshots over margins, decade-long bets over quarterly returns.
Meanwhile, China pursues technological dominance with decade-spanning research priorities that survive leadership changes, coordinated universities, and directed corporate resources. In theory, this centralized approach should give China an insurmountable advantage.
But something unexpected happened: America's tech giants built research capabilities that entire nation-states can't match. They attracted global talent that China can't access. They maintained research horizons that governments can't sustain.
Breaking What Works
If we apply traditional antitrust thinking to these companies, we might get lower ad rates or app store fees, but lose the innovation engine that's keeping America ahead.
Break up these tech giants, and you'll fragment the organizations that can think in decades rather than quarters. You'll dismantle America's last remaining bastions of breakthrough innovation.
As discussed in my Algorithm Wars post, this isn't an argument against all regulation. It's an argument for smarter regulation that distinguishes between:
Natural network effects that enable innovation (Google Search gets better with more users)
Abuse of distribution that stifles competition (Google using Search to push its flights product above competitors)
Let's look at the innovation landscape honestly:
Government research has retreated to historic lows
Universities have become unreliable paper mills
National labs can't compete for talent
Startups chase quick exits, not breakthrough research
Venture capital wants returns in years, not decades
The only institutions consistently funding decade-long, civilization-scale research are the tech giants we're trying to break up.
The Stakes: More Than Just Market Share
We're living through a pivotal moment in human history. Artificial general intelligence, quantum computing, and biotechnology are more than emerging technologies. They're the foundations of future civilization. The nation that leads these breakthroughs will likely shape the course of human progress for centuries.
In 1962, when Kennedy declared we'd go to the moon, America understood the stakes. We didn't break up NASA into competing agencies to lower the cost per rocket. We didn't worry that the Apollo program had too much market power in space exploration.
The current technological race with China is our generation's moonshot moment. But instead of rallying behind our innovation champions, we're debating which successful companies to dismantle first.
The uncomfortable truth is that in 2024, America's technological leadership depends on preserving these rare institutions capable of true breakthrough innovation. We didn't design this system. We didn't choose this path. But it's where we are.
The choice isn't between big tech and small tech. It's between American innovation and Chinese dominance. Between a future shaped by companies that can be regulated and one controlled by a regime that can't be.
We should regulate tech companies. But first, we need to understand what we're really regulating: not just corporate entities, but America's last remaining engines of breakthrough innovation.
America has a winning hand. Let’s not fold it.
Note: We typically talk about AI, but R&D is being privatized across other sectors too.
For example, in Pharma, the industry spent ~$83 billion on R&D, up from $5 billion in 1980 and $38 billion in 2000 (adjusted for inflation). Smaller biotech firms, often privately funded, also play a growing role, focusing on early-stage drug discovery.
Interesting perspective, David! I'll push back on one or two things, first, that corporations are thinking in decades instead of quarters. The VC structure in Silicon Valley prioritizes time-to-market over all else, which leads to what we see in OpenAI: initial fundamental research, but a complete shutoff of fundamental research after 2021-22 when ChatGPT began to yield results. In the scientific community, we call them "ClosedAI". You also mentioned Google/Deepmind as another example, but they similarly halted publications for a long while during the rollout of Gemini. A good example you can point to would be Meta/FAIR, which is leading the open-source AI movement to everyone's surprise.
I'm curious if you have any clarity on how to achieve both technological supremacy and public self-determination. By definition, monopolies lack the second, but public institutions are too broke to be "cathedrals of innovation" (love that term, by the way). Intentionally designed, case-by-case regulation allowing companies with good intentions to pool resources into basic research sounds like a great way forward, but I fear Congress lacks the attention span and ideological nuance to achieve anything close to that.
Interesting read, but it leans more toward a clever contrarian take than a grounded analysis. It paints monopolies as misunderstood engines of progress without really grappling with how concentrated power tends to play out—through regulatory capture, labor suppression, and gatekeeping. The binary between “chaotic competition” and “benevolent monopoly” feels overly neat.
There’s also this quiet assumption that we should move on from public institutions—as if the answer to inefficiency is just to hand the reins to private monopolies. But looking around at hype-driven markets and platform bloat, it’s hard to see that as a reliable path forward.
Corporate innovation clearly has a place. But we also need to rebuild trust in institutions that are meant to serve the public, not scrap them entirely. Letting go of that project just shifts more power to actors who answer to no one but shareholders—and that’s not the same thing as progress.