Axios, whose reporting is increasingly defined by minuscule “scoops” about the artificial intelligence industry, reported last week on the latest red alert from Anthropic about an impending “intelligence explosion.” The AI lab’s research arm released an agenda for tackling the myriad risks and threats of this fearsome technology, including the likelihood that AI will effectively procreate—that is, build new models without any human involvement. “My prediction is by the end of 2028, it’s more likely than not that we have an AI system where you would be able to say to it: ‘Make a better version of yourself.’ And it just goes off and does that completely autonomously,” Anthropic co-founder Jack Clark told Axios.
The creator of Claude is known for issuing such omens; as Axios notes, Anthropic’s “identity is wrapped around warning the world about AI risk.” But it’s hardly alone. OpenAI, which makes ChatGPT, also regularly sounds the alarm. At the same time, these companies are raising capital at historic levels to fund their work and enrich themselves. The Wall Street Journal reported Sunday that OpenAI recently allowed current and former employees to sell up to $30 million worth of shares. More than 600 people jumped at the chance, and together they made $6.6 billion.
The contradictions surrounding AI have become impossible to ignore. The companies building it warn about catastrophic risk while simultaneously speeding up deployment, competing to dominate what we’re told is the most consequential technological transformation in history. Every player in the race, including governments, which are rushing to integrate AI deeper into military, educational, and administrative systems, publicly acknowledges the challenges and the dangers, and none can afford to stop.
Anthropic CEO Dario Amodei’s essay “The Adolescence of Technology,” published in January, is a masterpiece of the form. Drawing on Carl Sagan’s Contact, he frames our current moment in AI as humanity’s defining crossroads. He claims we are at the turbulent threshold between the civilization we have and the one we might become, facing five categories of existential risk: rogue autonomous AI systems, misuse of AI for mass destruction, authoritarian capture of AI for political control, economic disruption, and extreme wealth concentration. He also anticipates cascading indirect effects we cannot yet predict.
Seventeen days after the essay was published, Anthropic raised $30 billion in new funding, bringing its valuation to $380 billion. And last week, the same day as Axios’s scoop, the Financial Times reported that the company is looking to raise tens of billions more this summer, with an eye toward a $1 trillion valuation—which would top OpenAI’s $852 billion valuation.
That juxtaposition is not coincidental. “The Adolescence of Technology” is not just a warning; it’s a pitch deck too. “This technology is the most consequential in human history” is both a caution and an investor’s dream. “We are the responsible ones” is both a moral claim and a competitive moat. “China is catching up” is both a geopolitical concern and an argument that makes the next funding round feel patriotic.
Amodei acknowledges the danger that AI is such a glittering prize that humanity may be incapable of imposing meaningful restraint upon it, and he seems to recognize the potential for social unrest as it displaces workers and concentrates wealth. But his proposed remedies are modest. He is on record advocating for transparent legislation, constitutional AI training, interpretability research, industry coordination, and progressive taxation. These are all reasonable measures, but they are not the response of someone who genuinely believes we may be one or two years away from systems capable of destabilizing civilization.
It would be easy to dismiss this as hypocrisy, except the situation is more structurally tragic than that. Even if the people pushing AI on the world wanted to stop, the system cannot allow it. Employees will defect to competitors. Valuations will crater. Investors will sue. China will continue regardless. Everyone in the AI race understands the dangers. Rational self-interest, aggregated across all players, produces a collectively irrational and potentially catastrophic outcome.
The Manhattan Project analogy is instructive here, but as a warning rather than a parallel. Oppenheimer and his colleagues built the bomb under direct government control, with a defined wartime enemy and a defined end point. After they built the bomb, the project ended, and many of those scientists spent the rest of their lives advocating for arms control, openly wrestling with the moral consequences of what they had created.
The AI race has none of those constraints. There is no coherent governing oversight, no defined enemy beyond commercial competitors, and no end point. And certainly no public moral remorse for what has been unleashed. Instead, the technology has assumed a life of its own, self-accelerating with each generation and increasingly involved in building the next. We are not simply inventing a tool but constructing systems intended to speed up their own development indefinitely. We are sprinting toward a cliff.
Meanwhile, the physical world imposes constraints that much of the rhetoric surrounding AI conveniently minimizes. Large-scale AI systems already place extraordinary demands on energy infrastructure, water usage, semiconductor supply chains, and electrical grids. The utopian vision of AI eliminating disease, poverty, and war requires energy abundance, global infrastructure, and governance frameworks that do not exist.
The contradictions persist because modern technological culture treats acceleration itself as proof of legitimacy. That something can be built becomes evidence that it should be built, and speed substitutes for wisdom.
In Candide, Voltaire’s protagonist endures every conceivable catastrophe, including war, earthquake, enslavement, and the systematic destruction of everyone he loves. Throughout the novel, his tutor, Pangloss, insists with each fresh horror that all events unfold according to rational design in “the best of all possible worlds.” Pangloss never abandons this conviction, even while reality demolishes the idea that grand theoretical optimism has any purchase.
Pangloss represents the theorist who endlessly explains why the system must continue despite the visible suffering it produces. He is every modern technologist insisting that temporary disruption, inequality, or social instability are unfortunate but necessary transitional costs on the road to a transformed and better future. The optimism is sincere. The catastrophes it glosses over are real. And the prescription to trust the process because we are the responsible ones is precisely the grand theoretical framework that Voltaire spent his life attacking.
Voltaire’s target was not optimism alone. It was the human tendency to subordinate lived experience to grand explanatory systems. The conclusion of Candide is famous precisely because it refuses both utopian optimism and nihilistic despair: “Il faut cultiver notre jardin” (We must cultivate our garden). This line is often misunderstood as a retreat from public life, but that interpretation misses the deeper argument. Voltaire does not advocate disengagement from reality, but reengagement with it. The garden is not an escape; it is a rejection of grand systems, and a refusal to hand over moral agency to those who run them.
The mythology surrounding AI depends heavily on the idea that history now unfolds beyond ordinary human participation, that the future belongs primarily to the engineers, executives, investors, and geopolitical actors competing to shape it. Everyone else is positioned as a spectator, consumer, or eventual casualty.
Voltaire rejects precisely that kind of surrender. Tending the garden means being attentive to the concrete world immediately in front of us: the human beings affected by decisions, the institutions being reshaped, the labor displaced, the environments consumed, the civic structures weakened, and the psychological habits being altered by systems advancing faster than public understanding. This will neither stop the AI race nor magically solve structural problems, but it’s the only meaningful antidote to systems that increasingly reward acceleration detached from human scale.
The danger posed by AI is not only technological but philosophical, encouraging a view of humanity in which optimization replaces judgment, scale replaces intimacy, and inevitability replaces politics. That is why the response cannot simply be more acceleration managed by supposedly enlightened actors, nor resignation to the idea that systems have grown too large for human judgment to matter.
Voltaire understood that the gap between the world as theorized and the world as experienced is where most human suffering occurs. The AI industry increasingly lives inside that gap. The garden, at least, remains a place where reality still answers back.