If you have paid attention to artificial intelligence at all in the past two years, there is a good chance that you have heard more about its risks than its benefits. Some believe you should be worried about existential or catastrophic risks—the notion that AI systems may one day be so powerful that they could exterminate humanity or be used by malicious humans to cause widespread harm. Others think that category of risks is “hype” or “marketing,” and that you should instead focus on a variety of alleged “present-day” harms of AI such as misinformation and discrimination. Perhaps the central debate in AI discourse of the last two years is not whether you should fixate on risk, but instead which kind of risk should be your primary interest.
This alone is a remarkable fact. There is no other general-purpose technology in human history that entered society with such obsession over its risks. It isn’t healthy. Most risk prognosticators are happy to pay lip service to the “benefits” of AI, but these are almost invariably ill-defined—“curing disease,” “helping with climate change,” and the like. But what, really, are the benefits? How will they be realized? Why, after all, should we bear all these supposed risks? What are we striving for? Our answers to these questions are shockingly under-developed. Too often, we rely on platitudes to describe what many now agree will be the most important technology transition of our era, if not of all time.
Last October, Dario Amodei, CEO of the frontier AI company Anthropic (maker of the Claude models), tried to fill this void with an essay called “Machines of Loving Grace,” its title borrowed from a poem by Richard Brautigan. It is among the most sophisticated and concrete treatments yet of a crucial topic: what, precisely, does it mean for “AI” to “go well”? The essay envisions the rapid development of what Amodei calls “powerful AI” (what others might call “artificial general intelligence” or even “artificial superintelligence”), enabling a century’s worth of scientific progress to be compressed into a decade or so and perhaps even securing the long-term hegemony of Western democracies.
Amodei deserves praise for this effort; still, his essay leaves some unanswered questions. What would it take for America—or any country—to realize the benefits of AI on the timescales he imagines? How would America and its allies regain unquestioned global supremacy, and would the process of doing so itself provoke a war? Perhaps most importantly, how will average humans—the ones who do not know about the latest advancements in AI, the ones who merely want “life, liberty, and property”—contend with the arrival of a new ‘superintelligent’ entity?
Amodei and his Anthropic co-founders worked at OpenAI until 2021, when they left that company, ostensibly out of concern over OpenAI’s lackadaisical approach to AI risk. Anthropic, in their minds, would be the safety-focused AI company. It has retained that reputation to the present day, garnering cheers from those concerned about existential risk from AI, and occasional eyerolls from those of the “accelerationist” persuasion. Still, nearly everyone agrees Anthropic’s Claude models have been among the very best in the world since early 2024.
To the most orthodox AI safetyists, a prominent long-form essay from the CEO of this risk-focused company on the benefits of AI may be alarming. Yet it reflects one of many quirks of America’s “AI community”: those most concerned about the major risks of AI tend also to be the ones most bullish about the technology’s potential. Indeed, their conviction about the near-term ability of AI companies to transform what we today call “a chatbot” into superintelligence explains their concern about the risks. On the other side, the accelerationists tend to be somewhat more pessimistic about the near-term potential of the technology (particularly its potential to cause doom) but focused aggressively on the long-term benefits. Maybe the doomers are the real optimists.
Amodei opened his essay with this: “I think that most people are underestimating just how radical the upside of AI could be, just as I think most people are underestimating how bad the risks could be.”
The intramural dispute within the AI world will continue, but it will likely diminish as AI itself becomes more capable. It was easy to dismiss language models as “stochastic parrots” when the technology was more nascent (way back in the summer of 2024), but as of this writing, in January 2025, OpenAI’s o3 model appears to rank among the very best human coders and mathematicians on Earth.
The debate has shifted yet again, the goalposts teleported; now, the AI community debates whether the language models will be superhuman only at coding and math or in other domains as well. Some, like the perennial AI critic Gary Marcus, criticize the language models for not being humanoid robots, unable to cook meals or clean dishes. Online fights aside, though, it increasingly feels as though we are living in Amodei’s world—the world in which we are, in fact, on a rapid trajectory to “powerful AI,” the world in which “superintelligence” is not an abstraction or a literary device, but an app on a phone, an open tab in a web browser, a voice on your kitchen table, or in a military outpost.
But what is “powerful AI,” exactly? Amodei favors this term because it avoids the “sci-fi baggage and hype” associated with terms like “AGI” or “superintelligence.” This baggage, however, is carried almost exclusively by the vanishingly small number of people within “the AI community.” For almost everyone else on Earth, Amodei’s definition of “powerful AI” will sound very much like science fiction. He imagines an AI system that “is smarter than a Nobel Prize winner” at endeavors like “biology, programming, math, engineering, [and] writing” while processing information orders of magnitude faster than humans—all while being trivially easy to run in millions of separate instances. It will be like having, as Amodei says, “a country of geniuses in a data center.”
Amodei is quick to point out, however, that this “country of geniuses” will have its limits. Intelligence, he reminds us, is not magic; we cannot subvert the laws of nature with it, nor can we even predict the behavior of many complex phenomena in nature all that much better than we can today. There is no reason to suspect, a priori, that the powerful AI he envisions will be able to, say, “know” the recipe for creating a room-temperature superconductor, or how to cure cancer, or how to solve nuclear fusion. That is because intelligence is not exclusively or even primarily about knowing facts. It is about solving problems, about navigating through intellectual inquiry. Intelligence is searching.
Instead of conceiving of AI as a genie, Amodei encourages us to think about the “marginal returns to intelligence”—that is, to consider what specific endeavors can be aided by adding more intelligence to the equation, and how much that extra intelligence helps. It is easy to imagine how more intelligence might help cure cancers or create better hypersonic weapons. But some problems, like the overall structure of the American healthcare or defense procurement systems, seem less susceptible to “intelligence” per se. Sure, an AI system might have creative ideas for how to improve those systems, but so have many humans for decades. The problem, instead, is one of political will, and it is not obvious that intelligence itself solves this bottleneck.
Amodei’s more specific question—which problems exhibit high marginal returns to intelligence?—helps us to build an intuition about where AI might transform the world quickly. Amodei believes that AI might specifically help accelerate innovations in science related to measurement and methodology. It may sound mundane, but superior forms of measurement are at the heart of many technological breakthroughs, because by being able to measure something more precisely, you can often control it more precisely as well. What’s more, innovations of this kind are often pioneered by small teams or even individual researchers, suggesting that they really are driven by ingenuity and intellect.
Amodei’s theory of the case is plausible. He focuses especially on biology and neuroscience, where he believes that automated scientific invention could result in curing nearly all major diseases and extending the human lifespan.
Breakthroughs like these would almost certainly require achievements beyond scientific and technological advancement. How would the FDA respond to fully personalized medical treatments, where drugs are customized for each patient and therefore cannot be validated through conventional clinical trials? Innovations of this kind will require meaningful changes to current policy, and if we do not make those changes we risk stumbling into the exact kind of stasis that characterizes Europe today. But will we have the fortitude to update our regulatory institutions for the new world Amodei imagines? It is in grappling with problems like this that “Machines of Loving Grace,” and much of the other writing about potential AI futures (good or bad), falls flat.
In the example above, Amodei does mention that the current structure of FDA clinical trials is likely a barrier to the world he envisions. Perhaps, he reasons, AI can help with this problem too, for example by “helping to improve the clinical trial system” or “helping to create new jurisdictions where clinical trials have less bureaucracy.” Without a doubt, AI systems will be able to come up with all sorts of creative ideas for solving our policy problems (the best of the current models are already competent at this). But whether those ideas will matter in the political process is an entirely separate question.
It is unfortunately easy to imagine a world in which the United States does not properly adapt its regulatory climate to AI, and even adds regulation of its own to AI itself (state legislatures will see the better part of one thousand AI-related bills proposed in 2025 alone; there were nearly 650 last year). In this world, the use of AI by legitimate, law-abiding actors would be curtailed, and many of the benefits would be delayed or simply outlawed. But the worst uses of AI—those undertaken by malicious actors—would not be hindered at all, since those actors, by definition, disregard the law.
This is not some fantastical scenario. And finding a way to allow institutional innovation and entrepreneurship within the most rule-bound parts of our society is, perhaps, the central challenge of the thing we today describe as “AI policy.” It is up to people like Amodei—a widely respected figure in AI who has predicted the future more accurately than almost anyone else—to illuminate this profound challenge. “Machines of Loving Grace,” however, only gestures at it.
There is another political dimension that figures heavily into Amodei’s thinking: geopolitics. In short: Amodei believes that America, and the West more broadly, must lead the world in advancing AI and related technologies so that we can secure enduring dominance over our adversaries—namely, China. If we do this, Amodei argues, we can secure “an eternal 1991,” finally bringing the end of history, and the victory of capitalist liberalism, to fruition.
In contrast to the rest of “Machines of Loving Grace,” Amodei here is quite conventional; one would be hard-pressed to find a frontier AI executive, DC policymaker or analyst, or other relevant member of American elite society who does not agree with this general sentiment—even if they wouldn’t say that AI can bring about the end of history. And Amodei is also mirroring the argument made in 2024’s other AI-focused mega-essay: Leopold Aschenbrenner’s Situational Awareness.
Conventional thinking is by no means always wrong, but when a huge proportion of American elites begin to repeat the same argument, it is worth taking a closer look.
Say that America does maintain its lead in AI, and that all the capabilities Amodei expects near-future systems to have are realized. And say we reform our environmental permitting and other regulations so that we can build all the energy infrastructure, semiconductor manufacturing facilities, and data centers we need to run those models—all while keeping advanced AI chips out of China’s hands (the latter is dubious, and maybe irrelevant).
What happens next? How quickly does superintelligence enable, as Amodei puts it, “robust military superiority”? Superintelligence-driven militaries will presumably not just outthink their foes—they will need to be able to destroy, or otherwise render inoperable, things in the physical world in ways radically more effective than anything possible today. We will need to build new weapons. Will doing so require technology alone, or will we also need to transform the Pentagon’s procurement and other bureaucratic procedures? How do we expect China, and every other country on Earth, to react to America’s explicit plan to accelerate AI to secure enduring American hegemony? In some ways, the most potent adversary of American technology is in fact Europe, with its endless, and often overtly anti-American, regulations.
How confident are we that each of these questions will work out in our favor? Are we confident enough to make this vague strategy the operating plan for the rest of the decade? Are we confident enough to risk untold trillions? Are we confident enough to risk starting a war? Should we be thinking about this more carefully?
It is not, necessarily, that Amodei’s plan is “bad.” Most Americans probably like the sound of a world in which our country secures a renewed lease on global hegemony. But just like the definition of terms like “AGI” or “superintelligence,” this plan is woefully under-specified, and when you think about it in more detail, serious questions start to emerge.
Those questions can, at least in principle, be answered. But turning AI into an enduring source of national strength will require far more than just having the best models, since experience has shown us that these can often be quickly and easily replicated by our friends and enemies alike. Instead, it will require building new capabilities with AI, creating wholly new inventions, and engaging in the institutional innovation that allows those new creations to be built and flourish.
A country with excess intelligence but dysfunctional institutions may not be a superpower for long.
Perhaps this grander task helps us to answer, at least in part, the questions Amodei leaves us with at the end of “Machines of Loving Grace.” The concluding section is titled, “Work and Meaning,” and it grapples with questions like “with AIs doing everything, how will humans have meaning?” and “how will [humans] survive economically?” As Amodei admits, these questions are “more difficult than the others” and have a “lack of clear answers.” As you might imagine, Amodei believes that AI will eventually become so powerful that “our current economic setup will no longer make sense,” necessitating “a broader societal conversation about how the economy should be organized.”
Maybe that conversation will come to pass, or maybe not. Regardless, though, the arrival of machines with greater intelligence than all of us will invite new questions about what it is, exactly, that we humans should be doing with our time. One of those questions will be whether we want our machines of loving grace to “watch over” us, in the first place.
Perhaps, rather than conceiving of AI as something that “watches over” humans, we should conceive of it as a new kind of tool—or even a force of nature we have discovered—that we use to ascend to new heights. To do this, though, we will need to build the kind of society that cultivates such ambition in all productive domains of human life. Perhaps this is the new “economic setup” to which we should aspire, rather than one based on preemptive safety, unending “risk management,” and universal basic income.
Viewed in this light, the better purpose of “AI policy” is not to create guardrails for AI—though most people agree some guardrails will be needed. Instead, our task is to create the institutions we will need for a world transformed by AI—the mechanisms required to make the most of a novus ordo seclorum. America leads the world in AI development; she must also lead the world in the governance of AI, just as our Constitution has lit the Earth for two-and-a-half centuries. To describe this undertaking in shrill and quarrelsome terms like “AI policy” or, worse yet, “AI regulation,” falls far short of the job that is before us.
It is a hard job. America’s current institutions are sticky, deeply entrenched in our lives in countless ways. But if we can do this—if we can reinvent the contemporary technocratic state to pave the way for a new world, if we can discard the old statecraft to make way for the new—it would be a gift to humanity at least as large as powerful AI itself.
Can we do it? We do not know. Such is life in our humble commercial republic, operating at the outer conceptual extreme of what is possible in the free world. But if we are to succeed, we will need to do much better than nebulous platitudes about “maximizing benefits and minimizing risks.” We will need to approach our task with gravity and depth. And we will need to have an ideal for which to fight. Amodei’s essay paves the way, but as ever with life at the frontier, our work has just begun.