
Technology

Nov 10, 2025

Dwarkesh Patel's Chronicle of Intelligence

Reviewed - The Scaling Era: An Oral History of AI, 2019-2025 by Dwarkesh Patel with Gavin Leech.


One of the most revealing sentences in Dwarkesh Patel’s new book is a footnote attached to the title of chapter eight: “It is important to note that each interviewee uses a different definition of AGI.” In that last and shortest chapter, luminaries from the world of artificial intelligence make predictions (or decline to) about the timeline for Artificial General Intelligence, or AGI.

As the footnote suggests, AGI can mean many things to many people — both in terms of what qualifies as AGI and what it will mean for the world. Patel, host of the popular Dwarkesh Podcast, is uniquely able to give readers the full spectrum of opinion. He does so by curating interviews with a dozen and a half subjects from his show into one book.

The book, out now via Stripe Press, is The Scaling Era: An Oral History of AI, 2019-2025, co-written with Gavin Leech, former head of research at the Dwarkesh Podcast. A sort of podcast anthology, The Scaling Era doesn’t have one flashy thesis. Rather, the authors (shall we call them curators?) are happy to mostly let the guests speak. The result may be a first: a distillation, in print, of more than a hundred hours of interviews from a single podcast.

It’s quite the rolodex: names that many will know, like Mark Zuckerberg and Tyler Cowen, and names that few outside the AI scene would, like Carl Shulman and Sholto Douglas. Naturally, many of the guests disagree, including about definitions.

The Dwarkesh Podcast has enjoyed an explosion in popularity, growing from tens of thousands of YouTube subscribers at the beginning of 2024 to a million today. Millions are tuning in, from frontier researchers to schoolchildren. Patel says he often spends a week or more researching a single interview, taking the approach of an investigative journalist. His guests have included nearly all the biggest names in AI research, neuroscience, and intelligence. One could say that Patel’s main strength as an interviewer is coming up with big, urgent questions. As it turns out, that strength shines through most in discussions about geopolitics and AI. Those worlds collide in two of The Scaling Era’s final chapters about the possible impacts of AI on the world, and especially on conflicts between states.

By curating questions and answers, Patel gives readers a picture of the hopes, fears, objections, and projections of some of AI’s leading thinkers and practitioners.

The Scaling Era’s chapters are organized around eight concepts in AI research: scaling, evals, internals, safety, inputs, impact, explosion, and timelines. There’s a bit of background at the beginning of each chapter explaining to a nontechnical audience what, for example, a “benchmark test” is (think of it as a much longer, more challenging SAT for measuring the strength of a large language model). But Patel and Leech are really just setting the scene. They ultimately construct the “argument” with excerpts from guests on the Dwarkesh Podcast. In the margins next to blocks of screenplay-like podcast text, unfamiliar terms are defined. (If you’ve never heard of a residual stream or inference, these notes are for you.)

A core theme of the book, as its name suggests, is scaling. Scaling is what it sounds like: how do we scale up AI models efficiently and effectively? Much of the scaling discussion stems from a short essay published in 2019 by Richard Sutton, a professor of computer science at the University of Alberta. “The Bitter Lesson” is reprinted in the appendix of The Scaling Era. Sutton’s bitter lesson is, in short, that scaling raw computational power (abbreviated as “compute”) has always been more successful than trying to build human knowledge into AI systems — as happened in computer chess with IBM’s Deep Blue, and again in speech recognition. “We have to learn the bitter lesson that building in how we think we think does not work in the long run,” Sutton wrote. In other words, “tricks” to make machine learning models smarter, like changing the model architecture, are far less effective than scaling every input. What matters is the scale of the data, the scale of the model, and the scale of training compute.

Between 2020 and 2022, OpenAI and DeepMind released research on “scaling laws,” illustrating a predictive relationship between the amount of data, the amount of compute, and the number of parameters a model needs to sufficiently learn the underlying data distribution for a given set of tasks. Since then, “scaling laws” has come to refer to related phenomena as well, such as the way growing model size correlates with increasing accuracy, generalizability, and “intelligence.”
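To give a feel for what such a law looks like, here is a minimal sketch in Python of the power-law form DeepMind reported in its 2022 “Chinchilla” paper. The constants below are rough approximations of the published fit and are meant purely as illustration, not as authoritative values.

    # A minimal sketch of a Chinchilla-style scaling law: predicted training loss
    # as a function of parameter count N and training tokens D.
    # Constants are rough approximations of DeepMind's published fit,
    # used here only for illustration.

    def predicted_loss(n_params: float, n_tokens: float) -> float:
        E = 1.69                 # irreducible loss of the data distribution
        A, alpha = 406.4, 0.34   # penalty for too few parameters
        B, beta = 410.7, 0.28    # penalty for too little data
        return E + A / n_params**alpha + B / n_tokens**beta

    # Doubling both model size and training data lowers the predicted loss,
    # smoothly and predictably; that predictability is what makes it a "law."
    print(predicted_loss(70e9, 1.4e12))    # roughly Chinchilla-scale inputs
    print(predicted_loss(140e9, 2.8e12))   # twice the parameters and tokens

The point of such a curve is not its exact coefficients but its shape: spend more on parameters, data, and compute, and loss falls along a smooth power law, which lets labs forecast the returns on the next, larger training run.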

These inputs form tight bottlenecks for AI development. There is only so much data in existence (most of it online), and it may not be enough for the next generation of models that might qualify as AGI — and that’s not even counting the immense power and infrastructure needed to train them.

Jared Kaplan, cofounder of Anthropic, puts the scale of an AGI training run at 10²⁹ or 10³⁰ floating point operations (FLOPs). To put that in perspective, the largest current AI models are trained with around 10²⁵ FLOPs, so training would need to scale by ten thousand to a hundred thousand times from here. Kaplan thinks we’ll reach that by 2030, though he gives a ten to thirty percent chance that he’s “just kind of nuts.” That colorful humility from AI pioneers is characteristic of discussions with Dwarkesh.
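For readers who want the gap made concrete, a quick back-of-the-envelope check using the figures quoted above (a sketch of the arithmetic, not anything from the book itself):

    # Rough arithmetic on the gap Kaplan describes, using the figures quoted above.
    current_run = 1e25                 # approximate FLOPs of today's largest training runs
    agi_low, agi_high = 1e29, 1e30     # Kaplan's estimated range for an AGI-scale run

    print(agi_low / current_run)       # 1e4: ten thousand times more compute
    print(agi_high / current_run)      # 1e5: a hundred thousand times more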



Throughout The Scaling Era, it’s a given that AI is a big deal, and the biggest current deal. Most frontier researchers believe that we will reach human-level intelligence soon on many, if not most, economically useful tasks (in some cases, we have already). Almost everything that humans do to make money — sell insurance, diagnose illnesses, develop software — will be within the reach of AIs.

That confidence stems from the idea that LLMs are actually “intelligent” — that they can reason, not just match patterns of words in their training data (though they do that too). It’s clear that beyond a certain size, models understand associations between concepts, much as humans do, but it’s not quite clear how. Patel and Leech explain the paradox: “in theory, with the right data, it could perform any task… [but] people who work with it know to constantly doubt its output. It’s like having a brilliant but sometimes amnesiac coworker.”

Most of Patel’s guests — and Patel himself — argue that even though the underlying architecture and training process seem relatively simple, even sub-intelligent, at first glance, the resulting systems are complex and hard for us to understand, which makes it hard to say for certain that they don’t possess “intelligence,” a term that has no well-accepted definition. After all, the scope of tasks an AGI should be able to solve, and what constitutes “human-level” capability on them, is large, often hard to measure, and amorphous.

For some readers, the big questions will be less philosophical and more practical: future scenarios of “AI risk.” Patel and his guests evaluate a number of such scenarios. Can AI overthrow governments?

“Is there any hope that having leverage over the complex global supply chains that advanced rogue AIs would initially rely on to accomplish their goals would make it easy to disrupt their behavior?” Patel asks Carl Shulman, an advisor at Open Philanthropy.

To some, a scenario like this may seem fanciful. But whether or not one is persuaded, The Scaling Era presents some very real views from people who are influential at the highest levels of AI about its possible risks.

These problems fall under the broad heading of alignment: the study of getting a sufficiently intelligent AI to act and think as humans want, in line with human values and morals. Because we lack a thorough understanding of how LLMs “think,” and because it is hard to effectively manipulate what is essentially a huge matrix of decimals, alignment is a difficult problem to solve.

The “shoggoth with a smiley-face mask” meme perhaps best encapsulates the challenge of aligning and understanding LLMs. The shoggoth — a fictional monster created by H.P. Lovecraft as a “vast, indescribable thing,” shapeless, indiscernible, eerie, and all-encompassing — represents LLMs’ unpredictability and the risks that come with it. Reinforcement Learning from Human Feedback (RLHF), a common method of imbuing an LLM with human preferences, is the smiley-face mask: an almost hilariously flimsy layer on top of the shoggoth. The image has since become a pessimistic shorthand for all attempts at LLM alignment.

Like many of the large language models it covers, The Scaling Era has a “knowledge cutoff”: November 2024. In the eleven months since, the world of AI has kept moving at a rapid pace. In one way this is a weakness for a work that aims to cover its subject comprehensively. But as an oral history, a snapshot of what smart people thought during a crucial period, it works. No book that goes to print can avoid leaving some things out.

Patel and Leech mostly don’t point readers to a “correct” conclusion from a set of conversations whose participants disagree in meaningful ways. At times this is frustrating, particularly for readers looking for clear answers about such an important and constantly evolving field, but it’s kind of the point, too. A multiplicity of perspectives is, after all, inherent to the book’s format. In choosing to preserve rather than resolve the contradictions, The Scaling Era offers something rare: an honest portrait of uncertainty at the frontier of human knowledge.

About the Author

Neha Desaraju does AGI quantitative finance with Squaretower, and works with SF Parc to create homes for ambitious technologists. She is on X @nehadesaraju.

Copyright © 2025 Intergalactic Media Corporation of America - All rights reserved
