AI Monopolies? Not So Fast.

When assessing present conflicts, there is a tendency to build a mental model from the last one. The age of social media was one of natural monopolies. Now it is taken for granted that the age of AI will be the same.

“Tech giants Microsoft and Alphabet/Google have seized a large lead in shaping our potentially A.I.-dominated future,” Daron Acemoglu and Simon Johnson write in the New York Times. “This is not good news. History has shown us that when the distribution of information is left in the hands of a few, the result is political and economic oppression. Without intervention, this history will repeat itself.” This lens — which lacks an understanding of the underlying technology — is uncritically taken to a literal Marxist conclusion: “We believe the A.I. revolution could even usher in the dark prophecies envisioned by Karl Marx over a century ago.”

The curse of every contrarian is for his views to be imitated by the crowd, twisted to the point of cliché. Peter Thiel believed he was stoking controversy when he wrote “Creative monopoly means new products that benefit everybody and sustainable profits for the creator. Competition means no profits for anybody, no meaningful differentiation, and a struggle for survival.” Now, regulators and columnists are assuming that all emerging technologies naturally become monopolistic. This simplistic view fails to account for the distinct qualities and dynamics of each sector. As a result, regulators confident that monopolies are inevitable are setting policies with those assumptions — and with complete disregard for the consequences for startups.

The idea that AI is the next great monopoly is a common belief driving regulators to take pre-emptive action. In July 2023, the Federal Trade Commission (FTC) released a blog post titled “Generative AI Raises Competition Concerns.” Lina Khan, the FTC’s chair, has made statements implying she believes a monopoly is inevitable, stating that the FTC wanted to act against AI monopolies “before it becomes fully fledged.”

Ganesh Sitaraman and Tejas N. Narechania, advisors to Senator Elizabeth Warren, articulated the same sentiment in Politico. “While AI might be new, the problems that arise from concentration in core technologies are not. To keep Big Tech from becoming an unregulated AI oligopoly, we should turn to the playbook regulators have used to address other industries that offer fundamental services, like electricity, telecommunications and banking services.” In an Iowa Law Review article, Narechania uses the term “natural monopoly” to express similar apprehensions: “I find that some machine-learning-based applications may be natural monopolies, particularly where the fixed costs of developing these applications and the computational costs of optimizing these systems are especially high, and where network effects are especially strong.”

But unlike these academics, the market still believes in competition; it’s positively planning on it.

Vertical Disassembly

Venture capital is flowing into competing layers throughout the AI pipeline. This pipeline can be broken down even further, but let’s proceed with the following six layers:

  1. Hardware (Nvidia, Groq)
  2. Infrastructure (AWS, Langchain)
  3. Data supply / generation / curation (Reddit data repositories, Large Model Systems/lmsys.org)
  4. Pre-training (OpenAI, Anthropic, Meta)
  5. Fine-tuning (Some done by base companies, but more widely available)
  6. Prompting / Software Chain / Audience Specialization

An important takeaway from the current AI ecosystem is the rapid diversification throughout the AI pipeline. What was once a single, vertically integrated process run by one company is being broken apart into specialized layers. As funding becomes more available and demand grows for niche applications of AI, the machine learning process is being sliced into ever narrower, more precise subprocesses. This is vertical disassembly.

The market intelligence firm CB Insights found that while most of the VC money is going into AI infrastructure, the number of deals is much higher in applications.

Even OpenAI, the most dominant player in building foundation models, is moving its strategy toward applications. OpenAI released a diversified slew of tools in quick succession: its voice model, improved fine-tuning, and the video model Sora. These are not the actions of a company that believes there will be a single winner-take-all model or even “artificial general intelligence,” a form of AI capable of automating all economically valuable work. Instead, they are the actions of a company that believes there will be specialized competition.

Many of these niches include resolving problems models encounter in specific contexts. “We were just surprised at how bad [unmodified GPT-4] was … because we were constraining it in interesting ways,” Geet Khosla, co-founder of Proemial AI, said. His company uses language models to parse a collection of scientific papers concurrently and communicate them in a simplified way. They’ve invested hundreds of hours into finding the most effective prompt pipeline for their task.

For Matthew Phillips, founder of Superflows.AI, the gap between base LLMs and his goal of explaining APIs is clear. He is aiming for a more consistent and predictable specialized model. “Asking novel questions is not the priority.” He worries that more complicated models come with tradeoffs to his process. Latency, the amount of time it takes for an LLM to process a request from start to finish, often trumps pure accuracy for Phillips. “We think a lot about latency. It really really matters.… The next generation of models will just be getting larger and larger. Even if you get a speed increase from hardware, the latency will be larger.”

This is characteristic of startups experimenting with “chain-of-thought,” a process of using multiple prompts or multiple models to process information in sequence. Latency versus accuracy is one example of the practical tradeoffs many AI startups face. As demand for differing priorities increases, economic theory predicts greater specialization of models into these niches. 
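To make the latency concern concrete, here is a minimal sketch of the kind of multi-stage pipeline described above. Everything in it is hypothetical: `call_model`, the model names, and the 10 ms delay are stand-ins rather than any founder’s real stack. The structural point is that sequential stages add their latencies together, which is why per-call speed matters so much.

```python
import time

def call_model(model: str, prompt: str) -> str:
    """Placeholder for a real API call; returns a canned response here."""
    time.sleep(0.01)  # stand-in for network + inference latency
    return f"[{model} output for: {prompt[:30]}...]"

def summarize_paper(paper_text: str) -> str:
    # Stage 1: a cheap, fast model extracts the key claims.
    claims = call_model("small-fast-model", f"List the key claims:\n{paper_text}")
    # Stage 2: a stronger model checks the claims against the source.
    checked = call_model("large-careful-model", f"Verify against the source:\n{claims}")
    # Stage 3: a fine-tuned model rewrites the result for a lay audience.
    return call_model("style-tuned-model", f"Explain simply:\n{checked}")

start = time.perf_counter()
result = summarize_paper("Example paper text ...")
elapsed = time.perf_counter() - start
# Three sequential calls: total latency is the *sum* of the stages.
```

If each stage uses a larger model, every hop gets slower, so a pipeline builder often prefers the smallest model that is adequate at each step.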

Matt Popovich, co-founder of Legislature AI, presents another type of tradeoff. His startup summarizes and critically examines state and federal legislation. While some users might want LLMs to assume any input they’re given is true by default, that’s a drawback for Legislature. “LLMs are really gullible … The preamble [to proposed laws] is a kind of prompt,” Matt told me. A model trained to efficiently accept new information as ground truth would struggle to judge proposed legislation. This isn’t just because of technical constraints, but because these two goals are fundamentally at odds.

However, Matt is concerned about the potential monopolization of base models. He recounts a story in which, due to payment processing problems, his entire research and product pipeline ground to a halt. Legislature was forced to switch to a different model entirely. “We have to have more redundancies and fallbacks.”
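The “redundancies and fallbacks” Matt describes can take a simple shape in code. This is a generic sketch, not Legislature’s actual implementation; the provider names and `call_provider` are placeholders for real API clients, with provider-a hard-coded to fail the way a billing outage would.

```python
def call_provider(provider: str, prompt: str) -> str:
    """Placeholder for a real API client; provider-a simulates an outage."""
    if provider == "provider-a":
        raise ConnectionError("billing/payment failure")
    return f"[{provider} answered: {prompt[:20]}...]"

def complete_with_fallback(prompt: str, providers: list[str]) -> str:
    """Try each provider in order; fall back to the next on any failure."""
    errors = []
    for provider in providers:
        try:
            return call_provider(provider, prompt)
        except Exception as exc:  # network errors, billing, rate limits, etc.
            errors.append(f"{provider}: {exc}")
    raise RuntimeError("all providers failed: " + "; ".join(errors))

answer = complete_with_fallback("Summarize this bill.", ["provider-a", "provider-b"])
# The pipeline keeps running even though provider-a is down.
```

Notice that redundancy like this only works if more than one viable base model exists, which is exactly why founders care whether the foundation layer stays competitive.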

Many of these divergences cover vertical competition — competition across the stages of producing a single AI model, from initial training to user consumption. Another axis is horizontal competition — competition among AI models specialized to a specific domain, such as biotech, robotics, video, or text. These forms of competition foster a dynamic ecosystem in which specialization and iteration are both rewarded.

A Return to Traditional Microeconomics

Traditional microeconomics predicts that competition enables optimization through specialization. As Adam Smith argued, the wealth of nations is built on the division of labor. This point is sharpened by Hayek’s knowledge problem:

“Today it is almost heresy to suggest that scientific knowledge is not the sum of all knowledge. But a little reflection will show that there is beyond question a body of very important but unorganized knowledge which cannot possibly be called scientific in the sense of knowledge of general rules: the knowledge of the particular circumstances of time and place. It is with respect to this that practically every individual has some advantage over all others because he possesses unique information of which beneficial use might be made, but of which use can be made only if the decisions depending on it are left to him or are made with his active cooperation.”

Thiel overturned this with his thesis that network effects, executed correctly, lead to natural monopolies. “Brand, scale, network effects, and technology in some combination define a monopoly; but to get them to work, you need to choose your market carefully and expand deliberately.” The accumulated participants in Facebook, Google, or Microsoft were inherently worth more than the fruits of a more specialized network.

Thiel’s analysis rests on the comparison of two variables: network effects and returns to specialization. In social media, the strength of the network improved dramatically as it grew. The second, often overlooked variable is the rate of improvement of smaller, specialized networks. In social media, specialized innovation never reached the pace required to compete with network effects. The same is not true for AI.

Compute and Comparative Advantage

A common argument for the winner-take-all position is that the ability to compete in the AI market will increasingly depend on access to large pools of compute. There is truth to this, at least at the foundation model level. GPT-4, a leading base model, reportedly cost one hundred million dollars to train.

Crucial to the comparison between smaller, more cost-efficient models and larger, expensive models is the idea of comparative advantage. Here is a quick summary from economist Noah Smith:

When most people hear the term “comparative advantage” for the first time, they immediately think of the wrong thing. They think the term means something along the lines of “who can do a thing better.” After all, if an AI is better than you at storytelling, or reading an MRI, it’s better compared to you, right? Except that’s not actually what comparative advantage means. The term for “who can do a thing better” is “competitive advantage,” or “absolute advantage.”

Those who think of market competition as akin to academic competition, in which the AI that does best at a benchmark automatically beats its competitors, are begging the monopoly question. Real life does not work like an academic competition. Many factors complicate the relationship between theoretical performance and practical application, most commonly cost. As several founders noted above, an easy problem that both a cheap and an expensive model can solve is best given to the cheap model. Mixture-of-depths and other algorithmic solutions are being attempted, but are not yet commercially viable.
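A toy arithmetic example shows why comparative advantage, not benchmark scores, decides who serves which task. All the numbers below are illustrative assumptions: the frontier model is better at everything per unit of its own compute (absolute advantage), but that compute is scarce, so routing easy work to an abundant small model raises total output.

```python
# Illustrative numbers, not benchmarks: the frontier model is better at
# everything per unit of its own compute, but that compute is scarce.
FRONTIER_CAPACITY = 1_000                   # units of scarce frontier compute
EASY_COST_FRONTIER, HARD_COST_FRONTIER = 1, 10
EASY_COST_SMALL = 2                         # small model is slower per easy task,
                                            # but its compute is abundant, and it
                                            # cannot do hard tasks at all

easy_demand, hard_demand = 2_000, 80

# Strategy A: the frontier model does everything it can by itself.
units_for_hard = hard_demand * HARD_COST_FRONTIER                 # 800 units
easy_served_a = (FRONTIER_CAPACITY - units_for_hard) // EASY_COST_FRONTIER
total_a = easy_served_a + hard_demand                             # 280 tasks served

# Strategy B: the small model takes all easy tasks (its comparative
# advantage), freeing frontier compute for the hard ones.
small_units_used = easy_demand * EASY_COST_SMALL                  # abundant, cheap units
total_b = easy_demand + hard_demand                               # 2,080 tasks served

assert total_b > total_a   # specialization wins despite absolute disadvantage
```

The small model is worse at everything, yet the market still rewards it, because the frontier model’s scarce capacity is most valuable on the hard tasks only it can do.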

None of this is to say a competitive market is guaranteed. The degree of hardware improvement, changes in algorithm structure, market distribution, and the policy environment will all play important roles in determining the future AI market. 

However, there is reason to expect continued tradeoffs in algorithmic improvements. Empirical results show that many fine-tuning techniques reduce a model’s performance on tasks for which it is not specialized. This is known as an alignment tax: if an AI is optimized to conform to a stricter form, function, or ideology, its performance on other tasks will suffer. This has been observed across a variety of models, including by OpenAI.

Comparative advantage shows that AI monopolies are not inevitable. While large-scale computing power is important, the diverse and evolving AI market allows smaller, specialized models to compete successfully.

Policy Matters

Monopoly fatalism can be a self-fulfilling prophecy. If regulators assume the market will inevitably be dominated by monopolies, they will write regulations suited to those monopolies. Fantasies about inevitable market concentration or runaway development of AI lead to concrete regulatory harms to almost every company shown in the earlier graphic. By inflicting legal and compliance costs on startups, the US government creates unnecessary barriers to entry which make monopolies far more likely. One example of a monopoly-solidifying policy is the Sam Altman-endorsed AI licensing plan, which would create a regulatory agency that decides which AI companies can and cannot operate. 

The National Telecommunications and Information Administration’s list of harms and risks

Some anti-competitive policies are already being solidified. The Biden Executive Order on AI directed the National Telecommunications and Information Administration to commission a report on “AI Accountability,” which threatened to impose costs “across the AI lifecycle and value chain.” This regulatory approach needlessly targets companies that have no impact on the “harms and risks” identified by the report, listed above. Companies that are developing infrastructure, hardware, and developer tools have no control over the information given to users, making it impossible for them to address the above harms and risks in any material way. 

Instead of focusing regulatory action on a single point of intervention, which would minimize costs for both companies and the agency itself, the report recklessly expands the regulatory domain, risking exactly the anticompetitive environment commentators fear in AI.

This is the self-fulfilling prophecy: the more regulators assume the AI industry is monopolistic, the more they will act to make it so. Mergers with no synergy that reduce efficiency become justified by compliance costs. Marginal startups exploring alternative frameworks for hardware, training, fine-tuning, or deployment will no longer be able to operate under the weight of those costs.

The natural monopoly narrative captures a particular moment in time — the social media age — with specific economic incentives. In the long term, the story of specialization and trade told by traditional microeconomics is backed up by empirical results, industry strategy, and practical experience. Tradeoffs in cost, performance, and application are ubiquitous in current AI products. The current startup economy is investing heavily into specialization as a solution to these problems. The result is that a varied and competitive market is far more likely than commentators and policymakers believe.

Nonetheless, the mainstream monopoly argument is not to be entirely ignored. It raises correct points about the concentration of talent, legacy distribution, the barriers to entry of creating base models from scratch, and potential regulatory capture. Both sides of the ledger must be considered.

The AI industry is in a rapidly fluctuating state. Many technological and business questions have yet to be answered. Monopoly is neither inevitable nor impossible. In other words: decisions matter. The decisions of individual founders, leading companies, and regulators will all factor into the state of the AI market in ten years. 

Today, thousands of founders are inspired by a vision of market competition. In AI, as in most areas of life, nihilism and fatalism contradict the evidence. In each of our own endeavors, we can learn something from that competitive spirit.