The Internet was built on a deal between companies and customers, one that we ought to understand. Companies provide a cornucopia of free, or almost-free, content and services, but in exchange we provide companies with data that helps them direct tailored advertisements to us.
The Federal Trade Commission, or FTC, does not like this deal. Under the twin pretenses of privacy and data security, the FTC wants to make a new Internet, one more suited to the preferences of legal academics and bureaucrats than consumers. The new Internet would only permit companies to gather the data the government deems necessary.
The FTC has set out to create an Internet New Deal that would end “Surveillance Capitalism,” as they call it, and mandate “data minimization” as a means to stop the unjust “monetization” of data. For companies and consumers, it would be a bad deal.
“The Nation’s Privacy Agency”
The FTC recently said that “[f]or more than two decades, the Commission has been the nation’s privacy agency.” Technically, it is not. No one has designated the FTC the director of privacy. However, since the early 2000s, the FTC has taken that job upon itself, based on its own interpretation of vague congressional language. There are several laws in the United States that concern privacy, from the Health Insurance Portability and Accountability Act (HIPAA) to the Fair Credit Reporting Act. The FTC, as part of its general mission to protect consumers, monitors companies’ adherence to these laws. But much of the FTC’s privacy and data security work is based on a law dating from the 1930s that bans “unfair or deceptive acts or practices.” The agency has argued that certain uses of consumer data are “deceptive” if the consumer does not fully and enthusiastically consent to each specific use.
The FTC’s privacy and data security campaign has tended to operate through lawsuits instead of regulations. Lawsuits give the FTC more discretion than rulemaking. In its 2016 report “Big Data: A Tool for Inclusion or Exclusion?” the FTC noted that “[o]nly a fact-specific analysis will ultimately determine whether a practice is subject to or violates” the Fair Credit Reporting Act. For unfair and deceptive acts, the FTC’s “inquiry will be fact-specific, and in every case, the test will depend” on how the company uses data. A “case-specific inquiry” is also necessary to evaluate adherence to discrimination laws, thus “companies should proceed with caution.”
Most data cases brought by the FTC end in consent orders which require companies to rework their business or appoint FTC-approved monitors. The power of the agency is substantial, and almost all companies comply rather than fight back. When companies do fight back, the agency does not come out looking great. In 2016, the FTC sued the medical testing business LabMD. The supposed consumer harm was so hypothetical and the order was so vague and sweeping that the FTC lost the case in front of its own administrative law judge, a near-unheard-of event. The judge determined that the FTC had shown only the “possibility” of consumer harm, but not its “probability.” The FTC, as it is able, then overruled its own judge, before finally losing the case before a federal appeals court. A three-judge panel for the 11th Circuit wrote, in its decision vacating the FTC’s order, “It does not enjoin a specific act or practice. Instead, it mandates a complete overhaul of LabMD’s data-security program and says precious little about how this is to be accomplished.” It was a Pyrrhic victory: by that point, the years of litigation had pushed LabMD out of business.
The FTC is not shy about treating its consent orders as de facto laws for whole industries. After a settlement with Google over privacy practices, FTC Senior Attorney Lesley Fair suggested that “savvy executives” in the tech world should learn how to restructure their companies based on the consent order. In another post discussing the privacy case against the Texas-based digital marketing company InMarket Media, Fair said that “[s]avvy companies will take the time to read the InMarket complaint carefully to glean important compliance guidance about what constitutes effective consumer consent.” Whether the entire tech industry has time to read a single FTC complaint and reshape its practices accordingly is a question left unasked.
The Ideology of the Khan FTC
Older FTC actions focused on the deception of consumers, and thus their consent orders generally mandated more notices of data gathering and requests for consumer consent. The result was a substantively worse Internet experience with little upside: websites filled with unending pop-ups, and unread privacy emails flooding our inboxes. Since the beginning of the Biden administration, however, new officials at the FTC have taken a much more radical perspective on data collection: they wish to ban the gathering of many kinds of data outright — consent or no consent.
FTC officials have taken inspiration from “The Age of Surveillance Capitalism,” a 2018 book by Harvard Business School Professor Shoshana Zuboff. This Marxist-inflected work treats data gathering or “commercial surveillance” as the first step in the transformation of humans into commodities in an evil capitalist mill. Or, as Zuboff states, in typical Marxist polysyllabic verbiage, commercial surveillance is part of the “parasitic economic logic in which the production of goods and services is subordinated to a new global architecture of behavioral modification” based on the “extraction imperative” of companies to acquire “behavioral surplus.”
These ideas have taken over a bureaucratic agency with broad powers to enforce them on Americans. The implications for the economy are startling when spelled out. In a recent press release the agency said, “The FTC is concerned that companies collect vast troves of consumer information.” If the nefarious gathering of data had not yet horrified readers, the agency went on: “The FTC is concerned that companies use algorithms and automated systems to analyze the information they collect.” And if the reader is not alarmed by the combination of gathering and analyzing data, there’s more: “The FTC is concerned that companies monetize surveillance in a wide variety of ways.” The FTC’s clear goal is to stop the gathering, analyzing, and monetization of data. Nowhere is the anti-capitalist ethos behind the campaign against “commercial surveillance” clearer.
The FTC has now adopted scare terms such as “commercial surveillance,” which, as one commentator noted, is an “ominous-sounding term” that basically means data held by companies. The FTC has also started using other phrases common in the academic world of data regulation, such as “dark patterns,” meaning designs or features that encourage people to do one thing instead of another, a definition that could apply to every product’s packaging since the dawn of time. One University of Chicago Business Law Review article alluded to “dark patterns” that use “various psychological biases” to push people toward consenting to data gathering, specifically cookies. The article said such dark patterns are “unfair and deceptive” under the FTC Act. (The webpage for the article itself does not include an explicit cookie consent option, but tells readers that it uses cookies and offers a large “dismiss” button.) The FTC worries that companies employ “dark patterns or marketing to influence or coerce consumers into choices they otherwise would not make, including purchases.” The agency does not tell us when it learned that marketing can make people purchase things.
This anti-commercial ideology has been fully adopted by the leaders of the FTC. Chair Lina Khan has received significant attention for her opposition to big business, but less attention has been paid to her opposition to big data. The two are related in her mind. In the law review article that launched Khan’s career, “Amazon’s Antitrust Paradox,” she noted her concern that browsing e-books on Amazon “hands the company information about your reading habits and preferences, data the company uses to tailor recommendations and future deals.” Amazon, or other big companies, could use such data to improve services to customers, which leads to more customers and more data in a virtuous, or to Khan’s eyes, vicious loop. Khan also noted that large companies could have correspondingly large data breaches (ignoring that, if anything, smaller companies with ad-hoc security protocols are more likely to suffer data breaches). To Khan, big data reinforces big business, and both are irredeemable.
Khan wrote another, less famous law review article specifically on online privacy. In this article, written with a fellow Columbia law professor, Khan highlighted the supposed horrors that accompany data gathering in general, such as “predatory advertising,” “enabling discrimination” and “the spread of disinformation.” But the basic problem in her mind was with “business models that demand pervasive surveillance” under “’surveillance capitalism.’” She attacked other methods of regulating big data, such as requiring companies to be “fiduciaries” of customer data, because they “characterize[] Facebook, Google, Twitter, and other online platforms as fundamentally trustworthy actors,” as opposed to, seemingly, malevolent behemoths. She suggested turning big platforms into common carriers subject to old-fashioned Progressive-Era regulation, and creating “bright-line prohibitions on specific modes of earning revenue” and collecting data. While Khan has advocated an end to the “consumer welfare” standard in antitrust, which states that the only goal of antitrust should be to help consumers, she has also seemed to advocate an end to the consumer harm standard in privacy and data breach cases, with data gathering itself being the problem instead of any particular harm it causes.
After her appointment to the FTC, Khan offered what was called her first major address on privacy as chair at the Global Privacy Summit in 2022. She elaborated on her previous arguments about data and said that firms should not “condition access to critical technologies and opportunities on users having to surrender to commercial surveillance.” She bemoaned the “lack of legal limits on what types of information can be monetized.” She said that giving notice and getting consent was insufficient, since Americans should question whether “certain types of data collection and processing should be permitted in the first place.”
The two other Democratic commissioners on the FTC are also full-throated in their opposition to “notice and consent,” as well as their advocacy for restructuring the modern world of data. Commissioner Rebecca Slaughter claims that “consumers have no choice” about agreeing to many data exchanges. She said “I’m concerned that a market based around leveraging massive amounts of people’s data generates harms that extend well beyond traditional privacy concerns,” including putative issues of civil rights, discrimination, “misinformation,” and so forth. She claims that the internet “encourages leveraging huge amounts of data as a surveillance business model and then turning those data into products,” which might be a short description of why the American technology economy works so well, not a threat to be addressed.
The third commissioner in the Democratic majority, Alvaro Bedoya, has built his career on data privacy and previously headed a privacy center at Georgetown Law. Soon after his confirmation, Bedoya said that “Our nation is the world’s unquestioned leader on technology and the data economy. And yet we are almost alone in our lack of meaningful protections” against the use of such data. He seemed not to consider that the latter helped explain the former. He has attacked the idea of “notice and choice,” railed against “commercial surveillance,” and said that instead of garnering consent, “many technologies should never be deployed in the first place.”
Samuel Levine, the head of the Bureau of Consumer Protection, which enforces data laws for the FTC, is of a similar mind. Levine referenced the problems with “surveillance capitalism” this year and argued that “the scope of today’s surveillance economy calls into question whether existing tools and approaches are sufficient. In particular, I think it is fair to ask whether notice and choice can protect privacy.” He says that for the FTC the “first goal, quite simply, is an internet with less surveillance… At a high level, the solution is straightforward. Firms need to collect and retain less data about us, and secure it better. Yet the behavioral ad-driven business model that has shaped the internet for decades has pushed firms in the opposite direction.” Now, ending that ad-driven business model is the FTC’s aim.
The FTC Opens the Campaign
The FTC has begun lawsuits against companies with the goal of not just increasing consent but limiting data altogether. In January of this year, the FTC brought two separate cases against data providers: InMarket and Avast. In the InMarket case, the FTC argued that even though the company did not directly sell location data, it still sold products created from that data, supposedly without customer consent. The FTC succeeded in securing a ban on the sale of certain location data, and a ban on using that data to build profiles of customers. In February, the FTC reached a similar deal with Avast, the antivirus company, which prohibited Avast from selling certain browser data and required it to delete any algorithms which used that data.
The FTC still bases most of its lawsuits on the increasingly thin claim of “consumer deception.” An FTC blog noted that although InMarket explicitly asked users to allow the collection of their data, that “was nowhere near the full story of what InMarket was doing with their personal data behind the scenes.” The company might wonder if a disclosure of the “full story” of data aggregation would have to run thousands of pages. The FTC claimed the lawsuit was necessary because “half-truths are untruths,” a statement that might just as well be applied to the FTC’s own account in this case.
Typically, the FTC demands a consent order from a company in exchange for not contesting a suit. After the order is in place, the FTC can then bring new suits based on the order, rather than the law, and demand new “modifications” of the original order. In recent years, these modifications have become more expansive, and have been aimed at ending “commercial surveillance.”
In 2012, the FTC obtained a consent order against Facebook based on its supposedly lax privacy practices. Then, in 2019, the FTC claimed that Facebook was not following the order and used that claim to extract a $5 billion settlement. As the FTC trumpeted on its own website, the penalty “is one of the largest penalties ever assessed by the U.S. Government for any violation” and creates “unprecedented new restrictions on Facebook’s business operations.” The FTC then put in place a new 20-year consent order which required Facebook to conduct a “privacy review” of every new or modified product and to maintain a government-approved independent assessor of its data practices.
Last year the FTC proposed new modifications “to clarify and strengthen” the recent consent order for Facebook, now Meta: requiring a pause on all new products and features until an independent assessor can evaluate them. Even Commissioner Bedoya, an arch-privacy hawk, has said that he is concerned whether the new demands have any connection to the old consent order.
One finds similarly escalating attacks against data gathering at X, formerly Twitter, through consent orders. The FTC won a consent order against Twitter based on a similar allegation of lax data practices in 2011. The complaints included the allegation that too many employees had administrative control over the system, which the FTC said increased the risk of a breach. Recently, however, the FTC has been adding “new and burdensome” demands to its consent order, as the company said in a complaint. As if to prove the point, the FTC announced that it is opening a new investigation into X based not on any actual data failure, but on the claim that, since Elon Musk took over, there were “radical changes” at the company, such as the Twitter Blue roll-out being “hasty.” The FTC was making clear that there was not any particular issue with the company, just a general concern that the company’s data, which constitutes almost the entirety of its business, should be managed differently — by planners in Washington.
In April of this year, the FTC celebrated that it won “our first-ever ban on the sharing of browsing data” as well as 16 consent orders with data minimization requirements. Some activists want the FTC to go beyond case-by-case orders, and create broad data rules for the entire economy. The agency has listened.
In 2021, President Joe Biden issued an executive order on competition where he bemoaned the “aggregation of data” in large platforms and urged the FTC to move beyond lawsuits and institute a rule around “unfair data collection and surveillance practices.” The same month, the FTC voted to streamline their rulemaking authority and to “reinvigorate” their ability to use rules instead of lawsuits.
The FTC has since floated a broad rule that could upend much of commercial life. In its “advance notice of proposed rulemaking” on “commercial surveillance and data security” the FTC suggests a litany of harms that might emerge from data gathering: including that digital advertising could target fraudulent products and children could be targeted by cyber-bullies. (A concerned reader may wonder if fraudulent products were ever pitched, or children bullied, before big data arose.) The incipient proposal focuses on how to mandate more extreme forms of data security and “data minimization.” The FTC worries, strangely, that companies can use data to “place behavioral ads, or leverage it to sell more products.” As Jennifer Huddleston of the Cato Institute put it, “Khan is conflating beneficial and risky data collection practices in an effort to kneecap successful business models she dislikes.”
Without further congressional authorization, any explicit regulation from the FTC would be on shaky legal ground, considering the lack of clear laws on data privacy. But Congress is now considering a comprehensive bipartisan bill to make the FTC an official data regulator. The bill would force large companies to conduct privacy impact assessments every two years, file data control policy statements with the FTC every year, and require “data minimization” so that these companies can only collect the data that is “necessary, proportionate, or limited” to providing a service. Additionally, companies could not transfer certain data to a third party without explicit consent. But while the draft law proposes broad restrictions on for-profit companies, it conveniently excludes not only federal, state, and local governments from its mandates, but also any company that is collecting information “on behalf of” a government. As is so often true for the FTC, there is one rule for private citizens and a different rule for the government itself, even if the government suffers from many of the same or worse ills than private companies.
The Ironies of the FTC’s Data Campaign
Despite regular suits against private companies for data breaches, the government has not been a great steward of people’s data. Since 2014, the government has suffered over 1,200 data breaches affecting more than 200 million records. According to IBM, the average cost per breached record is about $165, which means these breaches have cost Americans over $30 billion (200 million records at $165 apiece comes to roughly $33 billion).
The federal government sometimes contracts with the same companies and data brokers that the FTC sues, showing that it appreciates the opportunity to use big data when it needs it. Some of these contractors are not infallible data stewards: more than 632,000 Justice and Defense department employees had information exposed through a single contractor hack. The government and politicians are also not shy about sucking up data when they need it. Senator Elizabeth Warren, a staunch privacy advocate when it comes to tech companies, has used her campaign pages to send browser information to Facebook, Twitter, Google, Amazon, and other third-party trackers, and has collected visitors’ latitude and longitude to identify potential voters and donors.
Another irony of the FTC’s campaign is that the agency itself has some of the most expansive powers in government to gather data on Americans. Any single member of the Commission can issue subpoenas, outside of a court process, to compel testimony and documents from private citizens. In 1994, the FTC received the power to issue “civil investigative demands,” which require people not only to answer questions but to provide data on any issue that the FTC demands. The FTC does allow a recipient to fight these demands, but the FTC itself then determines whether it made a mistake. (One might say it needs an independent assessor to monitor its data demands.) Last year, the FTC issued a new resolution that will “streamline” its staff’s ability to issue civil investigative demands related to artificial intelligence. The FTC can also “make public” information it acquires if it decides doing so is in the “public interest.” In another context, acquiring data without consent and then releasing it to the public would be known as a data breach.
Almost all of the FTC’s actions have been based on the idea that companies have been collecting and using data without someone’s consent, or their ability to consent, and that this data collection is harming consumers and democracy itself. But study after study shows that while people may value privacy in the abstract, they are willing to exchange massive amounts of personal information for trivial sums. On the Internet, consumer preferences are clear: people would rather pay with data than dollars.
The FTC can continue to take action against bad actors — those engaged in fraud or negligence — as it did against Ashley Madison when the company suffered a massive data breach after lying about deleting old accounts. But the most troubling aspect of the FTC’s new data ideology is that fraud, negligence, and consumer harm are no longer the guiding principles for action. Capitalism itself, in the form of exchanging data for services via targeted advertising, is the problem. An Internet based on data and targeted advertising is a boon for Americans: a recent paper estimated that they receive over $300 billion in value from advertising-supported free content.
Ironically, considering the progressive bona fides of the FTC, moving away from a free Internet would be a massive and disproportionate burden on America’s poor, who would likely not be able to afford to use an internet gated by subscriptions and pay-to-use websites. The FTC wants to force them to shoulder costs that are otherwise borne by wealthier people who have to suffer the indignity of targeted ads gleaned from their shopping habits. What the FTC doesn’t tell you is that on its version of the internet, where data is evil and consent irrelevant, advertisements won’t go away; they will just be worse. If you’re offended by Amazon making good book recommendations, wait until it makes bad ones. The new internet would be the dumb internet. And there are few things the FTC could do that are dumber than that — trying to destroy the digital economy that has been one of America’s sole economic engines in recent years.