Note: this essay was originally written for, and posted on, Wired.
Facebook is a lot like a landfill, not only because it’s full of other people’s shit but because, while everyone agrees something needs to be done about it, nobody seems to quite know what. What most (American) commentators have in common, though, is where they look for the answer: the late 19th and early 20th century trust-busting and progressive movements, when activists and politicians broke harmful concentrations of economic power in everything from oil to railways. Applying antitrust protections to Facebook has been discussed to death; so, too, has the idea of Facebook as a public utility—as a socially accountable resource like water and electricity.
The first issue in this debate is whether Facebook should be considered a public utility at all. Wired reporter Gilad Edelman takes the perspective that it isn’t. Susan Crawford also argues that it isn’t, or shouldn’t be, largely because (to paraphrase) she feels the infrastructure it provides isn’t central enough to society to be a utility.
Others argue for treating Facebook as a public utility but disagree on what that might mean. Dipayan Ghosh, over in the Harvard Business Review, says that it is, and that the response should be to regulate the company’s data handling, mergers, and approach to ads and hate speech. This position strongly aligns with that of danah boyd, who proposed framing Facebook as a utility way back in 2012, with the vital difference that Ghosh sees a public utility approach as a panacea: something to be done instead of any other action.
I happen to think that some of Facebook’s services are important enough to consider it a piece of social infrastructure and that the appropriate response to the company’s, shall we say, endless litany of zuck-ups is to put the regulatory boot in. But the bigger issue is that treating Facebook as a public utility requires answering not only whether it’s a utility but also which “public” it should be accountable to—and the latter is a much more difficult problem.
Tech companies love to claim that they’re innovative, disruptive, and opening up hitherto unseen vistas—but when it comes to sociopolitical dynamics, Facebook and its problems are old. Like, 19th century old. Before American society was reshaped by the internet, it was reshaped by railways, electricity companies, water providers, and a range of other new industries and resources—all privately controlled, highly concentrated, and, eventually, wielding an enormous amount of political power.
The 19th century solution came in two forms: breaking monopolies, and reshaping them. The “breaking” was antitrust law, which treated monopolies as bad on their face and actively forced the breakup of companies that held them. The “reshaping” was for situations where monopolies were not, in and of themselves, the problem. Railways, electricity, water supplies: There are some pretty obvious public advantages to having these standardized, since all of them lose a vast amount of their actual usefulness if track gauges or voltage standards change every hundred miles (or hundred houses).
In such a situation, Louis Brandeis and the broader movement of Progressives instead advocated a “public utility” model. Companies and industries that had a “natural monopoly”—where the centralization was in some respects part and parcel of the very premise of the product—were not broken up but instead forced to abide by different rules and systems of public accountability.
K. Sabeel Rahman divides the approaches for public utilities into two categories. The first—used with telegraphs and telephones, and still used with internet service providers to this day—consisted of setting universal and standardized expectations. This included “common carrier” standards (in which providers had to take traffic from any source in order to avoid forms of private censorship or lock-out), control over costs and rates, and formalized standards for accountability. The second consisted of wholesale public takeover: ownership of the infrastructure itself by the public, either through conventional forms of state government or through local, quasi-governmental, democratically overseen boards of control (it’s from the vestiges of the latter that we get local public utilities). Taking either approach with Facebook would lead to a very different world, with very different problems. But it’s hard to tell, from where we are, if either makes sense.
Part of the problem is what we mean by “Facebook” (they can call themselves Meta all they want, but a meta is something rather different, even if both involve making people complete dicks). While the company might have started off as a social networking platform, its interests and activities are now far broader. In terms of user-facing technologies alone, there’s not only Facebook but also Instagram and WhatsApp. Beyond these platforms, there’s also the vast advertisement infrastructure that financially underwrites all of the above, along with tools for reformatting content to fit within them, both of which have played a direct role in some of the harm Facebook is doing.
This isn’t just pedantry; asking whether something is a public utility ultimately depends on agreeing on what that something is, and what it does. Facebook is more than a single utility. Trying to puzzle through how to treat it is like trying to work out how to handle railway barons if they also controlled coal mining, steel manufacturing, and, I don’t know, teapot manufacturing as a side project; what you end up with is not one simple, clearly defined and understood company and industry but a fuzzy mess of an animal in which everyone is grabbing a different limb and swearing they have a grip on the whole creature. The resulting uncertainty and debate are ultimately beneficial to Facebook; as Linsey McGoey has documented, uncertainty is a valuable resource when it comes to resisting regulation. It keeps people stuck on the status quo, and it invites us to endlessly cycle on defining terms instead of acting.
More worryingly, this central focus on the question of “utility” risks an approach to regulation that takes the word “public” for granted. And as history also shows us, that has dangerous consequences.
Each of Facebook’s platforms and products has different publics—different user bases, but also different communities and contexts that are affected, and that the company should be accountable to. Cleanly separating those platforms out makes the job of identifying those publics easier, but then raises a second problem: aligning those publics with, well, legal regimes.
For all the disagreement over whether Facebook should be treated as a public utility, on what grounds it should be treated as a public utility, and what being treated as a public utility would mean, most of the commentary on public utility models shares one (implicit) assumption: The regulatory model imagined is an American one. Hell—as my potted history above demonstrates, even talking about it in terms of a “public utility model” at all is to slot these companies into the United States’ histories and models of regulation. If you’re imagining Facebook as “electricity 2.0,” and take care to step over the fly-swarmed nature of American public policy, this makes quite a bit of sense. If you’re paying any attention to Facebook as Facebook, however, it makes none at all.
Electricity companies are regional—sometimes national—companies. In the United States, they fit together into four networks (east, west, Alaska, and Texas, the one piece of infrastructure studies trivia that people picked up in 2021). The amount of shit Facebook justifiably gets for enabling the genocide in Myanmar is all the information one needs to know that Facebook—and WhatsApp, and Instagram—are global. The vast majority of users of each platform are not in the United States. Not only that, but the spaces in which they can most plausibly be considered a utility, or an infrastructure (serving a foundational role in communication), are uniformly outside the United States.
Conventional ideas of public utilities imply either heightened standards, involving accountability, transparency, and forced neutrality, or complete public ownership. In both cases, though, they’re dependent on the public (or: the state claiming to represent the public) being able to set out what its expectations are. In the case of standards, you need some agreement on what accountability (or transparency) looks like. In the case of public ownership, you need some mechanisms for appointing democratic overseers, giving their demands the force of law, and holding them, in turn, accountable to the public.
There is a fundamental mismatch between the public (of Facebook, of WhatsApp) and the public (of US policymaking). The vast majority of the people impacted by these platforms are not people to whom American lawmakers are accountable. They are also not people who fit neatly into any of the existing patterns for what “public utility” regulation looks like.
Thinking about Facebook as an American company (and problem) makes this seem easier than it actually is. When Facebook is, instead, a global company, who sets those standards for transparency? If public ownership is preferred, then which public? A world government is hardly a popular idea; a world government of Facebook is only likely to win points with Zuck himself. The alternative is a form of “infrastructural imperialism”: a system of vast import to people outside the US whose features, actions, and users are nonetheless held to explicitly American standards.
This doesn’t mean that we should embrace moral relativism, sit on our hands, and while away the hours until Marky Mark and the Flunky Bunch over there finally realize their dream of inserting VR ads into our dreams. To the contrary—again, endless, uncertain debate is fundamentally to the advantage of those who like where things are.
It simply means that if we’re going to look to history for answers about what to do with Facebook, we should be looking somewhere other than public utilities, with their neat bounding by domestic legal systems and social structures. We should be looking not at railways but at the United Fruit Company: a fundamentally global company that weaponized regulatory capture and what Adriana Petryna calls “ethical arbitrage” to alternate between imposing American ideals of accountability and avoiding accountability for its harms anyway. We should look not at subnational companies where regulation worked but at multinational companies where it didn’t.
Regulating Facebook might involve public utility approaches—but since those happen on a nation-by-nation basis, breaking up the company is a prerequisite for that regulation, not an alternative that public utility thinking renders moot. For genuinely infrastructural components (WhatsApp, say), splitting them off might look less like fragmenting them at national boundaries and more like carving out the base protocol and looking to the governance model of something like ICANN—the international nonprofit that makes domain names (mostly) work. But all roads lead to breaking up Facebook, even if that’s not the final destination.