A Framework for Regulating Content on the Internet

This week, when the U.K.’s Secretary of State for Digital, Culture, Media and Sport and the Secretary of State for the Home Department released a white paper calling for significantly increased regulation of tech companies, the scope of the debate was predictable. The MIT Technology Review laid it out succinctly:

Technology giants will be forced to have a “duty of care” for their users, if a proposal announced by the government on Monday becomes law. The proposal — a “white paper,” in UK legal parlance, which is one of the first stages of a formal government policy — is, on the surface at least, sweeping in scope and is a serious shot across the bows for big tech companies. But it has also raised some serious concerns about how it will be implemented and the possible consequences it might have on citizens’ free speech…

The proposals have raised interest among academics and observers, and alarm among privacy campaigners. The former note that while the document is scant on details despite being tens of thousands of words long, it sets out a clear direction in a way few countries have been willing to do. But the latter fear that the way it is implemented could easily lead to censorship for users of social networks rather than curbing the excesses of the networks themselves.

This proposal comes on the tail of an exposé in Bloomberg entitled YouTube Executives Ignored Warnings, Letting Toxic Videos Run Rampant; the debate around that piece, which I wrote about last week in two Daily Updates (here and here), not only touched on the question of free speech, but also the sheer scale of the problem — and, relatedly, the sheer scale of Facebook and Google.

Problematic Regulation

In short, clear questions arise from all of these exposés and proposals:

  • First, what content should be regulated, if any, and by whom?
  • Second, what is a viable way to monitor the content generated on these platforms?
  • Third, how can privacy, competition, and free expression be preserved?

You can see how these questions quickly arrive at competing answers when looking at recent attempts at regulation:

  • GDPR has certainly increased the number of consent click-throughs on websites; it has also strengthened Facebook’s and especially Google’s competitive position, exactly as predicted.
  • The European Copyright Directive, specifically Article 13, makes platforms liable for copyright violations, and while the European Parliament took care to state that this was not a requirement for a content filter, there is no other viable way to comply. Content filters are not only extremely difficult and expensive to develop (Google has spent more than $100 million on Content ID), entrenching the largest players that have the resources to fund them, they will also necessarily be overly strict, limiting user expression; the sketch after this list illustrates why.
  • Even more egregious than the Copyright Directive, amazingly enough, is Australia’s new law about “abhorrent violent material” like the live-streaming of the horrific Christchurch mass shooting. Companies are liable if such content is discovered on their service, period — being told it exists is sufficient evidence of recklessness — and worse, every company in the stack is liable, from ISPs to cloud providers to social networks. That leaves no choice but to spy on all user traffic or, for small- and medium-sized platforms outside of Australia, to avoid the country altogether.
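
To make the over-blocking dynamic concrete, here is a toy sketch. It is emphatically not how Content ID works (Content ID relies on proprietary audio and video fingerprinting); plain-text similarity stands in for fingerprint matching, and the threshold is hypothetical. The structural point survives the simplification: a platform that is liable for anything it misses will tune its filter aggressively, sweeping up quotation and fair use along with true copies.

    # Toy illustration of why liability pushes upload filters toward
    # over-blocking. Not how Content ID works; plain-text similarity
    # stands in for proprietary audio/video fingerprinting.
    from difflib import SequenceMatcher

    REFERENCE_WORKS = [
        "it was the best of times it was the worst of times",
    ]

    # Hypothetical threshold: a liable platform tunes this low to be safe.
    BLOCK_THRESHOLD = 0.6

    def filter_upload(upload: str) -> str:
        score = max(
            SequenceMatcher(None, upload, ref).ratio() for ref in REFERENCE_WORKS
        )
        verdict = "BLOCK" if score >= BLOCK_THRESHOLD else "allow"
        return f"{score:.2f} {verdict}: {upload!r}"

    # A verbatim copy is blocked, as intended; but a review quoting the
    # work also crosses the threshold, despite being plausible fair use.
    print(filter_upload("it was the best of times it was the worst of times"))
    print(filter_upload("a review: 'it was the best of times, it was the worst of times' opens the novel"))
    print(filter_upload("an original essay about aggregation theory"))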

At the same time, the Christchurch video and its spread are clearly problematic — there is something off about the current state of affairs.

The Christchurch Video

It hardly bears noting that in a pre-Internet world there would be no widespread video of the Christchurch hate crime. Capturing video required specialized equipment, and more importantly, broadcasting video was limited to a small number of television stations, all of which, even if they had the video, would have exercised their editorial judgment to keep it off the air.

What is critical to note, though, is that it is not a direct leap from “pre-Internet” to the Internet as we experience it today. The terrorist in Christchurch didn’t set up a server to livestream video from his phone; rather, he used Facebook’s built-in functionality. And, when it came to the video’s spread, the culprit was not email or message boards, but social media generally. To put it another way, spreading that video on the Internet alone would have been possible but difficult; spreading it on social media was trivial.

The core issue is business models: to set up a live video streaming server is somewhat challenging, particularly if you are not technically inclined, and it costs money. More expensive still are the bandwidth costs of actually reaching a significant number of people. Large social media sites like Facebook or YouTube, though, are happy to bear those costs in service of a larger goal: building their advertising businesses.
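
Some rough arithmetic makes the point; the figures below are illustrative assumptions rather than quoted prices (call it a 5 Mbps stream for ~1080p video, and cloud egress on the order of $0.08 per GB):

    # Back-of-the-envelope bandwidth cost of self-hosting a livestream.
    # All figures are illustrative assumptions, not quoted prices.
    BITRATE_MBPS = 5.0        # assumed bitrate for a ~1080p stream
    EGRESS_USD_PER_GB = 0.08  # assumed cloud egress price

    def stream_cost_usd(viewers: int, minutes: float) -> float:
        """Bandwidth cost of streaming live to `viewers` for `minutes`."""
        gb_per_viewer = BITRATE_MBPS * 60 * minutes / 8 / 1000  # Mbit -> GB
        return viewers * gb_per_viewer * EGRESS_USD_PER_GB

    for viewers in (10, 1_000, 100_000):
        print(f"{viewers:>7,} viewers for an hour: ${stream_cost_usd(viewers, 60):,.2f}")

On these assumptions an hour-long stream costs pennies for a handful of viewers but roughly $18,000 for an audience of 100,000: a rounding error for a Super-Aggregator selling that same attention to advertisers, but prohibitive for an individual.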

The key differentiator of Super-Aggregators is that they have three-sided markets: users, content providers (which may include users!), and advertisers. Both content providers and advertisers want the user’s attention, and the latter are willing to pay for it. This leads to a beautiful business model from the perspective of a Super-Aggregator:

  • Content providers provide content for free, facilitated by the Super-Aggregator
  • Users view that content, and provide their own content, facilitated by the Super-Aggregator
  • Advertisers can reach the exact users they want by paying the Super-Aggregator

Everything is aligned from the Super-Aggregator perspective: users give attention that content providers work to earn, and advertisers compete to buy their way in.

[Figure: The three-way market of a Super-Aggregator]

Moreover, this arrangement allows Super-Aggregators to be relatively unconcerned with what exactly flows across their network: advertisers simply want eyeballs, and the revenue from serving them pays for the infrastructure to not only accommodate users but also give content suppliers the tools to provide whatever sort of content those users may want.

Therein lies the rub: given that these platforms are basically reflections of humanity, what users want varies from the beautiful to the profane — and to things far uglier than that. And worse, there is no editorial judgment to keep users from what they want, or suppliers from providing it. Indeed, that such sordid content can exist on YouTube and Facebook is a testament to just how popular those platforms are; that such content is effectively incentivized speaks to the fact that their moneymaking mechanism is completely divorced from this content matchmaking.

Market Failure

This is, in its own way, a market failure, albeit not, to be clear, in an economic sense: the allocation of goods and services by a Super-Aggregator is not only efficient, but also generates significant consumer surplus. The failure, rather, comes from videos like that of the Christchurch massacre, or the toxic content detailed in the Bloomberg exposé: it is not good for society that terrorists be able to freely broadcast their videos, or that child-exploitation videos spread on YouTube.

The problem is that there is no way to check this behavior: the vast majority of Facebook and YouTube users self-select away from this content, and while advertisers raise a fuss if they find out their ads are alongside this content, they have no incentive to leave the platforms entirely. That leaves Facebook and YouTube themselves, but while they would surely like to avoid PR black eyes, what they like even more is the limitless supply of attention and content that comes from making it easier for anyone anywhere to upload and view content of any type.

Note how different this is from a traditional customer-supplier relationship, even one mediated by a market-maker: users disgusted by Uber, for example, could switch to Lyft, directly impacting Uber’s bottom line. Or go back a few years, to when GoDaddy expressed support for the SOPA copyright legislation: the company was forced to change its position in the face of widespread boycotts (including by yours truly). When users pay, they have power; when users and those who pay are distinct, as is the case with these advertising-supported Super-Aggregators, the power of persuasion — that is, the power of the market — is absent.

The Three Frees

There are, in Internet parlance, three types of “free”:

  • “Free as in speech” means the freedom or right to do something
  • “Free as in beer” means that you get something for free without any additional responsibility
  • “Free as in puppy” means that you get something for free, but the long-term costs are substantial

Most in the West agree, at least in theory, with the idea that the Internet should preserve “free as in speech”; China in particular represents a cautionary tale as to how technology can be leveraged in the opposite direction. The question that should be asked, though, is whether preserving “free as in speech” should also mean preserving “free as in beer.”

Specifically, Facebook and YouTube offer “free as in speech” in conjunction with “free as in beer”: content can be created and proliferated without any responsibility, including cost. Might it be better if content that society deemed problematic were still “free as in speech”, but also “free as in puppy” — that is, with costs to the supplier that aligned with the costs to society?

A Regulatory Framework for the Internet

This distinction might square some of the circles I presented at the beginning: how might society regulate content without infringing on rights or destroying competitive threats to the largest incumbents?

Start with this precept: the Internet ought to be available to anyone without restriction. This means banning content blocking and throttling at the ISP level, with regulation designed for the Internet. It also means that platform providers should, generally speaking, continue to not be liable for content posted on their services (platform providers include everything from AWS to Azure to shared hosts, and everything in between); these platform providers can, though, choose not to host content suppliers they do not want, whether because of their own corporate values or because they fear boycotts from other customers.

I think, though, that platform providers that primarily monetize through advertising should be in their own category: as I noted above, because these platform providers separate monetization from content supply and consumption, there is no price or payment mechanism to incentivize them to be concerned with problematic content; in fact, the incentives of an advertising business drive them to focus on engagement, i.e. giving users what they want, no matter how noxious.

This distinct categorization is critical to developing regulation that actually addresses problems without adverse side effects. Australia, for example, has no need to be concerned about shared hosting sites, but rather Facebook and YouTube; similarly, Europe wants to rein in tech giants without — and I will give the E.U. the benefit of the doubt here — burdening small online businesses with massive amounts of red tape. And, from a theoretical perspective, the appropriate place for regulation is where there is market failure; constraining the application to that failure is what is so difficult.

The result is a regulatory framework that looks like this:

[Figure: The regulatory framework for the Internet]

“Free as in speech” is guaranteed at the infrastructure level; the market polices platform providers generally (i.e. “free as in puppy”); and regulation is narrowly limited to businesses that are primarily monetized through advertising (i.e. “free as in beer”) and are thus impervious to traditional content-marketplace pressures.
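
One way to see the categorization at a glance is as a simple decision rule. The sketch below is a toy encoding of the framework; the field names and classification logic are mine, for illustration only, not anything drawn from a statute or proposal:

    # Toy encoding of the proposed framework as a decision rule.
    # Field names and logic are illustrative, not legal definitions.
    from dataclasses import dataclass

    @dataclass
    class Service:
        name: str
        is_infrastructure: bool   # ISPs and carriers: they move traffic
        hosts_user_content: bool  # AWS, Azure, shared hosts, social networks
        ad_monetized: bool        # primary revenue is advertising

    def regime(s: Service) -> str:
        if s.is_infrastructure:
            return "free as in speech: no blocking or throttling, guaranteed access"
        if s.hosts_user_content and s.ad_monetized:
            return "free as in beer: narrowly targeted content regulation applies"
        if s.hosts_user_content:
            return "free as in puppy: policed by the market of paying customers"
        return "outside this framework"

    for s in (
        Service("an ISP", True, False, False),
        Service("a shared host", False, True, False),
        Service("an ad-funded social network", False, True, True),
    ):
        print(f"{s.name} -> {regime(s)}")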

This framework, to be clear, leaves many unanswered questions: what regulations, for example, are appropriate for companies like YouTube and Facebook? Are they even constitutional in the United States? Should we be concerned about the lack of competition in these regulated categories, or encouraged that there will now be a significant incentive to build competitive services that do not rely on advertising? What about VC-funded companies that have not yet specified their business models?

Still, I think this framework provides a very important foundation for addressing many of the flaws in today’s regulatory proposals, particularly the unintended effects on small- and medium-sized businesses and the platforms that support them, which, I believe, are critical for the economy of the future. Regulators and lawmakers should, as always, be wary that, in the well-meaning attempt to shape the world as it is, they foreclose the world that might be.