Interviews with Patrick Collison, Brad Smith, Thomas Kurian, and Matthew Prince on Moderation in Infrastructure

These interviews formed the basis of this Weekly Article: Moderation in Infrastructure.

(Questions in bold; all interviews have been lightly edited for content and clarity)


Patrick Collison, Stripe CEO

It seems hard to believe that January 6 was only two months ago. In the moment most people, including Stratechery, were focused on social media companies and what they should do about Trump, but looking back, I think one of the most surprising events for me personally was when Stripe stopped processing payments for the Trump campaign. At that point AWS had already decided to take down Parler, which was also pretty surprising, but perhaps because I’ve known you for a long time now, I still found Stripe’s decision the most unexpected and thought-provoking. What was your thought process there?

Patrick Collison: In our view, providers of important infrastructure have a social responsibility to act neutrally and apolitically. If you go back to 1900, part of the fear of the Progressive Era was that privately owned infrastructure wouldn’t be sufficiently neutral and that this could pose problems for society. These fears led to lots of common carrier and public utilities law covering various industries — railways, the telegraph, etc. In a speech given 99 years ago next week, Bertrand Russell said that various economic impediments to the exercise of free speech were a bigger obstacle than legal penalties. How infrastructure intersects with other freedoms has been a longstanding concern for society and people are obviously right to care about this. As a result, Stripe biases strongly towards permitting any activity that’s legal and in compliance with the rules of the other actors in the financial system that we work with.

So, with that as context, we were very confident that it was correct to provide service to the Trump campaign in 2016 and 2020, even though employees at Stripe certainly had a wide spectrum of views about his merits as a leader. We work with lots of political parties in lots of countries (including many Democratic and Republican platforms and candidates in the US) and our political beliefs are firmly separate from our understanding of how to run a business.

Being impartial and apolitical doesn’t mean having no lines, though. For many years, we’ve prohibited incitement to violence. Some people are skeptical of any rules beyond “things that are legal”, but we view a restriction on incitement to violence as consistent with existing exceptions to free speech protections under both the First Amendment and the European Convention on Human Rights.

After extensive review of the events of 1/6, our team decided that the events leading up to the Capitol invasion constituted such incitement to violence. As a result, we suspended some Trump Organization accounts. While we made our decision before the impeachment trial, I think it’s noteworthy that the Senate subsequently agreed. The vote was 57–43.

To what extent does it matter that it was the President? Trump wasn’t the only person suggesting that the election results were fraudulent, or suggesting that protesters make their voices heard. I use such an anodyne articulation on purpose — Trump didn’t explicitly tell protesters to invade the Capitol and carry zip ties and whatnot — but obviously his words carry far more weight than a random person’s on Facebook or Parler. Was this a “Presidential exception”, or is Stripe’s policy flexible enough to accommodate not just content but also context?

PC: There’s no Presidential exception. If we were reviewing another actor that was fundraising for causes related to the Capitol invasion in a fashion that looked like clear incitement, we’d probably take the same action. And we did suspend a number of other accounts that were connected to the day’s events.

Still, I agree, it’s a weird situation — we suspended service for an organization representing a sitting President! While it might seem like a rule prohibiting incitement to violence is a pretty anodyne and minimal restriction, it turns out that even that policy leads companies to some day refusing service to the most powerful person in the world.

When you ask about the context, though, I think it’s important to zoom out. Facebook, Twitter, and other platforms obviously suspended Trump’s accounts as well. And, outside the US, the response to that was often quite negative: European political leaders (including Macron and Merkel) were very critical. Alexey Navalny, the leader of Russia’s opposition, called the bans “unacceptable acts of censorship.” The criticism usually took the form of: this may or may not have been the right decision in a narrow sense, but private governance of public commons is a worrisome concept. (And this concern shows up in Europe’s new Digital Services Act proposal.) There’s a clear tension between the rights of private citizens to participate in society and the right of companies to choose their customers.

To me, the fact that even a case as clear cut as this one was received so ambivalently by elected leaders serves as a reminder of those societal expectations around infrastructure neutrality. We should act with humility!

What exactly was your motivation in terms of kicking the Trump campaign off of Stripe, though? The Wall Street Journal reported that the campaign switched to a fundraising platform called WinRed…which runs on Stripe! So was this really about Stripe saying “We don’t want to be associated with this”, as opposed to “We believe this is a clear and present danger that we have a responsibility to stop”?

PC: This gets into platform governance, which is one of the most important dimensions of all of this, I think. We suspended the campaign accounts that directly used Stripe — the accounts where we’re the top-of-the-stack infrastructure. We didn’t suspend all fundraising conducted by other platforms that benefitted his campaign.

We expect platforms that are built on Stripe to implement their own moderation and governance policies and we think that they should have the latitude to do so. This idea of paying attention to your position in the stack is obviously something you’ve written about before and I think things just have to work this way. Otherwise, we’re ultimately all at the mercy of the content policies of our DNS providers, or something like that.

Parler was a good case study. We didn’t revoke Parler’s access to Stripe… they’re a platform themselves and it certainly wasn’t clear to us in the moment that Parler should be held responsible for the events. (I’m not making a final assessment of their culpability — just saying that it was impossible for anyone to know immediately.) I don’t want to second guess anyone else’s decisions — we’re doing this interview because these questions are hard! — but I think it’s very important that infrastructure players are willing to delegate some degree of moderation authority to others higher in the stack. If you don’t, you get these problematic choke points.

By the way, the Parler situation also touches on the new phenomenon that Alec Stapp calls “DDoS but for everything” — what r/WSB is for Wall Street and what social media storms can be for content moderation and lots more besides. These sudden, unpredictable flocking events can create very intense pressure and I think responding in the moment is usually not a recipe for good decision making.

It’s always risky getting into hypotheticals, but can you envision a situation where you would act because of activity further up the stack? You are right that the primary takeaway from my Framework for Moderation piece is that moderation should happen at the level of the stack where harm appears, but the specific context of that article was Cloudflare’s decision to withdraw DDoS protection for 8chan after a number of mass murders had been announced and celebrated on the platform. Clearly 8chan moderators were not going to act, so if not Cloudflare, then who? I get the desire to have clear principles — and I obviously agree that “we moderate at our level but not above” is a good one — but at the end of the day does this ultimately devolve into a Potter Stewart-esque “I know it when I see it” approach for the most difficult cases?

PC: Oh, it’s far from a complete disavowal… we undertake plenty of policy enforcement for the platforms we support. Depending on the policy, that might be more or less rigid. If it’s a law, say, it remains absolute. If it’s something more subjective or content-related, the platform should get to decide. Take Shopify. If a Shopify store is doing something obviously illegal (like simply abusing an account to commit fraud), we’ll certainly take action. This is one of the things that Stripe helps with — no platform wants illegal activity. But if someone is objecting to Shopify hosting the Breitbart store… well, I think it’s important that Shopify gets to make that decision.

To what extent do you feel your views are shaped by 1) the fact that Stripe is a global service, and 2) the fact that you and John are immigrants? It does seem like many of these policy debates become extremely U.S.-centric, but as you noted previously, most countries outside of the U.S. were very critical of Twitter’s actions relative to Trump. It almost feels like the European part of you wants to ensure that U.S. politics aren’t unduly governing your platform, but you have become Americanized enough that you retain the right to kick the President off if you must!

PC: Hah. Yeah, I think our perspective as immigrants — and, specifically, as Europeans — probably informs our approach quite substantially. With the EU, everyone understands that blanket approaches are fraught… whether it’s telecoms or fisheries policy, the challenge is always to find frameworks that can adapt to very different cultures, economies, and histories. So, when it comes to moderation decisions and responsibilities as internet infrastructure, that pushes you to an approach of relative neutrality precisely so that you don’t supersede the various governmental and democratic oversight and civil society mechanisms that will (and should) be applied in different countries.

Sure, but to perhaps challenge the premise of my own question, neutrality as a policy isn’t new, nor is the idea that tech companies are well-served by consistency, the better to stand up to governments everywhere. This has been a core argument for a hands-off approach for many years. Is explicitly re-affirming that approach a sort of “Back to the Future” moment, and a suggestion that some tech companies have lost their way?

PC: Well, we’re different to others… we’re financial services infrastructure, not a content platform. I’m not sure that the kind of neutrality that companies like Stripe should uphold is necessarily best for Twitter, YouTube, Facebook, etc. However, I do think that in some of the collective reckoning with the effects of social media (and there are indeed effects — Stripe Press published Martin Gurri’s book!), the debate sometimes underrates the importance of neutrality at the infrastructure level. For example, a number of companies have been on the receiving end of sometimes significant criticism as a result of contracts with ICE — and, indeed, a few have dropped them as a result. Now, every company is free to make its own decisions, of course, but these controversies make me think that it’s important to reiterate the value of apolitical service at the lower levels of the stack. I sometimes remind employees that Stripe would work with ICE if they came to us — ICE doesn’t violate any Stripe policy. (And that’s occasionally unpopular — people have left Stripe because of this stance.) That’s not because I’m an ICE proponent, or even defender, but simply because they’re part of a legitimate, democratically elected government.

And does that mean you would approach a non-democratically elected government differently?

PC: Yes. There’ll probably always be some degree of deference to governments — they are, after all, governments. But how much deference should depend on whether or not they’ve been appointed by their citizens with mechanisms of accountability in place.

So to summarize: despite what happened with Trump, or perhaps more accurately, because of what happened with Trump, you feel it is essential for Stripe to re-affirm that it believes that:

1. It is an infrastructure company
2. Infrastructure companies have an obligation to be neutral
3. If infrastructure companies act, they should only act against companies directly on their platform, not companies one or more layers removed

And that cases above and beyond that are sort of a Potter Stewart-esque “I know it when I see it” type of scenario, with a heavy bias towards not acting?

PC: Directionally, yes. On (2), I’d want to be clear that it’s not a legal obligation but (in my view) a kind of social responsibility. And, on (3), “only” is too strong… it depends. But, yes, I’d say that infrastructure companies should place more focus on companies directly using their platforms.

In that Potter Stewart line, he started out by saying “I shall not today attempt further to define the kinds of material I understand to be embraced within that shorthand description”… so, I don’t think that he was disclaiming the importance of frameworks or consistency! We have pretty comprehensive policies and we’re always striving to make them more amenable to evenhanded application.

Last question, which is kind of an inverse of the European/immigrant perspective question: as you look back to January 6, did you feel a certain sort of responsibility given the fact you were an American company, particularly given the huge amount of freedom and leeway America gives its corporations, including the opportunities afforded Stripe? I have argued that in America the concept of a separation of powers isn’t simply a legal construct, but also a cultural one, and corporate America is one of those powers. It’s sort of the accountability counterpart to free speech not being simply about legality but also culture. Is it fair to argue that the luxury of neutrality and independence in some respects compelled Stripe to act on January 6, and that’s how you thread the needle?

PC: I don’t want to over-personalize these decisions… to state the obvious, these assessments are made and implemented by a team and a process. So, while John and I signed off on the various Trump-related decisions, I don’t want to suggest that they came from our personal outlook. They came from direct application of our policy.

All that said, I think you’re getting at the idea that companies do have some kind of ultimate responsibility to society, and that that might occasionally lead to quite surprising actions, or even to actions that are inconsistent with their stated policies. I agree. It’s important to preserve the freedom of voluntary groups of private citizens to occasionally act as a check on even legitimate power. If some other company decided that the events of 1/6 simply crossed a subjective threshold and that they were going to withdraw service as a result… well, I think that’s an important right for them to hold, and I think that the aggregate effect of such determinations, prudently and independently applied, will ultimately be a more robust civic society.


Brad Smith, Microsoft President

It’s been two months since January 6th, and obviously it’s momentous from a US constitutional and political perspective, but I think the implications for tech are very, very interesting, and I’m unsure about how it’s going to play out, particularly this idea that US tech companies have this power and the internet drives towards centralization, such that it makes economic sense to be centered on just a couple of public clouds. But with that comes this risk that I think was highlighted in a very stark way on January 6th, and I’m just curious if Microsoft’s thinking around this has changed at all. Did the foreign criticism of Twitter’s actions, for example, resonate in a way that it might not have broken through previously?

Brad Smith: Well, I would say a couple of things. First, I thought it was interesting because what you had was someone like Chancellor Merkel saying, “Tech companies shouldn’t be deciding what content gets thrown off of a service, government should be doing that.” And I actually think that there’s probably a lot of tech executives that have sympathy for that point of view.

Right.

BS: Because it’s such a difficult thing to do. And I think that if the United States had the same legal system as the rest of the democratic world, like Germany or France or Australia, you’d probably see it migrate in that direction; it’s just that the US has the First Amendment, and because the US has the First Amendment, there’s actually a huge restraint on what the US government can do, and therefore that forces the dialogue back into the tech sector’s lap, like it or not. So that would be the first thing I would say. I would say largely though, Ben, I think that January 6th sustained and perhaps accelerated some of the momentum around content moderation principles that had been unfolding over the last two years. So if you want, I’ll go back and put that in context.

That’s actually the point I want to get to. How do you think about moderation particularly in the context of Azure? I know that you had the thing with Gab a few years ago where you basically told them to take something down or their service was coming down. And I’m curious, is that a good way to think about the way you think about Azure, or has that evolved? Is there a framework that you can articulate about how you’re going to approach these problems in the future?

BS: I think the answer is yes. And like all frameworks in a rapidly evolving situation, the framework itself is evolving as well. The first thing I would say, and I don’t think this is unique to Microsoft, is that basically the industry is looking at the stack and almost putting it in two layers. At the top half of the stack are services that basically tend to meet three criteria, or close to it. One is that the service has control over the posting or removal of individual pieces of content. The second is that the service is publicly facing, and the third is that the content itself has a greater proclivity to go viral. Especially when all three, or even when say two of those three criteria are met, I think that there is an expectation, especially in certain content categories like violent extremism or child exploitative images and the like, that the service itself has a responsibility to be reviewing individual postings in some appropriate way.

Then that kind of service sits on a platform underneath it. The platform underneath it actually doesn’t tend to meet those three criteria, but that doesn’t absolve the platform of all responsibility to be thinking about these issues. The test at the platform level is therefore whether the service as a whole has a reasonable infrastructure and is making reasonable efforts to fulfill its responsibilities with respect to individualized content issues. So whether you’re a GCP or AWS or Azure or some other service, what people increasingly expect you to do is not make one-off decisions on one-off postings, but to ask whether the service has a level of content moderation in place to be responsible. And if you’re a gab.ai and you say to Azure, “We don’t and we won’t,” then as Azure, we would say, “Well look, then we are not really comfortable being the hosting service for you.”

And so you’re then propelled to have a different kind of conversation with the customer. Now, I would say there’s one additional layer that I think needs to be considered as part of the framework. Let’s just say, to be a consumer-facing technology service, you need to have a means of production and you need to have a means of distribution. The means of production really relies on a cloud service, and the means of distribution is, among other things, reliant on Apple and Google as the two big app stores. And so if you’re going to cut somebody off from their means of distribution, you need to do it with a real sense of gravity because you’re having a big impact on somebody’s business. Of course, in the world today, there’s no single place for distribution, so you can find alternative means, but that’s a real loss.

If you’re cut off from your means of production, you’re out of business until you have an alternative means of production. And so typically what we try to do is identify these issues early on. If we don’t think there’s a natural match, if we’re not comfortable with somebody, it makes more sense to let them know before they get on our service, so that they can know that and find their own means of production if that’s what they want. If we conclude that somebody is reliant on us for their means of production, and we’re no longer comfortable with them, we should try to manage it through a conversation so they can find an alternative means of production if that’s what they choose. But ideally, you don’t want to call them up at noon and tell them they have two hours before they’re no longer on the internet.

I don’t know if you’re willing to deal in these hypotheticals, but would you have handled the Parler situation differently?

BS: Well, I would say in an ideal world, you would have an ongoing conversation, and I’m not saying that AWS didn’t, but you’d come to a mutual conclusion that says, “Look, this isn’t working. So, Parler, either you have to put in place a level of content moderation that doesn’t exist today, or you have to move.” I don’t think we would have necessarily thought about it fundamentally differently than AWS. Ideally, we would have had the conversations on an ongoing basis that would have enabled a company such as Parler either to put in place what it needed to do or to find an alternative in an orderly way.

Yeah, that’s fine. Yeah, I understand it’s hard for you to comment on that directly.

BS: Yeah.

I guess just to get back to what you said real quickly. I know we’re short on time, but you mentioned that the framework has evolved. What is different about the framework you just articulated compared to previously?

BS: Well, interestingly, a lot of this first emerged in the wake of the Christchurch terrorist attack. I was saying to Frank [Shaw] earlier, there was a particular meeting in San Francisco with the New Zealand and French governments and senior folks from Twitter, Amazon, Facebook, Google, and Microsoft; all five of us were with them in a room and we started to sketch out on a whiteboard what I’d call the technology stack, and at that point we could all look at it and say, “Yeah, this seems right in terms of who should do what.” I think the evolution is really, to some degree, around these three criteria. I think people in government circles and industry circles and NGO circles are really continuing to focus more on that, so I think that is a principled basis that may help people think more about new scenarios. So I think that part is evolving, and January 6th forced people to test it.

One last question. You mentioned the First Amendment before, and one theory that I have is that the US has a very unique environment for corporations. In one respect, there is obviously the First Amendment, but also I think just generally the freedom to act is greater and broader in the US, and I think that can be a real benefit to startup formation and drawing entrepreneurs here. But a theory I have that explains what happened that week, particularly in Corporate America, is that the way the US system works is that checks and balances aren’t just constrained to the government, it’s a broader societal thing; just like the First Amendment isn’t just a law, it’s also a cultural grounding, and per the Spiderman meme, “With great power comes great responsibility”, there is a sense in which Corporate America felt compelled to step in, filling its role in society broadly, whether that’s making statements or actually taking action. Do you think there’s something to that? Did Microsoft feel some responsibility, or do you think the tech industry did, after January 6th, and do you think that was appropriate? Is it explainable to other countries around the world that this is part of being an American company?

BS: I do think that there is, embedded in the American business community, a sense of corporate social responsibility, and I think that has grown over the last fifteen years. It’s not unique to the United States, because I could argue that in certain areas, European business feels the same way. I would say that there are two factors that add to that sense of American corporate responsibility as it applies to technology and content moderation that are also important. One is, during the Trump administration, there was a heightened expectation by both tech employees and civil society in the United States that the tech sector needed to do more because government was likely to do less. And so I think that added to pressure, as well as just a level of activity that grew over the course of those four years, that was also manifest on January 6th. Whether that will persist at a time when the government is doing more is something that we’ll all now learn, because I think the more people have confidence in government to regulate tech, the less they may look to tech to be taking the initiative vis-a-vis government. That’s one factor.

There is a second factor that I also think is relevant. There’s almost an arc of the world’s democracies that are creating common expectations through new laws outside the United States, and I think those are then influencing expectations for what tech companies will do inside the United States. And interestingly, the arc over the last few years has often started in the South Pacific with New Zealand and Australia. The biggest issue around content moderation really was in the wake of Christchurch. It started in New Zealand, moved really quickly to Australia, and then this arc goes northeast: it goes to Canada, the UK, France, and Germany, and then diffuses into the rest of Europe.

And if you then think back to what happened after Christchurch, from the 15th of March to the 15th of May was exactly two months; it picked up in that way, and the Christchurch Call itself was signed at the Élysée in Paris by the French government. But you had [Canadian Prime Minister Justin] Trudeau, who was there for the Canadian government, the Germans were there, the Norwegians were there, the Brits were there, and the US was not. But the fact that the US wasn’t there actually just reduced the relevance of the US government. It didn’t change what US tech companies would do in the US itself, because these countries had come together and created new global norms. So I think you have these three factors that come together. One is the one that you pointed to, this longstanding and growing sense of responsible business. The second is heightened momentum, almost the building of muscle through sustained exercise during the Trump administration. And the third is what is happening among a variety of democratic governments outside the United States.

So you would actually argue that the external influence into the US is greater than the expressed concern about the exportation of US mores to the outside, particularly the fact that tech companies can kick the President off. That’s the direction it goes here.

BS: If you really look at what has been most impactful, I would argue that the forces outside the US have been the most significant. So the answer is yes.

Or is that perhaps because the default is the US model and it’s being moderated —

BS: Well, it’s actually, I think, a reflection of the fact that if you’re a global technology business, most of the time it is far more efficient and legally compliant to operate a global model than to have different practices and standards in different countries, especially when you get to things that are so complicated. It’s very hard to have content moderators make decisions about individual pieces of content under one standard, let alone to try to do it and say, “Well, okay, we’ve evaluated this piece of content and it can stay up in the US but go down in France.” Then you add these additional layers of complexity that add both cost and the risk of non-compliance, which creates reputational risk.

So one of the points I made in the months that we were putting together the Christchurch Call was that, ironically, the US sitting on the sidelines (in my opinion at the time, and I think it’s been true in the two years since) was not likely to influence what happened in the US; it was just likely to mean that the US government itself was no longer a relevant participant in the conversation. The interesting question, and I’m not offering an opinion here, is: under German law, under French law, would government itself have put certain content and certain speakers out of bounds long before Twitter and Facebook did it after January 6th?

Oh, of course. I think this is a matter of them wanting to make the decisions, not have private companies make them.

BS: Exactly, that’s why it’s interesting. The dispute was not with the decision, it was because of who made it. But the interesting thing is they actually contributed to a consensus over the course of a couple of years, that then actually made the decision by those companies more likely.


Thomas Kurian, Google Cloud CEO

So basically, we’re a couple of months on from January 6, and I think that from my perspective, a lot of the conversation in general has really focused on moderation at the consumer level. And I wrote a piece a couple of years ago, A Framework for Moderation, making the case that you needed to think about it differently based on where you were in the stack. And so what I wanted to write about next week is to get a sense of how infrastructure companies are thinking about this particular issue, what sort of frameworks they have, et cetera. I talked to Microsoft yesterday, and I’m talking to some other companies in the stack, like Stripe and Shopify, basically companies that are not necessarily consumer facing but support other entities that are. And so that’s the focus of the questions I wanted to get to today, if that makes sense.

Well, I’m just curious, does Google have a framework in mind for thinking about this? Obviously it’s different when you’re in the fire, but right now, when things are relatively calm, how do you think about what might happen if there’s an issue with an app running on GCP in particular? Is there a framework or a way that Google thinks about this issue?

Thomas Kurian: So when we look at GCP, it’s technology that provides infrastructure for other companies to run their applications, their workloads, or their platforms on. Much of that technology, you could argue, has equivalent technological capabilities available for a customer to run those workloads on their own premises. So for example, virtualization existed before cloud made it possible at scale; databases existed before cloud made that possible at scale. And so our approach is to encapsulate the policy under which we make our technology available through what we call our Acceptable Use Policy and our Terms of Service. These are documents that we are happy to make available to you, Ben, but they govern basically the use of the technology within certain parameters. Those parameters are, broad brush: first, the legal framework of the country in which we’re delivering the service; second, there are certain things that we want to do to protect people’s safety, and we pass those obligations contractually to people through the ToS and the AUP.

So what goes into safety? Because you distinguish that from legal. So there’s legal and then there’s safety. So what’s the difference there?

TK: So for example, in our ToS, we don’t allow people to run, for example, weapons systems. We allow people to run, for example, solutions to protect against weapons, but not to build weapons and use weapons using our systems. That’s an example. We preclude people from selling firearms, for example, on our platforms. So there’s a well-defined definition of that. We’re happy to send that to you.

Has anything changed in your thinking about this moderation framework since January 6th?

TK: It’s a complicated question, if I can be frank. On the one hand, we provide general purpose technology to people. Secondly, because of the threat from cybersecurity and other things, people increasingly want to encrypt data and encrypt communication, so it becomes technically complex to moderate because we don’t own the encryption key, and frankly, we don’t want to be in the business of looking at people’s data.

So our normal practice with ToS or AUP violations is that if we receive a complaint from either an authorized third party or, for example, a recognized government institution, we have a responsibility at that point to investigate, and we will investigate and take appropriate action if it’s truly a violation of our ToS or AUP. In some cases, there is some additional complexity with that process because we distribute our software both direct to customer but also through resellers and distributors, and those resellers and distributors may be the ones that have the relationship with the end customer, so we can only advise them, we cannot tell them what to do. So we can advise them that we think this is a violation of the ToS.

And if it is a violation of the ToS, even if it’s several levels removed from you, do you definitely reserve the right to take that down? What’s your approach to that?

TK: If it’s a violation of our ToS, we have the obligation to talk to the reseller, but the reseller has the commercial relationship. In the past when we worked on this, resellers have been open to doing the right thing, because their contract with us is in turn governed by the ToS and they’re passing those obligations on to the end customer, but we have so far not taken direct action to shut off the customer; the reseller has the controls to shut them off. Technically, in some cases, Ben, it is also difficult because the customer happens to run under the reseller’s domain, so it’s difficult for us to intercede, if you know what I mean, technically.

No, absolutely. I think these are actually some of the hairiest questions. I would say there are three levels that you’re hinting at. One is that Google obviously has properties that have consumers, or producers, on them directly, whether that’s YouTube or whatever it might be. Then you have customers that run directly on GCP, and what I’m hearing from you is that with those customers, you reserve the right to intervene and say, “Hey, this isn’t cool. We’re going to kick you off if you don’t fix this.” But then there’s a third level where you’re at least two levels removed, and right now your approach is to speak to the people you have a commercial relationship with, but given how removed you are from it, it’s more problematic to step in, in that case. Is that a fair summary of the different levels and the way you’re thinking about them?

TK: That’s fair. Yeah, both more problematic but also sometimes technically infeasible. I’ll give you an example: imagine somebody wrote a multi-tenant software-as-a-service application on top of GCP, and they’re offering it, and one of the tenants in that application is doing something that violates the ToS but others are not. It wouldn’t be appropriate or even legally possible for us to shut off the entire SaaS application, and you can’t just say I’m going to throttle the IP addresses or the inbound traffic, because the tenant is below that level of granularity. In some cases it’s not even technically feasible, and so rather than do that, our model is to tell the customer, who we have a direct relationship with, “Hey, you signed an agreement that agreed to comply with our ToS and AUP, but now we have a report of a potential violation of that, and it’s your responsibility to pass those obligations on to your end customer.”

Obviously I get the technical infeasibility of doing this, but to what extent does principle also play a role, in that theoretically you could go all the way down the stack and censor anything, I mean you could go down to the DNS level if you wanted to, and GCP feels a bit of a responsibility to ensure neutrality in the stack, and that the correct level to act is the level where the harm is happening?

TK: The important thing is we have to have some framework through which we encapsulate our policy. Meeting the legal standard in the countries in which we operate is the minimum acceptable bar, but there are some additional things on top of that that define what we think is acceptable. In order to be able to manage that, we encapsulate it in the Terms of Service.

I get it. So here’s an example; you gave the multi-tenant SaaS environment. Imagine, if you will, a social network, where it’s a similar idea: the vast majority of content is fine, there is some content that perhaps Google finds objectionable, but that content is by definition at least two levels removed from Google because there’s the service in the middle. How would you think about that in the context of your ToS, and how would you approach that?

TK: That’s a complicated question because there’s no clear black-and-white answer. If that social platform, in your case the application, is promoting things that may cause harm as defined in our ToS, then they are technically in violation of our ToS, and we’d have to take action. If it’s a tiny percentage, for example: there have been scenarios, Ben, where a company, like a retailer, is running an application on a cloud and they didn’t put in the right security measures, and somebody compromised one of the virtual machines and hosted something that we consider a violation of our ToS; we’d remove that particular virtual machine, but we don’t kick out the entire retailer. Do you see what I mean? And so that’s a case-by-case evaluation. And to be honest, it’s such a nascent area, there’s no easy black-and-white answer I could give you, even though it would be easier to have such an answer.

Oh, no, absolutely. I mean I think there’s some perspective where right now, when there are no crises, fingers crossed, is a good time to articulate these things. And so I’m curious, what role does employee sentiment play in these decisions? That’s obviously something people outside of Google looking in wonder about: how do you guys actually figure this stuff out, because there’s often a lot of ruckus? What’s your perspective on that?

TK: We evolve our Acceptable Use Policies on a periodic basis. Remember, we need to evolve them in a thoughtful way, not react to individual crises. Secondly, we also need to evolve them at a measured frequency, otherwise customers stop trusting the platform. They’d be like, “Today I thought I was accepted, and tomorrow you changed it, and now I’m no longer accepted and I have to migrate off the platform.” So we have a fairly well-thought-out cadence, typically once every six months to once every twelve months, when we reevaluate it based on what we see. In that, we take a number of pieces of input from our legal departments. We have conversations with other, what I would call, infrastructure providers. When I say infrastructure providers, you mentioned Stripe and people like that; they are more technology platforms, as opposed to consumer-moderation, social platforms.

Right, exactly. Which I think is an important distinction.

TK: Yeah. And we also take input from a number of our senior executives who provide guidance on what the right use of technology would be. So for example, we were one of the first ones to say that using image recognition to identify people may be an issue, and we said you can use our AI APIs on our cloud, but we don’t want you to use them for that particular purpose. That’s an example.

What I’m hearing from you is that Google’s perspective is that you want to be upfront about what your stuff can and can’t be used for, and ideally, the more definitional you can be about that, the easier it is after the fact. What’s your default, though, for stuff you haven’t specified? When something comes up, is your default “we’re infrastructure, it should stay up but for an exceptional circumstance”, or is your default “we need to prevent harm and be aware of this”, something along those lines?

TK: First of all, remember we also have an obligation to our customers. When customers move stuff to a cloud, our cloud, we reduce the lock-in through technologies that we make available in open source, like Kubernetes, and things we’ve got that allow you to deploy across clouds, but customers do have a challenge if they have depended on you, put their workload on you, and are then told, “Okay, you can no longer run.”

To be clear, I’m with you in that regard. My perspective on Stratechery is that I’m concerned about these companies worldwide. Like after Twitter took Trump down, there was a lot of, I think, very justifiable pushback internationally saying, why is US politics having an impact on whether these companies are going to provide service or not? I mean, has that been something you’ve been concerned about, or that you’ve seen be an issue?

TK: So just to close out your prior question, we try to be as prescriptive as possible so that people have as much clarity as possible about what they can do and what they can’t do. Secondly, when we run into something that is a new circumstance, because the boundary of these things continues to move, if it’s a violation of what is considered a legally acceptable standard, we will take action much more quickly. If it’s not a direct violation of law but more debatable whether you should take action or not, as an infrastructure provider our default is don’t take action, but we will then work through a process to update our AUP if we think it should be a violation, and then we make that available through the updated AUP.

That makes sense to me. Do you get a sense that, given the American context with the First Amendment and the larger degree of freedom that American companies have — it makes sense to Americans that of course an American company can shut down the president, whereas a lot of countries are like, “We have no problem, we actually want more moderation, we want to be the ones to decide,” and I think that’s a fair way to characterize the European attitude. Do you get the sense that American companies with that freedom to act bear more responsibility to be more proactive in this case, or is it the opposite, where they have to be particularly careful not to overstep?

TK: It’s a good question, Ben. Remember cloud is a global utility, so we’re making our technology available to customers in many, many countries, and in each country we have to comply with the sovereign law of that country. So it is a complex question because what is considered acceptable in one country may be considered non-acceptable in another country.

Your example of the First Amendment in the United States and the way that other countries may perceive it gets complicated, and that’s one of the questions we’re working through as we speak, which we don’t have a clear answer on, which is: what action do you take if something is in violation of one country’s law when it is in direct contradiction with another country’s law? And for instance, because these are global platforms, if we say you’re not going to be allowed to operate in the United States, the same company could just as well use our regions in other countries to serve, do you see what I mean? There’s no easy answer on these things. We do feel that there are certain circumstances when the entire world would say this is a violation of the legal frameworks in almost all countries, in which case of course we would act on it.

Right, those are the easy ones.

TK: But it’s not an easy dimension, particularly as cloud becomes a bigger and bigger part of the IT infrastructure of countries. When cloud was 2% of a country’s IT infrastructure, people didn’t feel a dependence on it.

Right, and it was self-selected: the folks that could handle it were the ones who chose it.

TK: Because they said you as a company in our country chose it, and the risk is yours. Now, as cloud becomes a bigger and bigger part of it, governments do feel that they want to govern according to what is considered acceptable to their legal framework and their citizens. And in some cases, that is not necessarily in agreement with what the citizens of the United States find acceptable, so it’s a complicated issue. There’s no easy way to say we consider this a violation. And we are trying to be as predictable as possible. Given that ambiguity, the only thing you can do is to be as crisp and predictable as possible. And that’s why we encapsulate it in our ToS and AUP.

Just one quick follow-up on that. Do you see the trend being towards a lowest-common-denominator outcome where, given you’re global, it’s just easier to try to get to principles that are as generalizable as possible, or in the opposite direction, where you’re actually going to be quite customized on a region or even down to a country basis, just because the expectations are so different?

TK: So far, we have tried to get to what’s common, and the reality is, Ben, it’s super hard on a global basis to design software that behaves differently in different countries. It is super difficult. And at the scale at which we’re operating and the need for privacy, for example, it has to be software and systems that do the monitoring. You cannot assume that the way you’re going to enforce the ToS and AUP is by having humans monitor everything, I mean we have so many customers at such a large scale. And so that’s probably the most difficult thing: saying virtual machines behave one way in Canada, and a different way in the United States, and a third way…I mean that’s super complicated.


Matthew Prince, Cloudflare CEO

We last talked a couple of months after Christchurch, and you had just made the decision about 8chan. I’m just curious how, if at all, your view of the situation has changed since then? Or is it pretty much exactly what it was then?

Matthew Prince: So first of all, just where did we get to? So I think at that time, we had gone from saying, basically, we really don’t think we’re the right place to be controlling content, with the exception that obviously we have to be in compliance with laws, wherever they are around the world. We made what was a bit of a violation, I guess, of that original policy around The Daily Stormer, which was a group of, on one level, neo-Nazis, but really just very talented internet trolls.

Yeah. And this was the one where you said you woke up on the wrong side of the bed, right?

MP: Yeah, that was an internal email that somehow leaked and it was a little bit tongue-in-cheek. I mean, it was something we had thought about for quite some time, but I think that the point that I was trying to make to our internal team was, at some level, these decisions are all arbitrary and they come down to one person and whether that’s the CEO of the company, or whoever the CEO has appointed to make the decisions. These are pretty arbitrary decisions when they get made and it’s pretty easy to find inconsistencies around them. We made that decision knowing very well, that it would be a controversial one, and then courting the controversy as an opportunity to talk about how challenging these decisions were. Immanuel Kant would roll over in his grave, but we were using neo-Nazis as a means to an end to figure out what the right policies were.

On figuring it out, was that just figuring it out internally? Or was it also seeing what the response was, and did that impact the figuring out?

MP: It was almost entirely external. Internally, I think we knew what the risks were. We knew what the dangers were. We understand the difference between hosting and platforms and DNS and undersea cable operators. I think we had a pretty good conceptualization of that, but when we would go to talk to the media about it, when we would go to talk to politicians about it, one of two things would happen. Either someone would say, “Yeah, what you’re saying makes sense, therefore, I’m not going to write the story or I’m not going to bring this up in a policy argument.” Or they would say, “Yes, but these are horrible people. You should be doing something about it.” And so, it felt like at some point we needed to almost provoke that conversation.

And again, these were horrible people and so it felt really good. When you fire neo-Nazis as customers, you get a lot of people telling you what a great job you’re doing and how courageous you were; frankly, it’s the least courageous thing we’ve ever done, and it’s easy to do that. What we then did was spend almost two years — so that was I believe 2016 or 2017 — just talking with everyone about it. Lawmakers, civil society organizations, everyone, and on all different sides. So, the Southern Poverty Law Center, which was saying, “You should do more.” And then the Cato Institute, which was saying, “You should do less.” And really trying to just say, “Listen, here’s the situation that we’re in.” I think we came to kind of a nuanced change in what our policy was, where we said, “Listen, we shouldn’t be the first line of defense.” And the reason we shouldn’t be the first line of defense is that it is a stack, and as you said, we sit pretty low in that stack. For example, Shopify is a customer of ours, and so if we had, early on in a conversation, just said, “Oh, we’re going to take a whole bunch of things off,” we’d have effectively been making the decision on behalf of Shopify, our customer, without them having any agency in setting what their own policies were. It feels like as you get closer to the individual, it makes sense for that to be where the locus of content moderation is.

The ultimate responsibility for any piece of user-generated content that’s placed online lies with the user that generated it, and then maybe the platform, and then maybe the host, and then eventually you get down to the network, which is where we are. What I think changed was that there were certain platforms that were designed from the very beginning to thwart all legal responsibility, all types of editorial control. And so the individuals were bad people, the platforms themselves were bad people, the hosts were bad people. And when I say bad people, what I mean is people who just ignore the rule of law as a whole.

And purposely so.

MP: Purposely so. And so, at some level, when you get down to that it’s like, okay, you can’t rely on any of those layers, and obviously if there’s law, then we have to follow the law, but in those cases where there’s not, when it is a judgment call, we do reserve the right to say that we will pull that out. I think it’s a super rare case that you get that far down, and what we’ve seen from customers is, even ones that are controversial are like, “Listen, we’re following the law and we’re trying to do these things.” And I think it’s in part our responsibility to help with that; that’s why we’ve done things to help our customers better comply, whether it’s with content restrictions or child sexual abuse material and other things, in order to allow them to do the right thing and enable that. But every once in a while, there might be something that is so egregious and literally designed to be lawless, that it might fall on us. What was interesting was that we weren’t really super involved this last time around.

Last time, being January 6th?

MP: Yeah.

Obviously, that’s what prompted this talk.

MP: Yeah. And we had people who were on us that were yucky. I don’t like what they stand for, but what was interesting was, as we saw all of these other platforms struggle with this, I think Apple, AWS, and Google got a lot of attention, and it was interesting to see that same framework that we had set out being put to use. I’m not sure it’s exactly the same. Was Parler all the way to complete lawlessness, or were they just over their skis in terms of content moderation?

I’m curious about this. There’s obviously lawlessness, and that’s always a bare minimum. Is there anything that Cloudflare sees above lawlessness where, if no one higher than you in the stack will step in, you still see a need to step in? Have you defined that in any way?

MP: There’s all kinds of legal stuff, obviously. There might be all kinds of times where, for instance, the passage of the anti-child-trafficking law in the United States basically made it illegal for any prostitution establishment to use a US company’s infrastructure. Prostitution is legal in certain parts of the world, but then, as a US company, we had to say, “Listen, we’re kicking all of those things off.” That’s not a moral decision on our part; we’re a US company, we have to follow US law. But short of that, I think that so long as each of the different layers above you are complying with law and doing the right thing and cooperating with law enforcement, I won’t say never, because there are circumstances where you might decide that it’s the right thing, but I think it should be an extremely high bar for somebody that sits as low in the stack as we do. Because that policy has ripple effects across all of the customers that rely on us; when we did make the determination to fire what were lawless customers or neo-Nazis, you had major financial institutions and others that said, “Hey, we just have to make sure that you’re never going to do something like that to us.”

I think that that’s different than if you’re Facebook firing an individual user, or even if you’re a hosting provider firing an individual platform. The different layers of the stack, I think, do have different levels of responsibility, and that’s not to say we don’t have any responsibility; it’s just that we have to be very thoughtful and careful about it, I think more so than you have to be as you move up further in the stack.

Is it fair to say that from your perspective, the default should be different, in that you are going to default in almost all cases to keeping someone as a customer unless there are proactive violations happening? Is that a fair way to put it?

MP: Yeah. I mean, there are times that we won’t take somebody’s money. We have a free version of our service, and so people will use that, but there are definitely times that we choose not to let somebody pay us for something.

That’s interesting. So basically the free version has an extremely permissive default, but when it comes to actually having a billing relationship, then you’ll maybe draw a little bit of a tighter line.

MP: Yeah, I’m hesitant to talk about it here, but there are times that there are people who’d pay us, who we think are disgusting and usually, what happens when that happens, is that very quietly —

This is still a self-serve payment, there’s no sales team, right?

MP: Yeah. But there are very many people who pay us a lot, like tens of thousands of dollars, where we’re like, we just don’t like these people. We think that they are actively making the world a worse place, and they probably have the right to be on the internet, and whether we like it or not, the challenge is that it’s very hard to be on the internet without a service like Cloudflare. We’re not the only one that can do it, but there’s a relatively small set of companies today that have the resources to navigate the complexity that the internet has. And so if you believe that anyone has the right to be on the internet, you probably believe that anyone has the right to have some service like Cloudflare. And if that’s the case, every once in a while, people will want some of our paid services, and so when that happens, we typically will go to teams internally at the company that are hurt by whatever the organization stands for. So if it was the anti-green-people organization, then we would go to the green people in the company and say, “Listen, who do you think best represents your interests, and is best the opposite of this?” And then we’ll just, dollar for dollar, without netting anything out, donate the cash from them to that group, and we ask that it be kept confidential. We don’t make a big deal of it because we’re not doing it for the publicity, we’re doing it because it’s what we think is the right thing internally. And our team, I think, feels good about that.

Interesting. That’s very interesting. So you mentioned the right to the internet, and that, I think, is a very American conception. Cloudflare is an American company, and this idea of rights and freedoms goes back to some of the stuff around January 6th, where I think a lot of the response from Europe in particular was not that they were against moderation, it’s that, who are these tech executives to decide? Government should be deciding that. And I’m curious if you’ve seen or felt concerns from that perspective internationally in particular, where it’s like, “Okay, our concern is the private decision-making, as opposed to moderation generally.”

MP: I grew up in the US, my dad was a journalist for a part of his career, we spent time talking about the First Amendment, and I think that freedom of expression, if you can figure out how to manage the chaos that it creates, is probably the most long-term sustainable way to run a government, and so I think that’s positive. That said, if I’m in Germany and I say, “Hey, what about the First Amendment?”, they say very politely, “We understand that’s part of your tradition and it’s born out of your history, but we’ve had a very different history and so we have a very different tradition.” And I think you have to respect that. We are a company that operates globally; we have equipment running in over 100 countries around the world, we have to comply with the laws in all those countries, but more than that, we have to comply with the norms in all of those countries. What I’ve tried to figure out is, what’s the touchstone that gets us away from freedom of expression and gets us to something which is more universal and more fundamental around the world, and I keep coming back to what in the US we call due process, but around the rest of the world they’d call the rule of law.

Yeah. I think we had this conversation, but continue.

MP: And I think that is a big piece of it. I think the interesting thing about the events of January was that you had all these tech companies start controlling who is online, and it was actually Europe that came out and said, “Whoa, whoa, whoa. That makes us super uncomfortable.” I’m kind of with Europe. I actually think that’s more aligned with our perspective than anywhere else, which is, we’re not saying that every piece of information should be available everywhere in the world. So for instance, there are places around the world that restrict content which is online; Germany is probably the least controversial example you can give. When they do that, there’s lots of stuff we could do to try to thwart their restriction and make it more difficult for Germany to control what is seen in Germany.

But we don’t, as long as the government is being transparent, consistent, and accountable. It can’t just be, “Disappear this thing off the internet.” It has to be putting up something that says, “Due to German law, this thing has disappeared from the internet.” So long as the German rule doesn’t affect Canada, that seems like it’s really respecting the rule of law. Everywhere in the world, governments have some political legitimacy, and they certainly have a lot more political legitimacy than I do. And so we’re not saying everything should be available everywhere; we’re not anarchists. We have to follow the laws in all the countries we serve. The laws are different in each jurisdiction. It’s important that we comply with the laws in each jurisdiction in which we operate, and we should help our customers comply with the laws in each jurisdiction we operate in. Importantly though, and this is I think the piece where sometimes we lose Europe, the laws of one country can’t spill over into another country. A country has a sovereign right to control the networks inside its borders, subject to maybe some human rights floor and other things, but the minute those laws spill out beyond its borders, that’s infringing on the sovereign right of some other country. That’s been our argument around the world: “We’ll comply with the law, but we need to make sure that that law applies only in your specific country.”

Is there a case though, when it comes to the January 6th stuff, for a theory that I have? I’d actually be interested in your response to this. The level of freedom and autonomy afforded to US corporations, in part because of the First Amendment, is quite a bit greater than in a lot of places in the world. And my sense is there’s a Spiderman, “with great power comes great responsibility” bit to that, where in the US checks and balances are about more than just government; Corporate America, broadly speaking, is part of that. And there was a sense where Corporate America broadly speaking, with tech just being the part that had the capability, decided that enough was enough after January 6th, and that was the activity. And to your point, that was a uniquely American thing, and Europe should perhaps be comforted that this would only ever happen in America. I mean, there’s lots of stuff going on there, I’m curious what your sense of that is.

MP: I don’t know, that is a very charitable read of what went on. A less charitable read might be that there was a mob, both external, but probably more internal to those companies, that made them feel like they had to do something. If there were a Catalonian independence movement in Spain, and it turned out that a set of employees didn’t like that, you could very well have Twitter shutting down one side or the other, or whatever it is. And so the great heroic, patriotic “we did what was right for the country” story, I mean, I would love that to be true, but I’m not sure that if you actually dig into the reality of it, that’s what it was, as opposed to succumbing to external, but more importantly internal, pressures.

Do you feel any sort of corporate responsibility, given this relatively wide latitude that you are afforded?

MP: Well, for sure, and we do things all the time toward that. In 2016, when we saw both that we had foreign interference in elections and that we had peer companies, technical platforms, being used to subvert the electoral process, we sat and said, “Well, what can we do to deal with that?” We launched something called the Athenian Project, which helped protect election infrastructure around the world, and I’m super proud of the fact that cyber attacks were not a part of the 2020 election in any meaningful way, and it was in part because of the work of companies like us and Microsoft and Google and others that really stepped up. We do that internationally with things like Project Galileo, where we said, “Let’s put together a group of civil society organizations to figure out which organizations don’t have the resources to afford our services,” and we provided those services for free. We’re doing that right now with vaccine distribution, where we’re providing our services for free to anyone who’s helping distribute vaccines. And so again, I do start with the premise that, barring some legal restriction, the internet once upon a time was open to anyone. But the internet has gone from a light sail around the Mediterranean to the North Sea, and that is forcing more and more people to turn to services like Cloudflare in order to stay online. So if you start with the premise that basically anyone should be online, less whatever restrictions apply in their local area, then if we’re going to be able to continue to do that, the default rule should be that just about anyone should have access to Cloudflare, unless they’re violating the law.

One quick question: you worry about spillover from Europe, right? How do you manage not ending up with a lowest-common-denominator sort of thing? And is there any worry about cultural spillover, whether it’s spillover into the US of wanting to control particular things online, or perhaps the opposite, the US’s wanting to allow anything online spilling over elsewhere? There are kind of two currents going on.

MP: Yeah. I mean, I think the two non-exclusive dystopian futures are, one, that we all end up in our own little filter bubbles. To me, it’s a miracle there’s not yet a Fox News search engine. It’s crazy to me that there are really only two national search engines; how Germany doesn’t have a national search engine is beyond me. And maybe we’re going to get there, you can already feel it, where there are a lot of people trying to figure out what the conservative social network is going to be, and that’s kind of the natural state of things. If you look at pre-internet media, in every major town everywhere around the world there was a conservative newspaper and a liberal newspaper, so media tends to be polarized almost everywhere around the world. Television, and we talked about this last time, did a really good job of fighting this by imposing equal-time rules on itself, and you saw this across cultures around the world. The BBC and NBC, ABC, CBS all had similar rules in terms of trying to rise above the fray, because there was this new technology that was so incredibly powerful. But the downside of that was, even if you do get that neutrality right, there are a whole bunch of voices that don’t get heard; there weren’t a lot of black, brown, gay, female voices on national news anywhere around the world for a long time, and so those voices didn’t get as much coverage. That’s the downside, and yet once you start to take the gatekeepers away, you also get this polarization. So I think the key line that we have to maintain is that Germany can set whatever rules they want for Germany, but they have to be the rules inside of Germany.

And you can manage that okay? You can manage on a per-country basis, you feel good about that?

MP: Sure. I mean, for us, that’s easy, and then we can provide that to our customers as a function of what we’re doing. But I think the key is that you can say German rules don’t extend beyond Germany, French rules don’t extend beyond France, Chinese rules don’t extend beyond China, and that you have some human rights floor in there.

Right. But given the nature of the internet, isn’t that the whole problem? Because anyone in Germany can go to any website outside of Germany.

MP: That’s the way it used to be; I’m not sure that’s the way it’s going to be in the future. There are a lot of atoms under all these bits, and there’s an ISP somewhere, or a network provider somewhere, that’s controlling how that flows. So I think we have to follow the law in all the places around the world, and then we have to hold governments responsible to the rule of law, which is transparency, consistency, accountability. It’s not okay to just say something disappears from the internet, but it is okay to say that due to German law it disappeared from the internet, and if you don’t like it, here’s who you complain to, or here’s who you kick out of office, or whatever you do. If we can hold that, and let every country have their own rules inside of that, I think that’s what keeps us from slipping to the lowest common denominator.

And so even a German firewall, where Germany keeps stuff out, as long as it’s limited to Germany, you’re fine with that, because they can decide?

MP: I think the right policy to run a country is to have as much information available as possible. I kind of like the US model, but I think we need to recognize that the US model is a radically libertarian version of freedom of expression. There is nowhere else in the world that has tried the US experiment. And so trying to say, “Yes, we’re going to impose the US model on the rest of the world,” doesn’t work either. I think countries that have more freedom of expression will tend to win over the long term, and so I think the great arc of history will bend in that direction, but in the short term it is better for whatever that regime is to be localized and for the policy to be set by people who are stakeholders in it. It would be weird if the German policy all of a sudden applied to the UK, because there are very different stakeholder regimes there. But I think it would also be very strange to live in a world where we’re all subject to what has been the US version of this. From my perspective, I think the US version is probably the one that wins the day, but at the end of the day –

At the end of the day, you’re not going to impose it on everybody else.

MP: You can’t. And boy, there’s the alternative of the Chinese model, which sounds crazy until you just think about it as airwaves: if you’re putting up a radio station, before you broadcast you’ve got to get a license to use the spectrum. That’s how China thinks of the internet.

There’s a logic to it.

MP: There’s a logic to it, and the scary thing is that logic is starting to seep out. You’re starting to hear that same rhetoric coming out of India, you’re starting to hear that same rhetoric coming out of Brazil, and it seems like it’s going to be tough for us to maintain the sort of pure US version of the internet. But I hope that we don’t go all the way to, “You have to have a license to put anything online.”


These interviews formed the basis of this Weekly Article: Moderation in Infrastructure.