Sovereign Writers and Substack

There has been a bit of controversy around Substack over the last week; I’m not going to get into the specifics of various accusations made about various individuals, or their responses; however, I do think that there are a few fundamental issues about the Substack model specifically, and the shift to sovereign writers generally, that are being misunderstood.

I’m going to anchor on this piece from Peter Kafka at Recode:

Substack’s main business model is straightforward. It lets newsletter writers sell subscriptions to their work, and it takes 10 percent of any revenue the writers generate (writers also have to fork over another 3 percent to Stripe, the digital payments company)…

The money that Substack and its writers are generating — and how that money is split up and distributed — is of intense interest to media makers and observers, for obvious reasons. But the general thrust isn’t any different from other digital media platforms we’ve seen over the last 15 years or so.

From YouTube to Facebook to Snapchat to TikTok, big tech companies have long been trying to figure out ways they can convince people and publishers to make content for them without having to hire them as full-time content creators. That often involves cash: YouTube set the template in 2007, when it set up a system that let some video makers keep 55 percent of any ad revenue their clips generated…Like Substack, YouTube and the other big consumer tech sites fundamentally think of themselves as platforms: software set up to let users distribute their own content to as many people as possible, with as little friction as possible.

I’m not sure the connection to Facebook and YouTube holds (even with Substack Pro, which I’ll get to in a moment); as Kafka notes, Substack “lets newsletter writers sell subscriptions to their work”; that, by definition, means that Substack is not “figur[ing] out ways they can convince people and publishers to make content for them without having to hire them as full-time content creators”. Just look at my credit card statement, where I happened to find charges for Casey Newton’s Platformer and Bill Bishop’s Sinocism next to each other:

Credit card charges from Platformer and Sinocism (not Substack)

Notice that the names of the merchant, the phone number of the merchant, and the location are different — that’s because they are different merchants. Substack is a tool for Newton and Bishop to run their own business, no different than, say, mine; Kafka writes:

To be clear: You don’t need to work with a company like Substack or Ghost to create and sell your own newsletter. Ben Thompson, the business and technology writer whose successful newsletter served as the inspiration for Substack, built his own infrastructure cobbling together several services; my former colleague Dan Frommer does the same thing for his New Consumer newsletter. And Jessica Lessin, the CEO of the Information, told me on the Recode Media podcast that she’d consider letting writers use the paid newsletter tech her company has built for free.

Here’s what you see on your credit card statement for Stratechery:

Credit card statement from Stratechery

My particular flavor of membership management software is Memberful, but Memberful is not Stratechery’s publisher; I am. Memberful is a tool I happen to use to run my business, but it has no ownership of or responsibility for what I write. Moreover, Memberful — like Substack — doesn’t hold my customers’ billing data; Stripe does, and that charge is from my Stripe account, just as the first two charges are from Newton and Bishop’s respective Stripe accounts.

This is what makes “the intense interest of media makers and observers” so baffling. There is a very easy and obvious answer to “how that money is split up and distributed”: subscriber money goes to the person or publication the subscriber subscribes to. That’s it! Substack is a tool for the sovereign writer; the sovereign writer is not a Substack employee, creator, or contractor. Users quite literally pay writers directly, who pass on 10% to Substack; Substack doesn’t get any other say in “how that money is split up and distributed.”
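To make the flow of money concrete, here is a minimal sketch of how a single subscription payment splits under the terms Kafka describes; the exact fee mechanics — in particular whether Substack’s 10% and Stripe’s ~3% are both calculated on the gross price — are my assumption, and the function is purely illustrative:

```python
# Illustrative sketch: how one subscription payment splits between
# Stripe (~3%), Substack (10%), and the writer, per Kafka's description.
# Assumption: both fees are calculated on the gross price, here in cents.

def split_subscription(gross_cents: int) -> dict:
    stripe_fee = round(gross_cents * 0.03)    # payment processing
    substack_fee = round(gross_cents * 0.10)  # platform fee
    writer_take = gross_cents - stripe_fee - substack_fee
    return {"stripe": stripe_fee, "substack": substack_fee, "writer": writer_take}

# A $10/month subscription: roughly 30¢ to Stripe, $1.00 to Substack,
# and $8.70 to the writer.
print(split_subscription(1000))
```

The point, again, is that the writer is the merchant of record; Substack and Stripe are line items, not owners of the relationship.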

But what about Substack Pro?

Substack Pro

Back in 2017 I wrote a post called Books and Blogs that explained why subscriptions were a much better model for writers than books:

A book, at least a successful one, has a great business model: spend a lot of time and effort writing, editing, and revising it up front, and then make money selling as many identical copies as you can. The more you sell the more you profit, because the work has already been done. Of course if you are successful, the pressure is immense to write another; the payoff, though, is usually greater as well: it is much easier to sell to customers you have already sold to before than it is to find customers for the very first time…

Since then it has been an incredible journey, especially intellectually: instead of writing with a final goal in mind — a manuscript that can be printed at scale — Stratechery has become in many respects a journal of my own attempts to understand technology specifically and the way in which it is changing every aspect of society broadly. And, it turns out, the business model is even better: instead of taking on the risk of writing a book with the hope of one-time payment from customers at the end, Stratechery subscribers fund that intellectual exploration directly and on an ongoing basis; all they ask is that I send them my journals of said exploration every day in email form.

Recurring revenue is much better than selling a book once; however, just as you have to spend time to write a book before you can sell it, you need time to build up a subscriber base large enough to support writing full time. I accomplished this by writing Stratechery on nights and weekends while working at Microsoft and Automattic, and then, when I started charging, basically jumping off of the deep end; working writers can’t always do the former (I would bet that publications start being stricter about this going forward). This is where Substack Pro comes in; from Kafka:

But in some cases, Substack has also shelled out one-off payments to help convince some writers to become Substack writers, and in some cases those deals are significant. Yglesias says that when it lured him to the platform last fall, Substack agreed to pay him $250,000 along with 15 percent of any subscription revenue he generates; after a year, Yglesias’s take will increase to 90 percent of his revenue, but he won’t get any additional payouts from Substack.

As Yglesias told me via Slack (he stopped working as a Vox writer last fall but still contributes to Vox’s Weeds podcast), the deal he took from Substack is actually costing him money, for now. Yglesias says he has around 9,800 paying subscribers, which might generate around $860,000 a year. Had he not taken the Substack payment, he would keep 90 percent of that, or $775,000, but under the current deal, where he’ll keep the $250,000 plus 15 percent of the gross subscription revenue, his take will be closer to $380,000.

Substack has been experimenting with this kind of offer for some time, but last week, it began officially describing them as “Substack Pro” deals.

In short, the best analogy to Substack Pro is a book advance, which is definitely something that publishers do. In that case publishers give an author a negotiated amount of money in advance of writing a book for reasons that can vary; in the case of famous people the advance represents the outcome of a bidding war for what is likely to be a hit, while for a new or unknown author an advance provides for the author’s livelihood while they actually write the book. The publisher then keeps all of the proceeds of the book until the advance is paid back, and splits additional proceeds with the author, usually in an 85/15 split (sound familiar?); of course we don’t know the exact details of book deals, because they are not disclosed.1

At the same time, Substack Pro isn’t like a book advance at all in a way that is much more advantageous to the writer. Book publishers own the publishing rights and control the royalties as long as the book is in print; writers in the Substack Pro program still own their customers and all of the revenue past the first year, of which they can give 10% to Substack for continued use of their tool, or switch to another tool. This is where the comparison to YouTube et al falls flat: YouTube wants to be permanently in the middle of the creator-viewer relationship, while Substack remains to the side; from this perspective Substack Pro is more akin to an unsecured bank loan — success or failure is still determined by the readers.

The Real Scam

Now granted, there may be some number of Substack Pro participants who end up earning less than their advance, particularly if Substack sees Substack Pro as more of a marketing tool to shape who uses Substack; if Substack actually runs Substack Pro like a business, though, I would expect lots of deals like the Yglesias one, which, as Yglesias himself has noted, has turned out to be quite profitable for Substack.

Substack Pro made it possible for Yglesias to launch Slow Boring without worrying about paying the bills, and Substack is making a profit as its reward for bearing the risk of Yglesias not succeeding, or succeeding more slowly than he needed to. As for Yglesias, he may end up missing out on several hundred thousand dollars this year, but given that he is not selling a book but rather a subscription, he can look forward to a huge increase in revenue next year.
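To put numbers on “quite profitable”, here is a rough sketch of the year-one arithmetic using Kafka’s reported figures; it ignores Stripe’s ~3% processing fee, and assumes, per footnote 1 below, that the writer’s 15% applies to all revenue, with Substack keeping the remaining 85%:

```python
# Year-one arithmetic for the Yglesias deal, using Kafka's figures
# (a rough sketch that ignores Stripe's ~3% processing fee).
annual_revenue = 860_000  # ~9,800 paying subscribers, per Kafka

# Standard deal: the writer keeps 90%, Substack keeps 10%.
standard_writer = annual_revenue * 90 // 100     # $774,000 ("or $775,000")
standard_substack = annual_revenue * 10 // 100   # $86,000

# Substack Pro deal: a $250,000 advance plus 15% of all revenue to the writer;
# Substack keeps the remaining 85%.
pro_writer = 250_000 + annual_revenue * 15 // 100    # $379,000 ("closer to $380,000")
pro_substack = annual_revenue * 85 // 100 - 250_000  # $481,000

print(standard_writer - pro_writer)      # 395000: what the writer gives up in year one
print(pro_substack - standard_substack)  # 395000: Substack's gain for bearing the risk
```

That roughly $395,000 swing is the “several hundred thousand dollars” Yglesias is foregoing, and Substack’s compensation for writing a check before a single subscriber had signed up.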

This, needless to say, is not a scam, which is what Annalee Newitz argued:

For all we know, every single one of Substack’s top newsletters is supported by money from Substack. Until Substack reveals who exactly is on its payroll, its promises that anyone can make money on a newsletter are tainted. We don’t have enough data to judge whether to invest our creative energies in Substack because the company is putting its thumb on the scale by, in Hamish’s own words, giving a secret group of “financially constrained writers the ability to start building a sustainable enterprise.” We are, not to put too fine a point on it, being scammed.

It is, for the reasons I laid out above, easier to get started with a subscription business if you have an advance. No question. But this take is otherwise completely nonsensical: Substack’s top newsletters are at the top because they have the most subscribers paying the authors directly. For example, look at the “Politics” leaderboard, where Yglesias is seventh:

Substack's 'Politics' leaderboard

We already know that “Thousands of Subscribers” to Slow Boring means 9,800; given that 9,800 * $8/month = $78,400/month, and that The Weekly Dish sits above Slow Boring on a leaderboard ordered by revenue, we can surmise that The Weekly Dish has at least 15,680 subscribers ($78,400/month ÷ $5/month). Those are real people paying real dollars of their own volition, not because Substack is somehow magically making them do it.
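Spelled out, the inference looks like this; the only added assumption, as noted above, is that the leaderboard ranks newsletters by gross subscription revenue:

```python
# Inference sketch: anything ranked above Slow Boring on a revenue-ordered
# leaderboard must gross at least as much per month.
slow_boring_monthly = 9_800 * 8        # $78,400/month at $8/month
weekly_dish_price = 5                  # The Weekly Dish charges $5/month
weekly_dish_min_subscribers = slow_boring_monthly // weekly_dish_price
print(weekly_dish_min_subscribers)     # 15680
```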

Frankly, it’s hard to shake the sense that Newitz and other Substack critics simply find it hard to believe that there is actually a market for independent writers, which I can understand: I had lots of folks tell me Stratechery was a stupid idea that would never work, but the beautiful thing about being my own boss is that they don’t determine my success; my subscribers do, just as they do every author on Substack.

Still, that doesn’t change the fact that there is a real unfair deal in publishing, and it has nothing to do with Substack. Go back to Yglesias: while I don’t know what he was paid by Vox (it turns out that Substack, thanks to their leaderboards, is actually far more transparent about writers’ income than nearly anywhere else), I’m guessing it was a lot less than the $380,000 he is on pace for, much less the $775,000 he would earn had he forgone an advance.2 It was Vox, in other words, that was taking advantage of Yglesias.

This overstates things, to be sure; while Yglesias built his following on his own, starting with his own blog in 2002, Vox, which Yglesias co-founded, was a team effort, including capital from Vox Media. Still, if we accept the fact that Yglesias charging readers directly is the best measurement of the value those readers ascribe to his writing, then by definition he was severely under-compensated by Vox. The same story applies to Andrew Sullivan, the author of the aforementioned The Weekly Dish; Ben Smith reported:

But Mr. Sullivan is, as his friend Johann Hari once wrote, “happiest at war with his own side,” and in the Trump era, he increasingly used the weekly column he began writing in New York magazine in 2016 to dial up criticism of the American left. When the magazine politely showed him the door last month, Mr. Sullivan left legacy media entirely and began charging his fans directly to read his column through the newsletter platform Substack, increasingly a home for star writers who strike out on their own. He was not, he emphasizes, “canceled.” In fact, he said, his income has risen from less than $200,000 to around $500,000 as a result of the move.

Make that nearly $1 million, i.e. $800,000 of surplus value that New York Magazine showed the door.

Substack Realities

Of course things aren’t so simple; Sullivan, like several of the other names on that leaderboard, is, to put it gently, controversial. That he, along with other lightning-rod writers, ended up on Substack is more a matter of “where else would they go?” Again, the entire point is that Sullivan’s readers are paying Sullivan, which means Substack was an attractive option precisely because the company doesn’t decide who gets paid what — or if they get paid at all.

Just because Sullivan was forced to be a sovereign writer, though, doesn’t change the fact that writers who can command a paying audience have heretofore been significantly underpaid. That points to the real reason why the media has reason to fear Substack: it’s not that Substack will compete with existing publications for their best writers, but rather that Substack makes it easy for the best writers to discover their actual market value.

This is where Substack really is comparable to Facebook and the other big tech companies; the media’s revenue problems are a function of the Internet unbundling editorial and advertising. That Google and Facebook now make most of that advertising money is a consequence, not the cause. Similarly, media’s impending cost problem — as in they will no longer be able to afford writers that can command a paying audience — is a function of the Internet making it possible to go direct; that Substack is one of many tools competing to make this easier is similarly a consequence, not the cause.

This explains three other Substack realities:

  • First, Substack is going to have a serious problem retaining its most profitable writers unless it substantially reduces its 10% take.
  • Second, Substack is less threatened by Twitter and Facebook than many think; the problem with the social networks is that they want to own the reader, but the entire point of the sovereign writer is that they own their audience. Substack’s real threat will be lower-priced competitors.
  • Third, it would be suicidal for Substack to kick any successful writers off of its platform for anything other than gross violations of the law or its terms of service. That would be a signal for every successful writer to seek out a platform that is not just cheaper, but also freer (i.e. open-source).

This is also why Substack Pro is a good idea. To be honest, I was a tad bit generous above: signing up someone like Yglesias is closer to the “popular author bidding war” side of the spectrum, and may not be worth the trouble; what would be truly valuable is helping the next great writer build a business, perhaps in exchange for more lock-in or the rights to bundle their work. Ideally these writers would be the sort of folks who would have never gotten a shot in traditional media, because they don’t fit the profile of a typical person in media, and/or want to cover a niche that no one has ever thought to cover (these are almost certainly related).

The Sovereign Writer

I am by no means an impartial observer here; obviously I believe in the viability of the sovereign writer. I would also like to believe that Stratechery is an example of how this model can make for a better world: I went the independent publishing route because I had no other choice (believe me, I tried).

At the same time, I suspect we have only begun to appreciate how destructive this new reality will be for many media organizations. Sovereign writers, particularly those focused on analysis and opinion, depend on journalists actually reporting the news. This second unbundling, though, will divert more and more revenue to the former at the expense of the latter. Perhaps one day Substack, if it succeeds, will be the steward of a Substack Journalism program that offers a way for opinion writers and analysts to support those that undergird their work.3

What is important to understand, though, is that Substack is not in control of this process. The sovereign writer is another product of the Internet, and Substack will succeed to the extent it serves their interests, and be discarded if it does not.

I wrote a follow-up to this article in this Daily Update.


  1. Substack reportedly pays 15% of all revenue, not just revenue above-and-beyond the advance. 

  2. There is an anonymous Google Doc with self-reported media salaries; only three individuals make more than $380,000, and none more than $775,000.

  3. I am still bullish on the Local News Business Model

Moderation in Infrastructure

It was Patrick Collison, Stripe’s CEO, who pointed out to me that one of the animating principles of early 20th-century Progressivism was guaranteeing freedom of expression from corporations:

If you go back to 1900, part of the fear of the Progressive Era was that privately owned infrastructure wouldn’t be sufficiently neutral and that this could pose problems for society. These fears led to lots of common carrier and public utilities law covering various industries — railways, the telegraph, etc. In a speech given 99 years ago next week, Bertrand Russell said that various economic impediments to the exercise of free speech were a bigger obstacle than legal penalties.

Russell’s speech, entitled Free Thought and Official Propaganda, acknowledges “the most obvious” point, that laws against certain opinions were an imposition on freedom, but says of the power of big companies:

Exactly the same kind of restraints upon freedom of thought are bound to occur in every country where economic organization has been carried to the point of practical monopoly. Therefore the safeguarding of liberty in the world which is growing up is far more difficult than it was in the nineteenth century, when free competition was still a reality. Whoever cares about the freedom of the mind must face this situation fully and frankly, realizing the inapplicability of methods which answered well enough while industrialism was in its infancy.

It’s fascinating to look back on this speech now. On one hand, the sort of beliefs that Russell was standing up for — “dissents from Christianity, or belie[f]s in a relaxation of marriage laws, or object[ion]s to the power of great corporations” — are freely shared online or elsewhere; if anything, those Russell objected to are more likely today to insist on their oppression by the powers that be. What is certainly true is that those powers, at least in terms of social media, feel more centralized than ever.

This power came to the fore in early January 2021, when first Facebook, and then Twitter, suspended/banned the sitting President of the United States from their respective platforms. It was a decision I argued for; from Trump and Twitter:

My highest priority, even beyond respect for democracy, is the inviolability of liberalism, because it is the foundation of said democracy. That includes the right for private individuals and companies to think and act for themselves, particularly when they believe they have a moral responsibility to do so, and the belief that no one else will. Yes, respecting democracy is a reason to not act over policy disagreements, no matter how horrible those policies may be, but preserving democracy is, by definition, even higher on the priority stack.

This is, to be sure, a very American sort of priority stack; political leaders across the world objected to Twitter’s actions, not because they were opposed to moderation, but because it was unaccountable tech executives making the decision instead of government officials.

I do suspect that tech company actions will have international repercussions for years to come, but for the record, there is reason to be concerned from an American perspective as well: you can argue, as I did, that Facebook and Twitter have the right to police their platform, and, given their viral nature, even an obligation. The balance to that power, though, should be the openness of the Internet, which means the infrastructure companies that undergird the Internet have very different responsibilities and obligations.

A Framework for Moderation

I have made the case in A Framework for Moderation that moderation decisions should be based on where you are in the stack; with regard to Facebook and Twitter:

At the top of the stack are the service providers that people publish to directly; this includes Facebook, YouTube, Reddit, and other social networks. These platforms have absolute discretion in their moderation policies, and rightly so. First, because of Section 230, they can moderate anything they want. Second, none of these platforms have a monopoly on online expression; someone who is banned from Facebook can publish on Twitter, or set up their own website. Third, these platforms, particularly those with algorithmic timelines or recommendation engines, have an obligation to moderate more aggressively because they are not simply distributors but also amplifiers.

This is where much of the debate on moderation has centered; it is also not what this Article is about.

It makes sense to think about these positions of the stack very differently: the top of the stack is about broadcasting — reaching as many people as possible — and while you may have the right to say anything you want, there is no right to be heard. Internet service providers, though, are about access — having the opportunity to speak or hear in the first place. In other words, the further down the stack, the more legality should be the sole criteria for moderation; the further up, the more discretion and even responsibility there should be for content:

A drawing of The Position In the Stack Matters for Moderation

Note the implications for Facebook and YouTube in particular: their moderation decisions should not be viewed in the context of free speech, but rather as discretionary decisions made by managers seeking to attract the broadest customer base; the appropriate regulatory response, if one is appropriate, should be to push for more competition so that those dissatisfied with Facebook or Google’s moderation policies can go elsewhere.

The problem is that I skipped the part between broadcasting and access; today’s Article is about the big piece in the middle: infrastructure. As that simple illustration suggests, there is more room for action than at the access layer, but more reticence is in order relative to broadcast platforms. To figure out how infrastructure companies should think about moderation, I talked to four executives at various layers of infrastructure:

  • Patrick Collison, the CEO of Stripe, which provides payment services to both individual companies and to platforms.
  • Brad Smith, the President of Microsoft, which owns and operates Azure, on which a host of websites, apps, and services are run.
  • Thomas Kurian, the CEO of Google Cloud, on which a host of websites, apps, and services are run.
  • Matthew Prince, the CEO of Cloudflare, which offers networking services, including free self-serve DDoS protection, without which many websites, apps, and services, particularly those not on the big public clouds, could not effectively operate.

What I found compelling about these interviews was the commonality in responses; to that end, instead of my making pronouncements on how infrastructure companies should think about issues of moderation, I thought it might be helpful to let these executives make their own case.

The Line in the Stack

The first overarching principle was very much in line with the argument I laid out above: infrastructure is very different from user-facing applications, and should be approached differently.

Microsoft’s Brad Smith:

The industry is looking at the stack and almost putting it in two layers. At the top half of the stack are services that basically tend to meet three criteria or close to it. One is it is a service that has control over the postings or removal of individual pieces of content. The second is that the service is publicly facing, and the third is that the content itself has a greater proclivity to go viral. Especially when all three or even when say two of those three criteria are met, I think that there is an expectation, especially in certain content categories like violent extremism or child exploitative images and the like that the service itself has a responsibility to be reviewing individual postings in some appropriate way.

Cloudflare’s Matthew Prince:

I think that so long as each of the different layers above you are complying with law and doing the right thing and cooperating with law enforcement…it should be an extremely high bar for somebody that sits as low in the stack as we do…I think that that’s different than if you’re Facebook firing an individual user, or even if you’re a hosting provider firing an individual platform. The different layers of the stack, I think do have different levels of responsibility and that’s not to say we don’t have any responsibility, it’s just that we have to be very thoughtful and careful about it, I think more so than what you have to do as you move up further in the stack.

Stripe’s Patrick Collison:

We’re different from others. We’re financial services infrastructure, not a content platform. I’m not sure that the kind of neutrality that companies like Stripe should uphold is necessarily best for Twitter, YouTube, Facebook, etc. However, I do think that in some of the collective reckoning with the effects of social media, the debate sometimes underrates the importance of neutrality at the infrastructure level.

Your Layer = Your Responsibility

The idea of trusting and empowering platforms built on infrastructure to take care of their layer of the stack was a universal one. Collison made the case that to try and police the entire stack was unworkable in a free society (and that this explained why Stripe kicked the Trump campaign off after January 6th, but still supported the campaign indirectly):

This gets into platform governance, which is one of the most important dimensions of all of this, I think. We suspended the campaign accounts that directly used Stripe — the accounts where we’re the top-of-the-stack infrastructure. We didn’t suspend all fundraising conducted by other platforms that benefitted his campaign. We expect platforms that are built on Stripe to implement their own moderation and governance policies and we think that they should have the latitude to do so. This idea of paying attention to your position in the stack is obviously something you’ve written about before and I think things just have to work this way. Otherwise, we’re ultimately all at the mercy of the content policies of our DNS providers, or something like that.

Google’s Thomas Kurian noted there were practical considerations as well:

Imagine somebody wrote a multi-tenant software as a service application on top of GCP, and they’re offering it, and one of the tenants in that application is doing something that violates the ToS but others are not. It wouldn’t be appropriate or even legally possible for us to shut off the entire SaaS application because you can’t just say I’m going to throttle the IP addresses or the inbound traffic, because there’s no way that the tenant is below that level of granularity. In some cases it’s not even technically feasible, and so rather than do that, our model is to tell the customer, who we have a direct relationship with, “Hey, you signed an agreement that agreed to comply with our ToS and AUP [Acceptable Use Policy], but now we have a report of a potential violation of that and it’s your responsibility to pass those obligations on to your end customer.”

Still, Smith didn’t completely absolve infrastructure companies of responsibility:

The platform underneath doesn’t tend to meet those three criteria, but that doesn’t absolve the platform of all responsibility to be thinking about these issues. The test for, at the platform level, is therefore whether the service as a whole has a reasonable infrastructure and is making reasonable efforts to fulfill its responsibilities with respect to individualized content issues. So whether you’re a GCP or AWS or Azure or some other service, what people increasingly expect you to do is not make one-off decisions on one-off postings, but asking whether the service has a level of content moderation in place to be responsible. And if you’re a gab.ai and you say to Azure, “We don’t and we won’t,” then as Azure, we would say, “Well look, then we are not really comfortable as being the hosting service for you.”

Proactive Process

This middle way — give responsibility to the companies and services on top of your infrastructure, but if they fail, have a predictable process in place to move them off of the platform — requires a proactive approach. Smith again:

Typically what we try to do is identify these issues or issues early on. If we don’t think there’s a natural match, if we’re not comfortable with somebody, it makes more sense to let them know before they get on our service so that they can know that and they can find their own means of production if that’s what they want. If we conclude that somebody is reliant on us for their means of production, and we’re no longer comfortable with them, we should try to manage it through a conversation so they can find a means of production that is an alternative if that’s what they choose. But ideally, you don’t want to call them up at noon and tell them they have two hours before they’re no longer on the internet.

Kurian made the same argument against arbitrary decision-making and in favor of proactive process:

We evolve our Acceptable Use Policies on a periodic basis. Remember, we need to evolve it in a thoughtful way, not react to individual crisis. Secondly, we also need to evolve it in a way with a certain frequency, otherwise customers stop trusting the platform. They’d be like “Today, I thought I was accepted and tomorrow you changed it, and now I’m no longer accepted and I have to migrate off the platform”. So we have a fairly well thought out cadence, typically once every six months to once every twelve months, when we reevaluate that based on what we see…

We try to be as prescriptive as possible so that people have as much clarity as possible with what can they do and what they can’t do. Secondly, when we run into something that is a new circumstance, because the boundary of these things continue to move, if it’s a violation of what is considered a legally acceptable standard, we will take action much more quickly. If it’s not a direct violation of law but more debatable on should you take action or not, as an infrastructure provider, our default is don’t take action, but we will then work through a process to update our AUP if we think it’s a violation, and then we make that available through that.

What About Parler?

This makes sense as I write this on March 16, 2021, while the world is relatively calm; the challenge is holding to a commitment to this default when the heat is on, like it was in January. Prince noted:

We are a company that operates, we have equipment running in over 100 countries around the world, we have to comply with the laws in all those countries, but more than that, we have to comply with the norms in all of those countries. What I’ve tried to figure out is, what’s the touchstone that gets us away from freedom of expression and gets us to something which is more universal and more fundamental around the world and I keep coming back to what in the US we call due process, but around the rest of the world, they’d call it, rule of law. I think the interesting thing about the events of January were you had all these tech companies start controlling who is online and it was actually Europe that came out and said, “Whoa, Whoa, Whoa. That makes us super uncomfortable.” I’m kind of with Europe.

Collison made a similar argument about AWS’s decision to stop serving Parler:

Parler was a good case study. We didn’t revoke Parler’s access to Stripe. They’re a platform themselves and it certainly wasn’t clear to us in the moment that Parler should be held responsible for the events. (I’m not making a final assessment of their culpability — just saying that it was impossible for anyone to know immediately.) I don’t want to second guess anyone else’s decisions — we’re doing this interview because these questions are hard! — but I think it’s very important that infrastructure players are willing to delegate some degree of moderation authority to others higher in the stack. If you don’t, you get these problematic choke points…These sudden, unpredictable flocking events can create very intense pressure and I think responding in the moment is usually not a recipe for good decision making.

I was unable to speak with anyone from AWS; the company said in a court filing that it had given the social networking service multiple warnings that it needed more stringent moderation from November 2020 on. AWS said in its filing:

This case is not about suppressing speech or stifling viewpoints. It is not about a conspiracy to restrain trade. Instead, this case is about Parler’s demonstrated unwillingness and inability to remove from the servers of Amazon Web Services (“AWS”) content that threatens the public safety, such as by inciting and planning the rape, torture, and assassination of named public officials and private citizens. There is no legal basis in AWS’s customer agreements or otherwise to compel AWS to host content of this nature. AWS notified Parler repeatedly that its content violated the parties’ agreement, requested removal, and reviewed Parler’s plan to address the problem, only to determine that Parler was both unwilling and unable to do so. AWS suspended Parler’s account as a last resort to prevent further access to such content, including plans for violence to disrupt the impending Presidential transition.

This echoes the argument that Prince made in 2019 in the context of Cloudflare’s decision to remove its (free) DDoS service from 8chan. Prince explained:

The ultimate responsibility for any piece of user-generated content that’s placed online is the user that generated it and then maybe the platform, and then maybe the host and then, eventually you get down to the network, which is where we are. What I think changed was, there were certain cases where there were certain platforms that were designed from the very beginning, to thwart all legal responsibility, all types of editorial. And so the individuals were bad people, their platforms themselves were bad people, the hosts were bad people. And when I say bad people, what I mean is, people who just ignore the rule of law…[So] every once in a while, there might be something that is so egregious and literally designed to be lawless, that it might fall on us.

In other words, infrastructure companies should defer to the services above them, but if no one else will act, they might have no choice; still, Prince added:

What was interesting was as we saw all of these other platforms struggle with this and I think Apple, AWS, and Google got a lot of attention, and it was interesting to see that same framework that we had set out, being put out. I’m not sure it’s exactly the same. Was Parler all the way to complete lawlessness, or were they just over their skis in terms of content moderation?

This is where I go back to Smith’s arguments about proactive engagement, and Kurian’s focus on process: if AWS felt Parler was unacceptable, it should have moved sooner, like Microsoft appears to have done with Gab several years ago. The seeming arbitrariness of the decision was directly related to the lack of proactive management on AWS’s part.

U.S. Corporate Responsibility

An argument I made about the actions of tech companies after January 6 is that they are best understood as a part of America’s checks-and-balances; corporations have a wide range of latitude in the U.S. — see the First Amendment, for example, or the ability to kick the President off of their platform — but as Spider-Man says, “With great power comes great responsibility”. To that end U.S. tech companies were doing their civic duty in ensuring an orderly transfer of power, even though it hurt their self-interest. I put this argument to Collison:

I think you’re getting at the idea that companies do have some kind of ultimate responsibility to society, and that that might occasionally lead to quite surprising actions, or even to actions that are inconsistent with their stated policies. I agree. It’s important to preserve the freedom of voluntary groups of private citizens to occasionally act as a check on even legitimate power. If some other company decided that the events of 1/6 simply crossed a subjective threshold and that they were going to withdraw service as a result, well, I think that’s an important right for them to hold, and I think that the aggregate effect of such determinations, prudently and independently applied, will ultimately be a more robust civic society.

Prince was more skeptical:

That is a very charitable read of what went on…I think, the great heroic patriotic, “We did what was right for the country”, I mean, I would love that to be true, I’m not sure that if you actually dig into the reality of that, that it was that. As opposed to succumbing to the pressure of external, but more importantly, internal pressures.

Smith took the question in a different direction, arguing that tech company actions in the U.S. are increasingly guided by expectations from other countries:

I do think that there is, as embedded in the American business community, a sense of corporate social responsibility and I think that has grown over the last fifteen years. It’s not unique to the United States because I could argue that in certain areas, European business feels the same way. I would say that there are two factors that add to that sense of American corporate responsibility as it apply to technology and content moderation that are also important. One is, during the Trump administration, there was a heightened expectation by both tech employees and civil society in the United States that the tech sector needed to do more because government was likely to do less. And so I think that added to pressure as well as just a level of activity that grew over the course of those four years, that was also manifest on January 6th…

There is a second factor that I also think is relevant. There’s almost an arc of the world’s democracies that are creating common expectations through new laws outside the United States that I think are influencing then expectations for what tech companies will do inside the United States.

Kurian immediately jumped to the international implications as well:

Cloud is a global utility, so we’re making our technology available to customers in many, many countries, and in each country we have to comply with the sovereign law of that country. So it is a complex question because what is considered acceptable in one country may be considered non-acceptable in another country.

Your example of First Amendment in the United States and the way that other countries may perceive it gets complicated, and that’s one of the questions we’re working through as we speak, which we don’t have a clear answer on, which is what action do you take if it’s in violation of one country’s law when it is a direct contradiction in another country’s law? And for instance, because these are global platforms, if we say you’re not going to be allowed to operate in the United States, the same company could just as well use our regions in other countries to serve, do you see what I mean? There’s no easy answer on these things.

The Global Internet?

While there was universal agreement on the importance of understanding the different levels of the stack, the question of how to handle moderation globally drew the widest range of responses.

Collison argued that a bias towards neutrality was the only way a service could operate globally:

When it comes to moderation decisions and responsibilities as internet infrastructure, that pushes you to an approach of relative neutrality precisely so that you don’t supersede the various governmental and democratic oversight and civil society mechanisms that will (and should) be applied in different countries.

Kurian highlighted the technical challenges in any other approach:

We have tried to get to what’s common, and the reality is it’s super hard on a global basis to design software that behaves differently in different countries. It is super difficult. And at the scale at which we’re operating and the need for privacy, for example, it has to be software and systems that do the monitoring. You cannot assume that the way you’re going to enforce ToS and AUPs is by having humans monitor everything, I mean we have so many customers at such a large scale. And so that’s probably the most difficult thing is saying virtual machines behave one way in Canada, and a different way in the United States, and a third way, I mean that’s super complicated.

Smith made the same argument:

If you’re a global technology business, most of the time, it is far more efficient and legally compliant to operate a global model than to have different practices and standards in different countries, especially when you get to things that are so complicated. It’s very hard to have content moderators make decisions about individual pieces of content under one standard, but to try to do it and say, “Well, okay, we’ve evaluated this piece of content and it can stay up in the US but go down in France.” Then you add these additional layers of complexity that add both cost and the risk of non-compliance which creates reputational risk.

You can understand Google and Microsoft’s desire for consistency: what makes the public cloud so compelling is its immense scale, but you lose many of those benefits if you have to operate differently in every country. Prince, though, thinks that is inevitable:

I think that if you could say, German rules don’t extend beyond Germany and French rules don’t extend beyond France and Chinese rules don’t extend beyond China and that you have some human rights floor that’s in there.

But given the nature of the internet, isn’t that the whole problem? Because, anyone in Germany can go to any website outside of Germany.

That’s the way it used to be, I’m not sure that’s going to be the way it’s going to be in the future. Because, there’s a lot of atoms under all these bits and there’s an ISP somewhere, or there’s a network provider somewhere that’s controlling how that flows and so I think that, that we have to follow the law in all the places that are around the world and then we have to hold governments responsible to the rule of law, which is transparency, consistency, accountability. And so, it’s not okay to just say something disappears from the internet, but it is okay to say due to German law it disappeared from the internet. And if you don’t like it, here’s who you complain to, or here’s who you kick out of office so you do whatever you do. And if we can hold that, we can let every country have their own rules inside of that, I think that’s what keeps us from slipping to the lowest common denominator.


Prince’s final conclusion is along the same lines as where I landed in Internet 3.0 and the Beginning of Tech History. To date the structure of the Internet has been governed by technological and especially economic incentives that drove towards centralization and Aggregation; after the events of January, though, political considerations will increasingly drive decision-making.

For many infrastructure companies this presents an opportunity to abstract away this complexity for other companies; Stripe, to take a pertinent example, adroitly handles different payment methods and tax regimes with a single API. The challenge is more profound for the public clouds, though, which achieve their advantage not by abstracting away complexity, but by leveraging the delivery of universal primitives at scale.

This is why I take Smith’s comments as more of a warning: a commitment to consistency may lead to the lowest common denominator outcome Prince fears, where U.S. social media companies overreach on content, even as competition is squeezed out at the infrastructure level by policies guided by non-U.S. countries. It’s a bad mix, and public clouds in particular would be better off preparing for geographically-distinct policies in the long run, even as they deliver on their commitment to predictability and process in the meantime, with a strong bias towards being hands-off. That will mean some difficult decisions, which is why it’s better to make a commitment to neutrality and due process now.

You can read the interviews from which this Article was drawn here.

The Roblox Microverse

The degree to which Stratechery Weekly Articles are pre-planned varies; Eddy on Twitter, though, had this article pegged:

Tomorrow, after a number of delays, including switching from an IPO to a Direct Listing, is finally the day that Roblox goes public; it’s a company I can’t wait to write about.

The Evolution of Video Games

The article Eddy was replying to was Clubhouse’s Inevitability, particularly this chart:

The most obvious difference between Clubhouse and podcasts is how dramatically easier it is to both create a conversation and to listen to one. This step change is very much in line with the shift from blogging to Twitter, from website publishing to Instagram, or from YouTube to TikTok.

A drawing of Clubhouse's Similarity to Twitter, Instagram, and TikTok

Secondly, like those successful networks, Clubhouse centralizes creation and consumption into a tight feedback loop. In fact, conversation consumers can, by raising their hand and being recognized by the moderator, become creators in a matter of seconds.

Compare this to how Roblox describes their business in their S-1:

An average of 36.2 million people from around the world come to Roblox every day to connect with friends. Together they play, learn, communicate, explore, and expand their friendships, all in 3D digital worlds that are entirely user-generated, built by our community of nearly 7 million active developers. We call this emerging category “human co-experience,” which we consider to be the new form of social interaction we envisioned back in 2004. Our platform is powered by user-generated content and draws inspiration from gaming, entertainment, social media, and even toys.

Here’s the question raised by Eddy, though: how might you fit video games onto this evolution of media framework? You could start with analog games, progress to video games, and then to casual games, as Eddy suggested, but it’s worth noting that video games preceded the web by many years; I think it makes more sense to make traditional video games the base unit. More importantly, that graphic was about creation, not consumption, and in that regard, video games fit quite nicely:

  • Step 0 — Pre-Internet: The primary way to distribute video games was on consoles, which were controlled by the console makers; computer gaming was more open, but still required significant distribution capabilities. This was the newspaper era of video game publishing.

  • Step 1 — Democratization: The Internet made it possible to distribute games directly, meaning that anyone could be a publisher; over time the increase in broadband penetration made casual web-based gaming (originally via Flash and later on Facebook) much more accessible.

  • Step 2 — Aggregation: Mobile dramatically increased the market for video games by ensuring nearly everyone had a video game device in their pockets. Note that this increased the market in two ways: first, there were more potential players, and second, all potential players had more opportunities to play games. Mobile, though, meant App Stores. This was a boon in that it made it easy to distribute video games in a way that customers were willing to trust and experiment with, and built-in payments unlocked entirely new ways of making money. It also meant that App Stores were the only way to reach customers, and you had to pay 30% for the privilege (these advantages and disadvantages are, of course, the exact same).

  • Step 3 — Transformation: This step is when the medium in question becomes something fundamentally different because of the Internet. I explained in Clubhouse’s Inevitability:

Even with the explosion of content resulting from democratizing publishing, what was actually published was roughly analogous to what might have been published in the pre-Internet world. A blog post was just an article; an Instagram post was just a photo; a YouTube video was just a TV episode; a podcast was just a radio show. The final step was transformation: creating something entirely new that was simply not possible previously.

Along those lines, a lot of video games, particularly mobile games, are just mobile versions of what has been available for decades; at the same time, video games have incorporated a lot of things that are only possible on the Internet. Multi-player games have been widespread for over twenty years, and the entire concept of in-app purchases has transformed business models. Both are uniquely enabled by the Internet. Even with that caveat, though, Roblox is something entirely new.

The Metaverse

Back to the Roblox S-1:

Some refer to our category as the metaverse, a term often used to describe the concept of persistent, shared, 3D virtual spaces in a virtual universe. The idea of a metaverse has been written about by futurists and science fiction authors for over 30 years. With the advent of increasingly powerful consumer computing devices, cloud computing, and high bandwidth internet connections, the concept of the metaverse is materializing.

The term “Metaverse” was coined by Neal Stephenson in his seminal 1992 science fiction novel Snow Crash:

Like any place in Reality, the Street is subject to development. Developers can build their own small streets feeding off of the main one. They can build buildings, parks, signs, as well as things that do not exist in Reality, such as vast hovering overhead light shows, special neighborhoods where the rules of three-dimensional spacetime are ignored, and free-combat zones where people can go to hunt and kill each other.

The only difference is that since the Street does not really exist — it’s just a computer-graphics protocol written down on a piece of paper somewhere — none of these things is being physically built. They are, rather, pieces of software, made available to the public over the worldwide fiber-optics network. When Hiro goes into the Metaverse and looks down the Street and sees buildings and electric signs stretching off into the darkness, disappearing over the curve of the globe, he is actually staring at the graphic representations — the user interfaces — of a myriad different pieces of software that have been engineered by major corporations.

Roblox is but one company — more on this in a moment — but the way in which it describes its platform certainly approaches Stephenson’s vision:

The Roblox Platform has a number of key characteristics:

  • Identity: All users have unique identities in the form of avatars that allow them to express themselves as whoever or whatever they want to be. These avatars are portable across experiences.
  • Friends: Users interact with friends, some of whom they know in the real world and others who they meet on Roblox.
  • Immersive: The experiences on Roblox are 3D and immersive. As we continue to improve the Roblox Platform, these experiences will become increasingly engaging and indistinguishable from the real world.
  • Anywhere: Users, developers and creators on Roblox are from all over the world. Further, the Roblox Client operates on iOS, Android, PC, Mac, and Xbox, and supports VR experiences on PC using Oculus Rift, HTC Vive and Valve Index headsets.
  • Low Friction: It is simple to set up an account on Roblox, and free for users to enjoy experiences on the platform. Users can quickly traverse between and within experiences either on their own or with their friends. It is also easy for developers to build experiences and then publish them to the Roblox Cloud so that they are then accessible to users on the Roblox Client across all platforms.
  • Variety of Content: Roblox is a vast and expanding universe of developer and creator-built content. As of September 30, 2020, there were over 18 million experiences on Roblox, and in the twelve months ended September 30, 2020, over 12 million of these were experienced by our community. There are also millions of creator-built virtual items with which users can personalize their avatars.
  • Economy: Roblox has a vibrant economy built on a currency called Robux. Users who choose to purchase Robux can spend the currency on experiences and on items for their avatar. Developers and creators earn Robux by building engaging experiences and compelling items that users want to purchase. Roblox enables developers and creators to convert Robux back into real-world currency.
  • Safety: Multiple systems are integrated into the Roblox Platform to promote civility and ensure the safety of our users. These systems are designed to enforce real-world laws, and are designed to extend beyond minimum regulatory requirements.

Growth at Roblox has been driven primarily by a significant investment in technology and two mutually reinforcing network effects: content and social.

In short, Roblox isn’t a game at all: it is a world in which one of the things you can do is play games, with a persistent identity, persistent set of friends, and persistent money, all disconnected from the device that you use to access the world. That is the transformational change.

Think about all the previous evolutions of gaming:

  • Video games required a console, PC, or phone
  • Networked games connected your console or computer or phone to another console or computer or phone
  • In-app purchases transformed the business model of games by leveraging the payment system provided by the console, PC, or phone

It should be pointed out that while consoles and phones have fairly similar models, the open nature of the PC left room for Steam to capture the distribution and payment functionality; still, the device was the center of your gaming experience, and most games were silos. Some games have sought to break these walls down; Fortnite has been particularly aggressive in enabling cross-platform play.

Roblox, though, isn’t simply the same game everywhere, it’s the same persistent world everywhere, from PC to console (Xbox, not PlayStation) to smartphone, in which games happen to exist. It’s a metaverse…kind of.

The Microverse

The problem with invoking the “Metaverse” in the context of Roblox is that the traditional conception was a virtual world that rivaled the real world; anyone could plug into it from anywhere, with full interoperability. Roblox, though, is only Roblox.

That’s actually a benefit: by controlling everything Roblox can bring all of the disparate parts of gaming into one place; instead of one app for social interactions, another app for purchases, and a different app for every different game, everything is all in the same place. This also makes Roblox easier to develop for: by constraining graphics to a consistent toolbox it is very easy to build something new.

Three Roblox games

This creates the conditions for the interlocking feedback loops that characterize transformational products; by reducing the prominence and feature set of games, Roblox made it possible to create something bigger. A microverse.

This actually fits the pattern of other transformational products. The feed, for instance, relies on reducing all types of content, from posts to photos to links, to the same format, such that they can all be incorporated into a greater whole. It’s a reason why I think that Clubhouse being all audio actually gives it an advantage relative to Twitter: that leaves more room for user entrepreneurship, both in the kinds of rooms created and in the norms around behavior (Twitter realized the same benefit relative to blogs with its 140-character constraint). Similarly, Roblox games aren’t really competitive with non-Roblox games: they’re “worse” in any traditional sense, because the things that make them “better” are enabled precisely by imposing constraints.

Roblox and the App Store

That leads to perhaps the most surprising thing about Roblox:

The Roblox "App Store"

That’s the screen you see when you launch the app, and I have to say, it looks an awful lot like an App Store! That’s a problem because Apple states in its App Store Guidelines that “Creating an interface for displaying third-party apps, extensions, or plug-ins similar to the App Store or as a general-interest collection” is “unacceptable”.

On one hand, perhaps Roblox is fine because these are not 3rd-party App Store apps, unlike, say, the rejected Facebook Gaming app. But then again, Xbox Game Pass wants to launch 3rd-party games that run in the cloud, not on the iPhone at all, and Apple also said no. I guess it is enough that these are, without question, Roblox games. They are sui generis.

Of course Apple (and Google) is still taking its share; Roblox has to pay 30% on every purchase of Robux, its in-game currency, as will every other would-be platform on top of their platform (like Clubhouse or Twitter). I very much hope, though, that Apple will be content with that: the reality is that Roblox apps really are something different, simpler in isolation than any one iOS app, yet something new and innovative as a collective. It’s a perfect example of the opportunity I wrote about in 2020’s The End of the Beginning:

The implication of this view should at this point be obvious, even if it feels a tad bit heretical: there may not be a significant paradigm shift on the horizon, nor the associated generational change that goes with it. And, to the extent there are evolutions, it really does seem like the incumbents have insurmountable advantages: the hyperscalers in the cloud are best placed to handle the torrent of data from the Internet of Things, while new I/O devices like augmented reality, wearables, or voice are natural extensions of the phone.

In other words, today’s cloud and mobile companies — Amazon, Microsoft, Apple, and Google — may very well be the GM, Ford, and Chrysler of the 21st century. The beginning era of technology, where new challengers were started every year, has come to an end; however, that does not mean the impact of technology is somehow diminished: it in fact means the impact is only getting started.

Indeed, this is exactly what we see in consumer startups in particular: few companies are pure “tech” companies seeking to disrupt the dominant cloud and mobile players; rather, they take their presence as an assumption, and seek to transform society in ways that were previously impossible when computing was a destination, not a given. That is exactly what happened with the automobile: its existence stopped being interesting in its own right, while the implications of its existence changed everything.

Roblox is the exact sort of platform that is only possible when you accept the reality that the platforms on which it rests aren’t going anywhere. The responsibility of those foundational platforms is to give room to let these microverses flourish, without legislating or taxing them to death.

I wrote a follow-up to this article in this Daily Update.

The Web’s Missing Interoperability

It was 2004, and Tim O’Reilly needed a name for his new conference; “Web 2.0” was hatched in a brainstorming session. A year later O’Reilly would flesh the concept out with his definitive 2005 post What is Web 2.0, but given that so many Web 2.0 applications were about creating platforms that were then made valuable with user-generated content, it feels appropriate that O’Reilly made a name first and added the content to justify it later. And, in the spirit of Web 2.0, I’m not going to quote O’Reilly’s article but rather the Wikipedia entry for a definition:

Web 2.0…refers to websites that emphasize user-generated content, ease of use, participatory culture and interoperability (i.e., compatible with other products, systems, and devices) for end users.

Seventeen years on and there is more user-generated content than ever, in part because it is so easy to generate: you can type an update on Facebook, post a photo on Instagram, make a video on TikTok, or have a conversation on Clubhouse. That, though, points to Web 2.0’s failure: interoperability is nowhere to be found. Twitter, which has awoken from its years-long stupor to launch or announce a whole host of new products, provides an excellent lens through which to examine the drivers of this centralization.

Super Follows and the Content-Creation Loop

While the specific details of Super Follows are still hazy, the idea of Twitter users being able to charge followers for special access makes all kinds of sense for Twitter the company. First, the items of value, particularly exclusive tweets or the ability to interact, are by definition exclusive to Twitter; only Twitter has tweets! Second, the best place to find Super Followers will be amongst your regular followers; access to the ideal user acquisition channel is built in. Third, Twitter will be able to make money coming and going: the obvious advertising mechanism for finding new Super Followers would be Twitter’s own ad products; Twitter could even make advertising “free”, collecting its payment from future subscription revenue.

This points to the first factor driving centralization: as I explained two weeks ago in Clubhouse’s Inevitability, the most compelling apps for user-generated content tie creation and consumption into a tight loop, bound together with network effects and presented with a feed that neither creators nor consumers want to leave. This isn’t nefarious, it’s simply good product design.

Revue and Distribution

The logic of Twitter getting into newsletters with its purchase of Revue is not quite as obvious, but still compelling: long-form content is very different from 280 characters, and there is a certain allure to a company like Substack that is completely focused on newsletters. At the same time, the most challenging part of building a subscription business is customer acquisition, and Twitter is an obvious channel for doing so.1

This is the second factor driving centralization: in a world where distribution is free the real cost is user acquisition, which means it is often easier to give existing users new functionality than it is for a new service based on the functionality to acquire new users. The canonical example of this dynamic is Stories: while Instagram didn’t kill Snapchat, the audacity with which Facebook copied Stories stopped the latter’s growth, at least for a few years.

Business concerns are obviously a major driver here as well: ad-based services want to keep users on their own platform instead of sending them somewhere else, even if it incurs short-term costs; engagement is the recipe for long-term revenue growth.

Spaces and Social Graphs

I’m less convinced than many that Spaces, Twitter’s Clubhouse competitor, will crush the startup like Periscope, Twitter’s live streaming service, once crushed Meerkat. I think the audio streaming market is much larger, for one, and Clubhouse has much more of a head start. Still, I can understand the argument: when it comes to a social network the most compelling feature is if you know people using it, and Twitter already has an existing social graph, as well as a good idea of what you are already interested in based on whom you follow.

That is also why it is so important that Clubhouse incentivizes its users to import their contacts: the startup is bootstrapping a social network off of your phone’s address book, in stark contrast to Meerkat, which directly imported your Twitter contacts, right up until Twitter cut it off. Twitter had learned its lesson from Instagram, which bootstrapped its social network on top of the Twitter graph; Twitter eventually cut off Instagram in-line image sharing, but by then it was too late. To that end, the most Twitter could do to Clubhouse is stop its links from unfurling; it can’t stop Clubhouse’s notifications or use of contacts.

Censorship and Competition

New features are a welcome respite from the reason that Twitter is usually in the news: controversy and charges of censorship, or pleas for more of it. Some folks want Twitter to control more speech, some want Twitter to control less, but nearly all are convinced that Twitter is on the side of their enemies. Still, to be fair, Trump supporters have the stronger argument in that regard, given the fact that Twitter suspended the former President’s account (a decision that I ultimately argued for, even though it was very close).

The problem is that the solution proposed by many Republicans — revoking Section 230 — makes absolutely zero sense. Section 230 doesn’t protect the rights of private companies to make their own moderation decisions; the First Amendment does! In fact, removing Section 230 protection from tech platforms would lead them to censor far more, the better to avoid the inevitable flood of pointless lawsuits that would follow.

A better solution is more competition,2 but the reality of social networking is that new services that succeed do so by focusing on a different aspect of human communication. Twitter is primarily text, for example, while Instagram is pictures, and TikTok is video (this is another reason why I am bullish on Clubhouse relative to Twitter: I think being focused on audio is a big advantage). Facebook is the record of your life, while Snapchat is about ephemerality, and messaging apps about groups and logistics. The first time I tried to map the gamut of social media experiences, I opened with that famous Walt Whitman line from Song of Myself:3

Do I contradict myself?
Very well then I contradict myself,
(I am large, I contain multitudes.)

This is why the FTC’s hilarious attempt to define Facebook’s market in its absurd antitrust lawsuit was so misguided; of course Facebook has a monopoly on being Facebook, just as Twitter has a monopoly on being Twitter. It is quite clear, though, that no service has a monopoly on user-generated content or human interaction.

Still, that doesn’t help the “competition as an answer to censorship” problem, because the entire point is that if Twitter stops you from tweeting the ban is absolute; the issue isn’t simply the form of the tweets, but the network undergirding the service. That, though, is why the Instagram and Meerkat stories are so interesting: we already have evidence about just how powerful it is when a service lets its social graph be exported, and how limiting it is when it doesn’t. To that end, it seems clear to me that the only way to build a direct competitor to any of these services, like Twitter, is to have direct access to the Twitter network.

Interoperability and Privacy

To the extent regulators want to engender competition in the social networking space, whether for political reasons or simply because they think Facebook is too powerful, the solution is clear: force existing social networks to open up their social graphs. Clubhouse shouldn’t have needed to upload your contacts, which aren’t even that great of a resource, filled as they are with your dentist from ten years ago at best and your abusive ex at worst; Facebook and Twitter (and Snapchat and all the rest) should have had to expose their social graphs to Clubhouse just like Twitter once did for Instagram. There is nothing else that would do more to spur competition.

The problem, of course, is privacy; European lobbyist Alexander Hanff described on LinkedIn how Clubhouse violated GDPR (all punctuation and capitalization from the original):

Clubhouse is a REALLY bad idea for private users, companies and investors. As private users they are asking you to break the law by providing access to your address book in order to invite your friends to use the platform with you…Clubhouse is a shining example of HOW TO BREAK EU LAW — they are so good at it they could and probably should, write a book on the subject.

Hanff isn’t wrong! One of the reasons why GDPR is such a disaster is that it makes it all but impossible for a new social media company to ever be started in Europe; I explained in 2017:

Several folks have suggested that the GDPR’s requirements around data portability, including that it be machine accessible (i.e. not just a PDF) will help new networks form, but in fact the opposite is the case. Note this section from the Guidelines on the right to data portability…

This forbids what I proposed: the easy re-creation of one’s social graph on other networks. Moreover, it’s a reasonable regulation: my friend on Facebook didn’t give permission for their information to be given to Snapchat, for example. It does, though, make it that much more difficult to bootstrap a Facebook competitor: the most valuable data (from a business perspective, anyways) is the social graph, not the updates and pictures that must now be portable, which means that again, thanks to (reasonable!) regulation, Facebook’s position is that much more secure.

I get the argument around banning contact exports; unsurprisingly, there are calls that Apple do exactly that in the wake of Clubhouse’s rise (never mind the fact that contacts have been accessible — and thus have been accessed! — in this way for years). What people making these calls — and these laws — need to be more honest about, though, is that they are killing competition. If you want to ensure that Twitter wins in audio, or that Facebook wins everywhere else, then elevating privacy over everything else, ignoring both tradeoffs (like killing competition in social networks) and facts on the ground (like the reality that your contacts have long since ceased to be private), is an excellent way to accomplish exactly that.

Look no further than e-commerce.

The Sad Story of Shopify and Facebook

Two years ago I wrote about one of the most exciting examples of competition on the Internet: Shopify and the Power of Platforms. After explaining how Walmart had failed in its futile attempt to challenge Amazon head-on, I highlighted the fact that Shopify was pursuing a very different strategy:

At first glance, Shopify isn’t an Amazon competitor at all: after all, there is nothing to buy on Shopify.com. And yet, there were 218 million people that bought products from Shopify without even knowing the company existed. The difference is that Shopify is a platform: instead of interfacing with customers directly, 820,000 3rd-party merchants sit on top of Shopify and are responsible for acquiring all of those customers on their own.

A drawing of The Shopify Platform

This means they have to stand out not in a search result on Amazon.com, or simply offer the lowest price, but rather earn customers’ attention through differentiated product, social media advertising, etc. Many, to be sure, will fail at this: Shopify does not break out merchant churn specifically, but it is almost certainly extremely high. That, though, is the point.

Unlike Walmart, currently weighing whether to spend additional billions after the billions it has already spent trying to attack Amazon head-on, with a binary outcome of success or failure, Shopify is massively diversified. That is the beauty of being a platform: you succeed (or fail) in the aggregate. To that end, I would argue that for Shopify a high churn rate is just as much a positive signal as it is a negative one: the easier it is to start an e-commerce business on the platform, the more failures there will be. And, at the same time, the greater likelihood there will be of capturing and supporting successes.

This is how Shopify can both in the long run be the biggest competitor to Amazon even as it is a company that Amazon can’t compete with: Amazon is pursuing customers and bringing suppliers and merchants onto its platform on its own terms; Shopify is giving merchants an opportunity to differentiate themselves while bearing no risk if they fail.

The strategy is working: Shopify’s Gross Merchandise Volume (GMV) was about a quarter the size of Amazon’s in 2020, up from 18% in 2019 and a mere 15% in 2018;4 in other words, even during the pandemic Shopify grew faster than Amazon, surely welcome news to those concerned about Amazon’s power, and validation for people like me who believe in the power of the Internet to unlock niche businesses that were never before possible.
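
To see why a rising GMV ratio implies faster growth, here is a quick back-of-the-envelope check, treating “about a quarter” as 25% (an approximation on my part, with $S$ and $A$ standing in for Shopify’s and Amazon’s GMV):

$$\frac{S_{2020}/S_{2019}}{A_{2020}/A_{2019}} = \frac{S_{2020}/A_{2020}}{S_{2019}/A_{2019}} \approx \frac{0.25}{0.18} \approx 1.4$$

In other words, whatever Amazon’s actual growth multiple was in 2020, Shopify’s was roughly 1.4 times higher.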

That is also why this news in the Wall Street Journal kind of bums me out:

Shopify Inc., a commerce platform for businesses, is bringing its checkout and payment processing system, Shop Pay, to some Facebook Inc. platforms. The Shop Pay option will first be available to Instagram users on Tuesday and will roll out on Facebook Shops, the social-media company’s platform for small businesses, in the next few weeks.

Consumers will be able to use Shop Pay to complete purchases, expanding on existing options to use PayPal Holdings Inc.’s PayPal or manually enter credit or debit card information. All these methods are offered via the Facebook Pay payment system. Shop Pay, which stores credit card and shipping information to speed online checkout, hasn’t previously been available outside the e-commerce stores of Shopify clients.

Narrowly speaking, this is a huge win for Shopify; I fretted last year in Platforms in an Aggregator World that Facebook Shops would be bad for Shopify, in large part because it would limit usage of Shop Pay, itself a part of “Merchant Solutions”, the company’s biggest growth driver. I’m very happy to have gotten this wrong!

At the same time, from a big picture perspective this is clearly a case of Shopify, one of the most exciting companies in tech and the seeming leader of The Anti-Amazon Alliance, effectively moving into Facebook’s garden, because the web is increasingly a barren wasteland for small businesses. The cause is Apple: its approach to cookies makes platform-based web storefronts increasingly difficult to monetize effectively (Shop Pay performed magic in this regard), and its attack on “tracking” — which goes far beyond the IDFA — makes it increasingly impossible to acquire users in one place and convert them in another. The solution is to do user acquisition and user conversion all in one app — i.e. on Facebook — which is why Shopify is helping merchants move off the web and onto Facebook.

Again, it’s a good solution for Shopify, and Facebook deserves credit for recognizing that Shopify is a complement to their service, not a competitor, but I find it disappointing that once again elevating privacy above every other tradeoff is entrenching Facebook, the biggest incumbent of all.


The most frustrating aspect of the entire privacy debate is that the most ardent advocates of an absolutist position tend to describe anyone who disagrees with them as a Facebook defender. My motivation, though, is not to defend Facebook; quite the opposite, in fact: I want to see the social networking giant have more competition, not less, and I despair that the outcome of privacy laws like GDPR, or App Store-enforced policies from Apple, will be to damage Facebook on one hand, and destroy all of its long-term competitors on the other.

I worry even more about small businesses uniquely enabled by the Internet; forcing every company to act like a silo undoes the power of platforms to unlock collective competition (a la Shopify versus Amazon), whether that be in terms of advertising, payments, or understanding their users. Regulators that truly wish to limit tech power and unlock the economic potential of the Internet would do well to prioritize competition and interoperability via social graph sharing, alongside a more nuanced view of privacy that reflects reality, not misleading ads; I would settle for at least admitting there are tradeoffs being made.

I wrote a follow-up to this article in this Daily Update.


  1. In my experience word-of-mouth is more powerful than Twitter; both benefit from and build on Stratechery’s freemium approach where Articles like this are free-to-read on the web 

  2. More on questions of infrastructure, which apply to the Parler situation, soon 

  3. Be kind; the map in that article is from 2013! 

  4. Shopify reports GMV; Amazon’s is based on estimates 

Clubhouse’s Inevitability

To what extent are new companies, particularly those in new spaces, pushed versus pulled into existence? Last week I wrote about how Tesla is a Meme Company:

It turned out, though, that TSLA was itself a meme, one about a car company, but also sustainability, and most of all, about Elon Musk himself. Issuing more stock was not diluting existing shareholders; it was extending the opportunity to propagate the TSLA meme to that many more people, and while Musk’s haters multiplied, so did his fans. The Internet, after all, is about abundance, not scarcity. The end result is that instead of infrastructure leading to a movement, a movement, via the stock market, funded the building out of infrastructure.

Electrification of personal vehicles would have happened at some point; it seems fair to argue that Musk accelerated the timeline significantly. Clubhouse, meanwhile, Silicon Valley’s hottest consumer startup, feels like the opposite case: in retrospect its emergence seems inevitable — if anything, the question is what took so long for audio to follow the same path as text, images, and video.

Step 1: Democratization

The granddaddy of independent publishing on the Internet was the blog: suddenly anyone could publish their thoughts to the entire world! This was representative of the Internet’s most obvious impact on media of all types: democratization.

  • Distributing text no longer required a printing press, but simply blogging software:
    From print to blogs
  • Distributing images no longer required screen-printing, but simply a website:
    From magazines to Instagram
  • Distributing video no longer required a broadcast license, but simply a server:
    From TV to YouTube
  • Distributing audio no longer required a radio tower, but simply an MP3:
    From radio to podcasts

Businesses soon sprang up to make this process easier: Blogger for blogging, Flickr for photo-sharing, YouTube for video, and iTunes for podcasting (although, in a quirk of history, Apple never actually provided centralized hosting for podcasts, only a directory). Now you didn’t even need to have your own website or any particular expertise: simply pick a username and password and you were a publisher.

Step 2: Aggregation

Making anyone into a publisher resulted in an explosion of content; this shifted value to entities able to help consumers find what they were interested in. In text the big winner was Google, which indexed pre-existing publications, independent blogs, and everything in-between. The big winner in photos, meanwhile, ended up being Instagram: users “came for the tool and stayed for the network”, as Chris Dixon memorably put it:

Instagram’s initial hook was the innovative photo filters. At the time some other apps like Hipstamatic had filters but you had to pay for them. Instagram also made it easy to share your photos on other networks like Facebook and Twitter. But you could also share on Instagram’s network, which of course became the preferred way to use Instagram over time.

The Internet creates a far tighter feedback loop between content creation and consumption than analog media; Instagram leveraged this loop to become the dominant photo network. YouTube accomplished a similar feat, although the relative difficulty in creating video meant that the ratio of viewers to creators was much more extreme than in the case of photo-sharing. That, though, is exactly what made YouTube so dominant: creators knew that that was where all of their would-be viewers were.

Spotify is trying to do something similar for audio, particularly podcasts. I wrote in a Daily Update after the streaming service signed Joe Rogan to an exclusive contract:

Spotify, meanwhile, has its eyes on an absolute maxima — a podcast industry that monetizes at a rate befitting its share of attention — but as I have explained, that will only be possible with a Facebook-like model that dynamically matches advertisers and listeners in real-time, as they are streaming a podcast…This, by extension, means that Spotify needs a much larger share of the market, so that they can start generating advertising payouts that are better than the current stunted model, thus convincing podcasters to give up their current ads and use Spotify’s platform to monetize instead.

In this view the motivation for the Rogan deal is obvious: Spotify doesn’t just want to capture new listeners, it wants to actively take them from Apple and other podcast players. And, if it can take a sufficient number, the company surely believes it can create a superior monetization mechanism such that the rest of the podcast creator market shifts to Spotify out of self interest.

Capture enough of the audience and the creators will follow.

Step 3: Transformation

Still, even with the explosion of content resulting from democratizing publishing, what was actually published was roughly analogous to what might have been published in the pre-Internet world. A blog post was just an article; an Instagram post was just a photo; a YouTube video was just a TV episode; a podcast was just a radio show. The final step was transformation: creating something entirely new that was simply not possible previously.

Start with text: Twitter is not discrete articles but a stream of thoughts, each at most 280 characters long. It was the stream that was uniquely enabled by the Internet: there is no real world analogy to being able to ingest the thoughts of hundreds or thousands of people from all over the world in real-time, and to have the diet be different for every person.

From blogging to Twitter

What is interesting is the effect this transformation had on blogging; Twitter all but killed it, for three reasons:

  • First, Twitter was even more accessible than blogging ever was. Just type out your thoughts, no matter how half-formed they may be, and hit tweet.
  • Second, because blogging was so distributed and imperfectly aggregated it was hard to build an audience; Twitter, on the other hand, combined creation and consumption like any other social network, which dramatically increased the reward and motivation for posting your thoughts there instead of on your blog.
  • Third, Twitter, thanks to the way it combined a wide variety of creators in an easily-consumable stream, was just a lot more interesting than most blogs; this completed a virtuous cycle, as more consumers led to more creators which led to more consumers.

Instagram, meanwhile, had always had that transformational feed, which carried the service to its first 500 million users; it was Stories, though, that re-ignited growth:

Instagram's Monthly Active Users

Stories — which Instagram audaciously copied from Snapchat — combined the customized nature of the feed with the ephemerality inherent in digital’s abundance; the problem with posting what you had for lunch was not that it was boring, but that no one wanted it to stick around forever.

From feed to stories

This too appears to have reduced usage of what came before; while Facebook has never disclosed Stories usage relative to feed viewing, that chart above is from this August 2018 Article about Facebook’s Story Problem — and Opportunity, where I observed:

While more people may use Instagram because of Stories, some significant number of people view Stories instead of the Instagram News Feed, or both in place of the Facebook News Feed. In the long run that is fine by Facebook — better to have users on your properties than not — but the very same user not viewing the News Feed, particularly the Facebook News Feed, may simply not be as valuable, at least for now.

The opportunity came from the fact that dramatically increasing inventory would surely lead to significant growth in the long run, which is exactly what has happened. It didn’t matter that Stories were not nearly as well-composed as pictures in the Instagram feed; in fact, that made them even more valuable, because Stories were easier to both produce and consume.

TikTok is doing the same thing with video; in this case the transformative technology is its algorithm. I explained in The TikTok War:

All of this explains what makes TikTok such a breakthrough product. First, humans like video. Second, TikTok’s video creation tools were far more accessible and inspiring for non-professional videographers. The crucial missing piece, though, is that TikTok isn’t really a social network…

ByteDance’s 2016 launch of Douyin — the Chinese version of TikTok — revealed another, even more important benefit to relying purely on the algorithm: by expanding the library of available video from those made by your network to any video made by anyone on the service, Douyin/TikTok leverages the sheer scale of user-generated content to generate far more compelling content than professionals could ever generate, and relies on its algorithms to ensure that users are only seeing the cream of the crop.

YouTube has invested heavily in its own algorithm to keep you on the site, but its level of immersion is still gated by its history of serving discrete videos from individual creators; TikTok, on the other hand, drops you into a stream of videos that quickly blur together into a haze of engagement and virality.

From YouTube to TikTok

There is nothing like it in the real world.

Podcasts and Blogs

What is striking about audio is how stunted its development is relative to other mediums. Yes, podcasts are popular, but the infrastructure and business model surrounding podcasts is stuck somewhere in the mid-2000’s, a point I made in 2019 in Spotify’s Podcast Aggregation Play:

The current state of podcast advertising is a situation not so different from the early web: how many people remember this?

The old "punch the monkey" display ad

These ads were elaborate affiliate marketing schemes; you really could get a free iPod if you signed up for several credit cards, a Netflix account, subscription video courses, you get the idea. What all of these marketers had in common was an anticipation that new customers would have large lifetime values, justifying large payouts to whatever dodgy companies managed to sign them up.

The parallels to podcasting should be obvious: why is Squarespace on seemingly every podcast? Because customers paying monthly for a website have huge lifetime values. Sure, they may only set up the website once, but they are likely to maintain it for a very long time, particularly if they grabbed a “free” domain along the way. This makes the hassle of coordinating ad reads and sponsorship codes across a plethora of podcasts worth the trouble; it’s the same story with other prominent podcast sponsors like ZipRecruiter or SimpliSafe.

The problem is that the affiliate marketing for large lifetime-value purchases segment is not a particularly large one

One of the takeaways of that piece was that monetization was holding podcasts back, and that Spotify appeared to be positioning itself to expand the podcast advertising market via centralization. Looking back, though, I should have realized that but for a few exceptions, advertising never ended up working out for blogs; the premise behind 2015’s Blogging’s Bright Future was that subscriptions made far more sense as a business model:

Forgive me if this article read a bit too much like an advertisement for Stratechery; the honest truth is my fervent belief in the individual blog not only as a product but also as a business is what led to my founding this site, not the other way around. And, after this past weekend’s “blogging-is-dead” overdose, I almost feel compelled to note that my conclusion — and experience — is the exact opposite of Klein’s and all the others’: I believe that Sullivan’s The Daily Dish will in the long run be remembered not as the last of a dying breed but as the pioneer of a new, sustainable journalism that strikes an essential balance to the corporate-backed advertising-based “scale” businesses that Klein (and the afore-linked Smith) is pursuing.

Interestingly enough, of the three authors cited in that paragraph, both Ezra Klein — formerly of Vox — and Ben Smith — formerly of BuzzFeed — are now at the New York Times, which is thriving with a subscription model. Sullivan, meanwhile, is at Substack — itself modeled after Stratechery — where within a month of launch he had reached a $500,000 run rate.

When you think about the Twitter-driven shake-out of blogging this evolution makes sense: Twitter captured the long-tail of blogs, in the process dramatically expanding the market for publishing text, but that by definition meant that the blogs that remained popular had readers that would jump through hoops — or at least click a link — to consume their content. It follows that the most sustainable way for those bloggers to pay the bills was by directly charging their readers, who had already demonstrated an above-average interest in their content.

My personal bet is that podcasts will follow a similar path. Podcasts, even more than blogs, require a commitment on the part of the listener, but that commitment is rewarded by a connection to the podcast host that feels even more authentic; host-read podcast advertising leverages this authenticity, but for most medium-sized podcasts charging listeners directly will make more sense in the long run.

Implicit in this prediction, though, is that podcasts actually fade in relative importance and popularity to an alternative that doesn’t simply further democratize audio publishing, but also transforms it. Enter Clubhouse.

Clubhouse’s Opening

The most obvious difference between Clubhouse and podcasts is how dramatically easier it is both to create a conversation and to listen to one. This step change is very much in line with the shift from blogging to Twitter, from website publishing to Instagram, or from YouTube to TikTok.

Clubhouse is similar to Twitter, Instagram, and TikTok

Secondly, like those successful networks, Clubhouse centralizes creation and consumption into a tight feedback loop. In fact, conversation consumers can, by raising their hand and being recognized by the moderator, become creators in a matter of seconds.

This capability is enabled by the “only on the Internet” feature that makes Clubhouse transformational: the fact that it is live. In many mediums this feature would be fatal: one isn’t always free to watch a live video, and believe me, it is not very exciting to watch me type. However, the fact that audio can be consumed while you are doing something else allows the immediacy and vibrancy of live conversation to shine.

Being live also feeds back into the first quality: Clubhouse is far better suited than podcasts to discuss events as they are happening, or immediately afterwards. For example, both Clubhouse and Locker Room, its sports-focused competitor, have become go-to destinations for sports reaction conversations, both during and after games; it’s only a matter of time before a secondary market of play-by-play announcers develops, and not only for sports: anything that is happening can be narrated and discussed.

Make no mistake, most of these conversations will be terrible. That, though, is the case for all user-generated content. The key for Clubhouse will be in honing its algorithms so that every time a listener opens the app they are presented with a conversation that is interesting to them. This is the other area where podcasts miss the mark: it is amazing to have so much choice, but all too often that choice is paralyzing; sometimes — a lot of times! — users just want to scroll their Twitter feed instead of reading a long blog post, or click through Stories or swipe TikToks, and Clubhouse is poised to provide the same mindless escapism for background audio.

COVID, China, and Controversy

Much of what I’ve written is perhaps obvious; to me that lends credence to the idea that Clubhouse is onto something substantial. To that end, though, why now?

One reason is hardware:

The fact that Clubhouse makes it so easy to drop in and out of conversation is matched by how easy AirPods make it to drop into and out of audio-listening mode.

An even more important reason, though, is probably COVID. Clubhouse launched last April in the midst of a worldwide lockdown, and despite its very rough state it provided a place for people to socialize when there were few other options. This was likely crucial in helping Clubhouse achieve its initial breakthrough. At the same time, just because COVID helped Clubhouse get off the ground does not mean its end will herald the end of the audio service, any more than improved iPhone cameras heralded the end of Instagram simply because its filters were no longer necessary; the question is if the crisis was sufficient to bootstrap the network.

I suspect so. For one there is the brazenness with which Clubhouse is leveraging the iPhone’s address book to build out its network; getting on the app requires an invitation, or signing up for the waiting list and hoping someone in your address book is already on the service, which lets you “jump the line”. This incentivizes both existing and prospective members to allow Clubhouse to ingest their contacts and get their friends on as quickly as possible.

Secondly, any suggestion that Clubhouse is limited to Silicon Valley is very much off the mark. I almost fell out of my chair when my not-at-all-technical sister-in-law started listening to a Clubhouse room while we were playing board games over the weekend, and by all accounts Taiwan is one of a whole host of markets where the app has taken off. Locker Room, as noted, appears to be the app of choice for NBA Twitter, but I suspect that is a function of Clubhouse being both gated and iPhone-only; I expect both to be rectified sooner rather than later. And, of course, there is the fact the service has been banned in China.

Unfortunately, that is not the only China angle when it comes to Clubhouse; the service is powered by Agora, a Shanghai-based company. The Stanford Internet Observatory investigated:

The Stanford Internet Observatory has confirmed that Agora, a Shanghai-based provider of real-time engagement software, supplies back-end infrastructure to the Clubhouse App. This relationship had previously been widely suspected but not publicly confirmed. Further, SIO has determined that a user’s unique Clubhouse ID number and chatroom ID are transmitted in plaintext, and Agora would likely have access to users’ raw audio, potentially providing access to the Chinese government. In at least one instance, SIO observed room metadata being relayed to servers we believe to be hosted in the PRC, and audio to servers managed by Chinese entities and distributed around the world via Anycast. It is also likely possible to connect Clubhouse IDs with user profiles.

That certainly puts Clubhouse’s aggressive contact collection in a more sinister light; it also very much fits the stereotype of a new social network scrambling to capture the market first, and worrying about potential downsides later. Given the importance of network effects, I’m not surprised, but the choice of a Chinese infrastructure provider in particular is disappointing for a service launching in 2020.

The perhaps sad reality, though, is that most users probably won’t care: the payoff from uploading contacts is clear, and even if you don’t, you still need a phone number to register, which means that Clubhouse is probably reconstructing your contact list from your friends who did. The company has been far more aggressive in implementing blocking and user-reported content violation mechanisms; I suspect this reflects the reality that content controversies are, in the current environment, more damaging than China connections, despite the fact that the former are an inescapable reality of user-generated content, while the latter is a choice.

Whither Facebook?

The one social network that I have barely mentioned in this Article is the social network that the FTC has sued for being a monopoly. That sentence, on close examination, certainly seems to raise some rather obvious questions about the strength of the FTC’s case.

Still, the discussion of all of these different networks really does highlight how Facebook is unique: while Twitter, Instagram, YouTube, and TikTok are all first and foremost about the medium, and only then the network, Facebook is about the network first. That is how the service has evolved from text to images to video and, I wouldn’t be surprised, to audio. This also explains why Facebook managed the shift to mobile so well; for these other networks, meanwhile, it was mobile that was the foundation for their transformative breakthroughs.

That is why I would actually give Facebook’s upcoming Clubhouse competitor a better chance than Twitter’s already-launched offering. Facebook takes innovations developed in different apps for interest-based networks and adds them to its relationship-based network; at the same time, this also means that Facebook is never going to be a real competitor for Clubhouse, which seems more likely to recreate Twitter’s interest-based network than Twitter is likely to recreate the vibrancy of Clubhouse.

The other way that Facebook looms large in the social networking discussion is monetization: it is obvious that there is an endless human appetite for social networks, but advertisers would much rather focus on Facebook’s integrated suite of properties. It is not clear that Clubhouse will even pursue advertising, though; the company has announced its intention to help creators monetize via mechanisms like tipping. This has already been proven out on platforms like Twitch in the West, and is a massive success in China (there is a reason, I should note, why the best available live streaming technology was offered by a Chinese company). It’s a smart move for Clubhouse to move in this direction early, both as a means of locking in creators, and also going where Facebook is less likely to follow.

One potential loser, meanwhile, is Spotify; the company has bet heavily on podcasts, which could be similar to betting on blogs in 2007. Still, the fact the company’s most important means of monetization is subscriptions may be its saving grace; it may turn out that Spotify is the obvious home for highly produced content, available in a more consumer-friendly bundle than the a la carte pricing that followed from blogging’s decentralized nature.


For now I don’t expect Clubhouse to be too concerned about the competition; the company said on its website when it reportedly became a unicorn:

We’ve grown faster than expected over the past few months, causing too many people to see red error messages when our servers are struggling. A large portion of the new funding round will go to technology and infrastructure to scale the Clubhouse experience for everyone, so that it’s always fast and performant, regardless of how many people are joining.

That is, obviously, the best sort of problem to have, and one that evinces product-market fit (the only thing missing is a fail whale); the fact it all seems so obvious is simply because we have seen this story before.

I wrote a follow-up to this article in this Daily Update.

Mistakes and Memes

Paul Krugman, back in 1998, was pretty sure he had this Internet thing figured out:

The growth of the Internet will slow drastically, as the flaw in “Metcalfe’s law” — which states that the number of potential connections in a network is proportional to the square of the number of participants — becomes apparent: most people have nothing to say to each other! By 2005 or so, it will become clear that the Internet’s impact on the economy has been no greater than the fax machine’s.

It is obviously easy to dunk on this statement, but back in 2015, I came, ever so slightly, to Krugman’s defense; my argument was that the reason he was wrong was not because he underrated user-generated content, but rather because he underrated the effectiveness of Aggregators giving people what they wanted:1

Still, this outcome depends on Facebook driving ever-more engagement, and I’m not convinced that more “content posted by the friends [I] care about” is the best path to success. Everyone loves to mock Paul Krugman’s 1998 contention about the limited economic impact of the Internet…[but] it’s worth considering…just how much users value what their friends have to say versus what professional media organizations produce.

Again, as I noted above, Facebook made the 2013 decision to increase the value of newsworthy links for a reason, and in the time since, BuzzFeed in particular has proven that there is a consistent and repeatable way to not only reach a large number of people but to compel them to share content as well. Was Krugman wrong because he didn’t appreciate the relative worth people put on what folks in their network wanted to say, or because he didn’t appreciate that people in their network may not have much to say but a wealth of information to share?

I was definitely more right than Krugman — easy to say about an article written in 2015 versus 1998, of course — but the truth is that I too missed the boat, both narrowly in the case of Facebook, and broadly in the case of the Internet.

Mistake One: Focusing on Demand

In that article where I quoted Krugman I added:

I suspect that Zuckerberg for one subscribes to the first idea: that people find what others say inherently valuable, and that it is the access to that information that makes Facebook indispensable. Conveniently, this fits with his mission for the company. For my part, though, I’m not so sure. It’s just as possible that Facebook is compelling for the content it surfaces, regardless of who surfaces it. And, if the latter is the case, then Facebook’s engagement moat is less its network effects than it is that for almost a billion users Facebook is their most essential digital habit: their door to the Internet.

That phrase, “Facebook is compelling for the content it surfaces, regardless of who surfaces it”, is oh-so-close to describing TikTok; the error is that the latter is compelling for the content it surfaces, regardless of who creates it. I noted in a Daily Update last year:

What I got right [in that 2015 Article] is that your social network doesn’t generate enough compelling content, but, like Facebook, I assumed that the alternative was professional content makers. The key to TikTok, as I explained yesterday, is that it is not a social network at all; the better analogy is YouTube, and if you think of TikTok as being a mobile-first YouTube, the strategic opening is obvious.

To put it another way, I was too focused on demand — the key to Aggregation Theory — and didn’t think deeply enough about the evolution of supply. User-generated content didn’t have to be simply pictures of pets and political rants from people in one’s network; it could be the foundation of a new kind of network, where the payoff from Metcalfe’s Law is not the number of connections available to any one node, but rather the number of inputs into a customized feed.2
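
One rough way to formalize that contrast (my framing, not a formula from the original articles) is to compare what scales with the size of the network for a single user. For a network of $n$ users:

$$\text{potential connections in the network} = \binom{n}{2} = \frac{n(n-1)}{2}, \qquad \text{connections available to any one node} = n - 1$$

In a feed-based network like TikTok, however, the relevant quantity is the pool of candidate inputs to one user’s feed, which is on the order of all $n$ creators on the service, with the algorithm rather than the social graph doing the selection.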

Mistake Two: Misunderstanding Supply

The second mistake is much more fundamental, because it touches on what makes the Internet so profound. From The Internet and the Third Estate:

What makes the Internet different from the printing press? Usually when I have written about this topic I have focused on marginal costs: books and newspapers may have been a lot cheaper to produce than handwritten manuscripts, but they are still not-zero. What is published on the Internet, meanwhile, can reach anyone anywhere, drastically increasing supply and placing a premium on discovery; this shifted economic power from publications to Aggregators.

Just as important, though, particularly in terms of the impact on society, is the drastic reduction in fixed costs. Not only can existing publishers reach anyone, anyone can become a publisher. Moreover, they don’t even need a publication: social media gives everyone the means to broadcast to the entire world. Read again Zuckerberg’s description of the Fifth Estate:

People having the power to express themselves at scale is a new kind of force in the world — a Fifth Estate alongside the other power structures of society. People no longer have to rely on traditional gatekeepers in politics or media to make their voices heard, and that has important consequences.

It is difficult to overstate how much of an understatement that is. I just recounted how the printing press effectively overthrew the First Estate, leading to the establishment of nation-states and the creation and empowerment of a new nobility. The implication of overthrowing the Second Estate, via the empowerment of commoners, is almost too radical to imagine.

Who, for example, imagined this?

The GameStop stock chart

There have been a thousand stories about what the GameStop saga was really about: a genuine belief in GameStop, a planned-out short squeeze, populist anger against Wall Street, boredom and quarantine, greed, hedge fund pile-ons; you name it, there is an article arguing it. I suspect that most everyone is right, much as the proverbial blind men feeling an elephant are all accurate in their descriptions, even though they are completely different. What seems clear is that the elephant is the Internet.

This is hardly a profound observation. It has been clear for many years that the Internet made it possible for new communities to form, sometimes with astonishing speed. The problem, as Zeynep Tufekci explained in Twitter and Tear Gas, is that the ease of network formation made these communities more fragile, particularly in the face of determined opposition:

The ability to use digital tools to rapidly amass large numbers of protesters with a common goal empowers movements. Once this large group is formed, however, it struggles because it has sidestepped some of the traditional tasks of organizing. Besides taking care of tasks, the drudgery of traditional organizing helps create collective decision-making capabilities, sometimes through formal and informal leadership structures, and builds collective capacities among movement participants through shared experience and tribulation. The expressive, often humorous style of networked protests attracts many participants and thrives both online and offline, but movements falter in the long term unless they create the capacity to navigate the inevitable challenges.

Tufekci contrasts many modern protest movements with the U.S. Civil Rights movement:

What gets lost in popular accounts of the civil rights movement is the meticulous and lengthy organizing work sustained over a long period that was essential for every protest action. The movement’s success required myriad tactical shifts to survive repression and take advantage of opportunities created as the political landscape changed during the decade.

In Tufekci’s telling, the March on Washington was the culmination of years of institution building, in stark contrast to more sudden uprisings like Occupy Wall Street:

In the networked era, a large, organized march or protest should not be seen as the chief outcome of previous capacity building by a movement; rather, it should be looked at as the initial moment of the movement’s bursting onto the scene, but only the first stage in a potentially long journey. The civil rights movement may have reached a peak in the March on Washington in 1963, but the Occupy movement arguably began with the occupation of Zuccotti Park in 2011.

It’s interesting to reflect on the Occupy movement in light of the GameStop episode two weeks ago; on one hand, you could squint and draw a line from Zuccotti Park to /r/WallStreetBets, but the only thread that actually exists is anger, at least from some number of Redditors, at Wall Street and its role in the 2008 financial crisis and the recession that followed. There was certainly no organization-building in the meantime that culminated in taking on the shorts (and, unsurprisingly, enriching others on Wall Street along the way).

What actually happened — to the extent anything meaningful happened at all — is that instead of storming the barricades of Wall Street (i.e., living in a tent two blocks away from a building long since removed from actual trading), /r/WallStreetBets used the system against itself. They didn’t protest short sellers; they took the other side of the bet.

The Meme Stock

Who, though, is “they”? Sure, there was /u/DeepFuckingValue, who, according to a profile in the Wall Street Journal, started buying GameStop stock in 2019 and began making YouTube videos promoting his comeback thesis last summer. There was also /u/Jeffamazon, who made the case on Reddit last fall that GameStop was primed for a short squeeze. There were all of those folks who wanted to get back at Wall Street, and those that simply wanted to make a buck. There was Twitter, and CNBC, and there were hedge funds themselves.

This is why everyone has a story about what happened with GameStop, and why they are all true. The 2019 story was correct, but so was the summer 2020 story, and the fall 2020 story, and the January 2021 story. None of those stories, though, existed in isolation: they built on the stories that came before, duplicating and mutating them along the way. This is my second mistake: it turns out the Internet isn’t a cheap printing press; it’s a photocopier, albeit one that is prone to distorting every nth copy.

Go back to the time before the printing press: while a limited number of texts were laboriously preserved by monks copying by hand, the vast majority of information transfer was verbal; this left room for information to evolve over time, but that evolution and its impact were limited by just how long it took to spread. The printing press, on the other hand, by necessity froze information so that it could be captured and conveyed.

This is obviously a gross simplification, but it is a simplification that was reflected in European civilization in particular: local evolution and low conveyance of knowledge, combined with overarching truths, align with a world of city-states governed by the Catholic Church; printing books, meanwhile, gives an economic impetus to both unifying languages and a new kind of gatekeeper, aligning with a world of nation-states governed by the nobility.

The Internet, meanwhile, isn’t just about demand — my first mistake — nor is it just about supply — my second mistake. It’s about both happening at the same time, and feeding off of each other. It turns out that the literal meaning of “going viral” was, in fact, more accurate than its initial meaning of having an article or image or video spread far-and-wide. An actual virus mutates as it spreads, much as how over time the initial article or image or video that goes viral becomes nearly unrecognizable; it is now a meme.

This is why, in the end, the best way to describe what happened to GameStop is that it was a meme: its meaning was anything, and everything, evolving like the oral traditions of old, but doing so at the speed of light. The real world impact, though, was very real, at least for those that made and/or lost money on Wall Street. That’s the thing with memes: on their own they are fleeting; like a virus, they primarily have an impact if they infiltrate and take over infrastructure that already exists.

The Meme President

In 2016 the Washington Post documented how 4chan reacted to Donald Trump being elected president:

4chan’s /pol/ boards have, for much of the 2016 campaign, felt like an alternate reality, one where a Donald Trump presidency was not only possible but inevitable. At some point Tuesday evening, the board’s Trump-loving, racist memers began to realize that they were actually right. “I’m f—— trembling out of excitement brahs,” one 4channer wrote Tuesday night, adding a very excited Pepe the Frog drawing. “We actually elected a meme as president.”

Trump, with his calls for protectionism and opposition to immigration, was compared throughout the campaign to previous presidential candidates like Ross Perot and Pat Buchanan; the reason why Trump was successful, though, was because he managed to infiltrate and take over infrastructure — the Republican Party — that already existed.

I wrote about this process in 2016 in The Voters Decide; it turned out that Party power was rooted in the power of the media over the spread of information. Once the media lost its gatekeeper status, however, the parties lost their mechanisms of control:

There is no one dominant force when it comes to the dispersal of political information, and that includes the parties described in the previous section. Remember, in a Facebook world, information suppliers are modularized and commoditized as most people get their news from their feed. This has two implications:

  • All news sources are competing on an equal footing; those controlled or bought by a party are not inherently privileged
  • The likelihood any particular message will “break out” is based not on who is propagating said message but on how many users are receptive to hearing it. The power has shifted from the supply side to the demand side

A drawing of Aggregation Theory and Politics

This is a big problem for the parties as described in The Party Decides. Remember, in Noel and company’s description party actors care more about their policy preferences than they do voter preferences, but in an aggregated world it is voters aka users who decide which issues get traction and which don’t. And, by extension, the most successful politicians in an aggregated world are not those who serve the party but rather those who tell voters what they most want to hear.

But what did Republican voters want? There were and have been any number of theories put forth, but I suspect the “truth” is similar to GameStop: all of the theories are true. After all, primary voters were electing a meme, and that meme, having infiltrated and taken over infrastructure that already existed, took over the Presidency.

Not that this is exclusively a right wing phenomenon; Portuguese politician and author Bruno Maçães wrote in History Has Begun in a chapter about the emergence of socialism in America:

Like all quests, the Green New Deal is set in a kind of dreamland, which allows its authors to increase the dramatic elements of struggle and conflict, but at the obvious cost of divorcing themselves from social and historical reality. The detailed consideration of technological and economic forces, for example, is ignored. The causal nexus between problem and solution appears fanciful. Since the authors of the Green New Deal have invented the story, they alone know the logic of events. There are perplexing elements in this logic. Why, for example, would one want to rebuild every building in America in order to increase energy efficiency when we are getting all our energy from zero-emission sources anyway? But a dream world follows different rules from those we know from the real one, which only increases its powers of attraction…

In an interview for the podcast Pod Save America in August 2019, Ocasio-Cortez explained the foundations of her political philosophy: “We have to become master storytellers. Everyone in public service needs to be a master storyteller. My advice is to make arguments with your five senses and not five facts. Use facts as supporting evidence, but we need to show we are having the same human experience. You have to tell the story of me, us, and now. The America we had even under Obama is gone. That is the nature of time. We have to tell the story of the crossroads.”

Millennial socialism sees history as a struggle to control the memes of production.

In Maçães’s telling, this is not a weakness, but a strength; that Joe Biden could simultaneously embrace the Green New Deal on his campaign website while insisting he didn’t support it in a debate is due to the fact that a meme can be whatever you want it to be.

The Meme Master

Last Monday, in an appearance on Clubhouse, Elon Musk was asked about memes:

It is safe to say that you might be the master of the memes, and in fact you had a quote that “Who controls the memes controls the universe.” Can you just explain what you meant by that?

EM: It’s a play on words from Dune, “Who controls the spice controls the universe”, and if memes are spice then it’s memes. I mean there’s a little bit of truth to it in that what is it that influences the zeitgeist? How do things become interesting to people? Memes are actually kind of a complex form of communication. Like a picture says a thousand words, and maybe a meme says ten thousand words. It’s a complex picture with a bunch of meaning in it and it can be aspirationally funny. I don’t know, I love memes, I think they can be very insightful, and you know throughout history, symbolism in general has powerfully affected people.

Musk’s point about the relative power of memes to pictures makes perfect sense; what is notable is that the unit of measurement — words — is so clearly insufficient for the reasons I just documented. The power of memes is not simply the amount of information they convey, but the malleability with which they convey it. They have shades of Heisenberg’s uncertainty principle, in that the very meaning of a meme is altered based on where it is encountered, and from whom. Its meaning can be anything, and everything.

That, though, is why mastering memes is so powerful. Wired described quantum computing like this:

Instead of bits, quantum computers use qubits. Rather than just being on or off, qubits can also be in what’s called ‘superposition’ — where they’re both on and off at the same time, or somewhere on a spectrum between the two.

Take a coin. If you flip it, it can either be heads or tails. But if you spin it — it’s got a chance of landing on heads, and a chance of landing on tails. Until you measure it, by stopping the coin, it can be either. Superposition is like a spinning coin, and it’s one of the things that makes quantum computers so powerful. A qubit allows for uncertainty.
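To make the Wired analogy a little more concrete, here is a minimal sketch of a qubit in superposition; the equal amplitudes are an arbitrary choice, and this is an illustration of the concept rather than anything from the article:

```python
import numpy as np

# A qubit is a 2-vector of complex amplitudes (alpha, beta),
# normalized so that |alpha|^2 + |beta|^2 = 1.
state = np.array([1, 1], dtype=complex) / np.sqrt(2)  # the "spinning coin": equal superposition

probabilities = np.abs(state) ** 2    # chance of reading 0 ("heads") or 1 ("tails")
print(probabilities)                  # [0.5 0.5]

# Measurement collapses the superposition: the coin stops on one face.
outcome = np.random.choice([0, 1], p=probabilities)
print(outcome)                        # 0 or 1, chosen with the probabilities above
```

Until the measurement, both outcomes coexist; the act of measuring is what stops the coin.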

That sounds a lot like how I have described memes, which means to master memes is to stop the coin on your terms, and there is no better example of what this means than Tesla. I wrote one Weekly Article about Tesla, 2016’s It’s a Tesla, and I think it holds up pretty well. This is the key paragraph:

The real payoff of Musk’s “Master Plan” is the fact that Tesla means something: yes, it stands for sustainability and caring for the environment, but more important is that Tesla also means amazing performance and Silicon Valley cool. To be sure, Tesla’s focus on the high end has helped them move down the cost curve, but it was Musk’s insistence on making “An electric car without compromises” that ultimately led to 276,000 people reserving a Model 3, many without even seeing the car: after all, it’s a Tesla.

Later in that article I compared Tesla to Apple, its only rival as far as brand is concerned:

To that end, the significance of electric to Tesla is that the radical rethinking of a car made possible by a new drivetrain gave Tesla the opportunity to make the best car: there was a clean slate. More than that, Tesla’s lack of car-making experience was actually an advantage: the company’s mission, internal incentives, and bottom line were all dependent on getting electric right.

Again the iPhone is a useful comparison: people contend that Microsoft lost mobile to Apple, but the reality is that smartphones required a radical rethinking of the general purpose computer: there was a clean slate. More than that, Microsoft was fundamentally handicapped by the fact Windows was so successful on PCs: the company could never align their mission, incentives, and bottom line like Apple could.

This comparison works as far as it goes, but it doesn’t tell the entire story: after all, Apple’s brand was derived from decades building products, which had made it the most profitable company in the world. Tesla, meanwhile, always seemed to be weeks from going bankrupt, at least until it issued ever more stock, strengthening the conviction of Tesla skeptics and shorts.

That, though, was the crazy thing: you would think that issuing stock would lead to Tesla’s stock price slumping; after all, existing shares were being diluted. Time after time, though, Tesla announcements about stock issuances would lead to the stock going up. It didn’t make any sense, at least if you thought about the stock as representing a company.

It turned out, though, that TSLA was itself a meme, one about a car company, but also sustainability, and most of all, about Elon Musk himself. Issuing more stock was not diluting existing shareholders; it was extending the opportunity to propagate the TSLA meme to that many more people, and while Musk’s haters multiplied, so did his fans. The Internet, after all, is about abundance, not scarcity. The end result is that instead of infrastructure leading to a movement, a movement, via the stock market, funded the building out of infrastructure.

Memes and the Future

This is not, to be very clear, analysis about Tesla the business. The reason I only ever wrote one Article about Tesla and a handful of Daily Updates is because I was confused about the divorce between the company’s real world fortunes and its stock price. That confusion — like the weirdness about a company focused on sustainability buying Bitcoin — still exists!

Now, though, I realize that this confusion stems from making the same mistake I did in that old analysis of Facebook: I was looking to the real world as a guide to understanding the Internet, when it was in fact inevitable that the Internet would, over time, come to impact the real world. Some of that impact will be fleeting, like many of the protests Tufekci documented; some will have short term effects, particularly in places, like Wall Street, that easily translate sentiment into prices. The biggest impact, at least for the next few years, will likely come from memes capturing existing infrastructure, like Trump did the Republican party, and Ocasio-Cortez, to a lesser extent, the Democratic party. The most intriguing people, though, both for the potential upside and the potential downside, are those that leverage memes to build something new.

I wrote a follow-up to this article in this Daily Update.


  1. This article was actually written a few months before I coined the term Aggregation Theory, but it was clearly top of mind in that time period 

  2. This, by the way, is actually a much more accurate manifestation of Metcalfe’s Law, which is about potential contacts in a network, not actual contacts; a long-standing criticism of using Metcalfe’s Law to describe social networks is that the attractiveness of most social networks is a function of how many people you know that are on the network, not how many you might know. That is why, for example, LINE is a much more valuable chat app for me in Taiwan than is WeChat, even though WeChat has vastly more users; more people I know are on LINE. TikTok, though, surfaces content from anyone, which is to say its value hews much more closely to Metcalfe’s Law. 
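To make that distinction concrete, here is a rough sketch of the gap between Metcalfe’s potential connections and the connections that actually matter for a chat app; the 150-contacts-per-user figure is invented purely for illustration:

```python
def metcalfe_potential_links(n: int) -> int:
    # Metcalfe's Law counts every possible pairing among n users.
    return n * (n - 1) // 2

def known_contact_links(n: int, contacts_per_user: int) -> int:
    # A chat app's practical value scales more like the people you actually know:
    # roughly linear in n for a fixed circle of contacts (a crude simplification).
    return n * contacts_per_user // 2

for n in (1_000, 1_000_000):
    print(n, metcalfe_potential_links(n), known_contact_links(n, contacts_per_user=150))
```

The first column grows quadratically, the second only linearly, which is the footnote’s point: a network that surfaces content from anyone (like TikTok) tracks the quadratic curve far more closely than a network whose value depends on who you already know.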

The Relentless Jeff Bezos

It’s the refunds that blow my mind.

You may be surprised to know that, despite the fact I live in Taiwan, I have an Amazon Prime account. It turns out that Amazon ships to Taiwan, which is particularly useful if you need something rather obscure and you’re not quite sure where to buy it locally:

An email from Amazon about a refund for customs fees

Amazon doesn’t simply make buying things easy from the other side of the world; they make dealing with shipping and customs trivial: they do it all, and refund me if they over-estimated the cost. It’s so seamless that it is easy to forget how much work went into making this possible. Moreover, the company isn’t standing still:

Amazon's announcement of free shipping to Taiwan

The only thing more surprising than this new benefit, which arrived completely out of the blue, is that there is a part of me that wasn’t surprised; for customers Amazon just keeps getting better, and why not add free shipping to Taiwan? Indeed, I expect the free-shipping minimum to decrease over time, which is my way of explaining why so many stories about Amazon — including, I guess, this one — start with the fact that relentless.com redirects to Amazon.com.

What is clear, though, is that any attempt to understand the relentlessness of the company redirects to their founder, Jeff Bezos, who announced plans to step down as CEO after leading the company for twenty-seven years. He is arguably the greatest CEO in tech history, in large part because he created three massive businesses, all of which generate enormous consumer surplus and enjoy impregnable moats: Amazon.com, AWS, and the Amazon platform (this is a catch-all term for the Amazon Marketplace and Fulfillment offerings; it is lumped in with Amazon.com in the company’s reporting). These three businesses are the result of Bezos’s rare combination of strategic thinking, boldness, and drive, and the real-world manifestations of Amazon’s three most important tactics: leverage the Internet, win with scale, and be your own first and best — but not only — customer.

Amazon.com and Leveraging the Internet

While the mythology of founders centers on a tortured genius solving a problem for themselves and only then discovering product-market fit, Bezos started with the solution and then looked for a problem. That solution, broadly considered, was the Internet, and more specifically “the everything store”, a concept that crystallized in discussions Bezos had with David Shaw, the founder of the eponymous hedge fund where he worked. The problem was how to start, and Bezos settled on books; he explained in a speech at Lake Forest College in 1998:

In the spring of 94 web usage was growing at 2300% a year. You have to keep in mind human beings aren’t good at understanding exponential growth; it’s just not something we see in our everyday life. But things don’t grow this fast outside of Petri dishes. It just doesn’t happen. When I saw this, I said, okay, what’s a business plan that might make sense in the context of that growth. I made a list of 20 different products that you might be able to sell online. I was looking for the first best product, and I chose books for lots of different reasons, but one primary reason. And that is that there are more items in the book space than there are items in any other category by far. There are over 3 million different books worldwide in all languages. The number two product category in that regard is music, and there are about 300,000 active music CDs. And when you have this huge catalog of products, you can build something online that you just can’t build any other way. The largest physical bookstores, the largest superstores, and these are huge stores, often converted from bowling alleys and movie theaters, can only carry about 175,000 titles. There are only a few that large. In our online catalog, we’re able to list over two and a half million different titles and give people access to those titles.

Being able to do something online that you can’t do in any other way is important. It’s all about the fundamental tenet of building any business, which is creating a value proposition for the customer, and online, especially three years ago, but even today and for the next several years, the value proposition that you have to build for customers is incredibly large. That’s because the web is a pain to use today! We’ve all experienced the modem hangups and the browsers crash — there are all sorts of inconveniences: websites are slow, modem speeds are slow. So if you’re going to get people to use a website in today’s environment, you have to offer them overwhelming compensation for this primitive infant technology. And I would claim that that compensation has to be so strong that it’s basically the same as saying, you can only do things online today that simply can’t be done any other way. And that’s why this huge number of products looked like a winning combination online. There’s no other way to have a two-and-a-half-million-title bookstore. You can’t do it in a physical store, but you also can’t do it in a print catalog. If you were to print the M’s on a card catalog, it would be the size of more than 40 New York city phone books.

What is so impressive about this formulation — in which, humorously, Bezos dramatically understated the Internet’s growth rate, which was roughly 230,000% (a 2,300x increase), not 2,300% (exponents are hard!) — is the way in which Bezos grasped both the opportunities and the pitfalls the Internet presented to a new business, and took them to their logical conclusions. This sort of strategic thinking, in which Bezos didn’t simply understand a particular point of business but also its implications both now and in the future, was how Bezos transformed Amazon into a true tech company.
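For what it’s worth, the arithmetic behind that parenthetical is simple:

```python
growth_multiple = 2300                 # per the aside above: web usage grew roughly 2,300x in a year

percent_growth = (growth_multiple - 1) * 100
print(f"{percent_growth:,}%")          # 229,900%, i.e. roughly 230,000%, not 2,300%

# Stated the other way, "2,300% growth" is only a 24x increase.
implied_multiple = 1 + 2300 / 100
print(implied_multiple)                # 24.0
```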

AWS and Tech Economics

That Amazon was even considered a tech company, at least for its first decade, was in many respects an accident of timing: there simply weren’t very many online businesses when Bezos got started, so anyone with a website was a tech company. The truth, though, is that Amazon was very much a retailer, with a bunch of managers it recruited from Walmart, and, after the dot-com bubble burst, under tremendous pressure to raise prices in the pursuit of profitability.

The turning point came in two parts. The first was a meeting Bezos had with Costco founder Jim Sinegal in 2001; from Brad Stone’s The Everything Store:

Sinegal explained the Costco model to Bezos: it was all about customer loyalty…”The membership fee is a onetime pain, but it’s reinforced every time customers walk in and see forty-seven-inch televisions that are two hundred dollars less than anyplace else,” Sinegal said. “It reinforces the value of the concept. Customers know they will find really cheap stuff at Costco.” Costco’s low prices generated heavy sales volume, and the company then used its significant size to demand the best possible deals from suppliers and raise its per-unit gross profit dollars.

Bezos immediately cut prices, but Sinegal’s core insight only truly crystallized later that year at an offsite with (soon-to-be-published) Good to Great author Jim Collins:

Drawing on Collins’s concept of a flywheel, or self-reinforcing loop, Bezos and his lieutenants sketched their own virtuous cycle, which they believed powered their business. It went something like this: Lower prices led to more customer visits. More customers increased the volume of sales and attracted more commission-paying third-party sellers to the site. That allowed Amazon to get more out of fixed costs like the fulfillment centers and the servers needed to run the website. This greater efficiency then enabled it to lower prices further. Feed any part of this flywheel, they reasoned, and it should accelerate the loop.

Thus was the famous Bezos napkin diagram born.

This insight is what transformed Amazon from a retailer that leveraged the Internet to a tech company masquerading as a retailer. What is fascinating about this transition, though, is that AWS was still several years down the road; Amazon still looked like a retailer! The difference is that just as Bezos had started with a solution (the Internet) looking for a problem (books), in this case Bezos started with tech economics and applied them to retail.
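The self-reinforcing nature of that flywheel is easy to see in a toy simulation; every number and pass-through rule below is invented purely to show the shape of the loop, not to model Amazon:

```python
price = 100.0          # average price customers pay
customers = 1_000      # customer visits per period
fixed_costs = 50_000.0 # fulfillment centers, servers, etc.

for period in range(5):
    unit_cost = fixed_costs / customers           # more volume spreads fixed costs thinner
    price = unit_cost + 20.0                      # pass the savings through as lower prices
    customers = int(customers * (120.0 / price))  # lower prices attract more visits (and sellers)
    print(f"period {period}: price={price:.2f} customers={customers:,}")
```

Feed any part of the loop and every other part accelerates, which is exactly what the napkin diagram claimed.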

The first tech CEO to truly grasp how tech economics were different from most other businesses was Bob Noyce (the future Intel founder) at Fairchild Semiconductor. From Michael Malone in The Intel Trinity:

In the spring of 1965…Noyce got up before a major industry conference and in one fell swoop destroyed the entire pricing structure of the electronics industry. Noyce may have had trouble deciding between conflicting claims of his own subordinates, but when it came to technology and competitors, he was one of the most ferocious risk takers in high-tech history. And this was one of his first great moves. The audience at the conference audibly gasped when Noyce announced that Fairchild would henceforth price all of its major integrated circuit products at one dollar. This was not only a fraction of the standard industry price for these chips, but it was also less than it cost Fairchild to make them.

The reason this was possible is that the true cost of integrated circuits came from the R&D costs to design them, and capital costs to manufacture them; the actual materials cost was practically zero. This meant that the best route to profitability was to make it up in volume. This equation is even more powerful in software, which has only R&D costs, and zero material costs. This overall concept, that tech is governed by a world of zero marginal costs, is what makes tech economics fundamentally different from most businesses.
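A minimal sketch of that arithmetic, with invented figures, shows why volume is decisive when costs are front-loaded:

```python
def per_unit_cost(fixed_costs: float, marginal_cost: float, volume: int) -> float:
    # Total cost per unit: R&D and capital spread over volume, plus materials.
    return fixed_costs / volume + marginal_cost

rd_and_fab = 1_000_000_000  # invented: up-front design and manufacturing investment
for volume in (1_000_000, 10_000_000, 100_000_000):
    print(f"{volume:>11,} units -> ${per_unit_cost(rd_and_fab, 1.0, volume):,.2f} per unit")
# $1,001.00 -> $101.00 -> $11.00: with near-zero marginal costs, volume is everything.
```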

Bezos, though, had one big problem: Amazon had to actually pay for the products it sold! To effectively pursue a tech economics strategy, i.e. bet everything on volume in an attempt to gain leverage on huge fixed costs, was exceptionally risky in an arena with inescapable marginal costs. Bezos, however, prioritized boldness; he wrote in his famous 1997 Letter to Shareholders, which he would attach to every subsequent letter:

We will make bold rather than timid investment decisions where we see a sufficient probability of gaining market leadership advantages. Some of these investments will pay off, others will not, and we will have learned another valuable lesson in either case.

This was the exact approach Bezos took to the initial launch of AWS; from The Everything Store:

Bezos wanted AWS to be a utility with discount rates, even if that meant losing money in the short term. Willem van Biljon…proposed pricing EC2 instances at fifteen cents an hour, a rate that he believed would allow the company to break even on the service. In an S Team meeting before EC2 launched, Bezos unilaterally revised that to ten cents. “You realize you could lose money on that for a long time,” van Biljon told him. “Great,” Bezos said.

A decade later Amazon would finally break out AWS in its reporting and it was an absolute juggernaut; I called it The AWS IPO:

This is why Amazon’s latest earnings were such a big deal: for the first time the company broke out AWS into its own line item, revealing not just its revenue (which could be teased out previously) but also its profitability. And, to many people’s surprise, and despite all the price cuts, AWS is very profitable: $265 million in profit on $1.57 billion in sales last quarter alone, for an impressive (for Amazon!) 17% net margin.

Fast forward to Q4 2020, where AWS just reported $3.6 billion in profit on $12.7 billion in revenue; perhaps more tellingly in terms of Amazon’s transformation into a tech company, it is Andy Jassy, the longtime head of AWS, who is succeeding Bezos (yet still, reflecting Amazon’s business-centric roots, Jassy is an MBA, not an engineer).
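As a quick sanity check on those figures (treating the earlier quote’s “last quarter” as the quarter in which Amazon first broke out AWS), the margins are just division:

```python
quarters = {
    "2015 quarter cited above": {"revenue": 1.57, "profit": 0.265},  # $ billions
    "Q4 2020":                  {"revenue": 12.7, "profit": 3.6},
}
for quarter, figures in quarters.items():
    print(f"{quarter}: {figures['profit'] / figures['revenue']:.0%}")
# roughly 17% then, roughly 28% now
```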

Amazon’s Platform and the First Best Customer

AWS was not, as urban legend has it, born out of a desire to utilize excess holiday capacity; after all, it is not as if Amazon kicked AWS customers off every Christmas! In fact, it took years for Amazon.com to fully run on AWS. AWS, though, was created to solve the internal problems that Amazon.com presented: how to create new customer experiences without constantly waiting for the infrastructure team to set up dedicated capacity.

The solution was primitives: instead of building monolithic applications, Amazon’s infrastructure team should build basic compute offerings — like the EC2 instances above — so that their developers could make anything they wanted to. And then, having solved the problem for itself, Amazon could solve it for everyone. From The Amazon Tax:

The “primitives” model modularized Amazon’s infrastructure, effectively transforming raw data center components into storage, computing, databases, etc. which could be used on an ad-hoc basis not only by Amazon’s internal teams but also outside developers:

A drawing of The AWS Layer

This AWS layer in the middle has several key characteristics:

  • AWS has massive fixed costs but benefits tremendously from economies of scale.
  • The cost to build AWS was justified because the first and best customer is Amazon’s e-commerce business.
  • AWS’s focus on “primitives” meant it could be sold as-is to developers beyond Amazon, increasing the returns to scale and, by extension, deepening AWS’ moat.

This last point was a win-win: developers would have access to enterprise-level computing resources with zero up-front investment; Amazon, meanwhile, would get that much more scale for a set of products for which they would be the first and best customer.
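This is not anything from the article or from Amazon’s own materials; it is a minimal, hypothetical sketch of what consuming those primitives looks like from the outside, using AWS’s Python SDK (boto3). The bucket name and AMI ID are placeholders, and actually running it would require AWS credentials and existing resources:

```python
import boto3

# Storage as a primitive: no servers provisioned up front, pay per use.
s3 = boto3.client("s3")
s3.put_object(Bucket="example-bucket", Key="hello.txt", Body=b"hello, primitives")

# Compute as a primitive: rent capacity by the hour instead of building a datacenter.
ec2 = boto3.client("ec2")
ec2.run_instances(ImageId="ami-0123456789abcdef0", InstanceType="t3.micro",
                  MinCount=1, MaxCount=1)
```

The point is less the specific calls than their shape: a developer, inside or outside Amazon, gets enterprise-grade infrastructure through a handful of API requests.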

As I noted in that article, this exact same approach increasingly applied to the e-commerce side of the business:

Prime is a super experience with superior prices and superior selection, and it too feeds into a scale play. The result is a business that looks like this:

A drawing of The Transformation of Amazon’s E-Commerce Business

That is, of course, the same structure as AWS — and it shares similar characteristics:

  • E-commerce distribution has massive fixed costs but benefits tremendously from economies of scale.
  • The cost to build-out Amazon’s fulfillment centers was justified because the first and best customer is Amazon’s e-commerce business.

That last bullet point may seem odd, but in fact 40% of Amazon’s sales (on a unit basis) are sold by 3rd-party merchants; most of these merchants leverage Fulfilled-by-Amazon, which means their goods are stored in Amazon’s fulfillment centers and covered by Prime. This increases the return to scale for Amazon’s fulfillment centers, increases the value of Prime, and deepens Amazon’s moat.

Over the last five years Amazon’s investment in fulfillment has ballooned into a multitude of new distribution centers, sortation centers, an Air Hub for Amazon’s growing fleet of airplanes, trucking, and a massive delivery operation that largely bypasses UPS and FedEx (while leveraging USPS for far-flung deliveries). At the same time the share of products sold by third-party merchants is now over 50% on a unit basis, and last quarter third-party seller services — where Amazon charges a merchant to stock and ship their goods — accounted for $27.3 billion in revenue.

Relentlessness and the Real World

This transformation in one respect brings the Bezos story full circle: the vision was an “Everything Store” — something that could only exist on the Internet — and while books were the best place to begin, they were only that — a beginning. Thanks in large part to Amazon’s platform, the company has achieved Bezos’s vision.

What is somewhat ironic, though, is that while the Internet is unquestionably a critical component of what makes Amazon Amazon, what makes the company so valuable and seemingly impregnable is the way it has integrated backwards into the world of atoms. Real moats are built with real dollars, and Bezos has been relentless in pushing the company to continually invest in solving problems with real world costs, from delivery trucks to data centers and everything in-between. This application of tech economics to the real world is what sets Bezos apart.

The world, particularly the United States, has been a massive beneficiary, especially in 2020: when people were stuck at home, what was the company they depended on more than any other to deliver supplies? When companies had to go remote overnight, what was the infrastructure that made that possible? When time needed to be filled with entertainment, games, or video calls, where did those services run? The answer in all cases was Amazon, and the companies that rose up to compete with it. That is one heck of a crowning achievement for a career that changed the industry and the world.

Publishing is Back to the Future

Andreessen Horowitz is going direct. From the a16z website, natch:

People want to learn about the future. If Software really is Eating the World, there needs to be a place that is dedicated to explaining and tracking it. So we are doing just that: we are building a new and separate media property about the future that makes sense of technology, innovation, and where things are going — and now, we’re expanding and opening up our platform to do this on a much bigger scale. We want to be the go-to place for understanding and building the future, for anyone who is building, making, or curious about tech.

Journalists, needless to say, were not amused; this tweet from the Washington Post technology columnist Geoffrey A. Fowler was representative:

For the record, I doubt that journalists have much to worry about in terms of a16z monopolizing the market; Andreessen Horowitz, under the guidance of former Wired editor Sonal Chokshi, has built out an entire podcast network that has excelled by doing pretty much the same thing promised with this initiative: explaining and tracking technology and how it might impact the future. Given that the podcast launched in 2014, a year after Stratechery, I for one can attest that the venture firm’s efforts have not drowned out “more independent views.”

Still, I get the concern. Andreessen Horowitz, at the end of the day, succeeds or fails on the basis of its investment returns. It is unreasonable to expect the company to cast a truly critical eye towards new technologies or companies given how significant its conflicts of interest are. At the same time, this also explains why Fowler’s tweet isn’t quite right: the tech industry, from the smallest startup, to the loudest venture capitalist, to the largest behemoth, has no problem when it comes to the motivation necessary to “grow and improve its products”. Competition will quite quickly kill the startup, the returns, or the long-term competitive position of any entity that ignores market forces.

Just look at journalism.

Traditional Journalism’s Insulation

There was a fascinating exchange between Nilay Patel, the editor-in-chief of The Verge, and Marques Brownlee, the brilliant YouTuber behind the MKBHD tech review channel, on the most recent episode of Patel’s podcast Decoder:

Nilay Patel: This kind of gets into one thing that I personally was very happy to see on your channel. You made an entire video about your ethics policy and what you will and won’t do with advertisers. Just to draw the stark contrast, I have almost nothing to do with the revenue of The Verge. I’m a very traditional journalist in that mold. Like, I know who our sales team is and sometimes they parade me out in front of executives to seem fancy. But I don’t know what our rates are. I’m very insulated from sales. It’s your inbox, it’s your deals, you’re setting the rates. How do you balance that tension, because it’s a very YouTube-specific tension in one way, but I think we’re seeing it across the entire kind of creator ecosystem?

Marques Brownlee: The way you phrase it is interesting. I think our rates for an ad in a video are pretty fluid. But it’s a balancing act, because you don’t want to overdo it or pick the wrong thing or not consider some part of this that should be considered. You want to make more, so that you can pay for that camera car.

But the other end of that seesaw is a channel that does way too many ads…I never put three mid-rolls in a video. It’s zero or maybe one. That applies to the ads that we build in and make ourselves, too. If it’s a bad product, it’s not worth doing it at all, even if we would’ve made a ton of money. If it’s a bad integration or if it’s a bad company to work with, I have to say no, because it just doesn’t fit. So that fit is often more important than the math of the per-minute or per-project basis.

I agree with Brownlee that the way the question was phrased is “interesting”; from a certain perspective there is an insinuation of small-scale corruption, or at the very least, the markers of something less important than “journalism”. After all, journalists don’t know how their businesses work, and they are proud of it!

Joseph Pulitzer

To be fair, I really don’t think that Patel meant anything quite so damning; he was simply stating what any “traditional journalist” thinks; no less an authority than Joseph Pulitzer wrote in The School of Journalism about his vision for Columbia Journalism School:

Not to teach typesetting, not to explain the methods of business management, not to reproduce with trivial variations the course of a commercial college. This is not university work. It needs no endowment. It is the idea of work for the community, not commerce, not for one’s self, but primarily for the public, that needs to be taught. The School of Journalism is to be, in my conception, not only not commercial, but anti-commercial. It is to exalt principle, knowledge, culture, at the expense of business if need be. It is to set up ideals, to keep the counting-room in its proper place, and to make the soul of the editor the soul of the paper…

Here is the problem, though: traditional journalism, at least in the 20th century, was predicated on geographic monopolies that bundled editorial and advertising. There was no need to know how the business worked, because, as Warren Buffett noted in a 1991 letter to shareholders, newspapers weren’t even businesses, but franchises:

An economic franchise arises from a product or service that: (1) is needed or desired; (2) is thought by its customers to have no close substitute and; (3) is not subject to price regulation. The existence of all three conditions will be demonstrated by a company’s ability to regularly price its product or service aggressively and thereby to earn high rates of return on capital. Moreover, franchises can tolerate mis-management. Inept managers may diminish a franchise’s profitability, but they cannot inflict mortal damage.

In contrast, “a business” earns exceptional profits only if it is the low-cost operator or if supply of its product or service is tight. Tightness in supply usually does not last long. With superior management, a company may maintain its status as a low-cost operator for a much longer time, but even then unceasingly faces the possibility of competitive attack. And a business, unlike a franchise, can be killed by poor management.

Until recently, media properties possessed the three characteristics of a franchise and consequently could both price aggressively and be managed loosely. Now, however, consumers looking for information and entertainment (their primary interest being the latter) enjoy greatly broadened choices as to where to find them. Unfortunately, demand can’t expand in response to this new supply: 500 million American eyeballs and a 24-hour day are all that’s available. The result is that competition has intensified, markets have fragmented, and the media industry has lost some — though far from all — of its franchise strength.

Note again the date: 1991. That was when Buffett perceived that media properties were losing their status as franchises and becoming merely businesses, which is to say that the rationale for journalists not caring about how the money was made was disappearing before the World Wide Web even existed.1

The Web dramatically accelerated the trend of increased supply — infinite supply, in fact — while sites like Craigslist destroyed the classifieds business that was newspapers’ biggest moneymaker. Even mentioning Craigslist, though, anthropomorphizes what was a secular outcome of the Internet: the fact that you could make a site for anything accessible by anyone — and searchable, to boot — meant that of course newspapers were going to lose the classifieds business (and the display advertising business after that). They only had it because newspapers had been a franchise, and now they were a business, and a struggling one at that.

Someone like Brownlee, on the other hand, has been focused on building a business from video one. Sure, he is the star of his videos, but he is also the CEO of his company; of course he makes business deals, because it is business deals that make the content production possible. This isn’t a YouTube-specific tension: it’s the marker of any modern media business.

Australia’s Media Bargaining Code

The problem with failing to understand that ignorance of how the business worked was a luxury afforded by media’s franchise status is that it lays the groundwork for other, more pernicious untruths. Look no further than Australia, and its proposed media bargaining code; here is what Fowler’s Washington Post had to say about it:

Google on Friday warned Australian lawmakers it would stop offering its search engine in the country if they went ahead with a proposed law that would force the Internet giant to pay news organizations for showing their stories in its search results. The threat is the latest and most intense in a long-running battle that has pitted Australian lawmakers and news organizations against U.S.-based tech giants Google and Facebook. For years, news organizations in Australia have argued they should be paid when Internet companies aggregate news stories on their websites. Google and Facebook say their sites help people find news, and the resulting traffic to news websites is valuable on its own…

The rise of Google and Facebook has massively disrupted the news business all over the world. The steady advertising revenue newspapers relied on for decades has almost entirely gone online, and news organizations have struggled for years to adjust to the new reality, with many going out of business or severely downsizing. The proposed law is written to apply to all “digital platforms,” but Facebook and Google are specifically mentioned in the text and have been at the center of the debate.

This is what happens when you don’t understand how your own business works: you create a myth wherein Google and Facebook decimated the news business, when in reality they came along years after the business — already not a franchise — had been decimated by the Internet. That Google and Facebook became hugely profitable by virtue of helping users make sense of infinite content — Aggregators focus on discovery, not distribution — means that their responsibility for the state of news, to the extent it exists, is exactly what they say it is: directing huge amounts of traffic to publisher websites.

The Australian Competition & Consumer Commission’s Digital Platforms Inquiry, which undergirds the proposed media bargaining code, does, to its credit, have a somewhat more realistic view of what happened to publishers:

For many news media businesses, the expanded reach and the reduced production costs offered by digital platforms have come at a significant price. For traditional print (now print/online) media businesses in particular, the rise of the digital platforms has marked a continuation of the fall in advertising revenue that began with the loss of classified advertising revenue in the early days of the internet. Without this advertising revenue, many print/online news media businesses have struggled to survive and have reduced their provision of news and journalism. New digital-only publications have not replaced what has been lost and many news media businesses are still searching for a viable business model for the provision of journalism online.

This at least gets the timeline right; the problem is that the phrase “marked a continuation of the fall” is treated as “caused a continuation of the fall” in the rest of the report, and in the bill that followed. The Explanatory Memorandum for the bill states:

The ACCC found in its Digital Platform Inquiry (July 2019) that there is a bargaining power imbalance between digital platforms and news media businesses so that news media businesses are not able to negotiate for a share of the revenue generated by the digital platforms and to which the news content created by the news media businesses contributes. Government intervention is necessary because of the public benefit provided by the production and dissemination of news, and the importance of a strong independent media in a well-functioning democracy.

This idea, that news media businesses contribute to Google and Facebook’s revenues and thus have a claim to them, is rooted in the same mindset that undergirded the separation of editorial and business: faith that money for news is a fact, and thus of no concern to journalism, at least until it disappeared. Then the industry looked around, saw who was making money, and in effect declared that money to be theirs.

The memorandum continues:

This Bill establishes a mandatory code of conduct to help support the sustainability of the Australian news media sector by addressing bargaining power imbalances between digital platforms and Australian news businesses…

The memorandum uses this hypothetical to explain what a bargaining power imbalance is:

The Daily Chronicle (DC) is a registered news business that receives a benefit from referrals to its website from Digiplat, a designated digital platform service that holds a significant bargaining power imbalance in its commercial relationships with Australian news businesses including DC. When assessing both parties’ final offers, the panel considers how the benefit that DC receives from Digiplat is affected by this bargaining power imbalance derived from Digiplat’s status as an ‘unavoidable trading partner’ for Australian news businesses.

To do this, the arbitrator considers arguments in the final offers about the size of the benefit that would likely be provided by Digiplat to DC when compared to a hypothetical scenario where there is an absence of any bargaining power imbalance.

The hypothetical scenario the panel decides is appropriate in this circumstance is one in which audiences may reach DC through other means (such as users directly visiting DC’s website or accessing it through other news aggregators) and where DC and other Australian news businesses are not reliant on Digiplat to reach those audiences.

This is also what happens when you base a law on a myth; you create hypotheticals that, if taken seriously, would not change anything about the current situation. The fact of the matter is that every person in Australia — indeed, nearly every person in the world — “may reach DC through other means (such as users directly visiting DC’s website or accessing it through other news aggregators)”; Google and Facebook don’t decide what websites users visit, users do. That they choose not to “directly visit[] DC’s website or access[] it through other news aggregators” is the result of publishers losing in the market with more choice and less lock-in than any other in history. Which, again, is the real problem: businesses built on being the only choice don’t do well when choice suddenly appears.

To that end, when the Australian government references “bargaining power imbalances”, what they mean is that publishers need Google and Facebook much more than Google and Facebook need the publishers. Publishers could, if they wanted to, de-list themselves from Google’s index; if their content were as valuable as they claim, there is absolutely nothing stopping readers from seeking them out. It’s the fact they won’t that is the real problem, which is why the bill also prevents Google and Facebook from simply declining to list sites that are demanding payment.

The New York Times’ Brilliance

I am being pretty hard on publishers here, but the truth is that news is a very tough business on the Internet. The reason why readers don’t miss any one news source, should it disappear, is that news, the moment it is reported, immediately loses all economic value as it is reproduced and distributed for free, instantly. This was always true, of course; journalists just didn’t realize that people were paying for paper, newsprint, and delivery trucks, not their reporting, and that advertisers were paying for the people. Not that they cared about how the money was made, per tradition.

The publication that has figured this out better than anyone is the New York Times; that is why the newspaper, to its immense credit, has been clear about the importance of aligning its editorial approach with its business goals. From 2017’s 2020 Report:

We are, in the simplest terms, a subscription-first business. Our focus on subscribers sets us apart in crucial ways from many other media organizations. We are not trying to maximize clicks and sell low-margin advertising against them. We are not trying to win a pageviews arms race. We believe that the more sound business strategy for The Times is to provide journalism so strong that several million people around the world are willing to pay for it. Of course, this strategy is also deeply in tune with our longtime values. Our incentives point us toward journalistic excellence…

Our journalism must change to match, and anticipate, the habits, needs and desires of our readers, present and future. We need a report that even more people consider an indispensable destination, worthy of their time every day and of their subscription dollars.

The way in which the New York Times has covered the Australian media bargaining code is a great example of how this approach has paid off; this article, published around the same time as the afore-linked Washington Post one, is far better reported: unlike most of American media, which has framed this dispute as being about paying for news, the New York Times acknowledges that Google and Facebook pay for news elsewhere, and are instead objecting to the specific ways the Australian code is implemented.

It is the description of those objections, though, that reveals the downside; note this section about algorithms, which, I would note, were not even mentioned by the Washington Post:

One potentially groundbreaking element of the proposed legislation involves the secret sauce of Facebook, Google and subsidiaries like YouTube: the algorithms that determine what people see when they search or scroll through the platforms. Early drafts of the bill would have required that tech companies give their news media partners 28 days’ notice before making any changes that would affect how users interact with their content.

Google and Facebook said that would be impossible because their algorithms are always changing in ways that can be difficult to measure for a subset like news, so in the latest draft, lawmakers limited the scope.

If the bill passes in one form or another, which seems likely, the digital platforms will have to give the media 14 days’ notice of deliberate algorithm changes that significantly affect their businesses. Even that, some critics argue, is not enough for Big Tech.

“I think Google and Facebook are seriously worried that other countries will join in Australia’s effort,” said Johan Lidberg, a professor of media at Monash University in Melbourne. “This could eventually cause substantial revenue losses globally and serious loss of control, exemplified by the algorithm issue.”

But, he added, using threats to bully lawmakers will not do them any good.

“Google’s overreaction perfectly illustrates why the code is needed,” he said, “and beyond that, the dire need for all governments, across the globe, to join in efforts in reining in and limiting the power of these companies that is completely out of hand.”

That is how the article ends, and the point of view — delivered through a quote, of course — could not be clearer. Google and Facebook are a menace, and need to be reined in, and their objections are proof of their guilt. Never mind the fact that there are millions of sites spending untold amounts of money trying to game the companies’ algorithms, and that revealing changes in advance would be unworkable, unfair, and fraught with unintended consequences. The article is “match[ing] and anticipat[ing] the habits, needs, and desires” of the New York Times readers, who, as I noted in Never-ending Niches, assume that everything tech does is bad.

NYT & a16z

At the same time, as I wrote in that Article, I don’t begrudge the New York Times their approach; in fact, I quite admire it:

I wrote in a piece called In Defense of the New York Times, after the company wrote an exposé about Amazon’s working conditions:

The fact of the matter is that the New York Times almost certainly got various details of the Amazon story wrong. The mistake most critics made, though, was in assuming that any publication ever got everything completely correct. Baquet’s insistence that good journalism starts a debate may seem like a cop-out, but it’s actually a far healthier approach than the old assumption that any one publication or writer or editor was ever in a position to know “All the News That’s Fit to Print.”

I’d go further: I think we as a society are in a far stronger place when it comes to knowing the truth than we have ever been previously, and that is thanks to the Internet…the New York Times doesn’t have the truth, but then again, neither do I, and neither does Amazon. Amazon, though, along with the other platforms that, as described by Aggregation Theory, are increasingly coming to dominate the consumer experience, are increasingly powerful, even more powerful than governments. It is a great relief that the same Internet that makes said companies so powerful is architected such that challenges to that power can never be fully repressed, and I for one hope that the New York Times realizes its goal of actually making sustainable revenue in the process of doing said challenging.

Indeed they have, and I see the ongoing criticism of tech as a feature, not a bug.

To put it another way, what the New York Times has become is not so different from what Andreessen Horowitz is proposing to build. Margit Wennmachers said in that introductory post that “Our lens is rational optimism about technology and the future”; as a long time subscriber of the New York Times, I think it is fair to call their lens rational skepticism about technology and its effects. What is notable about both is that their lenses are perfectly aligned with their business models (and, I would note, both claim to be motivated to change the world).

Again, this is a good thing. While I am open to the argument that governments ought to tax Aggregators and fund news — this is what Australia’s code is in practice, and I would be much less harsh about it if it admitted as such — I think that a news organization like the New York Times thriving on its own is much better (and I have faith that readers will adapt).

After all, it is not as if the alternative to business incentives is unimpeachable objectivity. Go back to Pulitzer: the man was an unabashed political activist; the entire reason he left the St. Louis Post-Dispatch for New York was that his managing editor had shot dead the law partner of the candidate Pulitzer was opposing (the law partner had entered the newsroom armed after the managing editor had called him a coward). Pulitzer took over the New York World, and his energetic campaigning was credited with Grover Cleveland’s election as president. This is the way newspapers operated when readers paid the bills, and while an advertising business model encouraged a veneer of objectivity, those days are long gone. Publishing is back to the future, even if awareness of that reality is not evenly distributed.


  1. This assumes that August 1991 was the starting date of the World Wide Web, which to be fair, is a bit fuzzy. 

Intel Problems

One of the first Articles on Stratechery, written on the occasion of Intel appointing a new CEO, was, in retrospect, overly optimistic. Just look at the title:

The Intel Opportunity

The misplaced optimism is twofold: first there is the fact that eight years later Intel has again appointed a new CEO (Pat Gelsinger), not to replace the one I was writing about (Brian Krzanich), but rather his successor (Bob Swan). Clearly the opportunity was not seized. What is more concerning is that the question is no longer about seizing an opportunity but about survival, and it is the United States that has the most to lose.

Problem One: Mobile

The second reason why that 2013 headline was overly optimistic is that by that point Intel was already in major trouble. The company — contrary to its claims — was too focused on speed and too dismissive of power management to even be in the running for the iPhone CPU, and despite years of trying, couldn’t break into Android either.

The damage this did to the company went deeper than foregone profits; over the last two decades the cost of building ever smaller and more efficient processors has skyrocketed into the billions of dollars. That means that companies investing in new node sizes must generate commensurately more revenue to pay off their investment. One excellent source of increased revenue for the industry has been the billions of smartphones sold over the last decade; Intel, though, hasn’t seen any of that revenue, even as PC sales have flatlined for years.
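To see why missing that volume matters, here is a back-of-the-envelope sketch; every figure (fab cost, per-chip margin, unit volumes) is invented purely for illustration:

```python
fab_cost = 20_000_000_000  # hypothetical cost of a leading-edge fab
margin_per_chip = 40.0     # hypothetical gross profit available to amortize it

for units_per_year in (200_000_000, 1_500_000_000):  # "PCs only" vs. "PCs plus smartphones"
    years_to_pay_off = fab_cost / (units_per_year * margin_per_chip)
    print(f"{units_per_year:,} chips/year -> {years_to_pay_off:.1f} years to pay off the fab")
```

The specific numbers are made up; the structural point is that the same fixed investment is paid off far faster when smartphone-scale volume is flowing through it.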

What has kept the company prospering — when it comes to the level of capital investment necessary to build next-generation fabs, you are either prospering or going bankrupt — has been the explosion in mobile’s counterpart: cloud computing.

Problem Two: Server Success

It wasn’t that long ago that Intel was a disruptor; whereas the server space was originally dominated by integrated companies like Sun, with prices to match, the explosion in PC sales meant that Intel was rapidly improving performance even as it reduced price, particularly relative to performance. Sure, PCs didn’t match the reliability of integrated servers, but around the turn of the century Google realized that the scale and complexity entailed in offering its service meant that building a truly reliable stack was impossible; the solution was to build with the assumption of failure, which in turn made it possible to build its data centers on (relatively) cheap x86 processors.

The datacenter transition from proprietary to commodity hardware
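The intuition behind building with the assumption of failure can be sketched in a few lines; the uptime figures are invented and failures are assumed to be independent, so this is an illustration rather than Google’s actual math:

```python
def availability(per_server_uptime: float, replicas: int) -> float:
    # The service stays up as long as at least one replica is up.
    return 1 - (1 - per_server_uptime) ** replicas

print(f"{availability(0.999, 1):.6f}")  # one "reliable" integrated server: 0.999000
print(f"{availability(0.99, 3):.6f}")   # three cheap commodity servers:    0.999999
```

Cheap hardware plus redundancy in software beats expensive hardware on its own, which is what made (relatively) cheap x86 processors good enough.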

Over the following two decades Google’s approach was adopted by every major datacenter operator, and x86 became the default instruction set for servers; Intel was one of the biggest beneficiaries for the straightforward reason that it made the best x86 processors, particularly for server applications. This was due both to Intel’s proprietary designs and to its superior manufacturing; AMD, Intel’s IBM-mandated competitor, occasionally threatened the incumbent on the desktop, but only on the low end for laptops, and not at all in data centers.

In this way Intel escaped Microsoft’s post-PC fate: Microsoft wasn’t simply shut out of mobile, they were shut out of servers as well, which ran Linux, not Windows. Sure, the company tried to prop up Windows as long as they could, both on the device side (via Office) and on the server side (via Azure); conversely, what has fueled the company’s recent growth has been The End of Windows, as Office has moved to the cloud with endpoints on all devices, and Azure has embraced Linux. In both cases Microsoft had to accept that their differentiation had flipped from owning the API to having the capability to serve their already-existing customers at scale.

The Intel Opportunity that I referenced above would have entailed a similar flip for Intel: whereas the company’s differentiation had long been based on its integration of chip design and manufacturing, mobile meant that x86 was, like Windows, permanently relegated to a minority of the overall computing market. That, though, was the opportunity.

Most chip designers are fabless; they create the design, then hand it off to a foundry. AMD, Nvidia, Qualcomm, MediaTek, Apple — none of them own their own factories. This certainly makes sense: manufacturing semiconductors is perhaps the most capital-intensive industry in the world, and AMD, Qualcomm, et al have been happy to focus on higher margin design work.

Much of that design work, however, has an increasingly commoditized feel to it. After all, nearly all mobile chips are centered on the ARM architecture. For the cost of a license fee, companies, such as Apple, can create their own modifications, and hire a foundry to manufacture the resultant chip. The designs are unique in small ways, but design in mobile will never be dominated by one player the way Intel dominated PCs.

It is manufacturing capability, on the other hand, that is increasingly rare, and thus, increasingly valuable. In fact, today there are only four major foundries: Samsung, GlobalFoundries, Taiwan Semiconductor Manufacturing Company (TSMC), and Intel. Only four companies have the capacity to build the chips that are in every mobile device today, and in everything tomorrow.

Massive demand, limited suppliers, huge barriers to entry. It’s a good time to be a manufacturing company. It is, potentially, a good time to be Intel. After all, of those four companies, the most advanced, by a significant margin, is Intel. The only problem is that Intel sees themselves as a design company, come hell or high water.

My recommendation did not, by the way, entail giving up Intel’s x86 business; I added in a footnote:

Of course they keep the x86 design business, but it’s not their only business, and over time not even their primary business.

In fact, the x86 business proved far too profitable to take such a radical step, which is the exact sort of “problem” that leads to disruption: yes, Intel avoided Microsoft’s fate, but that also means that the company never felt the financial pain necessary to make such a dramatic transformation of its business at a time when it might have made a difference (and, to be fair, Andy Grove needed the memory crash of 1984 to get the company to fully focus on processors in the first place).

Problem Three: Manufacturing

Meanwhile, over the last decade the modular-focused TSMC, fueled by the massive volumes that came from mobile and a willingness to work with — and thus share profits with — best-of-breed suppliers like ASML, surpassed Intel’s manufacturing capabilities.

This threatens Intel on multiple fronts:

  • Intel has already lost Apple’s Mac business thanks in part to the outstanding performance of the latter’s M1 chip. It is important to note, though, that while some measure of that performance is due to Apple’s design chops, the fact that it is manufactured on TSMC’s 5nm process is an important factor as well.
  • In a similar vein, AMD chips are now faster than Intel’s on the desktop, and extremely competitive in the data center. Again, part of AMD’s improvement is due to better designs, but just as important is the fact that AMD is manufacturing its chips on TSMC’s 7nm process.
  • Large cloud providers are increasingly investing in their own chip designs; Amazon, for example, is on the second iteration of their Graviton ARM-based processor, which Twitter’s timeline will run on. Part of Graviton’s advantage is its design, but part of it is — you know what’s coming! — the fact that it is manufactured by TSMC, also on its 7nm process (which is competitive with Intel’s finally-launched 10nm process).

In short, Intel is losing share in PCs, even as it is threatened by AMD for x86 servers in the datacenter, and even as cloud companies like Amazon integrate backwards into processors; I haven’t even touched on the increase in other specialized datacenter workloads like GPU-based applications for machine learning, which are designed by companies like Nvidia and manufactured by Samsung.

What makes this situation so dangerous for Intel is the volume issue I noted above: the company already missed mobile, and while server chips provided the growth the company needed to invest in manufacturing over the last decade, the company can’t afford to lose volume at the very moment it needs to invest more than ever.

Problem Four: TSMC

Unfortunately, this isn’t even the worst of it. The day after Intel named its new CEO, TSMC announced its earnings and, more importantly, its capex guidance for 2021; from Bloomberg:

Taiwan Semiconductor Manufacturing Co. triggered a global chip stock rally after outlining plans to pour as much as $28 billion into capital spending this year, a staggering sum aimed at expanding its technological lead and constructing a plant in Arizona to serve key American customers.

This is a staggering amount of money that is only going to increase TSMC’s lead.

The envisioned spending spree sent chipmaking gear manufacturers surging from New York to Tokyo. Capital spending for 2021 is targeted at $25 billion to $28 billion, compared with $17.2 billion the previous year. About 80% of the outlay will be devoted to advanced processor technologies, suggesting TSMC anticipates a surge in business for cutting-edge chipmaking. Analysts expect Intel Corp., the world’s best-known chipmaker, to outsource manufacture to the likes of TSMC after a series of in-house technology slip-ups.

That’s right: Intel likely has, at least for now, given up on process leadership. The company will keep its design-based margins and foreclose the AMD threat by outsourcing cutting-edge chip production to TSMC, but that will only increase TSMC’s lead, and does nothing to address Intel’s other vulnerabilities.

Problem Five: Geopolitics

Intel’s vulnerabilities aren’t the only ones to be concerned about; I wrote last year about Chips and Geopolitics:

The international status of Taiwan is, as they say, complicated. So, for that matter, are U.S.-China relations. These two things can and do overlap to make entirely new, even more complicated complications.

Geography is much more straightforward:

A map of the Pacific

Taiwan, you will note, is just off the coast of China. South Korea, home to Samsung, which also makes the highest end chips, although mostly for its own use, is just as close. The United States, meanwhile, is on the other side of the Pacific Ocean. There are advanced foundries in Oregon, New Mexico, and Arizona, but they are operated by Intel, and Intel makes chips for its own integrated use cases only.

The reason this matters is because chips matter for many use cases outside of PCs and servers — Intel’s focus — which is to say that TSMC matters. Nearly every piece of equipment these days, military or otherwise, has a processor inside. Some of these don’t require particularly high performance, and can be manufactured by fabs built years ago all over the U.S. and across the world; others, though, require the most advanced processes, which means they must be manufactured in Taiwan by TSMC.

This is a big problem if you are a U.S. military planner. Your job is not to figure out if there will ever be a war between the U.S. and China, but to plan for an eventuality you hope never occurs. And in that planning the fact that TSMC’s foundries — and Samsung’s — are within easy reach of Chinese missiles is a major issue.

The context of that article was TSMC’s announcement that it would (eventually) open a 5nm fab in Arizona; yes, that is cutting edge today, but it won’t be in 2024, when the fab opens. Still, it will almost certainly be the most advanced fab in the U.S. focused on contract manufacturing; Intel will, hopefully, have surpassed that fab’s capabilities by the time it opens.

Note, though, that what matters to the United States is different than what matters to Intel: while the latter cares about x86, the U.S. needs cutting-edge general purpose fabs on U.S. soil. To put it another way, Intel will always prioritize design, while the U.S. needs to prioritize manufacturing.

This, by the way, is why I am more skeptical today than I was in 2013 about Intel manufacturing for others. The company may be financially compelled to do so to get the volume it needs to pay back its investments, but the company will always put its own designs at the front of the line.

Solution One: Breakup

This is why Intel needs to be split in two. Yes, integrating design and manufacturing was the foundation of Intel’s moat for decades, but that integration has become a straitjacket for both sides of the business. Intel’s designs are held back by the company’s struggles in manufacturing, while its manufacturing has an incentive problem.

The key thing to understand about chips is that design has much higher margins; Nvidia, for example, has gross margins between 60% and 65%, while TSMC, which makes Nvidia’s chips, has gross margins closer to 50%. Intel has, as I noted above, traditionally had margins closer to Nvidia’s, thanks to its integration, which is why Intel’s own chips will always be a priority for its manufacturing arm. That will mean worse service for prospective customers, and less willingness to change its manufacturing approach to both accommodate customers and incorporate best-of-breed suppliers (lowering margins even further). There is also the matter of trust: would companies that compete with Intel be willing to share their designs with their competitor, particularly if that competitor is incentivized to prioritize its own business?
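To make the incentive problem concrete, here is a back-of-the-envelope sketch using the rough margin figures above; the per-wafer revenue number is an invented placeholder, and only the comparison matters.

```python
# Back-of-the-envelope sketch of the incentive problem, using the rough
# gross margins cited above; the wafer revenue figure is a hypothetical
# placeholder purely for illustration.
WAFER_REVENUE = 10_000  # hypothetical revenue per wafer, in dollars

design_margin = 0.60   # roughly where Nvidia (and historically Intel) sits
foundry_margin = 0.50  # roughly where TSMC sits

gross_profit_own_chip = WAFER_REVENUE * design_margin
gross_profit_for_customer = WAFER_REVENUE * foundry_margin

print(f"Wafer used for Intel's own design:  ${gross_profit_own_chip:,.0f} gross profit")
print(f"Wafer sold as foundry capacity:     ${gross_profit_for_customer:,.0f} gross profit")
# Every wafer allocated to an external customer forgoes the difference,
# which is why an integrated Intel will always put its own chips first.
```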

The only way to fix this incentive problem is to spin off Intel’s manufacturing business. Yes, it will take time to build out the customer service components necessary to work with third parties, not to mention the huge library of IP building blocks that make working with a company like TSMC (relatively) easy. But a standalone manufacturing business will have the most powerful incentive possible to make this transformation happen: the need to survive.

Solution Two: Subsidies

This also opens the door for the U.S. to start pumping money into the sector. Right now it makes no sense for the U.S. to subsidize Intel; the company doesn’t actually build what the U.S. needs, and the company clearly has culture and management issues that won’t be fixed with money for nothing.

That is why a federal subsidy program should operate as a purchase guarantee: the U.S. will buy A amount of U.S.-produced 5nm processors for B price; C amount of U.S.-produced 3nm processors for D price; E amount of U.S.-produced 2nm processors for F price; etc. This will not only give the new Intel manufacturing spin-off something to strive for, but also incentivize other companies to invest; perhaps GlobalFoundries will get back in the game,1 or TSMC will build more fabs in the U.S. And, in a world of nearly free capital, perhaps there will finally be a startup willing to take the leap.
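The mechanics of such a guarantee are simple enough to sketch. The tiers below are purely hypothetical stand-ins for the A/B, C/D, E/F placeholders above; the point is that the commitment is tied to a node and to U.S. production, not to any particular company.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class PurchaseGuarantee:
    node: str          # process node, e.g. "5nm"
    quantity: int      # number of U.S.-produced chips the government commits to buy
    unit_price: float  # guaranteed price per chip, in dollars

# Hypothetical tiers standing in for the A/B, C/D, E/F placeholders above.
SCHEDULE = [
    PurchaseGuarantee("5nm", quantity=1_000_000, unit_price=100.0),
    PurchaseGuarantee("3nm", quantity=1_000_000, unit_price=150.0),
    PurchaseGuarantee("2nm", quantity=1_000_000, unit_price=200.0),
]

def guaranteed_revenue(schedule: List[PurchaseGuarantee]) -> float:
    """Total demand any qualifying U.S. fab can count on before breaking ground."""
    return sum(g.quantity * g.unit_price for g in schedule)

print(f"Guaranteed demand across all nodes: ${guaranteed_revenue(SCHEDULE):,.0f}")
```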

This prescription over-simplifies the problem, to be sure; there is a lot that goes into chip manufacturing beyond silicon. Packaging, for example, which long ago moved overseas in the pursuit of lower labor costs, is now fully automated; incentives to move that back may be more straightforward. What is critical to understand, though, is that regaining U.S. competitiveness, much less leadership, will take many years; the federal government has a role, but so does Intel, not by seizing its opportunity, but by accepting the reality that its integrated model is finished.

I wrote a follow-up to this article in this Daily Update.


  1. GlobalFoundries is AMD’s former manufacturing arm; it bowed out of the cutting-edge race when it halted its 7nm development in 2018

Internet 3.0 and the Beginning of (Tech) History

Francis Fukuyama’s The End of History and the Last Man is, particularly relative to its prescience, one of the most misunderstood books of all time. Aris Roussinos explained at UnHerd:

Now that history has returned with the vengeance of the long-dismissed, few analyses of our present moment are complete without a ritual mockery of Fukuyama’s seemingly naive assumptions. The also-rans of the 1990s, Samuel P. Huntington’s The Clash of Civilisations thesis and Robert D. Kaplan’s The Coming Anarchy, which predicted a paradigm of growing disorder, tribalism and the breakdown of state authority, now seem more immediately prescient than Fukuyama’s offering.

Yet nearly thirty years later, reading what Fukuyama actually wrote as opposed to the dismissive précis of his ideas, we see that he was right all along. Where Huntington and Kaplan predicted the threat to the Western liberal order coming from outside its cultural borders, Fukuyama discerned the weak points from within, predicting, with startling accuracy, our current moment.

Consider this paragraph from the book:

Experience suggests that if men cannot struggle on behalf of a just cause because that just cause was victorious in an earlier generation, then they will struggle against the just cause. They will struggle for the sake of struggle. They will struggle, in other words, out of a certain boredom: for they cannot imagine living in a world without struggle. And if the greater part of the world in which they live is characterized by peaceful and prosperous liberal democracy, then they will struggle against that peace and prosperity, and against democracy.

It was hard not to think of that paragraph as scenes emerged from last week’s invasion of the U.S. Capitol in an attempt to overturn a democratic election, particularly those members of the mob LARPing a special forces military operation, and in the following days when it became clear just how many members of the mob were otherwise well-off members of society. Was the belief that President Trump won the election a sufficient motivation to attack the Capitol, or, underneath it all, was there something more?

Two Capitol invaders in military gear
Win McNamee via Getty Images

I won’t pretend to know the answers to that question — this is a blog about technology and strategy, not philosophy and history. The events that followed Wednesday, though, bring to mind Fukuyama’s warning that history may be restarted by those unsatisfied with its end.

The End of the Beginning

One year ago I wrote The End of the Beginning, which posited that the history of information technology was not, as popularly believed, one of alternating epochs disrupted by new paradigms, but rather a continuous shift along two parallel axes:

A drawing of The Evolution of Computing
The evolution of computing from the mainframe to cloud and mobile

The place we compute shifted from a central location to anywhere; the time in which we compute shifted from batch processes to continuous computing. The implication of viewing the shift from mainframe computing, to personal computing on a network, to mobile connections to the cloud, as manifestations of a single trend was just as counterintuitive:

What is notable is that the current environment appears to be the logical endpoint of all of these changes: from batch-processing to continuous computing, from a terminal in a different room to a phone in your pocket, from a tape drive to data centers all over the globe. In this view the personal computer/on-premises server era was simply a stepping stone between two ends of a clearly defined range.

Another way to think about the current state of affairs is that it is the inevitable economic endpoint of the technological underpinnings of the Internet.

Internet 1.0: Technology

The vast majority of the technologies undergirding the Internet were in fact developed decades ago. TCP/IP, for example, which undergirds the World Wide Web, email, and a whole host of familiar technologies, was first laid out in a paper in 1974; DNS, which translates domain names to numerical IP addresses, was introduced in 1985; HTTP, the application layer for the Web, was introduced in 1991. The year these technologies came together from an end user perspective, though, was 1993 with the introduction of Mosaic, a graphical web browser developed by Marc Andreessen at the University of Illinois.
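For those who want to see how those layers fit together, the few lines below use nothing but Python’s standard library to resolve a name via DNS and then make an HTTP request over a TCP/IP connection; the hostname is just an example.

```python
import socket
import http.client

HOST = "example.com"  # any hostname will do; this one is just an illustration

# DNS: translate a human-readable domain name into a numerical IP address
ip_address = socket.gethostbyname(HOST)
print(f"{HOST} resolves to {ip_address}")

# HTTP: an application-layer request, carried over a TCP/IP connection
conn = http.client.HTTPConnection(HOST, 80, timeout=10)
conn.request("GET", "/")
response = conn.getresponse()
print(f"HTTP {response.status} {response.reason}")
conn.close()
```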

Over the next few years websites proliferated rapidly, as did dreams about what this new technology might make possible. This mania led to the dot-com bubble, which, critically, fueled massive investments in telecom infrastructure. Yes, companies like WorldCom, NorthPoint, and Global Crossing that made these investments went bankrupt, but the foundation had been laid for widespread high-speed connectivity.

Internet 2.0: Economics

Google was founded in 1998, in the middle of the dot-com bubble, but it was the company’s IPO in 2004 that, to my mind, marked the beginning of Internet 2.0. This period of the Internet was about the economics of zero friction; specifically, unlike the assumptions that undergirded Internet 1.0, it turned out that the Internet does not disperse economic power but in fact centralizes it. This is what undergirds Aggregation Theory: when services compete without the constraints of geography or marginal costs, dominance is achieved by controlling demand, not supply, and winners take most.

Aggregators like Google and Facebook weren’t the only winners though; the smartphone market was so large that it could sustain a duopoly of two platforms with multi-sided networks of developers, users, and OEMs (in the case of Android; Apple was both OEM and platform provider for iOS). Meanwhile, public cloud providers could provide back-end servers for companies of all types, with scale economics that not only lowered costs and increased flexibility, but which also justified far more investments in R&D that were immediately deployable by said companies.

The network effects of iOS and Android are so strong, and the scale economics of Amazon, Microsoft, and Google so overwhelming, that I concluded in The End of the Beginning:

The implication of this view should at this point be obvious, even if it feels a tad bit heretical: there may not be a significant paradigm shift on the horizon, nor the associated generational change that goes with it. And, to the extent there are evolutions, it really does seem like the incumbents have insurmountable advantages: the hyperscalers in the cloud are best placed to handle the torrent of data from the Internet of Things, while new I/O devices like augmented reality, wearables, or voice are natural extensions of the phone.

This, though, is where I am reminded of The End of History and the Last Man; Fukuyama writes in the final chapter:

If it is true that the historical process rests on the twin pillars of rational desire and rational recognition, and that modern liberal democracy is the political system that best satisfies the two in some kind of balance, then it would seem that the chief threat to democracy would be our own confusion about what is really at stake. For while modern societies have evolved toward democracy, modern thought has arrived at an impasse, unable to come to a consensus on what constitutes man and his specific dignity, and consequently unable to define the rights of man. This opens the way to a hyperintensified demand for the recognition of equal rights, on the one hand, and for the re-liberation of megalothymia on the other. This confusion in thought can occur despite the fact that history is being driven in a coherent direction by rational desire and rational recognition, and despite the fact that liberal democracy in reality constitutes the best possible solution to the human problem.

Megalothymia is “the desire to be recognized as superior to other people”, and “can be manifest both in the tyrant who invades and enslaves a neighboring people so that they will recognize his authority, as well as in the concert pianist who wants to be recognized as the foremost interpreter of Beethoven”; successful liberal democracies channel this desire into fields like entrepreneurship or competition, including electoral politics.

In the case of the Internet, we are at the logical endpoint of technological development; here, though, the impasse is not the nature of man, but the question of sovereignty, and the potential re-liberation of megalothymia is the likely refusal by people, companies, and countries around the world to be lorded over by a handful of American giants.

Big Tech’s Power

Last week, in response to the violence at the Capitol and the fact it was incited by Trump, first Facebook and then Twitter de-platformed the President; a day later Apple, Google, and Amazon kicked Parler, another social network where Trump supporters congregated and in part planned Wednesday’s action, out of their App Stores and hosting service, respectively, effectively killing the service.

After years of defending Facebook and Twitter’s decisions to keep Trump on their services, I called for him to be kicked off last Thursday, and I explained yesterday why tech’s collective action in response to last Wednesday’s events was a uniquely American solution to a genuine crisis:

So Facebook and Twitter and Apple and Google and Amazon and all of the rest were wrong, right? Well, again, context matters, and again, the context here was an elected official encouraging his supporters to storm the Capitol to overturn an election result and his supporters doing so. What I believe happened this weekend was a uniquely American solution to the problem of Trump’s refusal to concede and attempts to incite violence: all of corporate America collectively decided that enough was enough, and did what Congress has been unable to do, effectively ending the Trump presidency. Parler, to be honest, was just as much a bystander casualty as it was a direct target. That the tech sector is the only one with the capabilities to actually make a difference is what makes the industry stand out.

I am not, to be clear, saying that this is some sort of ideal solution. As I noted last week impeachment is the way this is supposed to go, and hopefully that still occurs. And, as I also noted last week, if this triggers a debate about the power of tech companies, all the better. This solution was, though, a pragmatic and ultimately effective one, even if the full costs will take years to materialize (again, more on the long-term repercussions soon).

Soon is today; this Article is not about the rightness and wrongness of these decisions — again, please see the two articles I just linked — but rather about the implications of tech companies taking the actions they did last weekend.

Start with Europe; from Bloomberg:

Germany and France attacked Twitter Inc. and Facebook Inc. after U.S. President Donald Trump was shut off from the social media platforms, in an extension of Europe’s battle with big tech. German Chancellor Angela Merkel objected to the decisions, saying on Monday that lawmakers should set the rules governing free speech and not private technology companies.

“The chancellor sees the complete closing down of the account of an elected president as problematic,” Steffen Seibert, her chief spokesman, said at a regular news conference in Berlin. Rights like the freedom of speech “can be interfered with, but by law and within the framework defined by the legislature — not according to a corporate decision.”

The German leader’s stance is echoed by the French government. Junior Minister for European Union Affairs Clement Beaune said he was “shocked” to see a private company make such an important decision. “This should be decided by citizens, not by a CEO,” he told Bloomberg TV on Monday. “There needs to be public regulation of big online platforms.” Finance Minister Bruno Le Maire earlier said that the state should be responsible for regulations, rather than “the digital oligarchy,” and called big tech “one of the threats” to democracy.

Make no mistake, Europe is far more restrictive on speech than the U.S. is, including strict anti-Nazi laws in Germany, the right to be forgotten, and other prohibitions on broadly defined “harms”; the difference from the German and French perspective, though, is that those restrictions come from the government, not private companies.

This sentiment, as I noted yesterday, is completely foreign to Americans, who, whatever their differences on the degree to which online speech should be policed, are united in their belief that the legislature is the wrong place to start; the First Amendment isn’t just a law, but a culture. The implication of American tech companies serving the entire world, though, is that that American culture, so familiar to Americans yet anathema to most Europeans, is the only choice for the latter.

Politicians from India’s ruling party expressed similar reservations; from The Times of India:

BJP leaders expressed concern on Saturday over the permanent suspension of US President Donald Trump’s Twitter account by the social media giant, saying it sets a dangerous precedent and is a wake-up call for democracies about the threat from unregulated big tech companies…”If they can do this to the President of the US, they can do this to anyone. Sooner India reviews intermediaries’ regulations, better for our democracy,” BJP’s youth wing president Tejaswi Surya said in a tweet.

Tech companies would surely argue that the context of Trump’s removal was exceptional, but when it comes to sovereignty it is not clear why U.S. domestic political considerations are India’s concern, or any other country’s. The fact that the capability exists for their own leaders to be silenced by an unreachable and unaccountable executive in San Francisco is all that matters, and it is completely understandable to think that countries will find this status quo unacceptable.

Companies, meanwhile, will note the fate of Parler. Sure, few have any intention of dealing with user-generated content, but the truth is that here the shift has already started: most retailers, for example, have been moving away from AWS for years; this will be another reminder that when push comes to shove, the cloud providers will act in their own interests first.

Meanwhile, there remain the tens of millions of Americans who voted for Trump, and the (significantly) smaller number that were on Parler; sure, they may be (back) on Twitter or Facebook, but this episode will not soon be forgotten: Congress may not have made a law abridging the freedom of speech, but Mark Zuckerberg and Jack Dorsey did, and Apple, Google, and Amazon soon fell in line. That all of those companies will be viewed with a dramatically heightened sense of suspicion should hardly be a surprise.

Internet 3.0: Politics

This is why I suspect that Internet 2.0, despite its economic logic predicated on the technology undergirding the Internet, is not the end-state. When I called the current status quo The End of the Beginning, it turns out “The Beginning” I was referring to was History. The capitalization is intentional; Fukuyama wrote in the Introduction of The End of History and the Last Man:

What I suggested had come to an end was not the occurrence of events, even large and grave events, but History: that is, history understood as a single, coherent, evolutionary process, when taking into account the experience of all peoples in all times…Both Hegel and Marx believed that the evolution of human societies was not open-ended, but would end when mankind had achieved a form of society that satisfied its deepest and most fundamental longings. Both thinkers thus posited an “end of history”: for Hegel this was the liberal state, while for Marx it was a communist society. This did not mean that the natural cycle of birth, life, and death would end, that important events would no longer happen, or that newspapers reporting them would cease to be published. It meant, rather, that there would be no further progress in the development of underlying principles and institutions, because all of the really big questions had been settled.

It turns out that when it comes to Information Technology, very little is settled; after decades of developing the Internet and realizing its economic potential, the entire world is waking up to the reality that the Internet is not simply a new medium, but a new maker of reality. I wrote in The Internet and the Third Estate:

What makes the Internet different from the printing press? Usually when I have written about this topic I have focused on marginal costs: books and newspapers may have been a lot cheaper to produce than handwritten manuscripts, but they are still not zero. What is published on the Internet, meanwhile, can reach anyone anywhere, drastically increasing supply and placing a premium on discovery; this shifted economic power from publications to Aggregators.

Just as important, though, particularly in terms of the impact on society, is the drastic reduction in fixed costs. Not only can existing publishers reach anyone, anyone can become a publisher. Moreover, they don’t even need a publication: social media gives everyone the means to broadcast to the entire world. Read again Zuckerberg’s description of the Fifth Estate:

People having the power to express themselves at scale is a new kind of force in the world — a Fifth Estate alongside the other power structures of society. People no longer have to rely on traditional gatekeepers in politics or media to make their voices heard, and that has important consequences.

It is difficult to overstate how much of an understatement that is. I just recounted how the printing press effectively overthrew the First Estate, leading to the establishment of nation-states and the creation and empowerment of a new nobility. The implication of overthrowing the Second Estate, via the empowerment of commoners, is almost too radical to imagine.

It is difficult to believe that the discussion of these implications will be reserved for posts on niche sites like Stratechery; the printing press transformed Europe from a continent of city-states loosely tied together by the Catholic Church, to a continent of nation-states with their own state churches. The extent to which the Internet is as meaningful a shift — and I think it is! — is inversely correlated with how far along we are in the transformation that will follow, which is to say we have only gotten started. And, after last week, the world is awake to the stakes; politics — not economics — will decide, and be decided by, the Internet.

The Return of Technology

Here technology itself will return to the forefront: if the priority for an increasing number of citizens, companies, and countries is to escape centralization, then the answer will not be competing centralized entities, but rather a return to open protocols.1 This is the only way to match and perhaps surpass the R&D advantages enjoyed by centralized tech companies; open technologies can be worked on collectively, and forked individually, gaining both the benefits of scale and the possibility of sovereignty and self-determination.

Internet 3.0 will return to open and decentralized technology

This process will take years; I would expect governments in Europe in particular to initially try to build their own centralized alternatives. Those efforts, though, will founder for a lack of R&D capabilities, and be outstripped by open alternatives that are perhaps not as full-featured and easy-to-use as big tech offerings, at least in the short to medium term, but possess the killer feature of not having a San Francisco kill-switch.


  1. Crypto projects are one manifestation of this, but not the only ones