Twitter Has a New CEO; What About a New Business Model?

From CNBC:

Twitter CEO Jack Dorsey is stepping down as chief of the social media company, effective immediately. Parag Agrawal, Twitter’s chief technology officer, will take over the helm, the company said Monday. Shares of Twitter closed down 2.74% on the day.

Dorsey, 45, was serving as both the CEO of Twitter and Square, his digital payments company. Dorsey will remain a member of the board until his term expires at the 2022 meeting of stockholders, the company said. Salesforce President and COO Bret Taylor will become the chairman of the board, succeeding Patrick Pichette, a former Google executive, who will remain on the board as chair of the audit committee.

“I’ve decided to leave Twitter because I believe the company is ready to move on from its founders,” Dorsey said in a statement, though he didn’t provide any additional detail on why he decided to resign.

On one hand, congratulations to Twitter for the first non-messy CEO transition in its history; on the other hand, this one was a bit weird in its own way: CNBC broke the news at 9:23am Eastern, just in time for the markets to open and the stock to shoot up around 10% as feverish speculation broke out about who the successor would be; two hours and 25 minutes later Dorsey confirmed the news and announced Agrawal as his successor, and the sell-off commenced.

The missing context in Dorsey’s announcement was Elliott Management, the activist investor that took a stake in Twitter in early 2020 and demanded that Dorsey either focus on Twitter (instead of Square, where he is still CEO) or step down; Twitter gave Elliott and Silver Lake, which was working with Elliott, two seats on the board a month later. That agreement, though, came with the condition that Twitter grow its user base, speed up revenue growth, and gain digital ad market share.

Twitter has made progress: while the company’s monthly active users have been stagnant for years — which is probably why the company stopped reporting them in 2019 — its “monetizable daily active users” have increased from 166 million in Q1 2020 to 211 million last quarter, and its trailing twelve-month revenue has increased from $3.5 billion in Q1 2020 to $4.8 billion in Q3 2021. The rub is digital ad market share: Snap, for example, grew its TTM revenue from $1.9 billion to $4.0 billion over the same period, as the pandemic proved to be a massive boon for many ad-driven platforms.
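The market-share gap is easy to see in a quick back-of-the-envelope comparison of the growth rates implied by the rounded figures above (illustrative arithmetic only, not company-reported growth rates):

```python
def growth(start, end):
    """Percentage growth from start to end."""
    return (end - start) / start

# Rounded figures from the paragraph above (Q1 2020 -> most recent period)
print(f"Twitter mDAU:        {growth(166, 211):.0%}")  # ~27%
print(f"Twitter TTM revenue: {growth(3.5, 4.8):.0%}")  # ~37%
print(f"Snap TTM revenue:    {growth(1.9, 4.0):.0%}")  # ~111%
```

Snap’s ad business grew roughly three times as fast over the same window; that is what losing digital ad market share looks like in practice.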

That boon was driven by the surge in e-commerce, which is powered by direct response marketing, where there is a tight link between seeing an ad and making a purchase; Twitter, though, has struggled for years to build a direct response business, leaving it dependent on brand advertising for 85% of its ad revenue. That meant the company was not only not helped by the pandemic, but hurt worse than most (and, on the flip side, was less affected by Apple’s iOS 14 changes). If in fact Dorsey’s job depended on taking digital ad market share, he didn’t stand a chance.

That perhaps explains yesterday’s weird timing; Casey Newton speculated that the board may have leaked the news to ensure that Dorsey didn’t get cold feet. It also, I suspect, explains the market’s cool reaction to the appointment of an insider: Agrawal was there for all of those previously failed attempts to build a direct response marketing business, so it’s not entirely clear what is going to be different going forward.

Twitter’s Advertising Problem

The messiness I alluded to in Twitter’s previous CEO transitions was merely one marker of a general run of mismanagement dating back to the company’s earliest days. I’ve long contended that Twitter’s core problem is that the product was too perfect right off the bat; from 2014’s Twitter’s Marketing Problem:

One of the most common Silicon Valley phrases is “Product-Market Fit.” Back when he blogged on a blog, instead of through numbered tweets, Marc Andreessen wrote:

The only thing that matters is getting to product/market fit…I believe that the life of any startup can be divided into two parts: before product/market fit (call this “BPMF”) and after product/market fit (“APMF”).

When you are BPMF, focus obsessively on getting to product/market fit.

Do whatever is required to get to product/market fit. Including changing out people, rewriting your product, moving into a different market, telling customers no when you don’t want to, telling customers yes when you don’t want to, raising that fourth round of highly dilutive venture capital — whatever is required.

When you get right down to it, you can ignore almost everything else.

I think this actually gets to the problem with Twitter: the initial concept was so good, and so perfectly fit such a large market, that they never needed to go through the process of achieving product market fit. It just happened, and they’ve been riding that match for going on eight years.

The problem, though, was that by skipping over the wrenching process of finding a market, Twitter still has no idea what their market actually is, and how they might expand it. Twitter is the company-equivalent of a lottery winner who never actually learns how to make money, and now they are starting to pay the price.

Seven years on, Twitter has finally started to implement some of the proposals from that article, including leaning heavily into recommendations and topics; in theory the machine learning that drives those recommendations should translate into more effective advertising as well. That hasn’t really happened, though, and I’m not sure it ever will, for reasons that go beyond the effectiveness of Twitter’s management (or lack thereof).

Think about the contrast between Twitter and Instagram; both stand out amongst social networks in that they follow a broadcast model: tweets on Twitter and photos on Instagram are public by default, and anyone can follow anyone. The default medium, though, is fundamentally different: Twitter has photos and videos, but the heart of the service is text (and links). Instagram, on the other hand, is nothing but photos and video (and link in bio).

The implications of this are vast. Sure, you may follow your friends on both, but on Twitter you will also follow news breakers, analysts, insightful anons, joke tellers, and shit posters. The goal is to mainline information, and Twitter’s speed and information density are unparalleled by anything in the world. On Instagram, though, you might follow brands and influencers, and your chief interaction with your friends is stories about their Turkey Day exploits. It’s about aspiration, not information, and the former makes a lot more sense for effective advertising.

It’s more than just the medium though; it’s about the user’s mental state as well. Instagram is leisurely and an escape, something you do when you’re procrastinating; Twitter is intense and combative, and far more likely to be tied to something happening in the physical world, whether that be watching sports or politics or doing work:

Instagram is a lean-back experience; on Twitter you lean forward

This matters for advertising, particularly advertising that depends on a direct response: when you are leaning back and relaxed why not click through to that Shopify site to buy that knick-knack you didn’t even know you needed, or try out that mobile game? When you are leaning forward, though, you don’t have either the time or the inclination.

Someone is wrong on the Internet

That ties into Twitter’s third big problem: the number of people who actually want to experience the Internet this way is relatively small. There is a reason that Twitter’s userbase is only a fraction of Instagram’s, and it’s not a lack of awareness; the reality is that most people are visual, and Twitter is textual. Which, of course, is exactly why Twitter’s most fervent users can’t really imagine going anywhere else.

Twitter’s Place in Culture

What makes Twitter such a baffling company to analyze is that the company’s cultural impact so dramatically outweighs its financial results; last quarter Twitter’s $1.3 billion in revenue amounted to 4.4% of Facebook’s $29.0 billion, and yet you can make the case — and I believe it — that Twitter’s overall impact on the world is just as big as, if not larger than, that of its drastically larger peer. Facebook hollowed out the gatekeeper position of the media, but that void was filled by Twitter, both in terms of news being made, and just as critically, elite opinion and narrative being shaped.

Given that impact, I can see why Elliott Management would look at Twitter and wonder why it is that the company can’t manage to make more money, but the fact that Twitter is the nexus of online information flow reflects the reality of information on the Internet: massively impactful and economically worthless, particularly when ads — which themselves are digital information — can easily be bought elsewhere.

Twitter is more than just news, though: I wrote last year in Social Networking 2.0 about the rise of private networks that supplemented and, for many use cases, replaced Facebook and Twitter.

A drawing of v1 vs v2 Social Networks

Twitter, even more than Facebook, remains crucial to this new ecosystem: what WhatsApp group or Telegram chat isn’t filled with tweets posted for the purpose of discussion or disparagement, or links discovered via Twitter? It is as if these private groups are a fortress on the frontier; Twitter is the wild where you forage for content morsels, and, of course, where you do battle with the enemy.

Don’t underrate that last part: one of the biggest challenges facing would-be Twitter clones is not simply that a complete lack of moderation leads to an overwhelming amount of crap, but also that the sort of person who thrives on Twitter very much wants to know everything that is happening in the world, including amongst those outside of their circle. Being stuck on a text-based social network that only has some of the information to be consumed is lame; having access to anyone and everything, for better or worse, is a value prop that only Twitter can provide.

This, then, is the other thing that often baffles analysts: Twitter has one of the most powerful moats on the Internet. Sure, Facebook has ubiquity, Instagram has influencers, and TikTok has homegrown stars, but I find it easier to imagine any of those fading before Twitter’s grip on information flow disappears (in part, of course, because Twitter has shown that it’s a pretty crappy business).

A Paid Social Network

So let’s review: there is both little evidence that Twitter can monetize via direct response marketing, and reason to believe that the problem is not simply mismanagement. At the same time, Twitter is absolutely essential to a core group of users who are not simply unconcerned with the problems inherent to Twitter’s public broadcast model (including abuse and mob behavior), but actually find the platform indispensable for precisely those reasons: Twitter is where the news is made, shaped, and battled over, and there is very little chance of another platform displacing it, in large part because no one is economically motivated to do so.

Given this, why not charge for access?

This may seem obvious to you, but it’s a huge leap for me; back when Stratechery first started it was fairly popular to argue that social networks should charge users instead of selling ads, which never made sense. I wrote in 2014’s Ello and Consumer-Friendly Business Models:

When it comes to social networks, on the other hand, advertising is clearly the best option: after all, a social network is only as good as the number of friends that are on it, and the best way to get my friends on board is to offer a kick-ass product for free. In other words, the exact opposite of the feature-limited product that Ello is proposing…

If…you care about making a successful social network that users will find useful over the long run, then actually build something that is as good as you can possibly make it and incentivize yourself to earn and keep as many users as possible.

I still stand by that analysis generally, but I increasingly question whether or not it applies to Twitter. Twitter has long since penetrated the awareness of just about everyone on earth; the vast majority gave the platform a try and never came back, content to consume the tweets that show up everywhere from news articles to cable news. The core that remains, meanwhile, simultaneously bemoans that Twitter is terrible even as they can’t rip their eyes away, addicted as they are to that flow of information that is and will for the foreseeable future be unmatched by any other service.

And yet, despite this impact and indispensability and impenetrable moat, Twitter makes an average of $22.75 per monetizable daily active user per year (and given that some of Twitter’s most hardcore users use third-party Twitter clients, and thus aren’t monetizable, the revenue per addicted daily active user is even lower). That’s just under $2/month, an absolutely paltry sum.

Actually charging for Twitter would, of course, reduce the userbase to some degree; moreover, there are a lot of users with multiple accounts, and plenty of non-human users on Twitter. And, of course, Apple and Google would take their share. Still, even if you cut the userbase by a third to 141 million daily addicted users — which I think vastly overstates Twitter’s elasticity of demand amongst its core user base — Twitter would only need to charge $4/month (including App Store fees) to exceed the $4.8 billion in revenue it made over the last twelve months.
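A minimal sketch of that arithmetic, using the numbers above (the one-third churn figure is the assumption stated in the paragraph; the 30%/15% store-fee split is the general App Store commission structure, not anything Twitter has disclosed):

```python
mdau = 211e6                 # monetizable daily active users, last quarter
retained = mdau * 2 / 3      # assume a third of users refuse to pay
price_per_month = 4.00       # the proposed subscription price, in dollars

gross = retained * price_per_month * 12
# Apple and Google take 30% of subscription revenue (15% after a
# subscriber's first year), so Twitter's net would land in this range:
net_low, net_high = gross * 0.70, gross * 0.85

print(f"Gross: ${gross / 1e9:.2f}B")
print(f"Net:   ${net_low / 1e9:.2f}B to ${net_high / 1e9:.2f}B")
```

The gross (roughly $6.75 billion) comfortably exceeds the $4.8 billion in trailing revenue; even after the stores’ cut, the net lands in the same ballpark.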

And, in fact, that overstates the situation for another reason: only $4.2 billion of Twitter’s last twelve months of revenue came from ads; the rest came from data licensing and other revenue. There is an alternate world where data licensing is Twitter’s primary revenue model: just think about how valuable it would be to be the primary protocol for real-time information sharing, particularly if you could package and distribute that information in an intelligent way.

Twitter could still do that, and pursue other initiatives like its revitalized API, offering developers the opportunity to build entirely new experiences on Twitter’s information flow (including unmoderated ones). The difference from the first go-around is that Twitter won’t have an advertising business to protect, and thus will have its interests much better aligned with developers who can pay for access. After all, that would be Twitter’s business model.

I also think this makes Twitter’s other subscription offerings, like Super Follows, Revue, etc., more attractive, not less; the biggest challenge in running a subscription business is earning that first dollar, but once a user is paying it’s relatively easy to charge for more.

This could certainly all go horribly wrong; the absolute fastest way to get your users to explore alternatives is to ask them to pay for your service, and there is the matter of acquiring new users, users who can’t afford to pay, etc. Growth matters, fewer users means less vitality, and I’m honestly getting cold feet even proposing this! Certainly existing users would howl and insist they were leaving and never coming back. I think, though, that Twitter is so unique, and its userbase is so locked in, that it is the one social networking service that could potentially pull this off.

Moreover, the fact of the matter is that Twitter has now had one business model and five CEOs (counting Dorsey twice); maybe it’s worth changing the former before the next activist investor demands yet another change to the latter.

Follow-up: Why Subscription Twitter Is a Terrible Idea

Unity, Weta, and Faceless Platforms

At the beginning of a video announcing Unity’s acquisition of Weta Digital, Peter Jackson, filmmaker extraordinaire and the founder of Weta, and Prem Akkaraju, the CEO of Weta Digital, explained why they were excited to hand Weta Digital over (beyond, of course, the $1.63 billion):

Peter Jackson: We knew that digital effects offered so much possibility for us to be able to create the worlds and the creatures that we were imagining.

Prem Akkaraju: Now we’re taking those tools that we have created and are handing them over to Unity to market them for the entire world.

Peter Jackson: Together Unity and Weta Digital can create a pathway for any artist in any industry who will now be able to leverage these incredible creative tools.

The creation, development, and now sale of Weta Digital is a great example of how the ideal approach to innovation in an emerging field changes over time; understanding that journey also explains why Unity is a great home for Weta.

Weta’s Integrated History

Peter Jackson has said that he was inspired to start Weta Digital after seeing Jurassic Park and realizing that CGI was the future of movies; the first movie Weta worked on was Heavenly Creatures, which Jackson says didn’t even need CGI. From an oral history of Weta Digital:

We used the film as an excuse to buy one computer, just to put our toe in the water. And we had a guy, one guy, who could figure out how to work it, and we got a couple of bits of software, and we did the effects in Heavenly Creatures really just for the sake of doing some CGI effects so we’d actually just start to figure it out. And then on The Frighteners, which was the next movie we [did], we went from one computer to 30 computers. It was a pretty big jump.

Weta, as you would expect given how nascent computer graphics were, was integrated from top-to-bottom: Jackson’s team didn’t make software to make graphics for Jackson’s films; Jackson actually made a scene in a film that needed graphics to give his team a reason to get started on making software. That bet paid off a few years later when Weta played a pivotal role in the Lord of the Rings trilogy. The feedback loop between movie-making and software-development was tight in a way that is only possible with a fully integrated approach, resulting in dramatic improvements from film to film. Jackson said of Gollum:

Now the first Lord of the Rings film, Fellowship of the Ring, which has Gollum in about two shots, he’s just a little glimpse, and it’s completely different to the Gollum that’s in the next film, The Two Towers. [That first] version of Gollum was pretty gnarly, pretty crude. It was as good as we could do at that time, and we had to have something in the movie, but we put a very deep shadow, and you know that was on purpose because he didn’t look very good. But, nonetheless, we ran out of time, and that was what we had to have in the movie. We still had another year to get him really good for the Two Towers, where he was half the film, so the Two Towers Gollum was a complete overhaul of the one from the first film. It finally got to the place that we wanted to go.

Over the intervening years, though, Weta gradually branched out, working on films beyond Jackson’s personal projects, including Avatar and Avengers. This too makes sense: developing software is both incredibly capital intensive and benefits from high rates of iteration; that means there is both an economic motive to serve more customers, increasing the return on the initial investment, and a product motive, since every time the software is used there is the potential for improvement. Still, software development and software application were still integrated, because there was so much more improvement to be had; Jackson again:

Over the years, we’ve written about a hundred pieces of code for a wide variety of different things, and it drives me a little bit crazy because we keep writing code for the same things every time. Like, we do Avatar a few years ago, and we wrote software to create grass, and trees, leaves, that can blow with the wind, that sort of stuff. And then I come along with The Hobbit, and I want some grass or some shrubs or some trees, so I [ask], Why can’t we use the stuff that we wrote [already]? I mean it was fine for the [prior] film, but we wanted to be better. I mean there was some kind of code, unspoken code – and it’s not anything to do with the ownership of the software, because it belongs to us – but the guys themselves, wanna do everything better than they did the last time. So they don’t want the Avatar grass, or the leaves; they want to do grass and leaves better than last time. They set about writing a whole new grass software, whole new leaf software, and it happens over and over again, all the time. It sort of drives me crazy.

What is interesting, though, is that in that same oral history Jackson starts to signal that the plane of improvement was starting to shift; he says at the end:

We live in an age where everything is possible, really, with CGI. There’s nothing that you cannot imagine in your head or read on a script, or whatever, that you can’t actually do now. What you’ve got now is you’ve simply got faster – faster and cheaper. In terms of computers, you know, every year they get cheaper, and they get twice as fast. It is important because when you’re doing visual effects, it’s like any form of filmmaking, you often want a take two, take three, take four. You do the fire shot with a little flame once and you know you’re not necessarily gonna get it looking great on the first time, so it’s good to have a second go, and maybe a third go, maybe a fourth go. So when you’ve actually got computers that are getting quicker and faster, it gives you more goes at doing this stuff. Ultimately you get a better looking result.

Notice the transition here; at the beginning everything was integrated from the movie shot to the development process to the software to the individual computer:

Weta's integration

Over the ensuing 28 years, though, each of these pieces has been broken off and modularized, increasing the leverage that can be gained from the software itself; Unity’s approach of selling tools to the world is the logical endpoint.

Unity’s Modular History

When 3D games first became possible in the 1990s, 3D game engines were developed for specific games; the most famous developer was id Software, founded in 1991, whose most famous engine was built for its Quake series of games; id’s rival was Epic, which built the Unreal Engine for its own Unreal games. These were integrated efforts: the engine was built for the game, and the game was built for the engine; only once development was complete was the engine made available to other developers to build their own games, without having to recreate all of the work that id or Epic had already done.

Unity, though, was different: its founders started with the express goal of “democratizing game development”; the company’s only game, GooBall, was intended as a proof of concept. GooBall, like Unity itself at the time, only ran on the Mac, which wasn’t a great recipe for commercial success in the games business. That all flipped in 2008, though, with the opening of the iPhone App Store: it was one of the greatest gold rushes of all time, and a tool at home on Apple’s technologies was particularly well-placed to profit. Unity was off to the races, quickly becoming the most used engine on iOS specifically and in mobile gaming broadly, a position it still holds today: 71% of the top 1000 mobile games on the market run on Unity, and over 3 billion people play games it powers.

Mobile was a perfect opportunity for Unity for reasons that went beyond its Apple roots. Phones, particularly back then, were dramatically underpowered relative to PCs with dedicated graphics cards; that meant that the primary goal for an engine was not to produce cutting edge graphics, but to deliver good enough graphics in a power-constrained environment. Moreover, most of the developers taking advantage of the mobile opportunity were small teams and independents; they couldn’t afford the big PC engines even if they could fit in an iPhone’s power envelope. Unity, though, wasn’t just Apple friendly, but also built from the ground-up for independent developers, both in terms of usability and also pricing.

Over time, as mobile gaming has become the largest part of the market, and as the power of mobile phones has grown, Unity has grown its capabilities and performance; the company has made increasing in-roads into console and PC gaming (albeit mostly with casual games), and has invested significantly in virtual reality. What remains a constant, though, is Unity’s position as a developer’s partner, not a competitor.

Unity’s Business Model

This has opened up further opportunities for Unity: it turned out that mobile gaming also had a completely different business model than PC or console gaming, which traditionally sold a game for a fixed price. Mobile, thanks to its always-connected nature and massive market size, moved towards a combination of advertising and in-app purchases. Unity found itself perfectly placed to support both: first, all mobile game developers had the same problems, which Unity could solve once and make available to all of its customers; second, advertising in particular requires far more scale than any one game developer could build on its own. Unity, though, could build an advertising market across all of the games it supported — which, again, was most of them — and its customers could rely on Unity knowing that their interests were aligned: if developers made money, Unity made money.

Today it is actually Unity’s Operate Solutions, including its advertising network and in-app purchase support, plus other services like hosting and multiplayer support, that makes up 65% of Unity’s revenue; Create Solutions, which is primarily monetized via a SaaS subscription to Unity’s development tools, is only 29%. The latter, though, is an on-ramp to the former; CEO John Riccitiello said on Unity’s most recent earnings call:

We can hook [creators] at the artist level when they’re just building out their first visual representation of what they want to do. We can hook them lots of different ways. And at Unity, it almost always seems the same. They start as some sort of an experimental customer doing not a lot. And then we get inside and they do a lot more and then they do a lot more, then they write bigger contracts, they start moving more of our services. I would hate to say this in front of our customers although they know it, land and expand is like endemic to Unity. It’s exactly what we’re doing.

This has resulted in a net expansion rate of 142%; that means that Unity has negative churn because its existing customers increase their spending with Unity so substantially. I love a good cohort analysis and Unity had a great one in their S-1:

Unity's S-1 cohort analysis

This shows just how important it is that Unity’s tools attract customers: the company has the ability to grow revenue from its existing customers indefinitely; the primary limiting factor to the company’s growth rate is how many new customers it can bring on board.
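For the unfamiliar, here is a toy illustration of how a net expansion rate above 100% amounts to negative churn; the cohort dollar figures are made up for illustration, and only the 142% rate comes from Unity:

```python
# Net expansion rate: revenue this year from the customers you already
# had last year, divided by what those same customers paid last year.
last_year_cohort_revenue = 100.0   # $M, illustrative
lost_to_churn = 10.0               # customers who left or downgraded
gained_from_expansion = 52.0       # existing customers spending more

this_year_cohort_revenue = (
    last_year_cohort_revenue - lost_to_churn + gained_from_expansion
)
net_expansion = this_year_cohort_revenue / last_year_cohort_revenue
print(f"Net expansion rate: {net_expansion:.0%}")  # → 142%
```

Anything above 100% means revenue grows even with zero new customers, which is why new-customer acquisition is the binding constraint on the company’s growth rate.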

Enter Weta.

Unity + Weta

It is striking how the fundamental strengths and weaknesses of Weta and Unity are mirror images of each other: Weta has cutting edge technology, but it’s only available to Weta; Unity’s technology, meanwhile, continues to improve, but its biggest asset is the number of developers on its platform and integration with all of the other components a developer needs to build a business.

What you see in this acquisition, then, is the intersection of these two paths:

  • Weta’s software is increasingly mature, ready to be productized and leveraged by as many customers as wish to use it.
  • Unity’s developer offering is increasingly full-featured and only limited by its ability to acquire new customers.

The logic should be obvious: Weta expands Unity’s market beyond developers to artists, who can be plugged into Unity’s land-and-expand model. Weta, meanwhile, immediately gains leverage on all of the investment it has made in its software tools.

The convergence of Unity and Weta's paths

There is also a third path flowing through this intersection: the convergence and increase of computing power across devices of all sizes. Unity benefited at the beginning from serving an underpowered mobile market; Weta, in contrast, was so limited by underpowered computers that its developers had to be tightly integrated with artists to make things that were never before possible.

Today, though, phones and computers are increasingly comparable in power, to the benefit of Unity; from Weta’s perspective, that power makes it possible to use its tools iteratively, lowering the learning curve and increasing the market of artists who can figure them out on their own.

That’s why Unity leaped at this opportunity; CEO John Riccitiello told me in an interview:

It’s an incredible opportunity for the world’s artists to get access to this incredible collection of tools that they’re going to want to use, and they’re going to want to use it in the film industry. But we’ve got thousands of engineers inside of Unity and we’re going to work at making these tools. Some of them already are real time and usable in video games, make most all of them real time so they can be used in video games, but they can be used in many vertical industry circumstances, whether it’s the auto industry for car configurators or design or architecture, engineering construction, or in digital twins.

These tools are pretty darned amazing, I’m proud that we were able to come to an agreement with Peter and the team around him. I’m also excited about the prospect of bringing these tools to a global marketplace…But what I would tell you is this is one of those things where you write down the strategy and then there’s something that accelerates you multiple years, and this is exactly that. We’re thrilled.

It’s going to take time to see all of this come to fruition, and there are obvious challenges in bringing together two companies that are so different; it is precisely those differences, though, that make this acquisition so compelling.

Faceless Platforms

One additional point: to me the most analogous company to Unity is not Epic, its biggest competitor in the game engine business, but TSMC. TSMC, like Unity, was built from the beginning to be a partner to chip designers, not a competitor, in large part because the company had no other route to market. In the process, though, TSMC democratized chip development and then, with the rise of smartphones, gained the volume necessary to overtake integrated competitors like Intel.

By 2019 it was TSMC that was the first to bring chips to market based on Extreme Ultraviolet (EUV) lithography technology; EUV was devilishly difficult and exceptionally expensive, requiring not just years of development by ASML, but also customers like Apple willing to pay for the absolute most cutting edge chips at massive volume. Now TSMC is the most advanced fab in the industry and the 10th most valuable company in the world, and, absent geopolitical risk, a clear winner in a world increasingly experienced via technology.

The ultimate manifestation of that is the metaverse, and here Unity is particularly well-placed; Nvidia CEO Jensen Huang at last week’s GTC keynote described today’s web as being 2D, and the future as being 3D, and now Unity owns the best 3D tools in the world. More importantly, unlike Metaverse aspirants like Facebook or Microsoft, Unity isn’t competing for end users, which means it can partner with everyone building those new experiences — including Facebook and Microsoft and Apple — just as TSMC can build chips for everyone, even Intel.

It is companies like Unity and TSMC — other examples include Stripe or Shopify or the public clouds — that are the most important to the future. The most valuable prize in technology has always been platforms, but in the beginning era of technology the most important platforms were those that interfaced directly with users; in technology’s middle era the most important platforms will be faceless, empowering developers and artists and creators of all types to create completely new experiences on top of the best technology in the world, created and maintained and improved on by companies that aren’t competing with them, but partnering to achieve the scale necessary to accelerate the future.

Microsoft and the Metaverse

It’s certainly the question of the season: what is the Metaverse? I detailed the origin of the term in August; that, though, was about Neal Stephenson’s vision and how it might apply to the future. For the purpose of this Article I am going to focus on my personal definition of the Metaverse, how I think it will come to market, and explain why I think Microsoft is so well placed for this opportunity.

Here is the punchline: the Metaverse already exists, it just happens to be called the Internet. Consider the seven qualities Matthew Ball used to define the Metaverse; the Internet satisfies all of them:1

  • The Internet is persistent
  • The Internet is synchronous and live
  • The Internet has no cap to concurrent users, while also providing each user with an individual sense of “presence”
  • The Internet has a fully functioning economy
  • The Internet is an experience that spans both the digital and physical worlds, private and public networks/experiences, and open and closed platforms
  • The Internet offers unprecedented (although not perfect) interoperability of data, digital items/assets, content, etc.
  • The Internet is populated by “content” and “experiences” created and operated by an incredibly wide range of contributors

I really don’t see anyone creating some sort of grand nirvana that beats what we currently have on any of these metrics. The entire reason the Internet is as open and interoperable as it is is because it was built in a world without commercial imperative or political oversight; all future efforts will be led by companies seeking profits and regulated by governments seeking control, both of which result in centralization and lock-in. Crypto pushes in the opposite direction, but it is a product of the Internet that relies on many of the qualities in Ball’s list, not a replacement.

What makes “The Metaverse” unique, then, is that it is the Internet best experienced in virtual reality. This, though, will take time; I expect that the first virtual reality experiences will be individual metaverses, tied together by the Internet as we experience it today.

Mobile and the Physical World

Forecasts, particularly those that extend multiple years into the future, are always a dangerous enterprise; look no further than January 2020, when I argued in The End of the Beginning that mobile + cloud represented the culmination of the last fifty years in tech history:

A drawing of The Evolution of Computing

The idea behind this article is right there in the title: “The End of the Beginning”. Tech innovation wasn’t over, it was only beginning, but everything in the future would happen on top of the current paradigm.

Then COVID happened, and now I’m not so sure that’s the entire story.

Implicit in the assumption that mobile + cloud is the endpoint is the preeminence of the physical world. After all, what makes the phone the ultimate expression of a “personal computer” is that it is with us everywhere, from home to work to every moment in-between. That is what allows for continuous computing everywhere.

At the same time, for well over a year a huge portion of people’s lives was primarily digital. The primary way to connect with friends and family was via video calls or social networking; the primary means of entertainment was streaming or gaming; for white collar workers their jobs were online as well. This certainly wasn’t ideal: the first thing people want to do as the world opens up is see their friends and family in person, go to a movie or see a football game as a collective, or take a trip. Work, though, has been a bit slower to come back: even if the office is open, many meetings are still online given that some of the team may be working remotely — for many companies, permanently.

This last case presents a scenario where the physical is not pre-eminent; an Internet connection is. In this online-only world a phone is certainly essential as a means to stay connected while moving around; it is not, though, the best way to be online. Virtual reality could be.

Work in VR

There have been two conventional pieces of wisdom about virtual reality that I used to agree with, but now I think both were off-base.

The first one is that virtual reality’s first and most important market will be gaming. The reasoning is obvious: gamers already buy dedicated equipment, and gaming is an immersive activity that could justify the hassle of putting on and taking off a headset. One problem, though, is that gamers buying dedicated equipment are going to care the most about performance, and VR quality is still behind; the bigger problem is that there simply isn’t a big enough market of people with headsets to justify investment from game makers. This is the chicken-and-egg problem that bedevils all new platforms: if you don’t have users you won’t have developers, but if you don’t have developers you won’t have users.

The second assumption is that augmented reality would be a larger and more compelling market than virtual reality, just like the phone is a larger and more compelling market than games (excluding mobile games, of course). This is because the phone is with you all of the time — an accompaniment to your daily life — whereas more immersive experiences like console games are a destination: because they require your full attention, they have access to less of your time.

However, this is why I discussed the COVID-accelerated devaluation of the physical: for decades work was a physical destination; now it is a virtual one. While it used to be that the knowledge worker day was broken up between solitary work on a computer and in-person meetings, over the last two years in-person meetings were transformed into a link on a calendar invite that opened a video-conferencing call. This is the demotion of the physical I referred to above:

The demotion of the physical

Meanwhile, new products like Facebook’s Horizon Workrooms and Microsoft’s Mesh for Microsoft Teams make it possible to hold meetings in virtual reality. While I have not used Microsoft’s offering, one of the things I found compelling about Horizon Workrooms was how it managed mixed reality: not only could you bring your computer into the virtual environment (via a daemon running on your computer that projected the screen into your virtual reality headset), meeting participants without virtual headsets simply appeared on video screens, no different than workers calling in to an in-person meeting in the physical world:

Horizon Workrooms

I am very impressed — and my opinion is colored — by the experience of Horizon Workrooms. I now understand what CEO Mark Zuckerberg means when he talks about “presence”; there really is a tangible sense of being in the same room as everyone else, and not only in terms of focused discussion: something that is surprisingly familiar is noticing the person next to you not at all paying attention and instead doing email on their computer.

At the same time, it’s not an experience that you would want to use all of the time. For one, the tech isn’t quite good enough; the Quest 2, while a big leap in terms of a standalone device, is still too low resolution and has too limited a battery life to wear for long, and a good number of people still get dizzy after prolonged usage. The bigger problem, though, is that putting on the headset for a call is a bit of a pain; you have to unplug the headset, turn it on, log in, find the Horizon Workrooms app, and join the meeting, and while this only takes a couple of minutes, it’s just so much easier to click that link on your calendar and join a video call.

What, though, if you already had the headset on?

Think again over the last couple of years: most of those people working from home were hunched over a laptop screen; ideally one was able to connect an external monitor, but even that is relatively limited in size and resolution. A future VR headset, though, could contain as many monitors as you could possibly want — or your entire field of view could be one massive monitor. Moreover, the fact that a headset shifts your senses out of your physical environment is actually an advantage if said physical environment has nothing to do with your work.

In this world joining a meeting does not entail shifting your context from a computer to a headset, but simply clicking a button or entering a virtual door; now all of the advantages of virtual reality — the sense of presence in particular — come for free. What will seem anachronistic is using a traditional laptop or desktop computer; those, like a headset, keep you stationary, without any of the benefits of virtual reality (of course not everyone will necessarily use a fully contained headset like an Oculus; those with high computing needs would use a headset tethered to their computer).

PCs and the Enterprise Market

Here is the most important thing: if virtual reality really is better for work, then that solves the chicken-and-egg problem.

Implicit in assuming that augmented reality is more important than virtual reality is assuming that this new way of accessing the Internet will develop like mobile did. Smartphone makers like Apple, though, had a huge advantage: people already had and wanted mobile phones; selling a device that you were going to carry anyway, but which happened to be infinitely more capable for only a few hundred more dollars, was a recipe for success in the consumer market.

PCs, though, didn’t have that advantage: the vast majority of the consumer market had no knowledge of or interest in computers; rather, most people encountered computers for the first time at work. Employers bought their employees computers because computers made them more productive; then, once consumers were used to using computers at work, an ever increasing number of them wanted to buy a computer for their home as well. And, as the number of home computers increased, so did the market opportunity for developers of non-work applications like games.

I suspect that this is the path that virtual reality will take. Like PCs, the first major use case will be knowledge workers using devices bought for them by their employer, eager to increase collaboration in a remote work world, and as quality increases, offer a superior working environment. Some number of those employees will be interested in using virtual reality for non-work activities as well, increasing the market for non-work applications.

All of these work applications will, to be clear, still be accessible via regular computers, phones, etc. None of them, though, will be dependent on any one of those devices. Rather these applications will be Internet-first, and thus by definition, Metaverse-first.

Microsoft’s Opportunity

This means that the company that is, in my opinion, the most well-placed to capitalize on the Metaverse opportunity is Microsoft. Satya Nadella brought about The End of Windows as the linchpin of Microsoft’s strategy, but that doesn’t mean that Microsoft abandoned the idea of owning the core application around which a workplace is organized; their online-first device-agnostic cloud operating system is Teams.

It is a mistake to think of Teams as simply Microsoft’s rip-off of Slack; while any demo from a Microsoft presentation is obviously an idealized scenario, this snippet from last week’s Microsoft Ignite keynote shows how much more ambitious Microsoft’s vision is:

Microsoft can accomplish this vision with Teams in large part because Microsoft makes so many of the component pieces; this gives the company a powerful selling proposition to businesses around the world whose focus is on their actual business, not on being systems integrators. So many Silicon Valley enterprise companies miss this critical point: they obsess over the user experience of their individual application, without considering how that app fits in the context of a company for whom their app is a means to an end.

This integration, though, also means that Microsoft has a big head start when it comes to the Metaverse: if the initial experience of the Metaverse is as an individual self-contained metaverse with its own data and applications, then Teams is already there. In other words, not only is enterprise the most obvious channel for virtual reality from a hardware perspective, but Teams is the most obvious manifestation of virtual reality’s potential from a software perspective (this promotional video is from the Mesh for Microsoft Teams webpage):

What is not integrated is the hardware; Microsoft sells a number of third party VR headsets on said webpage, all of which have to be connected to a Windows computer. Microsoft’s success will require creating an opportunity for OEMs similar to the opportunity that was created by the PC. At the same time, this solution is also an advantageous one for the long-term Metaverse-as-Internet vision: Windows is the most open of the consumer platforms, and that applies to Microsoft’s current implementation of VR. The company would do well to hold onto this approach.

Meta’s Challenge

Meta née Facebook is integrated in a different direction: Meta is spending billions of dollars on not just software but also hardware, and while Workrooms is obviously an enterprise application, Meta has to date been very much a consumer company (Workplace notwithstanding). The analogy to the PC era, then, is Apple and the Mac, and that is a reason to be a bit bearish relative to Microsoft.

Meta, however, has a big advantage that the original Mac did not: the Internet already exists. This is where Workrooms’ integration of your computer into virtual reality is particularly clever: when I am using my computer in virtual reality, I have access to all of my applications, data, etc.; perhaps that, along with Workrooms’ meeting capabilities, will be sufficient.

Meta, though, should shoot for something more. First off, if I am right, and the enterprise is the first big market for VR, then some of that billions of dollars should go towards building an enterprise go-to-market team that can compete with Microsoft. Second, there remains a huge opportunity long squandered by Google: to be the enterprise platform that competes with Microsoft’s integrated offering by effectively tying together best-of-breed independent SaaS offerings into a cohesive whole.

This is, to be honest, probably unrealistic; Meta is starting from scratch in the enterprise space, without any of the identity and email capabilities that Google possesses, much less Microsoft. More importantly, it’s just very difficult to see the company having the culture to pull off being an enterprise platform. That, though, places that much more burden on Meta making the best hardware, and keeping its integrated operating system truly open. To that end, it is worth noting that Meta is focused on its headsets being standalone, while Microsoft is still tied to Windows; this gives Meta more freedom-of-movement in terms of working with all of the platforms that already exist.

What is clear, though, is that Facebook needed to change its name: no one wants to use a consumer social network for work. And, as I noted in the context of the name change, Meta is still founder-driven. That may give an execution and vision advantage that other companies can’t match. Again, though, that could mean too much focus on a consumer market that might take longer than Meta hopes to be convinced of why exactly they should buy a VR headset.

Apple and AR

Apple seems like it should be a strong competitor. The company is clearly the most advanced as far as hardware goes, particularly when it comes to powerful-yet-power-efficient chips, which is a big advantage in a power constrained environment. Moreover, Apple can leverage the fact it controls the phone, just as it does with the Apple Watch.

However, I am bearish on Apple’s prospects in this space for three reasons:

  • First, rumors suggest that Apple is focusing on augmented reality, not virtual reality; as I detailed above, though, I think that virtual reality will be the larger market, at least at first.
  • Second, Apple’s iPhone-centricity could be a liability, much as Microsoft’s Windows-centricity was a liability once mobile came along. It is very hard to fully embrace a new paradigm if the biggest part of your businesses is rooted in another; indeed, the fact that Apple is focused on augmented reality reflects an assumption that the world will continue to be one in which the physical has preeminence over the virtual.
  • Third, because both virtual reality and augmented reality will be new-to-the-world interfaces, developers will likely be more important than they were in the case of the phone. People bought iPhones first, and developers followed; Apple may have trouble if the chicken-and-egg problem runs in the opposite direction.

Apple Watch is the counter to all of these objections: it’s a device for the physical world, it benefits from the iPhone, and Apple delivered the core customer propositions — notifications and fitness — on its own. Perhaps a better way to state my position is that Apple is likely well placed for augmented reality, but not virtual reality; I have simply changed my mind about which is more important.

The Field

It’s hard to see any other hardware-based startups emerging as VR platforms; I think the best opportunity for a startup is riding on Microsoft’s coattails and offering an alternative operating system for the hardware that is produced for Windows. Valve is obviously already doing this with Steam, but there may be a place for a more general purpose alternative, probably based on Android (which, I suppose, Google could build, but the company seems awfully content these days).

Snap and Niantic, meanwhile, are focused on augmented reality, but will be handicapped by the inability to effectively offload compute onto the phone in the same way Apple will be able to, and again, the trick will be getting consumers to care.

Roblox, meanwhile, is arguably the Teams of the consumer space: it is a 2D-metaverse that is device-agnostic; the company is working to keep people connected even when they aren’t playing games, including buying Discord competitor Guilded. Discord is a bit of a metaverse in its own right, with more connections to external applications; this could be a candidate for the aforementioned company that rides on the Microsoft ecosystem’s coattails.

Again, though, none of this is so different from the world as it exists today, because the Internet already exists (and yes, that includes crypto). That is one of the things I still stand by from The End of the Beginning: technology doesn’t move in step changes, but rather evolves on a spectrum towards more continuous computing. Name changes, whether that be from Facebook to Meta or from Internet to Metaverse, are a marker of that evolution, not a punctuated equilibrium.

I wrote a follow-up to this Article in this Daily Update.

  1. These bullet points are quoted from Ball’s list.


The obvious analogy to Facebook’s announcement that it is renaming itself Meta and re-organizing its financials to separate “Family of Apps” — Facebook, Messenger, Instagram, and WhatsApp — from “Facebook Reality Labs” is Google’s 2015 reorganization to Alphabet, which separated “Google”, including the search engine, YouTube, and Google Cloud, from “Other Bets.” The headline for investors is just how much Facebook is spending on Reality Labs — $10 billion this year, and that amount is expected to grow — but next quarter’s financials will also emphasize just how good Facebook’s core business is; if it plays out like Alphabet, this could be a boon for the stock.

At the same time, while the mechanics may be similar, it is the differences that suggest the implications of this transformation are much more meaningful. Start with the name: “Alphabet” didn’t really mean anything in particular, and that was the point; Larry Page said in the announcement post:

What is Alphabet? Alphabet is mostly a collection of companies. The largest of which, of course, is Google. This newer Google is a bit slimmed down, with the companies that are pretty far afield of our main Internet products contained in Alphabet instead. What do we mean by far afield? Good examples are our health efforts: Life Sciences (that works on the glucose-sensing contact lens), and Calico (focused on longevity). Fundamentally, we believe this allows us more management scale, as we can run things independently that aren’t very related. Alphabet is about businesses prospering through strong leaders and independence.

“Meta”, on the other hand, is explicit: CEO Mark Zuckerberg said that Facebook is now a metaverse company, and the name reflects that. It is also focused: Alphabet included a host of ventures, many of which had no real connection to Google; Facebook Reality Labs is a collection of efforts, from virtual reality to augmented reality to electromyography systems, all in service to a singular vision where instead of looking at the Internet, we live in it.

The biggest difference, though, is Zuckerberg: while Page and Sergey Brin, as I wrote at the time, “may be abandoning day-to-day responsibilities at Google, [they have] no intention of abandoning Google’s profits” to pursue whatever new initiatives caught their eye, Zuckerberg quite clearly remains fully committed to both the “Family of Apps” and “Reality Labs”; more than that, Meta is, as Zuckerberg said in an interview with Stratechery, a continuation of the same vision undergirding Facebook:

I think that this is going to unlock a lot of the product experiences that I’ve wanted to build since even before I started Facebook. From a business perspective, I think that this is going to unlock a massive amount of digital commerce, and strategically I think we’ll have hopefully an opportunity to shape the development of the next platform in order to make it more amenable to these ways that I think people will naturally want to interact.

Another comparison is Microsoft: the Redmond company never changed its name, but under former CEO Steve Ballmer it might as well have been called the Windows company; that’s how you ended up with names like Windows Azure, a misnomer born of a misguided strategy that sought to leverage Microsoft’s thriving productivity business and burgeoning cloud offering to prop up the product that had made the company rich, famous, and powerful. Zuckerberg made a similar mistake last year, forcing Oculus users to log in with their Facebook account, which not only upset Oculus users but also handcuffed products like Horizon Workrooms, Facebook’s VR solution for business meetings.

Satya Nadella’s great triumph as CEO of Microsoft was breaking Windows’ hold over the company, freeing the company to not just emphasize Azure’s general purpose cloud offerings, but to also build a new OS centered on Teams that was Internet-centric and device-agnostic. Indeed, that is why I don’t scoff at Nadella’s invocation of the enterprise metaverse; sure, Microsoft has the HoloLens, but that is just one way to access a work environment that exists somewhere beyond any one device or any one app.

Meta seems like Zuckerberg’s opportunity to make the same break: Facebook benefited from being just an app (until it didn’t), but until today Facebook was also the company, and as long as that was the case the metaverse vision was going to be fundamentally constrained by what already exists.

There is a third comparison, though, and that is Apple generally, and Steve Jobs specifically. While Jobs’ tenure in retrospect looks like a string of one innovative product after another, from the Mac to the iPhone, the latter was in many respects Jobs’ opportunity to truly deliver on the vision he had for the former. The Mac was a computer built for end users, but it launched in an era dominated by enterprises; that is why it was initially a failure from a business perspective, which helped drive Jobs out of the company he had founded. Fast forward 23 years and the iPhone launched in an era where end users dominated the market; it was enterprise players like Microsoft that scrambled, and failed, to catch up.

The analogy to Facebook is the company’s failure to build a phone; the company’s biggest problem is that it was simply too late — iPhone and Android were already firmly established by the 2013 launch of the HTC First phone and Facebook Home Android launcher — but I also thought that Zuckerberg’s conception of what a phone should be was fundamentally flawed. One of the very first articles on Stratechery was Apps, People, and Jobs to Be Done, where I took issue with Zuckerberg’s argument that people, not apps, should be the organizing principle for mobile; I concluded:

Apps aren’t the center of the world. But neither are people. The reason why smartphones rule the world is because they do more jobs for more people in more places than anything in the history of mankind. Facebook Home makes jobs harder to do, in effect demoting them to the folders on my third screen [in favor of social].

I have long assumed that augmented reality would be a bigger opportunity than virtual reality precisely because augmented reality fits in the same lane as the smartphone; I wrote in The Problem with Facebook and Virtual Reality:

That is the first challenge of virtual reality: it is a destination, both in terms of a place you go virtually, but also, critically, the end result of deliberative actions in the real world. One doesn’t experience virtual reality by accident: it is a choice, and often — like in the case of my PlayStation VR — a rather complicated one.

That is not necessarily a problem: going to see a movie is a choice, as is playing a video game on a console or PC. Both are very legitimate ways to make money: global box office revenue in 2017 was $40.6 billion U.S., and billions more were made on all the other distribution channels in a movie’s typical release window; video games have long since been an even bigger deal, generating $109 billion globally last year.

Still, that is an order of magnitude less than the amount of revenue generated by something like smartphones. Apple, for example, sold $158 billion worth of iPhones over the last year; the entire industry was worth around $478.7 billion in 2017. The disparity should not come as a surprise: unlike movies or video games, smartphones are an accompaniment on your way to a destination, not a destination in and of themselves.

That may seem counterintuitive at first: isn’t it a good thing to be the center of one’s attention? That center, though, can only ever be occupied by one thing, and the addressable market is constrained by time. Assume eight hours for sleep, eight for work, a couple of hours for, you know, actually navigating life, and that leaves at best six hours to fight for. That is why devices intended to augment life, not replace it, have always been more compelling: every moment one is awake is worth addressing.

In other words, the virtual reality market is fundamentally constrained by its very nature: because it is about the temporary exit from real life, not the addition to it, there simply isn’t nearly as much room for virtual reality as there is for any number of other tech products.

Meta’s vision, to be clear, is that one ought to be able to access the metaverse from anywhere, including your phone, computer, AR glasses, and of course a VR headset. What is worth considering, though, are the ways in which the technological revolution I wrote about yesterday is changing society; I think that the term “working from home”, for example, is another misnomer: for some number of people work is virtual, which means it can be done from anywhere — your home is just one of many options. And in that case, an escape from physical reality is actually the goal, not a burden; why wouldn’t the same attraction exist in terms of social interactions generally, particularly as more and more communities exist only on the Internet?

Here Facebook the app was again a limitation: the product digitized offline relationships, which is why it grew so quickly; many of the challenges that have placed Zuckerberg in the hot seat currently stem from grafting on purely digital interactions and relationships to a product that was always more reality-rooted than its competitors. The metaverse, though, by its very definition is rooted in the digital world, and if the primary driver is to interact with people virtually, whether that be for work or for play, then Zuckerberg’s people-centric organizing principle — like Apple’s focus on the end-user — may be a viewpoint that was originally too early, and then right on time.

This is all about as favorable a spin as you can put on Meta, of course; there is a lot of grumbling from investors this week about the effect the effort is having on Facebook’s margins, and my previously-voiced suspicion that Zuckerberg just wants to own a platform very much lines up with Facebook currently feeling the pain from its Apple-dependency in particular. And, needless to say, Facebook is dealing with plenty of other issues in the media and Washington D.C. that not only concern the “Family of Apps”, but also threaten the reception of whatever it is that “Reality Labs” ultimately produces.

Zuckerberg, though, is a founder, which both means he decides, thanks to his super-voting shares, and also that he has the credibility to pull investors along; more than that, though, is the clear founder ethos that Zuckerberg is bringing to Meta. Zuckerberg told me:

One of the things that I’ve found in building the company so far is that you can’t reduce everything to a business case upfront. I think a lot of times the biggest opportunity is you kind of just need to care about them and think that something is going to be awesome and have some conviction and build it. One of the things that I’ve been surprised about a number of times in my career is when something that seemed really obvious to me and that I expected clearly someone else is going to go build this thing, that they just don’t. I think a lot of times things that seem like they’re obvious that they should be invested in by someone, it just doesn’t happen.

I care about this existing, not just virtual and augmented reality existing, but it getting built out in a way that really advances the state of human connection and enables people to be able to interact in a different way. That’s sort of what I’ve dedicated my life’s work to. I’m not sure, I don’t know that if we weren’t investing so much in this, that would happen or that it would happen as quickly, or that it would happen in the same way. I think that we are going to kind of shift the direction of that.

Meta exists because Zuckerberg believes it needs to exist, and he is devoted to making the metaverse a reality; it’s his call, for better or worse. In that it reminds me of an increasingly popular phrase on FinTwit: House of Zuck. It has been adopted by a set of investors that are eternally bullish on Facebook not because of its results per se, but because of their conviction that those results come from Zuckerberg’s leadership. It is a belief that is being tested as never before.

Facebook has always been unique amongst the Big 5 tech companies because it is the one company that does not have a monopoly-like moat in the market in which it competes; today it is also unique in that it is the only one of the five that is still founder-led. I don’t think that is a coincidence.

The fact that Facebook is uniquely held responsible for the societal problems engendered by the Internet does, I suspect, stem from the fact that Zuckerberg is an obvious target. How many people concerned about anti-vax rhetoric, for example, can even name the person in charge of YouTube, a far more potent vector? Page and Brin were wise to step aside once Google was established, to make Google a less tempting target; the same with Jeff Bezos. Facebook doesn’t have that luxury. Kara Swisher made explicit what seems obvious: the way for Facebook to escape its current predicament is for Zuckerberg to hand the reins to someone else. Only a founder, though, can admit, as Zuckerberg did on Facebook’s recent earnings call, that the company is losing ground with young people, and not just pivot the “Family of Apps”, but the entire company towards a future vision that establishes Meta as a dominant platform in its own right.

That’s also why I considered titling this article “House of Zuck”; that, more than ever, is what Meta née Facebook is. Today’s Facebook Connect keynote is entirely about a future that doesn’t yet exist; believing that it will happen rests on the degree to which you believe that Zuckerberg the founder can accomplish more than any mere manager.

This is where I come back to Jobs: it’s hard to remember now, but the Apple founder had some very rough edges; his exile from Apple was terrible for the company, but good for Jobs’ maturation into an executive with a founder’s vision and drive that could bring Apple to greater heights than ever before. Zuckerberg doesn’t have the luxury of a decade in the wilderness, but he has certainly undergone a trial by fire; Meta’s ultimate success, or lack thereof, will answer the question of whether that is enough.

You can read an interview I conducted with Zuckerberg about Facebook’s plan for the metaverse here.

Sequoia Productive Capital

Two weeks ago, in The Death and Birth of Technological Revolutions,1 I puzzled about exactly where we were in the Age of Information and Telecommunications: were we still in the turning point, as Carlota Perez, the author of Technological Revolutions and Financial Capital, believes; had we moved into the Synergy phase of Deployment, where society re-organizes itself around the new paradigm; or were we in the Maturity phase, where the current technological era starts to fizzle out, even as the next era starts to take root?

While my tentative conclusion at the end of that piece was that we were entering the Maturity phase, over the last couple of weeks I have increasingly come to believe that we are earlier in the cycle than I suggested: we have exited the turning point, and are firmly in the Synergy phase.

Evidence of Synergy

Perez identified six elements of her model that are transformed in a technological revolution:

The Elements of a Technological Revolution

Here are some brief comments about a few of these, before I spend more time on financial capital and production capital.

Technological Revolution

The culmination of the current technological paradigm is mobile + cloud; I already made the case in 2020’s The End of the Beginning that tech history was best understood as consisting not of multiple eras — mainframes, PCs, mobile, etc. — but rather as a multi-decade transformation from a computer-as-destination to computing-as-background:

This last point gets at why the cloud and mobile, which are often thought of as two distinct paradigm shifts, are very much connected: the cloud meant applications and data could be accessed from anywhere; mobile made the I/O layer available anywhere. The combination of the two make computing continuous.

A drawing of The Evolution of Computing

What is notable is that the current environment appears to be the logical endpoint of all of these changes: from batch-processing to continuous computing, from a terminal in a different room to a phone in your pocket, from a tape drive to data centers all over the globe. In this view the personal computer/on-premises server era was simply a stepping stone between two ends of a clearly defined range.

It’s arguable that concepts like metaverses represent one final step down this path, and the final unification of the cloud and interaction; I don’t think that means we aren’t in the deployment period, technologically speaking, though: continuous computing is here (even if it might become even more immersive).

Techno-Economic Paradigm

In line with the nature of continuous computing, the techno-economic paradigm is Everything as a Service, a concept I wrote about in 2016:

Services sound a lot like software: both are intangible, both scale infinitely, and both are infinitely customizable. It follows that a services business model — payment in exchange for service rendered, without the transfer of ownership — is a much more natural fit for software than the transaction model characteristic of manufacturing. It better matches value generated and value received — customers only pay if they use it, and producers are rewarded for making their product indispensable — and more efficiently allocates fixed costs: occasional users may be charged nothing at all, while regular users who find your software differentiated pay more than the marginal cost of providing it.

The services business model is obviously dominant in terms of software: all new software companies are SaaS companies, and old-line companies like Microsoft and Adobe have long since transformed their licensing-based business models to the SaaS business model. All of these businesses make huge investments in fixed costs (building the software), and reach profitability by leveraging the zero marginal cost nature of software and the Internet to scale as large as possible as quickly as possible.

Meanwhile, all of these new companies operating with a SaaS model themselves depend on public clouds like AWS, which has the exact same model, just at even larger scale: gargantuan investments in fixed costs, made profitable by leveraging the zero marginal cost nature of software and the Internet to scale as large as possible as quickly as possible.
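The fixed-cost-plus-zero-marginal-cost arithmetic described above can be made concrete with a toy calculation; every number here is hypothetical, chosen only to show the shape of the curve, not to describe any real company:

```python
# Toy illustration (hypothetical numbers) of why SaaS economics reward scale:
# a large fixed cost to build the software, a near-zero marginal cost to serve
# each additional customer, so average cost collapses as subscribers grow.

FIXED_COST = 50_000_000   # hypothetical: cost to build and maintain the product
MARGINAL_COST = 2         # hypothetical: yearly cost to serve one more customer
PRICE = 100               # hypothetical: yearly subscription price

def profit(subscribers: int) -> int:
    """Annual profit: revenue minus fixed and per-customer costs."""
    return subscribers * (PRICE - MARGINAL_COST) - FIXED_COST

for n in (100_000, 1_000_000, 10_000_000):
    avg_cost = FIXED_COST / n + MARGINAL_COST
    print(f"{n:>10,} subscribers: avg cost/user ${avg_cost:,.2f}, profit ${profit(n):,}")
```

With these made-up numbers the business is deeply unprofitable at 100,000 subscribers and wildly profitable at 10 million, which is exactly why the rational strategy is to scale as large as possible as quickly as possible.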

This isn’t just an enterprise story, though: huge consumer platforms like Facebook or Netflix are themselves services, operated on the same business model of massive investments in fixed costs, scaled via the zero marginal cost nature of the Internet. Of course the monetization may differ: Netflix charges a subscription, which aligns payment (continuous) with the delivery of value (continuous). Facebook monetizes via ads:

  • Some of these ads are for subscription-based services
  • Some of these ads are for other digital businesses like games based on the same economic principles
  • Some of these ads are for e-commerce

E-commerce might seem like an exception to the services narrative, but while you do get to keep the goods that are delivered, everything about how e-commerce functions is as a service. Shopify has made huge investments in fixed costs to provide a selling platform for goods that are stored in rented space in 3PL warehouses and delivered by companies like FedEx for a fee, which is another way of saying payment for services.

Here again Amazon is the ultimate example of this model: the company has made massive investments in everything from its online store to its distribution centers to even its own planes and delivery vehicles, and an ever-increasing share of its e-commerce revenue comes from merchants effectively renting access to the entire stack; merchants are still selling goods, and consumers are still receiving them, but everything in the middle is a software-driven service with a software-derived business model.

Local services are being impacted as well: Uber, Doordash, Instacart, etc. are only possible in a world of massive compute capacity and ubiquitous smartphones; this enables customers to order cars or food from anywhere, and drivers or delivery workers to know where to go and what to bring. This goes hand-in-hand with the COVID-driven rise in remote work: yes, only a portion of society has the luxury to work from anywhere (probably because they work in a digital services industry), but what is notable is the transformation of manual labor, from Amazon warehouses to Doordash delivery drivers, that has arisen to serve their needs.

Socio-Institutional Framework

This was the primary focus of my previous article. Perez has long been puzzled by the seeming lack of response by governments to the new technological paradigm; I think that response is coming into sharp focus, but it is easily missed because it is rather dystopian:

What is worth noting, though, is that you can make the case that China has entered the Synergy phase in which government has aligned with technology to profoundly impact China’s citizens. That this entails mass surveillance, censorship, and propaganda doesn’t undo Perez’s thesis; it perhaps punctures her optimism.

There are signs a weaker, yet in some ways similar, form of synergy has happened in the U.S. as well; soon after the Dotcom Bubble came the Patriot Act, and while the political motivations were the 9/11 terrorist attacks, the implementation was very much about leveraging technology for government ends. The extent of this synergy only became clear in 2013 when the Snowden revelations exposed a vast web of surveillance conducted by tech and telecommunications companies in partnership with the NSA.

Then, over the last several years, there has been a concerted effort to push tech companies to increasingly limit misinformation on their networks, and post corrective information instead; it doesn’t take much squinting to re-label both efforts as censorship and propaganda. This is not, I would note, to pass judgment as to whether those efforts are right or wrong (although I am skeptical); merely to note that there may be more evidence of synergy between the government and tech than it seems. It’s all a bit dystopian, to be sure, but revolutions by their nature are unpredictable; it wasn’t a certainty that liberal democracy would triumph in the fourth revolution, much less the current one.

There is also an argument to be made that the dramatic shift in monetary policy over the last 13 years — prompted by the Great Recession, but interestingly enough perfectly aligned with the smartphone era — is itself evidence of synergy: tech is inherently deflationary, which could be balancing out the inflationary potential of printing new money. This, needless to say, is a very complex topic, which is why I didn’t include it in my previous article; consider this paragraph as a point of speculation.

Financial Capital and Production Capital

One of the most important principles in Perez’s model is the difference between Financial Capital and Production Capital and the roles they play in the Installation period versus the Deployment period (just look at the name of the book!). Perez explains:

Financial capital is mobile by nature while production capital is basically tied to concrete products, both by installed equipment with specific operational capabilities and by linkages in networks of suppliers, customers or distributors in particular geographic locations. Financial capital can successfully invest in a firm or a project without much knowledge of what it does or how it does it. Its main question is potential profitability (sometimes even just the perception others may have about it). For production capital, knowledge about product, process and markets is the very foundation of potential success. The knowledge can be scientific and technical expertise or managerial experience, it can be innovative talent or entrepreneurial drive, but it will always be about specific areas and only partly mobile…

All these distinctions lead to a fundamental difference in level of commitment. Financial capital is footloose by nature; production capital has roots in an area of competence and even in a geographic region. Financial capital will flee danger; production capital has to face every storm by holding fast, ducking down or innovating its way forward or sideways. Yet, though the notion of progress and innovation is associated with production capital – and rightly so – ironically when it comes to radical change, incumbent production capital can become conservative and then it is the role of financial capital (whether from family, banks or ‘angels’) to enable the rise of the new entrepreneurs.

Financial capital is critical in the Installation period of a technological revolution. Because it is mobile and seeking a return, it flows to new technologies that are just emerging; in the case of the current era that meant chips, then software, then services. Financial capital is also highly speculative, and provokes a frenzy that leads to a bubble, which is exactly what happened in the Dotcom era.

Production capital, on the other hand, is harvested from profitable businesses that re-invest their earnings to improve their products and expand their markets during the Deployment period. This has clearly been happening with the largest tech companies: one of the best investments one could have made over the last decade would have been stock in Apple, Microsoft, Google, Amazon, and Facebook, not because they needed investor capital, but because they were generating such exceptional returns from reinvesting their own profits.

Indeed, the fact that the Big 5 tech companies were so clearly powered by production capital is one of the reasons why I have always been skeptical that we are still stuck in the Turning Point; at the same time, it is not as if the venture capital industry had disappeared.

Financial Capital = Venture Capital

Venture capital is another name for Financial Capital; Perez writes:

As far as truly new ventures are concerned, innovators may have brilliant ideas for which they are willing to take huge risks, devoting their whole lives to bringing their projects to reality, but if finance is not forthcoming they can do nothing.

This is how I described venture capital in 2015’s Venture Capital and the Internet’s Impact:

In the case of startups, during the 45 years after Arthur Rock founded the first venture capital partnership in 1961, the vast majority of new firms needed significant funding from day one. Hardware startups of course needed specialized equipment, the funds to make prototypes, and then to set up actual manufacturing lines, but software startups, particularly those with any sort of online component, also needed to make significant hardware investments into servers, software that ran on said servers, and a staff to manage them. This was where the venture capitalists’ unique skill-set came into play: they identified the startups worthy of funding through little more than a PowerPoint and a person, and brought to bear the level of upfront capital necessary to make that startup a reality.

The point of that article, however, was to explain how AWS — a service — was transforming venture capital; now new products could be built in someone’s bedroom, leading to the rise of angels writing much smaller checks to much smaller teams achieving much larger impact much more quickly than ever before. This made a venture capitalist’s job easier in the short term — now they were writing checks based on existing products and growth metrics, instead of PowerPoints — but in the long run it meant that venture capital was becoming commoditized.

The story doesn’t end there: the trouble for venture capitalists is that they are getting squeezed from the top of the funding hierarchy as well: a new class of growth investors, many of them made up of traditional limited partners like Fidelity and T. Rowe Price, are approaching unicorn companies on a portfolio basis…Sure, this is relatively dumb money, but that’s where those angel and incubator relationships come in: if startups increasingly feel they have the relationships and advice they need, then growth funding is basically a commodity, so why not take dumb cheap money sooner rather than later?

Interestingly, just as in every other commodity market, the greatest defense for venture capitalists turns out to be brand: firms like Benchmark, Sequoia, or Andreessen Horowitz can buy into firms at superior prices because it matters to the startup to have them on their cap table. Moreover, Andreessen Horowitz in particular has been very open about their goal to offer startups far more than money, including dedicated recruiting teams, marketing teams, and probably most usefully an active business development team. Expect the venture capitalist return power curve to grow even steeper.

Brand, though, can only resist commoditization for so long; a16z in particular has dramatically accelerated its growth, and now the most famous brand name of all is going even further.

Sequoia’s Transformation

Sequoia Partner Roelof Botha wrote yesterday on Medium:

Innovations in venture capital haven’t kept pace with the companies we serve. Our industry is still beholden to a rigid 10-year fund cycle pioneered in the 1970s. As chips shrank and software flew to the cloud, venture capital kept operating on the business equivalent of floppy disks. Once upon a time the 10-year fund cycle made sense. But the assumptions it’s based on no longer hold true, curtailing meaningful relationships prematurely and misaligning companies and their investment partners. The best founders want to make a lasting impact in the world. Their ambition isn’t confined to a 10-year period. Neither is ours…

Today, we are excited to announce our boldest innovation yet to help founders build enduring companies for the 21st century. In our U.S./Europe business, we are breaking with the traditional organization based on fund cycles and restructuring Sequoia Capital around a singular, permanent structure: The Sequoia Fund.

Moving forward, our LPs will invest into The Sequoia Fund, an open-ended liquid portfolio made up of public positions in a selection of our enduring companies. The Sequoia Fund will in turn allocate capital to a series of closed-end sub funds for venture investments at every stage from inception to IPO. Proceeds from these venture investments will flow back into The Sequoia Fund in a continuous feedback loop. Investments will no longer have “expiration dates.” Our sole focus will be to grow value for our companies and limited partners over the long run.

This announcement, in Perez terms, is as clear as could be: Sequoia is transforming itself from financial capital to production capital. Instead of LPs investing in funds that make speculative investments in risky endeavors, Sequoia wants to keep long-term positions in companies that have proven business models and are embarking on the decade (or longer) process of improving their products and expanding their markets to the entire world.

This also, I suspect, represents the formation of a sort of “Silicon Valley Inc.”; while the Big 5 can entirely self-fund, the nature of the SaaS business model is such that companies with proven product-market fit are better off losing more money up-front rather than less:

  • Customers, once acquired, are like annuities that make money years into the future, but the cost to acquire them has to be paid up front.
  • The core software product represents a huge fixed cost investment that is leveraged by scaling to as many customers as possible.

The combination of these two factors means that SaaS companies take longer to self-fund, even if their models are proven; what Sequoia can do with their model is invest in an entire portfolio of these companies and hold onto them indefinitely, effectively recycling money from mature companies into nascent ones, much as Apple or Microsoft invests profits from their current products into the development of new ones. On an individual company level it looks like venture/financial capital; as a collective it is much more akin to production capital, especially once you realize that many of these companies, thanks to angels and AWS, are already fairly de-risked.
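A toy cohort model makes the two bullets concrete; every number here is hypothetical, a sketch of the shape of SaaS cash flows rather than any real company’s economics:

```python
# Toy cohort model (all numbers hypothetical): customer acquisition cost (CAC)
# is paid up front, while subscription revenue arrives as an annuity over the
# following years, so a fast-growing SaaS company runs at a loss even when each
# individual customer is profitable over their lifetime.

CAC = 300              # hypothetical: up-front cost to acquire one customer
ANNUAL_REVENUE = 120   # hypothetical: yearly subscription per customer
GROSS_MARGIN = 0.80    # hypothetical: share of revenue left after serving costs
RETENTION = 0.90       # hypothetical: fraction of customers kept each year

def cumulative_cash(years: int) -> float:
    """Cumulative cash flow per customer: -CAC up front, then retained annuity payments."""
    cash = -CAC
    retained = 1.0
    for _ in range(years):
        cash += retained * ANNUAL_REVENUE * GROSS_MARGIN
        retained *= RETENTION
    return cash

for year in range(7):
    print(f"year {year}: cumulative cash per customer ${cumulative_cash(year):+.2f}")
```

With these made-up numbers each customer is cash-flow negative for roughly four years, which is precisely why a portfolio of fast-growing SaaS companies benefits from capital with no expiration date.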

Moreover, while Sequoia’s announcement feels so momentous because of their long history in the Valley, institutions like Tiger Global are already playing the exact same game; the era of production capital is firmly upon us, which means we are clearly in the Deployment period of Perez’s model.

Growth and Crypto

In Perez’s model the Growth element in the Deployment period means “converging growth of most industries using [the new] paradigm”; that seems inconsistent with a reality where tech continues to provide a huge amount of growth relative to the rest of the economy.

I think, though, that is a problem of definition: we still have a habit of calling everything that runs on a SaaS model a tech company, even if the actual industry or service being rendered isn’t about tech at all. Is Warby Parker a tech company? Is Carvana? Is DoorDash? The list goes on and on: yes, new companies are created using technology, but of course they are! Calling everything a tech company is like calling a shopping mall a car company; sure, it was enabled and undergirded by the automobile, but what in that era wasn’t? To return to The End of the Beginning:

Indeed, this is exactly what we see in consumer startups in particular: few companies are pure “tech” companies seeking to disrupt the dominant cloud and mobile players; rather, they take their presence as an assumption, and seek to transform society in ways that were previously impossible when computing was a destination, not a given. That is exactly what happened with the automobile: its existence stopped being interesting in its own right, while the implications of its existence changed everything.

That leaves the question of crypto; given its oppositional nature to the current paradigm — decentralization, encryption, and ownership — it is clearly something completely new; moreover, the capital chasing returns in crypto is clearly financial capital, not production capital.

It’s certainly possible for a new paradigm to emerge alongside an existing paradigm; Perez notes in multiple instances that her model is stylized, and details multiple periods of overlap. What is notable, though, is that crypto draws talent from the same pool that drives the IT revolution, and it’s unclear what the impact of this drain will be on the current paradigm. The optimistic take is that the shift to remote work will dramatically increase the available talent pool to tech-companies-that-are-actually-just-companies, and it’s one I share: the opportunity now is in new markets, not new tech, and open source software, along with public clouds, provides a strong foundation for any new company.

That suggests that crypto will continue to exist in a bit of a parallel universe, which makes sense; it already has its own currencies, after all. That is another way of saying I think that crypto is still very early in its lifecycle; 2017 wasn’t the big crash, nor will the next one be — remember that tech had its own internal boom-and-bust cycles in the three decades between the introduction of the Intel processor and the Dotcom bubble. Nor is this a bearish take: those three decades were exceptionally profitable for everyone involved — including Sequoia, which was founded in 1972. Perhaps its successor, the one that shifts to providing productive capital for crypto companies decades from now, has already been born.

  1. Do read that piece first, if you haven’t, and Jerry Neumann’s overview of Perez’s theory

The Death and Birth of Technological Revolutions

What was especially remarkable about Carlota Perez’s Technological Revolutions and Financial Capital was its timing: 2002 was the middle of the cold winter that followed the Dotcom Bubble, and here was Perez arguing that the IT revolution and the Internet were not in fact dead ideas, but in the middle of a natural transition to a new Golden Age.

Note: the following is a woefully incomplete summary of what is a brilliant — and very readable — book. Jerry Neumann has written an excellent overview of Perez’s theory at Reaction Wheel; I highly recommend reading that first if you are unfamiliar with Perez’s work.

Perez’s thesis was based on over 200 years of history and the patterns she identified in four previous technological revolutions:1

  • The Industrial Revolution began in Great Britain in 1771, with the opening of Arkwright’s mill in Cromford
  • The Age of Steam and Railways began in the United Kingdom in 1829, with the test of the ‘Rocket’ steam engine for the Liverpool-Manchester railway
  • The Age of Steel, Electricity and Heavy Engineering began in the United States in 1875, with the opening of the Carnegie Bessemer steel plant in Pittsburgh, Pennsylvania
  • The Age of Oil, the Automobile, and Mass Production began in the United States in 1908, with the production of the first Ford Model-T in Detroit, Michigan
  • The Age of Information and Telecommunications began in the United States in 1971, with the announcement of the Intel microprocessor in Santa Clara, California

Perez’s argument was that the four technological revolutions that preceded the Age of Information and Telecommunications followed a similar cycle:

The lifecycle of technological revolutions

However, this process is usually disjointed; Perez writes:

In real life, the trajectory of a technological revolution is not as smooth and continuous as the stylized curve presented in Figure 3.1. The process of installation of each new techno-economic paradigm in society begins with a battle against the power of the old, which is ingrained in the established production structure and embedded in the socio-cultural environment and in the institutional framework. Only when that battle has been practically won can the paradigm really diffuse across the whole economy of the core nations and later across the world…

In very broad terms, each surge goes through two periods of a very different nature, each lasting about three decades.

Two different periods in each great surge

As shown in Figure 4.1, the first half can be termed the installation period. It is the time when the new technologies irrupt in a maturing economy and advance like a bulldozer disrupting the established fabric and articulating new industrial networks, setting up new infrastructures and spreading new and superior ways of doing things. At the beginning of that period, the revolution is a small fact and a big promise; at the end, the new paradigm is a significant force, having overcome the resistance of the old paradigm and being ready to serve as propeller of widespread growth.

The second half is the deployment period, when the fabric of the whole economy is rewoven and reshaped by the modernizing power of the triumphant paradigm, which then becomes normal best practice, enabling the full unfolding of its wealth generating potential.

What made Perez’s observation so trenchant in 2002 is that part in the middle: the turning point.

The Post-Dotcom Era

While the Installation Period begins with irruption as new technology emerges in pursuit of real world applications, it eventually transitions into a full-blown frenzy as speculative capital pursues increasingly fantastical commercial applications.

Recurring phases of each great surge

Reality, though, catches up, and the bubble pops.

This financial frenzy is a powerful force in propagating the technological revolution, in particular its infrastructure, and enhancing – even exaggerating – the superiority of the new products, industries and generic technologies. The ostentation of success pushes the logic of the new paradigm to the fore and makes it into the contemporary ideal of vitality and dynamism. It also contributes to institutional change, at least concerning the ‘destruction’ half of creative destruction.

At the same time, as mentioned before, all this excitement divides society, widening the gap between rich and poor and making it less and less tenable in social terms. The economy also becomes unsustainable, due to the appearance of two growing imbalances. One is the mismatch between the profile of demand and that of potential supply. The very process by which intense investment was made possible by concentrating income at the upper end of the spectrum becomes an obstacle for the expansion of production of any particular product and for the attainment of full economies of scale. The other is the rift between paper values and real values. So the system is structurally unstable and cannot grow indefinitely along that path.

With the collapse comes recession – sometimes depression – bringing financial capital back to reality. This, together with mounting social pressure, creates the conditions for institutional restructuring. In this atmosphere of urgency many of the social innovations, which gradually emerged during the period of installation, are likely to be brought together with new regulation in the financial and other spheres, to create a favorable context for recoupling and full unfolding of the growth potential. This crucial recomposition happens at the turning point which leaves behind the turbulent times of installation and paradigm transition to enter the ‘golden age’ that can follow, depending on the institutional and social choices made.

This certainly seems to describe the Dotcom Bubble, which was not only directly destructive to speculators but also to the economy broadly, even as its excesses, particularly in terms of broadband build-out, funded the infrastructure that would fuel the Internet over the next two decades. And, by extension, those two decades would seem to be the Golden Age of the “Deployment Period.” That is certainly the case with technological dispersion: today over four billion people have access to the Internet, and thanks to the global nature of the web, those in developing countries can consume and create on the same platforms as the most well-off.

Moreover, the “Capital” part of Perez’s theory seems to fit as well: some of the best returns over the last fifteen years have been in established public companies like Apple, Microsoft, Google, Amazon, and Facebook — “Production Capital”, in Perez’s nomenclature. Venture capital, meanwhile, which is theoretically speculative “Financial Capital”, has increasingly become professionalized and standardized, thanks in part to the rise of cloud platforms like AWS; building a new SaaS company to take on another old-world vertical certainly takes hard work, but the playbook is fairly well-known.

This was my thinking behind 2020’s The End of the Beginning; I wasn’t thinking of Perez when I wrote that, to be honest, even though I reached for the automobile example. It just seemed clear to me that the post Dotcom Bubble era had reached its natural endpoint as far as market structure was concerned; whatever came next would look significantly different.

Perez disagrees.

The Imminent Golden Age

While the introduction to Technological Revolutions and Financial Capital makes the case that the Dotcom Bubble was the Turning Point, Perez now thinks we are still waiting for the Golden Age — and that there may be another crash in the future (Perez now includes the Great Recession as part of the current revolution’s Turning Point).

Perez made that argument on the Financial Times’ Tech Tonic podcast; the pertinent part starts at the 3:48 mark:

The important thing is that the previous revolutions had the Golden Age after the recession that follows the crash. And we could now perhaps have a global sustainable Golden Age. I think it is perfectly possible with the current technologies.

What would be necessary to bring that Golden Age about? How do we need to tilt the playing field to make that happen?

Well “tilt the playing field” is the word. The first thing we have to understand is that every Golden Age has had to do with social-political choices made by governments, because capitalism really only becomes legitimate when the greed of some is for the benefit of the many.

I think in order to tell you what needs to happen next time I have to give you an example from the past, because otherwise we don’t learn anything from history, and that’s why it’s important to understand how revolutions happened before. The mass production revolution brought the post-War boom. Now what happened then? If we look at the 1930s, we have some similarities with today. We see xenophobia, we see a lot of people angry and following at that time fascism and communism, now all sorts of extremisms right and left, leaders that really offer heaven even though they cannot deliver, but the whole thing is that people are angry and disappointed.

But you also have something else which is very important, which is that there is an enormous technological potential which is not being used. Not enough investment is going in the possible innovations because there is not enough demand, and demand is normally created by some policies. But it has to be policies that are adequate for that particular revolution. So what was the previous revolution? It was about mass production. So what was the direction in which it was tilted?

Well, first of all it was the World War. And with the World War it was obvious that producing a lot of weapons made a lot of good business sense. They became cheaper and better and so on. But then at the end of the war, governments did something very important: they created a set of policies that favored suburbanization. Before the automobile you had railways, so you only had stations, and the land in-between was very cheap, it had no way of being used. But once you have the automobile you can build cheap mass-produced houses to put lots of electrical appliances inside and the car at the door. And at the same time governments made the welfare state so that workers could buy those houses. So you have home ownership and consumerism, that’s one of the directions, and the other direction was the Cold War of course, so that you had innovation going in the two directions.

If we had stayed in what was visible in the 30s, it was very difficult to imagine this Golden Age that came after the war. The same thing is happening to us now. In order to get the technologies to go in the right direction, you’ve got to tilt the playing field, and I hold that the most effective way of doing that today is tilting it towards ‘Green’.

Perez’s views on how a focus on “Green” policies could fuel a Golden Age are well fleshed-out in papers like A Smart Green ‘European Way of Life’: the Path for Growth, Jobs and Wellbeing; one insight that I find very compelling is that the demand that drives job growth is less about the technology itself and more about the new lifestyle that the technology enables (just like suburbanization drove the previous revolution).

It’s worth noting, though, that Perez has a somewhat darker interpretation of the 1930s in Technological Revolutions and Financial Capital (emphasis mine):

Regarding recovery in the 1930s, one cannot look at the USA only. In Germany, with Hitler’s rise to power, the institutional framework was reoriented to facilitate the development of mass production (and later of mass destruction and genocide). The war economy that began after 1933 in Germany could be seen as a synergy phase of a sort. Fortunately, the Nazis failed to conquer Europe and lost the war; otherwise, National Socialist Germany might have been the center of a longer-lasting fascist world. At that same time, the Soviet economy too was developing very fast with another mode of growth that was also capable of intensively deploying mass production. This wide range of options for the deployment of that particular paradigm — including the Keynesian democracies that will have the USA as their core — is an indication of how much is at stake and how much is decided about the future of each country and of the world at the turning point of each surge.

This isn’t a throwaway observation; Perez’s chart of technological revolutions is clear that the U.S. and Europe were on different timelines:

Approximate dates of installation and deployment phases of each great surge

The implication of this observation is that the “Synergy” phase is amoral; it is not guaranteed that the alignment of government with the new technological revolution and its resultant impact on people leads to a “better” outcome as far as liberal democracy is concerned. Perez noted in a footnote:

The mass-production revolution, which marked most of the institutions of the twentieth century, underlay the centralized governments and massive consumption patterns of the four great modes of growth that were set up to take advantage of those technologies: the Keynesian democracies, Nazi-fascism, Soviet socialism and State developmentalism in the so-called ‘Third World,’ each with very wide-ranging specificities.

Synergies are not always golden.

The China Model

Another observation from Perez is that new technological revolutions create the conditions for newcomers to “leapfrog”:

In periods of paradigm shift there is a window of opportunity for real catching up as well as for forging ahead. Belgium, France and the USA caught up in the installation period of the second surge; Germany and the USA forged ahead in that of the third. Most of Europe, Japan and the Soviet Union, caught up in the fourth (though the latter fell dramatically behind with the fifth).

This is where the absence of China from Technological Revolutions and Financial Capital is notable. The only mention is in the postscript:

Yet, in the globalized world of the present paradigm, demand is also global. The best promise of massive market expansion would seem to be in the incorporation of more and more countries to global growth, investment, production and consumption. Growth in the larger countries of the developing world, together with China, Russia and the ex-socialist group of Eastern Europe, could serve as a first tier to pull the others forward. It is quite obvious that these potentially huge markets are a very long way from saturation.

This was a view reflective of the era in which it was written, in which it was assumed that the Internet, in conjunction with globalization, would liberalize and ultimately democratize China. In 2000, President Bill Clinton, upon the occasion of the establishment of Permanent Normal Trade Relations with China, said in a speech:

When China joins the W.T.O., by 2005 it will eliminate tariffs on information technology products, making the tools of communication even cheaper, better, and more widely available. We know how much the Internet has changed America, and we are already an open society. Imagine how much it could change China.

Now there’s no question China has been trying to crack down on the Internet. (Chuckles.) Good luck! (Laughter.) That’s sort of like trying to nail jello to the wall. (Laughter.) But I would argue to you that their effort to do that just proves how real these changes are and how much they threaten the status quo. It’s not an argument for slowing down the effort to bring China into the world, it’s an argument for accelerating that effort. In the knowledge economy, economic innovation and political empowerment, whether anyone likes it or not, will inevitably go hand in hand.

Things obviously didn’t work out that way; if anything the Internet has allowed China to push its values onto Americans. What is worth noting, though, is that you can make the case that China has entered the Synergy phase in which government has aligned with technology to profoundly impact China’s citizens. That this entails mass surveillance, censorship, and propaganda doesn’t undo Perez’s thesis; it perhaps punctures her optimism.

There are signs a weaker, yet in some ways similar, form of synergy has happened in the U.S. as well; soon after the Dotcom Bubble came the Patriot Act, and while the political motivations were the 9/11 terrorist attacks, the implementation was very much about leveraging technology for government ends. The extent of this synergy only became clear in 2013 when the Snowden revelations exposed a vast web of surveillance conducted by tech and telecommunications companies in partnership with the NSA.

Then, over the last several years, there has been a concerted effort to push tech companies to increasingly limit misinformation on their networks, and post corrective information instead; it doesn’t take much squinting to re-label both efforts as censorship and propaganda. This is not, I would note, to pass judgment as to whether those efforts are right or wrong (although I am skeptical); merely to note that there may be more evidence of synergy between the government and tech than it seems. It’s all a bit dystopian, to be sure, but revolutions by their nature are unpredictable; it wasn’t a certainty that liberal democracy would triumph in the fourth revolution, much less the current one.

A Crypto Revolution?

As the tweet above makes clear, Perez relishes debate about her theories; I am one of many writers on the Internet who have had the distinct pleasure of getting an email out of the blue from Perez, and having a conversation where she pushes and prods to understand the other’s point of view, confident it will make her theses stronger.

And, in that spirit, I have to confess I’m not sure if this rebuttal to Perez’s current position — my sense that we are in the maturation phase of the technological revolution, complete with government synergy — is correct or not. Perez has noted that COVID-19 could end what she thinks is the elongated turning point era, much like World War II ended the elongated turning point era of the previous revolution (at least in the U.S.). It is notable, for example, that the tech industry has also been an essential element in various government lockdown strategies during the COVID pandemic, most obviously by making it possible for the economy to continue to function while people work from home, and by enabling a work-from-home lifestyle via e-commerce and food delivery services, with all of the commensurate jobs entailed in providing those services. That is a fundamental change to society that is only getting started — perhaps a new Golden Era is in fact imminent.

At the same time, it is notable that crypto, the most obvious candidate for the next technological revolution, is not — contra Perez — an obvious extension of the current era. The overarching story of Stratechery has been the rise and consolidation of the aforementioned Big 5 tech companies, and the entire premise of Aggregation Theory is the inevitability of centralization in a world of frictionless abundance. Crypto, though, is about the introduction of scarcity; its payoff is decentralization, at the cost, at least for now, of convenience and speed.

Perez writes in Technological Revolutions and Financial Capital about what the Maturity phase looks like:

This is the twilight of the golden age, though it shines with false splendor. It is the drive to maturity of the paradigm and to the gradual saturation of markets. The last technology systems and the last products in each of them have very short life cycles, since accumulated experience leads to very rapid learning and saturation curves. Gradually the paradigm is taken to its ultimate consequences until it shows up its limitations.

Yet, all the signs of prosperity and success are still around. Those who reaped the full benefits of the ‘golden age’ (or of the gilded one) continue to hold on to their belief in the virtues of the system and to proclaim eternal and unstoppable progress, in a complacent blindness, which could be called the ‘Great Society syndrome’. But the unfulfilled promises had been piling up, while most people nurtured the expectation of personal and social advance. The result is an increasing socio-political split…this is a time when deep questions about the system are being asked in many quarters; the climate is favorable for politics and ideological confrontations to come to the fore. The social ferment can become intense and is sometimes quelled with social reforms.

Meanwhile, in the world of big business, markets are saturating and technologies maturing, therefore profits begin to feel the productivity constriction. Ways are being sought for propping them up, which often involve concentration through mergers or acquisitions, as well as export drives and migration of activities to less-saturated markets abroad. Their relative success makes firms amass even more money without profitable investment outlets. The search for technological solutions lifts the implicit ban on truly new technologies outside the logic of the now exhausted paradigm. The stage is set for the decline of the whole mode of growth and for the next technological revolution.

That seems awfully descriptive of the current era, no? Products that break through reach saturation in record time (see TikTok reaching a billion users in three years, or DTC companies that seem to max out in only a couple of years), while the future of established companies seems mired in legislatures and the courts, even as profits continue to pile up without obvious places to invest. And if the government’s response to the revolution has been disappointing, that also may be because of the revolution itself.

Moreover, to the extent the dystopian picture above is correct — that the real synergy has been between centralized governments and centralized tech companies, to the alarm of observers both abroad and in the U.S. — the greater the motivation there is to make the speculative investments that drive the next paradigm, especially if that paradigm operates in direct opposition to the current one. To be sure, this framework does imply that crypto is full of scams and on its way to inflating a spectacular bubble, the aftermath of which will be painful for many, but that is both expected and increasingly borne out by the facts as well. What will matter for the future is how much infrastructure — particularly wallet installation — can be built out in the meantime.

For what it’s worth my suspicion is that the current Installation period for crypto — if that is indeed where we are — has a long ways to run, which is another way of saying most of the economy will remain in the current paradigm for a while longer. The time from the Intel microprocessor to the Dotcom Bubble bursting was 30 years (and, it should be noted, there were a lot of smaller, more localized bubbles along the way); Satoshi Nakamoto only published his paper in 2008. Thirteen years after 1971 was 1984, the year the Mac was introduced; the browser was another nine years away. It’s one thing to see the future coming; it’s something else entirely to know the timing. On that Perez and I can certainly agree.

I wrote a follow-up to this Article in this Daily Update.

  1. This list is transcribed from the second Table 2.1 — there are two — on page 11 of the 2014 paperback edition of Technological Revolutions and Financial Capital 

Facebook Political Problems

Stratechery provides analysis of the strategy and business side of technology and media, and the impact of technology on society.

So begins my about page, and there is no company that expands to fill the space afforded by that description quite like Facebook. This does, on one hand, provide for endless amounts of content, much of which, frankly, I would like to move on from; it also means that writing about one aspect of Facebook is fraught with risk: just because one defends one aspect, or attacks another, does not mean one is commenting on the entirety of the corporate beast. Word limits, though, are a thing — yes, even on the web — which means that one can never cover all of one’s bases in any one individual article.1

So consider this piece self-service, of sorts: this Article will be what I link to in future posts with the caveat, “I’m not talking about Facebook political issues today; if you want my big-picture take follow this link.”

Which will take you here.

A quick aside about timing: this post is being written the day after Frances Haugen appeared before Congress; I’m not writing about her testimony specifically, as I felt it covered very little new ground. Saying as much, though, did seem to prompt a lot of misunderstanding and, frustratingly, allegations of bad faith, which, optimistically, might have been alleviated by a link to an article like this one.

What I did want to do is — word limit be damned — write a post about Facebook’s political problems, as I perceive them, in their entirety. I do think the media gets a lot of things wrong about Facebook, not because there aren’t problems, but because the problems are more profound than the issue of the day. So here’s my best shot.

Facebook’s Benefit

This is perhaps an odd place to start, but it cuts to the core of why I do see real benefits to Facebook’s existence in the world, and is an important part of the trade-off calculations I make in the sections that follow.

I believe that the economy of the future will, if we don’t stifle it along the way, look considerably different than the post World War II order dominated by large multinational corporations whose differentiation was predicated on distribution. Instead the future looks more like a rainforest, with platforms that span the globe and millions of niche businesses that sit on top.

I am, given my career, biased in this regard, but the rise of platforms like Shopify, Etsy, Substack, and the App Store is evidence that new careers can be built and untold niches filled when the entire world is your addressable market. The challenge in a worldwide market, though, is finding the customers who are interested in the niche being filled; this is where Facebook’s ad offering is very much a platform in its own right.

Facebook, via its integration with a host of third party sites and apps, makes it possible for those sites and apps to collectively understand and target prospective customers with the same sort of proficiency that first-party data powerhouses like Google and Amazon do, and they don’t need to hold or understand any third-party data to do so. It is difficult to overstate what a big deal this is: suddenly a Shopify merchant can compete with Amazon, a D2C startup with Unilever, or a blog with the New York Times.

Moreover, Facebook’s platform, unlike many of its competitors, is entirely automated and auction-driven; that means that small businesses have the same shot at customers as their far larger competitors do. Facebook is also very adept at simplifying the process (in exchange for more margin, of course): simply specify how much a customer is worth to you, and Facebook will deliver that customer. It’s an advertising platform that truly levels the playing field, and I think it is a very good thing for the world.
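To make the “specify how much a customer is worth to you” dynamic concrete, here is a minimal sketch of how an automated, auction-driven ad platform of this kind can work. This is hypothetical and not Facebook’s actual system: the advertiser names, bids, and conversion-rate estimates are all invented for illustration, and the ranking rule (expected value, priced against the runner-up) is just one common auction design. The point it demonstrates is the one in the text: a small merchant with a smaller bid can beat a big brand if the platform predicts its ad is more relevant to a given user.

```python
# Hypothetical sketch of an auction-driven ad platform (NOT Facebook's
# actual implementation). Each advertiser states what a customer is
# worth to them; the platform's ML model estimates how likely each ad
# is to convert for a given user; the auction ranks by expected value.

from dataclasses import dataclass


@dataclass
class Bid:
    advertiser: str
    value_per_customer: float   # what one customer is worth to the advertiser
    est_conversion_rate: float  # platform's estimate for this particular user


def run_auction(bids):
    """Second-price-style auction: the winner is the bid with the highest
    expected value (bid x estimated conversion rate); the price is the
    runner-up's expected value converted back into the winner's terms."""
    ranked = sorted(
        bids,
        key=lambda b: b.value_per_customer * b.est_conversion_rate,
        reverse=True,
    )
    winner, runner_up = ranked[0], ranked[1]
    # Winner pays just enough that their expected value still beats the
    # runner-up's: second-price logic keeps truthful bidding attractive.
    price = (
        runner_up.value_per_customer * runner_up.est_conversion_rate
    ) / winner.est_conversion_rate
    return winner.advertiser, round(price, 2)


bids = [
    Bid("big_brand", value_per_customer=50.0, est_conversion_rate=0.01),
    Bid("shopify_merchant", value_per_customer=20.0, est_conversion_rate=0.04),
]
winner, price = run_auction(bids)
# The niche merchant wins despite the much smaller bid, because the
# platform predicts its ad converts far better for this user.
```

The design choice worth noticing is that the playing field is leveled by the ranking rule itself: relevance (the conversion-rate estimate) multiplies the bid, so raw spending power alone cannot buy the slot.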

Perceived Problem One: Privacy

The flipside of Facebook’s advertising platform is the reality that the company absolutely does ingest huge amounts of data. It is worth noting, though, that a big chunk of the most private human-readable data is content that users give Facebook themselves when they use the site; Facebook combines that with all of the data it collects in all of those connected apps and sites — which are primarily about measuring conversions — in its Data Factory to create machine-learning driven profiles that undergird its advertising. What Facebook does not do is sell user data, not only because data undergirds its advertising business, but also because nearly all of that data would be unintelligible and worthless to any entity other than Facebook.

As I noted in Privacy Fundamentalism, it is tempting to imagine this data as being something akin to a file of your life just laying around for anyone to peruse; Apple has implied precisely this with some of its recent ads. The reality, though, is far more mundane: the nature of computers and the Internet is the spewing of data everywhere, and it is only in aggregate, in a data factory, that any insight from this collection of vectors can be derived, and only then in the context of a larger application like targeted advertising conducted at massive scale. It is exceedingly unlikely that Facebook could even make the data about any one individual readable by a human, and there is no incentive to do so; not only do third party apps and sites have no need nor desire for individual level data, neither do advertisers — Facebook’s entire value to both is as a matchmaker at scale.
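The “collection of vectors” point can be illustrated with a toy sketch. This is purely hypothetical and not Facebook’s pipeline: the profile dimensions and the similarity function are invented for illustration. What it shows is why a learned profile is unintelligible on its own: it is just a bag of floats with no human-readable fields, and it only acquires meaning relative to other vectors inside the same model, via operations like similarity ranking at scale.

```python
# Illustrative sketch (hypothetical; not Facebook's actual system) of why
# per-user targeting data is unintelligible in isolation: a "profile" is
# a dense learned vector whose only meaning is geometric -- its position
# relative to other profiles and ad embeddings in the same model.

import random

random.seed(0)


def make_profile(dim=8):
    # Stand-in for a learned embedding: a bag of floats with no
    # human-readable fields at all.
    return [random.gauss(0, 1) for _ in range(dim)]


def similarity(a, b):
    # Cosine similarity: the only "insight" a profile yields comes from
    # comparing it against other vectors, in aggregate, at scale.
    dot = sum(x * y for x, y in zip(a, b))
    norm = lambda v: sum(x * x for x in v) ** 0.5
    return dot / (norm(a) * norm(b))


user = make_profile()
ad = make_profile()
print(user)                   # a list of floats, meaningless to a human
print(similarity(user, ad))   # only useful for ranking ads at scale
```

Note what is absent: there is nothing in `user` a human could read as a fact about a person, which is the sense in which the data would be worthless to any entity without the surrounding model.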

To my mind, the reality of data on Facebook is well worth the trade-off for the value Facebook’s advertising delivers to niche-focused businesses. Moreover, I feel much better about Facebook as a data mediator than about modular advertising stacks where data really is bought and sold. I do understand that lots of folks disagree with me on this point, but again, everything is a trade-off, and I think this one is worth it.

Perceived Problem Two: Competition

The core of my Framework for Regulating Competition on the Internet is distinguishing between platforms and Aggregators; platforms are extremely valuable, because of the opportunities they enable, but are also more subject to abuse, because companies taking advantages of those opportunities are beholden to the platform. Aggregators, on the other hand, dominate their markets by controlling demand; suppliers are not locked in, but are rather seeking end users, who themselves can visit another website or open another app at any time. The societal value of Aggregators is lower than Platforms, but the extent to which they can inflict harm is more limited as well — competition really is just a click away.

To that end, the only extent to which Facebook is a potential antitrust concern is in terms of advertising, thanks to its massive amount of proprietary data and highly advanced data factory. At the same time, Facebook’s auction-driven format ensures that Facebook is not gouging advertisers, and the effectively infinite potential inventory in digital means that competitors are not locked out from building competing products; the only scarcity is audience attention. Moreover, tradeoffs matter here as well: Facebook having extremely effective data is good for businesses on its platform, and there are privacy concerns with forcing Facebook to share.

On the consumer side, I simply don’t see any anticompetitive concerns at all. Consumers are not locked into a single communications app, but can and do multi-home, and can switch between competing services with a swipe. Facebook’s challenge is in continuing to keep users opening and using its apps, and there is plenty of evidence the company is struggling to do just that.

A decade ago Facebook responded to this challenge by savvily buying upcoming communications apps like Instagram and WhatsApp; I do think that this reduced competition in the sector, particularly from an advertising perspective, and I do think there is a case for regulators to look much more critically at Aggregators buying other would-be Aggregators. That noted, in 2021 Facebook — even with Instagram and WhatsApp — faces more competition than ever before; the market worked, and regulators are very much on guard for similar purchases. Facebook is going to have to keep users with what it has, or builds.

Political Problem One: Facebook’s Competence

These first three points, taken together, paint a picture of an exceptionally well-run company and a very attractive business; there is a reason why Facebook is worth around a trillion dollars. At the same time, it is that exceptional competence that starts to explain Facebook’s current political predicament.

Andrew Bosworth, Facebook’s new CTO, wrote an internal memo in 2016 about Facebook’s commitment to growth that later leaked to BuzzFeed; it is essential to understanding Facebook:

We talk about the good and the bad of our work often. I want to talk about the ugly.

We connect people.

That can be good if they make it positive. Maybe someone finds love. Maybe it even saves the life of someone on the brink of suicide.

So we connect more people

That can be bad if they make it negative. Maybe it costs a life by exposing someone to bullies. Maybe someone dies in a terrorist attack coordinated on our tools.

And still we connect people.

The ugly truth is that we believe in connecting people so deeply that anything that allows us to connect more people more often is *de facto* good. It is perhaps the only area where the metrics do tell the true story as far as we are concerned.

That isn’t something we are doing for ourselves. Or for our stock price (ha!). It is literally just what we do. We connect people. Period.

That’s why all the work we do in growth is justified. All the questionable contact importing practices. All the subtle language that helps people stay searchable by friends. All of the work we do to bring more communication in. The work we will likely have to do in China some day. All of it.

The natural state of the world is not connected. It is not unified. It is fragmented by borders, languages, and increasingly by different products. The best products don’t win. The ones everyone use win.

I know a lot of people don’t want to hear this. Most of us have the luxury of working in the warm glow of building products consumers love. But make no mistake, growth tactics are how we got here. If you joined the company because it is doing great work, that’s why we get to do that great work. We do have great products but we still wouldn’t be half our size without pushing the envelope on growth. Nothing makes Facebook as valuable as having your friends on it, and no product decisions have gotten as many friends on as the ones made in growth. Not photo tagging. Not news feed. Not messenger. Nothing.

In almost all of our work, we have to answer hard questions about what we believe. We have to justify the metrics and make sure they aren’t losing out on a bigger picture. But connecting people. That’s our imperative. Because that’s what we do. We connect people.

In the seventeen years since Facebook was founded the number of people using the Internet has grown from around a billion people to around 4.5 billion people; in developed countries the Internet is close to being universal, but developing countries are steadily catching up:

Worldwide Internet users over time

Of those 4.5 billion people, 3.5 billion use at least one Facebook service on a monthly basis; 2.8 billion use at least one on a daily basis. That is exactly the sort of competence I was referring to above. As Bosworth notes, though, Facebook’s growth isn’t just about competence, but about a single-minded determination to do everything possible to make Facebook synonymous with the Internet.

Here is the problem, though: it is not at all certain that the Internet is good for society. I believe it is — I just articulated a positive vision for the democratization enabled by Facebook advertising, to take but one small example — but there are obviously massive downsides as well. Moreover, many of those downsides seem to spring directly from the fact that people are connected: it’s not simply that it is trivial to find people who think the same as you, no matter how mistaken or depraved you might be, but it’s also trivial to find, observe, and fight with those who simply have a different set of values or circumstances. The end result feels like an acceleration of tribalism and polarization; it’s not only easy to see and like your friends, but even easier to see and hate your enemies with your friends.

This is, as I noted, an Internet problem — as Facebook is happy to tell you — but the truth is that Facebook, thanks to its uber-competent focus and execution on growth, effectively made the Internet problem a Facebook problem. Sure, you can make the case that had Facebook not pursued growth at all costs there would be another social network in its place — and frankly, I believe that Twitter gets off far too easy in discussions about deleterious impacts on society — but the reality is that Facebook did win, and just because some of its spoils are rotten doesn’t absolve the company of responsibility. If you are going to onboard all of humanity, you are going to get all of humanity’s problems.

Secondly, Facebook pursued this growth without questioning if the obstacles it faced — the fact that “The natural state of the world is not connected. It is not unified.” — perhaps functioned as useful buffers; instead the company, particularly founder and CEO Mark Zuckerberg, was blinded by a Messianic impulse to create the conditions for global governance. Zuckerberg wrote in 2017’s Building Global Community:

History is the story of how we’ve learned to come together in ever greater numbers — from tribes to cities to nations. At each step, we built social infrastructure like communities, media and governments to empower us to achieve things we couldn’t on our own. Today we are close to taking our next step. Our greatest opportunities are now global — like spreading prosperity and freedom, promoting peace and understanding, lifting people out of poverty, and accelerating science. Our greatest challenges also need global responses — like ending terrorism, fighting climate change, and preventing pandemics. Progress now requires humanity coming together not just as cities or nations, but also as a global community.

This is especially important right now. Facebook stands for bringing us closer together and building a global community. When we began, this idea was not controversial. Every year, the world got more connected and this was seen as a positive trend. Yet now, across the world there are people left behind by globalization, and movements for withdrawing from global connection. There are questions about whether we can make a global community that works for everyone, and whether the path ahead is to connect more or reverse course.

This is a time when many of us around the world are reflecting on how we can have the most positive impact. I am reminded of my favorite saying about technology: “We always overestimate what we can do in two years, and we underestimate what we can do in ten years.” We may not have the power to create the world we want immediately, but we can all start working on the long term today. In times like these, the most important thing we at Facebook can do is develop the social infrastructure to give people the power to build a global community that works for all of us.

What is particularly striking about this excerpt is the jump between the second paragraph, when Zuckerberg nods towards “questions” about reversing course, and the third paragraph which makes clear those questions were never seriously considered. And how could they be — “Connecting people. That’s our imperative. Because that’s what we do. We connect people.” Facebook’s combination of competence and drive ensured Internet problems were Facebook problems, and Zuckerberg’s missionary zeal didn’t give space to consider what might go wrong.

This is the first, and perhaps most important, political problem Facebook has: the Internet causes real problems, and Facebook willingly made itself responsible for those problems, with no real understanding that the problems even existed.

Political Problem Two: Facebook’s Scapegoating

While I just linked Bosworth’s 2016 memo and Zuckerberg’s 2017 manifesto, it’s important to note that the latter was something new: Facebook explicitly committing itself to use its power to enact change. Before that nothing mattered but growth, which produced a very curious dynamic: Facebook had tremendous power but by-and-large declined to exercise it. This created the conditions for Donald Trump, which I explained in The Voters Decide:

Given their power over what users see Facebook could, if it chose, be the most potent political force in the world. Until, of course, said meddling was uncovered, at which point the service, having so significantly betrayed trust, would lose a substantial number of users and thus its lucrative and privileged place in advertising, leading to a plunge in market value. In short, there are no incentives for Facebook to explicitly favor any type of content beyond that which drives deeper engagement; all evidence suggests that is exactly what the service does.

Said reticence, though, creates a curious dynamic in politics in particular: there is no one dominant force when it comes to the dispersal of political information, and that includes the parties described in the previous section. Remember, in a Facebook world, information suppliers are modularized and commoditized as most people get their news from their feed.

This has two implications:

  • All news sources are competing on an equal footing; those controlled or bought by a party are not inherently privileged.
  • The likelihood any particular message will “break out” is based not on who is propagating said message but on how many users are receptive to hearing it. The power has shifted from the supply side to the demand side.

A drawing of Aggregation Theory and Politics

This is a big problem for the parties as described in The Party Decides. Remember, in Noel and company’s description, party actors care more about their policy preferences than they do voter preferences, but in an aggregated world it is voters aka users who decide which issues get traction and which don’t. And, by extension, the most successful politicians in an aggregated world are not those who serve the party but rather those who tell voters what they most want to hear.

These dynamics were in many respects the same as the advertising dynamics I described above: on Facebook both small companies and large companies have an equal shot at customers, and both Party insiders and complete outsiders have an equal shot at voters. Moreover, it’s a dynamic that is even greater when combined: Trump famously used Facebook advertising to far greater effect than the Clinton campaign. And, at the same time, the New York Times, itself demoted to another attention seeker in a Facebook world, famously spent huge amounts of time on Clinton’s emails.

What seems clear in retrospect is that the latter two entities (the Clinton campaign and the New York Times) made peace (and avoided introspection) by making Facebook the scapegoat for Trump’s election. This was true in a way — Facebook created the conditions for someone like Trump to win — but the furor that followed suggested something much more explicit: within days the conversation was about Fake News and Russian influence being the key factors, not campaign mistakes and questionable coverage decisions, and certainly not the fundamental way that the media environment had changed.

That enmity has clearly persisted: what is striking about Facebook’s political problems in the United States is how they are not consistent across the world. Sure, Facebook fights its battles, with publishers in particular, but they’re mostly about money; it is in the U.S. where Facebook is, at least on one side of the aisle, uniquely despised, and held solely responsible for the Internet’s inherent issues. TikTok’s algorithm is far more addictive than Facebook’s, and dangerous for minors, and YouTube has played perhaps the leading role in the spread of anti-vaccine information, but it is Facebook that gets all of the blame.

Political Problem Three: Facebook’s Power

There is another, more subtle problem related to the previous point: Facebook’s power, which was latent in 2016, and which Zuckerberg made a bid to leverage for a utopian outcome in 2017, is increasingly irresistible to the powers that be. It has been fascinating watching the parade of Congressional hearings where Facebook has been harangued endlessly by Democrats for not doing enough about misinformation, while Republicans demand that Facebook stop censoring; both threaten regulation if the company doesn’t do better. Left unacknowledged by the left is that the company dramatically scaled up the number of people and amount of money devoted to controlling what was allowed on the platform, and what was not; the right, meanwhile, apparently didn’t notice that Zuckerberg consistently stood up for Trump’s right to the platform.2

At the same time, this too is a consequence of Facebook’s success with growth: centralized power is, as James Madison wrote in Federalist No. 47, inevitably leveraged for political gain:

The accumulation of all powers, legislative, executive, and judiciary, in the same hands, whether of one, a few, or many, and whether hereditary, self-appointed, or elective, may justly be pronounced the very definition of tyranny. Were the federal Constitution, therefore, really chargeable with the accumulation of power, or with a mixture of powers, having a dangerous tendency to such an accumulation, no further arguments would be necessary to inspire a universal reprobation of the system.

Madison was obviously talking about government, but isn’t that exactly what Zuckerberg effectively aspired to be? From that manifesto:

For the past decade, Facebook has focused on connecting friends and families. With that foundation, our next focus will be developing the social infrastructure for community — for supporting us, for keeping us safe, for informing us, for civic engagement, and for inclusion of all.

“Social infrastructure for community” may not be government in the Westphalian sense, but at Facebook scale it is something far more powerful; at the same time, given that Facebook doesn’t have guns, it was inevitable that an all-out effort would be made to capture it.

The reason to present all of these problems in unison is that they all exist in parallel; writing about just one of them, as I did yesterday, doesn’t mean that I don’t acknowledge the others, particularly the first one, about Facebook internalizing the fundamental flaws inherent in humanity. In other words, to argue that it is not in Facebook’s interest as an ad platform to have angry political discussions doesn’t change the reality that having people interact more will entail more angry political discussions, amongst every other discussion that humans are liable to have.

This also makes the case for why I do think the world would be better off were Facebook a much smaller company: its political problems — both for the company and for the world — are irrevocably tied to its size. Unfortunately the only means we have to break a company up — antitrust — don’t really apply; Facebook presents a societal problem, not a competitive one. Moreover, per political problem three, politicians don’t want a smaller Facebook; they want a compliant one.

And so, the most likely outcome is that Facebook simply doubles down on information control, perhaps with some regulation that is far more likely to hinder new startups — stricter limits on data, for example, or increasing liability for user-generated content — than it is to materially harm Facebook. Not much will change, at least in the short term.

Politics and Internet 3.0

The capture of Facebook, though, will have consequences, in much the same vein as January’s industry-wide move against Trump; from Internet 3.0 and the Beginning of (Tech) History:

It turns out that when it comes to Information Technology, very little is settled; after decades of developing the Internet and realizing its economic potential, the entire world is waking up to the reality that the Internet is not simply a new medium, but a new maker of reality…

It is difficult to believe that the discussion of these implications will be reserved for posts on niche sites like Stratechery; the printing press transformed Europe from a continent of city-states loosely tied together by the Catholic Church, to a continent of nation-states with their own state churches. To the extent the Internet is as meaningful a shift — and I think it is! — is inversely correlated to how far along we are in the transformation that will follow — which is to say we have only gotten started. And, after last week, the world is awake to the stakes; politics — not economics — will decide, and be decided by, the Internet.

Here technology itself will return to the forefront: if the priority for an increasing number of citizens, companies, and countries is to escape centralization, then the answer will not be competing centralized entities, but rather a return to open protocols (crypto projects are one manifestation of this, but not the only ones). This is the only way to match and perhaps surpass the R&D advantages enjoyed by centralized tech companies; open technologies can be worked on collectively, and forked individually, gaining both the benefits of scale and inevitability of sovereignty and self-determination.

The freedom of open source and the self-determination of crypto have always been attractive both technically and philosophically. Centralized platforms like Facebook, though, were just so easy. That, though, is why these political shifts matter: I understand the skepticism about crypto in particular, but I think critics who see only the scams or are focused on the challenges miss the fact that countries want to be sovereign and individuals want to be free. The more that U.S. tech companies are consumed by U.S. politics the more motivation there will be to pull the future forward.

  1. This is where I am jealous of those who do nothing but attack Facebook; no nuance is, if nothing else, succinct. 

  2. More on January 6 below 

Cloudflare’s Disruption

From Protocol:

Cloudflare is ready to launch a new cloud object storage service that promises to be cheaper than the established alternatives, a step the company believes will catapult it into direct competition with AWS and other cloud providers. The service will be called R2 — “one less than S3,” quipped Cloudflare CEO Matthew Prince in an interview with Protocol ahead of Cloudflare’s announcement Tuesday morning. Cloudflare will not charge data-egress fees for customers using R2, taking direct aim at the fees AWS charges developers to move data out of its widely popular S3 storage service.

R2 will run across Cloudflare’s global network, which is most known for providing anti-DDoS services to its customers by absorbing and dispersing the massive amounts of traffic that accompany denial-of-service attacks on websites. It will be compatible with S3’s API, which makes it much easier to move applications already written with S3 in mind, and Cloudflare said that beyond the elimination of egress fees, the new service will be 10% cheaper to operate than S3.

Cloudflare has always framed itself as a disruptor; R2 lives up to its reputation.

Cloudflare’s Evolution

I already wrote earlier this year about Cloudflare’s unique advantages in a world where the Internet is increasingly fragmented, thanks to the distributed nature of its service, and why that positioned the company to compete with the major cloud providers in the long run. What is worth referring to with this announcement, though, is this clip I posted of Prince’s initial launch of Cloudflare at TechCrunch Disrupt 2010, particularly this bit from the Q&A:

So from a competitive standpoint, obviously you’re intruding on some of the stuff that the bigger boys are doing, and they’ve been at this for a long time. What’s to stop them from coming in and replicating your model?

There are companies that are doing things at the high end of the market, and they make very fat margins doing it. I’m really a big fan of Clay Christensen, he was a business school professor of mine, and I like the idea of businesses that come in from below. The big incumbents have an Innovator’s Dilemma trying to come down and deal with a company like ours, but we welcome the competition. We think we make a really great product. It’s designed for a certain type of users that are very different than the users that a larger company might be trying to attract.

Prince was spot-on about the competitive response of incumbents to Cloudflare’s offering for the long-tail of websites: it never came, because Cloudflare was serving a new market. This is how Christensen defined new market disruption in The Innovator’s Solution:

The third dimension [is] new value networks. These constitute either new customers who previously lacked the money or skills to buy and use the product, or different situations in which a product can be used — enabled by improvements in simplicity, portability, and product cost…We say that new-market disruptions compete with “nonconsumption” because new-market disruptive products are so much more affordable to own and simpler to use that they enable a whole new population of people to begin owning and using the product, and to do so in a more convenient setting.

That’s not the end of the story, though: new market disruptors don’t stand still, but can leverage the huge runway provided by the new market to build up their product capabilities in a way that eventually threatens the incumbent. Christensen continued:

Although new-market disruptions initially compete against nonconsumption in their unique value network, as their performance improves they ultimately become good enough to pull customers out of the original value network into the new one, starting with the least-demanding tier. The disruptive innovation doesn’t invade the mainstream market; rather, it pulls customers out of the mainstream value network into the new one because these customers find it more convenient to use the new product.

This was Cloudflare Workers, edge compute functionality that was a great match for Cloudflare’s CDN offering, but certainly not a competitor for AWS’s core offerings. Back to Christensen:

Because new-market disruptions compete against nonconsumption, the incumbent leaders feel no pain and little threat until the disruption is in its final stages.

This is where R2 comes in.

The AWS Transformation

In a vacuum, most businesses would prefer making a fixed cost investment instead of paying on a marginal use basis. Consider Spotify’s music-streaming business: one of the company’s core challenges is that the more customers Spotify has the more it has to pay music labels — streaming rights are a marginal cost. A streaming service like Netflix, on the other hand, that spends up front for its own content, gets to keep whatever increased revenue that content drives for itself. This same logic applies to computing capacity: buying your own servers is, in theory, cheaper than renting compute from a service like AWS.

When it comes to compute, however, reality is very different than theory.

  • First, usage may be uneven, whether that be because a business is seasonal, hit-driven, or anything in-between. That means that compute capacity has to be built out for the worst case scenario, even though that means most resources are sitting idle most of the time.
  • Second, compute capacity is likely growing — hopefully rapidly, in the case of a new business. Building out infrastructure, though, is not a linear process: new capacity comes online all at once, which means a business has to overbuild for its current needs so that it can accommodate future growth, which again means that most resources are sitting idle most of the time.
  • Third, compute capacity is complex and expensive. That means there are both huge fixed costs that have to be invested before the compute can be used, and also significant ongoing marginal costs to manage the compute already online.

This is why AWS was so transformative: Amazon would spend all of the up-front money to build out compute capacity for all of its customers, and then rent it on-demand, solving all of the problems I just listed:

  • Customers could scale their compute up-or-down instantly in response to their needs.
  • Customers could rent exactly as much compute as they needed at any moment in time, even as they were able to seamlessly handle growth.
  • AWS would be responsible for all of the up-front investment and ongoing maintenance, and because they would operate at such scale, they would get much better prices from suppliers than any individual company could on its own.
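The economics behind those bullet points can be sketched with a toy model; the prices below are illustrative assumptions, not real AWS rates, and both options are deliberately priced at the same rate per unit-hour to isolate the utilization effect:

```python
# Toy model of the tradeoff described above. All prices are
# illustrative assumptions, not real AWS rates; both options are
# priced at the same $1 per unit-hour to isolate the utilization effect.

def owned_cost(peak_units, hours, price_per_unit_hour):
    """Owning servers: capacity must be built for the worst case,
    and it costs the same whether or not it is used."""
    return peak_units * hours * price_per_unit_hour

def rented_cost(hourly_usage, price_per_unit_hour):
    """Renting from a cloud: pay only for what is used each hour."""
    return sum(units * price_per_unit_hour for units in hourly_usage)

# A spiky day: 23 hours at 10 units, one peak hour at 100 units.
usage = [10] * 23 + [100]

own = owned_cost(peak_units=100, hours=24, price_per_unit_hour=1.0)
rent = rented_cost(usage, price_per_unit_hour=1.0)
print(own, rent)  # 2400.0 330.0
```

Even at identical unit pricing, building for the peak costs more than seven times renting for actual usage in this example; AWS can charge a healthy premium over its own costs and still come out far cheaper for spiky workloads.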

It’s impossible to overstate the extent to which AWS changed the world, particularly Silicon Valley. Without the need to buy servers, companies could be started in a bedroom, creating the conditions for the entire angel ecosystem and the shift of traditional venture capital to funding customer acquisition for already proven products, instead of Sun servers for ideas in PowerPoints. Amazon, meanwhile, suddenly had a free option on basically every startup in the world, positioning itself to levy The Amazon Tax on every company that succeeded.

The scale of these options became clear to the world in 2015 when Amazon broke out AWS’s financials for the first time; I called it The AWS IPO:

The big question about AWS, though, has been whether Amazon can keep their lead. Data centers are very expensive, and Amazon has a lot less cash and, more importantly, a lot less profit than Google or Microsoft. What happens if either competitor launches a price war: can Amazon afford to keep up?

To be sure, there were reasons to suspect they could: for one, Amazon already has significantly more scale, which means their costs on a per-customer basis are lower than Microsoft or Google. And perhaps more importantly is the corporate culture that results from a “your-margins-are-my-opportunity” mindset: Amazon can stomach a few percentage points of margin on a core business far more comfortably than Microsoft or Google, both fat off of software and advertising margins respectively. Indeed, when Google slashed prices in the spring of 2014, Amazon immediately responded and proceeded to push prices down further still, just as they had ever since AWS’s inception (the price cuts in response to Google were the 42nd for the company). Still, the question remained: was this sustainable? Could Amazon afford to compete?

This is why Amazon’s latest earnings were such a big deal: for the first time the company broke out AWS into its own line item, revealing not just its revenue (which could be teased out previously) but also its profitability. And, to many people’s surprise, and despite all the price cuts, AWS is very profitable: $265 million in profit on $1.57 billion in sales last quarter alone, for an impressive (for Amazon!) 17% net margin.

Those numbers last quarter are up to $4.2 billion in profit on $14.9 billion in revenue for a net margin of 28%: Amazon has increased its margins, even as Microsoft and Google have increased their focus on the cloud. A big reason is that Microsoft in particular has pursued different customers, coaxing existing businesses to the cloud, thanks in part to their early focus on hybrid solutions.1 Google, meanwhile, has been even further behind, particularly in terms of matching AWS’s sheer breadth of services (even if they weren’t always technically great), making it harder for businesses already used to AWS to shift.

The egress fees R2 is targeting, though, have played a big role as well.

S3’s Egress Pricing

S3 is the OG Amazon Web Service: it launched on March 14, 2006, with this press release:

Amazon Web Services today announced “Amazon S3(TM),” a simple storage service that offers software developers a highly scalable, reliable, and low-latency data storage infrastructure at very low costs…

Amazon S3 is storage for the Internet. It’s designed to make web-scale computing easier for developers. Amazon S3 provides a simple web services interface that can be used to store and retrieve any amount of data, at any time, from anywhere on the web. It gives any developer access to the same highly scalable, reliable, fast, inexpensive data storage infrastructure that Amazon uses to run its own global network of web sites. The service aims to maximize benefits of scale and to pass those benefits on to developers…

S3 lets developers pay only for what they consume and there is no minimum fee. Developers pay just $0.15 per gigabyte of storage per month and $0.20 per gigabyte of data transferred.

Prices have, as you might expect, come down over the ensuing 15 years:

  • A gigabyte of storage today is $0.023, a decrease of 85%
  • Moving data into S3 is free, a decrease of 100%
  • Moving a gigabyte out of S3 is $0.09, a decrease of 55%

These numbers are the base rates; prices vary based on different storage tiers, whether or not you use Amazon’s CDN, and, more importantly, whether or not you have a long-term contract with AWS (more on this in a moment). What is consistent across all of those variables, though, is the gap between the cost of moving data into AWS and the cost of moving data out; a blog post from earlier this year called the difference AWS’s “Hotel California”:

Another oddity of AWS’s pricing is that they charge for data transferred out of their network but not for data transferred into their network…We’ve tried to be charitable in trying to understand why AWS would charge this way. Disappointingly, there just doesn’t seem to be an innocent explanation. As we dug in, even things like writes versus reads and the wear they put on storage media, as well as the challenges of capacity planning for storage capacity, suggest that AWS should charge less for egress than ingress.

But they don’t.

The only rationale we can reasonably come up with for AWS’s egress pricing: locking customers into their cloud, and making it prohibitively expensive to get customer data back out. So much for being customer-first.

Even if companies are careful to not make any of their back-end services AWS-specific, the larger you grow the more data you have on AWS, and moving that data off is an eye-watering expense. And so, when another company builds a service that looks interesting — like, say, Cloudflare Workers — it’s easier to simply wait for Amazon’s alternative and build using that, and oops, now you’re even more locked into AWS!
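To make “eye-watering” concrete, here is a back-of-the-envelope calculation using the $0.09/GB base egress rate quoted above; real bills vary with volume tiers, CDN usage, and negotiated discounts, all of which this ignores:

```python
# Back-of-the-envelope egress math at the $0.09/GB base rate quoted
# above. Real bills vary with volume tiers, CDN usage, and negotiated
# discounts, all of which this ignores.

EGRESS_PER_GB = 0.09  # USD, S3 base rate for data transfer out

def egress_cost(terabytes, rate_per_gb=EGRESS_PER_GB):
    return terabytes * 1024 * rate_per_gb  # 1 TB = 1,024 GB

# Migrating 500 TB off of S3:
print(f"${egress_cost(500):,.2f}")  # $46,080.00
```

A one-time five-figure bill just to leave is a powerful reason to stay, sign the long-term contract, and adopt the next AWS service instead.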

What is happening in terms of the value chain is straightforward: Amazon paid fixed costs for its infrastructure, and is charging for it on a marginal basis; all of the upside here accrues to AWS, as seen in the service’s margins. That is also an important part of AWS’s retention strategy: for most AWS customers the easiest solution to rising costs is to simply sign a long-term contract, dramatically decreasing their prices (again, Amazon has the margin to spare) while ensuring they stay on AWS that much longer, accumulating that much more data and relying on that many more AWS-specific services. Hotel Seattle, as it were.

That blog post, by the way, was co-written by Prince; in it he made the case that, based on Cloudflare’s understanding of bandwidth costs, AWS was making a 7959% margin on US/Canada egress fees. Prince’s conclusion at the time was that AWS ought to join the Bandwidth Alliance and discount or waive egress fees when sending data to Cloudflare (which doesn’t cost AWS anything, thanks to an industry-standard private network interface), but two months on, the true point of Prince’s post was clearly this week’s announcement.

R2’s Low-End Disruption

From the Cloudflare blog:

Object Storage, sometimes referred to as blob storage, stores arbitrarily large, unstructured files. Object storage is well suited to storing everything from media files or log files to application-specific metadata, all retrievable with consistent latency, high durability, and limitless capacity.

The most familiar API for Object Storage, and the API R2 implements, is Amazon’s Simple Storage Service (S3). When S3 launched in 2006, cloud storage services were a godsend for developers. It didn’t happen overnight, but over the last fifteen years, developers have embraced cloud storage and its promise of infinite storage space.

As transformative as cloud storage has been, a downside emerged: actually getting your data back. Over time, companies have amassed massive amounts of data on cloud provider networks. When they go to retrieve that data, they’re hit with massive egress fees that don’t correspond to any customer value — just a tax developers have grown accustomed to paying.

Enter R2.

The reason that Cloudflare can pull this off is the same reason why S3’s margins are so extraordinary: bandwidth is a fixed cost, not a marginal one. To take the most simplified example possible, if I were to have two computers connected by a cable, the cost of bandwidth is however much I paid for the cable; once connected I can transmit as much data as I would like for free — in either direction.

That’s not quite right, of course: I am constrained by the capacity of the cable; to support more data transfer I would have to install a higher capacity cable, or more of them. What, though, if I already had built a worldwide network of cables for my initial core business of protecting websites from distributed denial-of-service attacks and offering a content delivery network, the value of which was such that ISPs everywhere gave me space in their facilities to place my servers? Well, then I would have massive amounts of bandwidth already in place, the use of which has zero marginal costs, and oh-by-the-way locations close to end users to stick a whole bunch of hard drives.

In other words, I would be Cloudflare: I would charge marginal rates for my actual marginal costs (storage, and some as-yet-undetermined-but-promised-to-be-lower-than-S3 rate for operations), and give away my zero marginal cost product for free. S3’s margin is R2’s opportunity.
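That pricing asymmetry is easy to illustrate with a read-heavy workload. In this sketch the S3 figures are the base rates quoted earlier; the R2 storage rate is an assumption for illustration, and the number that actually matters is the $0.00 egress:

```python
# Illustrative monthly bill for a read-heavy workload: data stored,
# then read out in full several times a month. The S3 rates are the
# base rates quoted earlier; the R2 storage rate is an assumption for
# illustration. The number that matters is the $0.00 egress.

def monthly_bill(stored_gb, full_reads, storage_rate, egress_rate):
    storage = stored_gb * storage_rate
    egress = stored_gb * full_reads * egress_rate
    return storage + egress

# 1 TB stored, read out in full ten times a month:
s3 = monthly_bill(1000, 10, storage_rate=0.023, egress_rate=0.09)
r2 = monthly_bill(1000, 10, storage_rate=0.015, egress_rate=0.00)
print(f"S3: ${s3:,.2f}  R2: ${r2:,.2f}")  # S3: $923.00  R2: $15.00
```

The heavier the read traffic, the wider the gap: storage is a real marginal cost for both, but every repeated read is nearly pure margin for S3 and free on R2.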

Modular Disruption

Cloudflare, at least in AWS terms, remains a minnow; the company had $152 million in revenue last quarter, around 10 percent of what AWS was generating when Amazon unveiled its financials six years ago. Prince, though, is thinking big; from that Protocol article:

“We are aiming to be the fourth major public cloud,” Prince said. Cloudflare already offers a serverless compute service called Workers, and Prince thinks that adding a low-cost storage service will encourage more developers and companies to build applications around Cloudflare’s services.

That is one way this could play out: R2 is a compelling choice for a certain class of applications that could be built to serve a lot of data without much compute. Moreover, by virtue of using the S3 API,2 R2 can also be dropped into existing projects; developers can place R2 in front of S3, pulling out data as needed, once, and getting free egress forever-after.
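That migration path amounts to a read-through cache. Here is a toy sketch of the pattern, with plain dictionaries standing in for the two object stores; nothing here is Cloudflare’s or Amazon’s actual API:

```python
# Toy sketch of the migration pattern described above: R2 sitting in
# front of an existing S3 bucket as a read-through cache. Plain dicts
# stand in for the two object stores; nothing here is Cloudflare's or
# Amazon's actual API.

class ReadThroughStore:
    def __init__(self, origin):
        self.origin = origin    # existing S3 bucket: egress fee per read
        self.cache = {}         # R2 bucket: zero egress
        self.origin_reads = 0   # count of reads that paid S3 egress

    def get(self, key):
        if key not in self.cache:
            # Pay S3 egress exactly once per object...
            self.cache[key] = self.origin[key]
            self.origin_reads += 1
        # ...and serve every subsequent read egress-free.
        return self.cache[key]

store = ReadThroughStore({"logo.png": b"<bytes>"})
store.get("logo.png")        # first read hits S3
store.get("logo.png")        # served from R2
print(store.origin_reads)    # 1
```

Because R2 speaks the S3 API, this pull-through can happen without rewriting application code: each object incurs its last egress fee on first access, and the bill to Amazon ratchets down from there.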

Still, AWS is far more than storage; the second AWS product was EC2 — Elastic Compute Cloud — which lets customers rent virtual computers that by definition are far more capable than a necessarily limited edge computing service like Workers, along with a host of database offerings and the sort of specialized services I mentioned earlier. Not all of these will necessarily translate well to Cloudflare’s distributed infrastructure, either.

Again, though, Cloudflare’s distributed nature is the entire reason the company’s cloud ambitions are so intriguing: R2 may be a direct competitor for S3, but that doesn’t mean that anything else about Cloudflare’s cloud ambitions has to be the same. Go back to Christensen and The Innovator’s Solution:

Modularity has a profound impact on industry structure because it enables independent, nonintegrated organizations to sell, buy, and assemble components and subsystems. Whereas in the interdependent world you had to make all of the key elements of the system in order to make any of them, in a modular world you can prosper by outsourcing or by supplying just one element. Ultimately, the specifications for modular interfaces will coalesce as industry standards. When that happens, companies can mix and match components from best-of-breed suppliers in order to respond conveniently to the specific needs of individual customers.

Clay Christensen's graph of modular disruption

As depicted in figure 5–1, these nonintegrated competitors disrupt the integrated leader. Although we have drawn this diagram in two dimensions for simplicity, technically speaking they are hybrid disruptors because they compete with a modified metric of performance on the vertical axis of the disruption diagram, in that they strive to deliver rapidly exactly what each customer needs. Yet, because their nonintegrated structure gives them lower overhead costs, they can profitably pick off low-end customers with discount prices.

This is where zero egress costs could be an even bigger deal strategically than they are economically. S3 was the foundation of AWS’s integrated cloud offering, and remains the linchpin of the company’s lock-in; what if R2, thanks to its explicit rejection of data lock-in, becomes the foundation of an entirely new ecosystem of cloud services that compete with the big three by being modular? If you can always get access to your data for free, it becomes a lot more plausible to connect that data to best-of-breed compute options built by companies focused on doing one thing well, instead of simply waiting for Amazon to offer up their pale imitation that doesn’t require companies to pay through the nose to access.

Cloudflare's modular cloud

Moreover, like any true disruption, it will be very difficult for Amazon to respond: sure, R2 may lead Amazon to reduce its egress fees, but given the importance of those fees to both AWS’s margins and its lock-in, it’s hard to see them going away completely. More importantly, AWS itself is locked-in to its integrated approach: the entire service is architected both technically and economically to be an all-encompassing offering; to modularize itself in response to Cloudflare would be suicidal.

At the same time, this is also why Cloudflare’s success in becoming the fourth cloud, should it happen, will likely be additive to the market: companies on AWS are by-and-large not going anywhere, but there are new companies being formed all of the time, and a whole bunch of companies that have yet to move to the cloud, as well as the aforementioned Internet fragmentation that plays to Cloudflare’s advantage. Here it is a benefit to Cloudflare that it is a relatively small company: opportunities that seem trivial to giants will be big wins, giving the company the increasing scale it needs to flesh out its offerings and build its new cloud ecosystem. Success is not assured, but the strategy is sound enough to make Prince’s late professor proud.

  1. Amazon resisted hybrid for a long time, because it’s a technically terrible solution relative to just moving everything to the cloud, which is to say that Amazon made the same mistake all of Microsoft’s competitors make: relying on a “better product” instead of actually meeting customers where they are and solving their needs 

  2. Copying APIs is a favorite tactic of Amazon when it comes to open source projects

The Apple v. Epic Decision

The vast majority of Judge Yvonne Gonzalez Rogers’ decision in Epic v. Apple is both straightforward and predictable; I wrote that the iPhone company would likely win when the lawsuit was filed, and argued that the law was firmly on Apple’s side in App Store Arguments. That is indeed what happened: Apple won, and it wasn’t particularly close; Epic has already filed an appeal, but I doubt it will succeed.

What was surprising, though — and, frankly, a much more interesting question for the Court of Appeals — is that Judge Gonzalez Rogers also issued an injunction banning Apple’s anti-steering provision; while I do think Apple’s anti-steering provision is anti-competitive, this injunction is an odd outcome of this specific case, and a source of much confusion about what this decision was actually about.

Market Definition

The most important part of any antitrust case is market definition. Epic argued there is a smartphone market consisting of iOS and Android, and then on iOS there is a distinct “iOS App Distribution” market, and downstream from that an “iOS In-App Payment Solutions” market. Apple, on the other hand, argued that all of digital gaming was a market, including not just Android but also consoles and PCs.

Judge Gonzalez Rogers disagreed with both, defining the market as “mobile game transactions”. Epic’s argument was, as expected, dismissed out of hand; Supreme Court precedent is extremely skeptical of single-brand markets, and the primary exception (Eastman Kodak) applies only if customers are unaware of aftermarket limitations at the time of purchase. Customers are not only aware of Apple’s walled garden policies, but those policies are in fact a selling point for the iPhone, which means customers know what they are getting into when they choose the iPhone over Android.

In disagreeing with Apple, Judge Gonzalez Rogers first ruled that digital games are a distinct market from general non-gaming apps; the list of reasons is worth noting:1

Having considered and reviewed the evidence, the Court concludes based on its earlier findings of facts that the appropriate submarket to consider is digital game transactions as compared to general non-gaming apps. Indeed, the Court concluded that there were nine indicia indicating a submarket for gaming apps as opposed to non-gaming apps: (i) the App Store’s business model is fundamentally built upon lucrative gaming transactions; (ii) gaming apps constitute a significant majority of the App Store’s revenues; (iii) both the gaming, mobile, and software industry as well as the general public recognize a distinction between gaming apps and non-gaming apps; (iv) gaming apps and their transactions exhibit peculiar characteristics and users; (v) game app developers often employ specialized technology inherent and unique to that industry in the development of their product; (vi) game apps further have distinct producers — game developers — that generally specialize in the production of only gaming apps; (vii) game apps are subject to distinct pricing structures as compared to other categories of apps; (viii) games and gaming transactions are sold by specialized vendors; and (ix) game apps are subject to unique and emerging competitive pressures, that differs in both kind and degree from the competition in the market for non-gaming apps.

Next Judge Gonzalez Rogers ruled that mobile gaming transactions were a distinct market from digital gaming transactions generally:

As an initial matter, Apple’s own documents recognize mobile gaming as a submarket. One industry report describes mobile gaming as a “$100 billion industry by itself” that accounts for 59% of global gaming revenue. While PC and console gaming has grown more slowly, mobile gaming has experienced double-digit growth driven by “the free-to-play model” with in-app purchases. “Remarkably,” this rapid growth “has not significantly cannibalized revenues from the PC or console gaming markets,” which suggests that consumers are not necessarily substituting among them. Another industry report describes distinct user bases for mobile gaming: young children, teenage girls, and older adults are disproportionally likely to be mobile gamers only. Multiplatform gaming, by contrast, is driven by teenage boys and young adults under 25.

Judge Gonzalez Rogers cited further evidence about the extent to which mobile gaming business models differed from consoles and PCs, and Apple’s own expert evidence that showed only a small amount of cross-over in popular titles.

Ultimately, I found Judge Gonzalez Rogers’ market definition reasonable, although Apple could argue on appeal that all of digital gaming should be included. What is more notable, though, is the strong distinction Judge Gonzalez Rogers draws between games and non-gaming apps:

  • First, given this market definition, this case was only about mobile games. That is good news for app developers who have their own antitrust complaints about Apple’s policies, and also a reason for Apple to not take this ruling as a complete endorsement of their policies.
  • Second, this ruling does suggest that Apple ought to — and now has the judicial imprimatur to — treat games differently than other apps.

This is perhaps the most important takeaway from this decision; so much of the company’s App Store troubles have come from applying rules and regulations that are appropriate for games to other areas of the App Store that are totally different, and now the company has license to come up with two sets of rules for what, Judge Gonzalez Rogers ruled, are two different markets.

Market Power

With that definition established Judge Gonzalez Rogers ruled that Apple did not have monopoly power in ‘mobile gaming transactions’:

  • Apple has between 52%-57% share of the mobile gaming transaction market, which is significant but not enough to be a prima facie case of monopoly power.2
  • There is no evidence of a restriction in output, i.e. the mobile gaming transaction market continues to grow, even though there is evidence that Apple’s 30% commission rate is artificially high.
  • While iOS and Android have substantial advantages, there is some evidence of increased competition in the mobile gaming space in the form of the Nintendo Switch and cloud gaming services.

Judge Gonzalez Rogers did note:

Given the totality of the record, and its underdeveloped state, while the Court can conclude that Apple exercises market power in the mobile gaming market, the Court cannot conclude that Apple’s market power reaches the status of monopoly power in the mobile gaming market. That said, the evidence does suggest that Apple is near the precipice of substantial market power, or monopoly power, with its considerable market share. Apple is only saved by the fact that its share is not higher, that competitors from related submarkets are making inroads into the mobile gaming submarket, and, perhaps, because plaintiff did not focus on this topic.

This paragraph explains why Apple may wish to appeal the definition of the market specifically; it is an important consideration in the anti-steering injunction which is addressed below.

What is unaddressed by this ruling, and by antitrust law in general, is the reality that the mobile gaming transaction market is a duopoly; this is a problem with tech regulation generally, as I noted in United States v. Google:

This gets at a larger problem in many tech markets: the tendency towards duopoly, which often lets one company cover for the other acting anti-competitively. In the case of Apple and Google:

  • Android’s presence in the market means that Apple can act anticompetitively with its App Store policies (which Google is happy to ape).
  • Apple’s privacy focus justifies decisions like limiting trackers, restricting cookies, and cutting off in-app analytics; Google happily follows Apple’s lead, which impacts its advertising rivals far more than it does Google, improving their relative competitive position.
  • Apple earns billions of dollars giving its customers the best default search experience, even as that ensures that Google will remain the best search engine (and raises questions about the sincerity of Apple’s privacy rhetoric).

This isn’t the only duopoly: Google and Facebook jointly dominate digital advertising, Microsoft and Google jointly dominate productivity applications, Microsoft and Amazon jointly dominate the public cloud, and Amazon and Google jointly dominate shopping searches. And, while all of these companies compete, those competitive forces have set nearly all of these duopolies into fairly stable positions that justify cooperation of the sort documented between Apple and Google, even as any one company alone is able to use its rival as justification for avoiding antitrust scrutiny.

Judge Gonzalez Rogers does note that it is unclear whether Google “could increase output in the short run in order to erode Apple’s market share”, but the real problem is that Google is content to simply share the market with Apple and earn their own supracompetitive commission rate.

The IAP System

Most of the rest of Judge Gonzalez Rogers’ decision balances Apple’s alleged anticompetitive conduct, including its ability to maintain outsized profit margins on the App Store because of the lack of competition for iOS App Stores and In-App Payment alternatives, with its procompetitive justifications, including enhanced security, intrabrand competition with Android-based phones, and its right to protect its investment in intellectual property.

The first two procompetitive justifications work as a set: Apple convinced Judge Gonzalez Rogers that App Review provided an additional unique and valuable layer of security above-and-beyond operating system-based security measures, and furthermore, that the company’s approach to app distribution and monetization was not only not detrimental to customers, but was actually a selling point for the iPhone (as noted above, this also explains why the App Store was not covered by the Eastman Kodak exception).

Apple also won on the intellectual property point, as expected, but this is where the decision starts to get a bit complicated. Judge Gonzalez Rogers emphasized throughout the decision that Apple had a right to a commission on apps, but not necessarily 30%:

First, and most significant, as discussed in the findings of facts, IAP is the method by which Apple collects its licensing fee from developers for the use of Apple’s intellectual property. Even in the absence of IAP, Apple could still charge a commission on developers. It would simply be more difficult for Apple to collect that commission.

Indeed, while the Court finds no basis for the specific rate chosen by Apple (i.e., the 30% rate) based on the record, the Court still concludes that Apple is entitled to some compensation for use of its intellectual property. As established in the prior sections, Apple is entitled to license its intellectual property for a fee, and to further guard against the uncompensated use of its intellectual property. The requirement of usage of IAP accomplishes this goal in the easiest and most direct manner, whereas Epic Games’ only proposed alternative would severely undermine it. Indeed, to the extent Epic Games suggests that Apple receive nothing from in-app purchases made on its platforms, such a remedy is inconsistent with prevailing intellectual property law.

What is important to note is that this was not a complete win for Apple’s argument in court: there the company argued that the intellectual property for which the company deserved to be compensated was not simply the App Store and all of its attendant services, but also its entire stack of developer tools, from APIs to Xcode and everything in-between. This argument enraged developers like Marco Arment:

Apple’s leaders continue to deny developers of two obvious truths:

  • That our apps provide substantial value to iOS beyond the purchase commissions collected by Apple.
  • That any portion of our customers came to our apps from our own marketing or reputation, rather than the App Store…

To bully and gaslight developers into thinking that we need to be kissing Apple’s feet for permitting us to add billions of dollars of value to their platform is not only greedy, stingy, and morally reprehensible, but deeply insulting.

Judge Gonzalez Rogers agreed:

No one credibly disputes that Apple and third-party developers act symbiotically. Apple gives developers an audience and developers make Apple’s platform more attractive. Thus, Apple earns revenue each time a developer earns revenue creating a feedback loop. However, as revenues show, the ultimate effect appears to vary within developer groups depending on how a developer chooses to monetize its app.

Further, there is substantial evidence that Epic Games, and perhaps other larger developers, bring their own audience to iOS. Fortnite was already popular when it arrived on iOS and Apple sought exclusive Fortnite content to attract new users. That said, Epic Games wanted Apple’s user base, to which it did not have access, as it had already saturated its other options. Also, Match Group found that the majority of new users from the App Store organically searched for its apps (e.g., by typing in “Tinder”), while Apple contributed only 6% of discovery. For these developers, Apple’s role in generating in-app purchases was “nothing” but it continued to receive a 30% commission on in-app purchases.

Therefore, the intellectual property to which Apple is entitled to receive a commission is more narrowly defined: IAP, which Judge Gonzalez Rogers defined as follows:

Apple’s IAP or “in-app purchasing” system is a collection of software programs working together to perform several functions at once in the specific context of a transaction on a digital device. Apple uses the system to manage transactions, payments, and commissions within the App Store, but it also uses the system in other “stores” on iOS devices, such as “the iTunes Store on iOS, Apple Music, iCloud or Cloud services” and “physical retail stores”. The system is not something that is bought or sold…

More specifically, Apple’s IAP, as used here, is a secured system which tracks and verifies digital purchases, then determines and collects the appropriate commission on those transactions. In this regard, the system records all digital sales by identifying the customer and their payment methods, tracking and accumulating transactions; and conducts fraud-related checks. IAP simultaneously provides information to consumers so that they can view their purchase history, share subscriptions with family members and across devices, manage spending by implementing parental controls, and challenge and restore purchases.

Apple also intends the system to provide the customer with a single interface which can be used, and trusted, with respect to all purchases regardless of the developer. Importantly, the system has become more sophisticated over time, but the record does not detail the various versions…

Creating a seamless system to manage all its e-commerce was not an insignificant feat. Further, expanding it to address the scale of the growth required a substantial investment, not to mention the constant upgrading of the cellphones to allow for more sophisticated apps. Under current e-commerce models, even plaintiff’s expert conceded that similar functionalities for other digital companies were not separate products. Under all models, Apple would be entitled to a commission or licensing fee, even if IAP was optional. Payment processors have the ability to provide only one piece of the functionality. There is no evidence that they can provide the balance. Thus, the Court finds Epic Games has not shown that IAP is a separate and distinct product.

Forgive the long excerpt, but understanding this definition is critical to understanding this case as a whole. What Gonzalez Rogers is saying is that IAP is not the App Store app or the payment processing or any other discrete offering, but rather the totality of its digital e-commerce system. It is the intellectual property undergirding this IAP system that Apple is entitled to monetize in whatever reasonable way that it sees fit. And, to take this section full circle, not only does Apple have a right to monetize IAP, forcing developers to use IAP enhances its other two procompetitive justifications:

If Apple could no longer require developers to use IAP for digital transactions, Apple’s competitive advantage on security issues, in the broad sense, would be undermined and ultimately could decrease consumer choice in terms of smartphone devices and hardware…to a lesser extent, the use of different payment solutions for each app may reduce the quality of the experience for some consumers by denying users the centralized option of managing a single account through IAP. This would harm both consumers and developers by weakening the quality of the App Store to those that value this centralized system.

That was a lot of legalese, but this is the takeaway: IAP is distinct intellectual property from developer tools broadly; it is the entire set of app management tools, not just a payment processor; and Apple has legitimate competitive justification to require IAP be used for in-app purchases.

The Anti-Steering Injunction

I mentioned above that this was where the decision got a bit complicated; notice that I just used “IAP” and “in-app purchases” to represent two distinct concepts. Specifically, it seems clear that Gonzalez Rogers has defined “IAP” to be Apple’s overall commerce system, while “in-app purchases” are purchases made in an app. In other words, Apple is justified in requiring IAP for in-app purchases.

This is the essential bit of context for making sense of the only part of the case that Apple lost: the injunction against the App Store’s anti-steering provisions, which states:

Apple Inc. and its officers, agents, servants, employees, and any person in active concert or participation with them (“Apple”), are hereby permanently restrained and enjoined from prohibiting developers from (i) including in their apps and their metadata buttons, external links, or other calls to action that direct customers to purchasing mechanisms, in addition to In-App Purchasing and (ii) communicating with customers through points of contact obtained voluntarily from customers through account registration within the app.

As noted above, Gonzalez Rogers maintained throughout the decision that while Apple was certainly entitled to a commission, the appropriate rate was likely lower than 30%; here she argues that the App Store’s anti-steering provisions made the correct rate impossible to discover:

Apple’s own records reveal that two of the top three “most effective marketing activities to keep existing users coming back” in the United States, and therefore increasing revenues, are “push notifications” and “email outreach”. Apple not only controls those avenues but acts anticompetitively by blocking developers from using them to Apple’s own unrestrained gain. As explained before, Apple uses anti-steering provisions prohibiting apps from including “buttons, external links, or other calls to action that direct customers to purchasing mechanisms other than in-app purchase,” and from “encourag[ing] users to use a purchasing method other than in-app purchase” either “within the app or through communications sent to points of contact obtained from account registrations within the app (like email or text).” Thus, developers cannot communicate lower prices on other platforms either within iOS or to users obtained from the iOS platform. Apple’s general policy also prevents developers from informing users of its 30% commission.

These provisions can be severed without any impact on the integrity of the ecosystem and is tethered to legislative policy. As an initial matter, courts have long recognized that commercial speech, which includes price advertising, “performs an indispensable role in the allocation of resources in a free enterprise system.” Restrictions on price information “serve to increase the difficulty of discovering the lowest cost seller . . . and [reduce] the incentive to price competitively[.]” Thus, “where consumers have the benefit of price advertising, retail prices often are dramatically lower than they would be without advertising.” Antitrust scholars have recognized the same: “The less information a consumer has about relative price and quality, the easier it is for market participants to charge supracompetitive prices or provide inferior quality.”

Notice the references to “other platforms”; those are the web, Android, or, in cases like Fortnite, other consoles. Judge Gonzalez Rogers’ argument is not that Apple has to allow different payment options within an app — as noted in the previous section, that is Apple’s right to control, and even mandate — but rather that Apple can’t stop a developer from telling users that they can go outside the app to another platform to acquire digital content.3

The most obvious way this might have an impact is through developers offering lower prices if users are willing to pay on the web. Apple could theoretically counter this by requiring developers to offer the same price everywhere, but it seems unlikely that would pass muster with Judge Gonzalez Rogers. Note, though, that IAP is not going anywhere: external links can sit next to IAP, but they can’t replace it, and they can never be as well-integrated as Apple’s offering is.
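To make the steering incentive concrete, here is a back-of-the-envelope sketch of developer proceeds on the two paths; the $9.99 price and the 2.9% + $0.30 web payment-processing fee are illustrative assumptions, not figures from the ruling:

```python
# Rough comparison of developer take-home on a hypothetical $9.99 purchase,
# assuming Apple's 30% IAP commission versus a typical card-processing fee
# (2.9% + $0.30) for a web checkout. Both rates are illustrative.

PRICE = 9.99

def net_via_iap(price: float, commission: float = 0.30) -> float:
    """Developer proceeds after Apple's App Store commission."""
    return round(price * (1 - commission), 2)

def net_via_web(price: float, pct_fee: float = 0.029, flat_fee: float = 0.30) -> float:
    """Developer proceeds after a typical third-party payment processor's fee."""
    return round(price * (1 - pct_fee) - flat_fee, 2)

print(net_via_iap(PRICE))  # 6.99
print(net_via_web(PRICE))  # 9.4
```

Even after payment-processing costs, the web route nets the developer roughly a third more on this hypothetical purchase, which is exactly why the anti-steering provisions mattered so much to Apple.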

Back to the decision:

Thus, although Epic Games has not proven a present antitrust violation, the anti-steering provisions “threaten[] an incipient violation of an antitrust law” by preventing informed choice among users of the iOS platform. Moreover, the anti-steering provisions violate the “policy [and] spirit” of these laws because anti-steering has the effect of preventing substitution among platforms for transactions. Apple has not offered any justification for the actions other than to argue entitlement. Where its actions harm competition and result in supracompetitive pricing and profits, Apple is wrong. Accordingly, the harm from the anti-steering provisions outweighs its benefits, and the provision violates the UCL under the balancing test.

The “UCL” is California’s Unfair Competition Law, which is broader than U.S. antitrust law, and includes provisions for “incipient violations” and “unfair conduct”; this is the legal underpinning for Judge Gonzalez Rogers’ injunction. This is also the other part of the ruling that Apple may appeal: the company already argued in court that Judge Gonzalez Rogers should not consider violations under the UCL separately from violations under federal antitrust law, and while Judge Gonzalez Rogers took particularly scathing interest in Apple’s anti-steering provisions during CEO Tim Cook’s testimony, they were not a major focus of Epic’s case. It’s also unclear on what grounds a case about “mobile game transactions” resulted in wholesale changes to the entire App Store.4 Moreover, the Supreme Court recently ruled in favor of anti-steering provisions in a case about American Express’s policies; Gonzalez Rogers made a good argument that this situation is different, but that is open to interpretation. And, of course, UCL is a California law, but Gonzalez Rogers said the injunction applied nationwide.

Apple’s Next Move

To reiterate what I said at the beginning, this was a near total victory for Apple, and a devastating defeat for Epic. Not only did the Fortnite-maker not gain the opportunity to build their own app store or have their own in-app purchases, Judge Gonzalez Rogers also ruled that Apple was justified in revoking the development license for not just Fortnite but all of Epic’s subsidiaries, including Unreal Engine. That means that Epic, at least for now, can’t work on its licensable engine for other developers.

More broadly, while some of Apple’s claims were curtailed, its App Store model was by and large found to be legal (at least for games). Even the injunction against anti-steering made clear that Apple can, if it wishes, insist that apps include its IAP system alongside links to another platform (i.e. the web). Might Apple start insisting that Netflix and Spotify re-add IAP at the same time they put in links to their websites?

I think that would be unwise, for two reasons.

First, while antitrust cases are decided by the courts, it is important to remember that the question at hand is one of statutory interpretation, not constitutionality. That means that Congress can simply change the statutes, or, in the specific case of the App Store, pass laws explicitly undoing Apple’s approach. If that happens this case is moot, which is to say that Apple would do well to appease would-be regulators, instead of doubling-down on its current policies.

Second, not only did this case demonstrate that games are by far the biggest revenue drivers in the App Store (around 75% of revenue, and 98% of in-app purchases), but Judge Gonzalez Rogers’ decision made the case that the games market is distinct from the broader app market. This is an opportunity that Apple should embrace to treat games differently.

This would result in a two-prong strategy: first, expand on Apple’s recent settlement with the Japan Fair Trade Commission to make clear that all non-game apps (not just “Reader” apps) can link to external websites for payments, effectively ending the anti-steering provision for non-game apps; second, appeal the anti-steering injunction in this case. Should the company win the latter, it could fully deliver on its commitment to App Store security for games by making all purchases run through IAP, even as loosening the reins on non-game apps both relieves regulatory pressure and, more importantly, expands the economic possibilities of the app economy.

This is biased advice, to be sure; it’s exactly what I’ve been begging Apple to do for years. The risk of taking so long to change course is decisions like this: Apple won on almost every count but one, but that one has the potential to cause Apple a fair bit of trouble. Games have always been a vector for scams and abuse; it would have been much better (and profitable) for Apple to keep tight control of the category while giving ground elsewhere. Now it has to deal with a blanket injunction and critics who still don’t think it is enough.

I wrote a follow-up to this Article in this Daily Update.

  1. The full exposition of this reasoning is on Page 61 of the ruling 

  2. This is generally held to require at least 65% market share 

  3. This is admittedly unclear from the one page injunction, so I can see why some observers are arguing that the injunction allows alternative in-app purchase flows; however, this argument simply doesn’t make sense in the context of the entire ruling, which makes clear that alternative platforms are websites and other operating systems 

  4. Not that I’m complaining! 

Tech Epochs and the App Store Trap

Matthew Brooker, writing for Bloomberg Opinion, is worried about Xi Jinping leading China into a trap:

The middle-income trap describes how economies tend to stall and stagnate at a certain level of development, once wages have risen and productivity growth becomes harder. Relatively few make the transition to high-income status. The history of those that have, such as South Korea and Taiwan, points to a need for the state’s role to retreat as markets advance. Ad hoc interventions by governments may work at more basic levels of development. At higher income levels, economies become too complex for command-and-control management by individuals. Systems are increasingly what matters. Rules that are transparent, predictable and fairly applied enable market forces to take over the job of directing economic activity, raising efficiency and allowing innovation to flourish.

This inevitably implies some ceding of power by the rulers. It also potentially implies political change. South Korea and Taiwan both transitioned from authoritarian to democratic political systems as they became richer. The largest high-income economies are almost all democracies. Xi, a believer in the historic mission and preordained victory of the Communist Party, is far from receptive to such a message. The party has embraced markets, but from a position of superiority. Like laws, they are there to be used, when useful; the party remains supreme, above all.

I have spent a fair bit of time over the last few months discussing China’s recent crackdown on its tech industry; to me one of the most interesting questions is whether China’s renewed quest to catch up technologically, particularly in the area of semiconductors, might suffer from the country’s recent crackdown. Dan Wang argued in Foreign Affairs (and previously, in a Stratechery Daily Update interview) that U.S. sanctions were exactly the sort of spur the country needed:

China’s private entrepreneurial firms have driven the bulk of the country’s technological success, even though their interests have not always aligned with the state’s goal of strengthening domestic technology. Beijing has, for example, recently begun cracking down on certain consumer Internet companies and online education firms, in part to redirect the country’s efforts towards other strategic technologies such as computer chips. This has meant that China’s most impressive technological achievements—building state-of-the-art capabilities in renewable energy, consumer Internet services, electronics, and industrial equipment—have as often been driven in spite of state interference as they have because of it.

Then came U.S. President Donald Trump. By sanctioning entrepreneurial Chinese companies, he forced them to stop relying on U.S. technologies such as semiconductors. Now, most of them are trying to source domestic alternatives or design the necessary technologies themselves. In other words, Trump’s gambit accomplished what the Chinese government never could: aligning private companies’ incentives with the state’s goal of economic self-sufficiency.

What happens, though, if the priorities of those private companies shift from winning in the market to satisfying the Party? Is there a cost to losing world-class founders and entrepreneurs like ByteDance CEO Zhang Yiming and Pinduoduo CEO Colin Huang? It is one thing to align private incentives with state incentives; it is an open question if doing so by removing the drive for dominance and outsized profits ends up being a case of one step forward and two steps back.

Tech Epochs

In 2014 I described The Three Epochs of Consumer Tech to that point; to summarize my argument (which, as is always the case seven years on, isn’t perfect):

Tech's epochs

  • The PC epoch had Windows as its operating system, productivity software as its killer app, and email was the dominant communications medium.
  • The Internet epoch had the browser as its operating system, search as its killer app, and social networking, particularly Facebook, was the dominant communications medium.
  • The mobile epoch had iOS and Android as its operating systems, the sharing economy as its killer app, and messaging was the dominant communications medium.

At the time I posited that the next epoch was unclear; I listed wearables, Bitcoin, and mobile applications like Uber as possibilities, although I settled on messaging as the most likely fourth epoch.

Then, in 2020, I argued we had reached The End of the Beginning:

There may not be a significant paradigm shift on the horizon, nor the associated generational change that goes with it. And, to the extent there are evolutions, it really does seem like the incumbents have insurmountable advantages: the hyperscalers in the cloud are best placed to handle the torrent of data from the Internet of Things, while new I/O devices like augmented reality, wearables, or voice are natural extensions of the phone.

In other words, today’s cloud and mobile companies — Amazon, Microsoft, Apple, and Google — may very well be the GM, Ford, and Chrysler of the 21st century. The beginning era of technology, where new challengers were started every year, has come to an end; however, that does not mean the impact of technology is somehow diminished: it in fact means the impact is only getting started.

Indeed, this is exactly what we see in consumer startups in particular: few companies are pure “tech” companies seeking to disrupt the dominant cloud and mobile players; rather, they take their presence as an assumption, and seek to transform society in ways that were previously impossible when computing was a destination, not a given. That is exactly what happened with the automobile: its existence stopped being interesting in its own right, while the implications of its existence changed everything.

That article was about the overall shift in computing from mainframes to PCs to mobile, which mirrored the shift from one room computing to desktop computing to cloud computing:

A drawing of The Evolution of Computing

This is a progression where the potential for crypto-based computing and its inherent decentralization fits right in (especially as politically-motivated tech companies provide the impetus).

Note, though, that these two articles don’t mirror each other exactly: Epoch 2, the Internet epoch, wasn’t really about the underlying hardware at all; rather, it was the PC epoch that provided the foundation for the Internet epoch. Google and Facebook would have been much less valuable if there weren’t already hundreds of millions of people with PCs and Internet connections; a similar reduction in value would have occurred if Microsoft had been able to control what people accessed on the web, and on what terms.

It was the Internet, meanwhile, that gave mobile the fuel to get off the ground; remember that Steve Jobs introduced the iPhone as three things in one: an iPod, a phone, and an Internet communicator.

Jobs wasn’t wrong — Apple absolutely did reinvent the phone — but it did so on the back of concepts that already existed: the iPod, mobile phones, and, most importantly, the Internet. And then Apple introduced the App Store.

The App Store Economy

Thirteen years on, it’s easy to lose sight of just how important Apple’s approach to the App Store was, not only for the iPhone but also for developers. John Gruber, in the wake of South Korea passing a law opening up in-app payments, wrote on Daring Fireball:

I think the latter half of Apple’s statement is true — user trust in in-app purchases will decline…there’s no denying that the result of any of these laws would be to make iOS and Google’s Android more like Macs and PCs. There’s also no denying that people make far more digital purchases and install far more apps on their mobile devices (iOS or Android) than their PCs (Mac or Windows)…

But I am confident that the overwhelming majority of typical users are more comfortable installing apps and making in-app purchases on their iOS and Android devices than on their Mac and Windows PCs not despite Apple and Google’s console-like control over iOS and Android, but because of it. And if these measures come to pass and iOS and Android devices are forced by law to become pocket PCs, I think there’s a high chance it’ll prove unpopular with the mass market. The masses are not clamoring for the app stores to be opened up. These arguments over app stores are entirely inside baseball for the technical and business classes. I’ve had non-technical friends and relatives complain to me about all sorts of things related to their iPhones over the last 10 years, but never once have any of them said to me, “Boy, I sure wish iPhone apps and games could ask me for my credit card number to make purchases, and that the overall experience of using apps was more like the anything-goes nature of using the web or my desktop computer.” Never. It doesn’t just seem that the unintended consequences of such legislation is being under-considered; it seems as though it’s not being considered or acknowledged at all.

Perhaps I’m wrong, and it’ll all work out just fine. Anyone who claims to know how such a scenario will turn out is full of shit. But from what I’ve seen over the last few decades, the quality of the user experience of every computing platform is directly correlated to the amount of control exerted by its platform owner. The current state of the ownerless world wide web speaks for itself.

It’s easy for developers to measure the 30% that Apple takes of their earnings, or the cost of implementing the company’s in-app purchasing APIs, or the time it takes to deal with App Review. It’s much harder, particularly in 2021, to appreciate the extent to which Apple increased the total addressable market for everyone, not simply by inventing a device that could be used everywhere, but also by enforcing a distribution model that made consumers feel safe and thus willing to try out far more apps than they ever did on the PC. And Gruber is right that, even now, loosening the App Store rules might have unintended consequences for the very developers chafing under Apple’s control.

At the same time, I think that Gruber underrates the impact of the “ownerless world wide web”; yes, I myself wrote an article in 2015 entitled Why Web Pages Suck,1 and yes, that is an inevitable outcome of a lack of centralized quality control. An arena where anything goes, though, doesn’t simply make it possible to produce garbage; it also makes possible things that are completely new to the world. Rules that limit bad things have the unfortunate side effect of limiting good things that the rule-maker never considered.

One example, if I may be so immodest, is this site: not only did I not have the wherewithal to build an app in 2013, the App Store only offered Newsstand subscriptions for established publications; I could, though, build a web page, and leverage services like Stripe to charge subscriptions. Yes, Apple has since expanded the use of App Store subscriptions, but in almost all cases that support has chased use cases, from streaming to reading to video to collaboration and cloud services, that were first pioneered on the ownerless world wide web.

What is new to the App Store is the shift of more and more productivity applications to subscription billing. This trend, to be fair, started with Microsoft and Adobe, but even basic utility apps have followed suit. What is not clear is whether, in a vacuum, this is particularly good for developers’ businesses: for years small apps thrived on the PC, and especially the Mac, by leveraging Internet distribution and monetizing via paid upgrades, a model that made more money from your best customers (a necessity for every business) without introducing subscription fatigue and a sense of resentment among marginal customers unsure if they can justify the ongoing expense.

The App Store, though, is not a vacuum: it is an economy where Apple sets the rules, and I’ve been writing about how traditional developer business models simply weren’t enabled for as long as this site has been around; by 2015 it seemed clear that the era of mobile productivity apps was going to be a disappointing one:

That, then, means that Cook’s conclusion that Apple could best improve the iPad by making a new product isn’t quite right: Apple could best improve the iPad by making it a better platform for developers. Specifically, being a great platform for developers is about more than having a well-developed SDK, or an App Store: what is most important is ensuring that said developers have access to sustainable business models that justify building the sort of complicated apps that transform the iPad’s glass into something indispensable.

That simply isn’t the case on iOS. Note carefully the apps that succeed on the iPhone in particular: either the apps are ad-supported (including the social networks that dominate usage) or they are a specific type of game that utilizes in-app purchasing to sell consumables to a relatively small number of digital whales.

Six years on, not much has changed: iPad hardware continues to improve, the consumption experience remains fantastic, and the Pencil has unlocked interesting new use cases. Killer apps that are uniquely possible on the iPad, though, are still quite rare; Apple pitches the device as a PC replacement, but even then the most advanced demo is Photoshop. It is as if the iPad specifically, and productivity software on iOS generally, is stuck in a sort of “middle-income trap”: the obvious tools are there, and giant software developers with their subscription plans can justify building complex apps, but innovation in the ecosystem has never lived up to Jobs’ vision.

Challenges, Creators, and Metaverses

I do think the mobile productivity ship has sailed, particularly in terms of the iPad. It’s a fine device with fine apps, and it’s great for consuming media. And, I should note, Apple sold $30 billion worth of the devices over the last 12 months. That is hardly a flop — quite the opposite, in fact.

Go back to the analogy at the beginning of this Article, though: a country in the middle income trap is by nearly every objective measure a massive success. That is certainly the case for China, the development and growth of which has done more to alleviate human poverty than any event in human history. Today China is in many sectors, particularly labor-centric manufacturing, the most advanced economy in the world, with world-class cities and infrastructure that puts much of the U.S. to shame. The challenge now, though, is to not simply catch up to the West but to surpass it, and that means innovation in ways that go beyond applying Western technology to China-specific use cases and taking advantage of leapfrog opportunities (like payments); whether that innovation can be achieved as control is re-centralized is one of the most important questions of the next decade.

Apple, meanwhile, is seeing more challenges to its centralized control of the app economy than ever before, from antitrust lawsuits to potential legislation to run-ins with regulators around the world. At the same time, there is an ongoing explosion in completely new digital-first business models, including the so-called creator economy and metaverses like Roblox, and, over the horizon, the crypto economy.

It is those challenges that will, slowly but surely, force Apple to give up control. Last week’s news that Apple, after settling with the Japan Fair Trade Commission, will allow ‘reader’ apps2 to include “a single link” to their own websites on a worldwide basis suggests, all at the same time, both a newly open approach and a stubborn refusal to give up iron-fisted control of the App Store economy. Time will tell if Apple decides to rethink the App Store in one fell swoop, or if regulators dismantle Apple’s regime piece-by-piece, and potentially geography-by-geography, in a way that harms Apple’s core business.

What is important, though, is that these changes happen sooner rather than later, for the sake of tech’s fourth epoch.

The Fourth Epoch

If consumer tech’s second epoch — the Internet — was built on and enabled by the first — the PC — then it follows that the fourth epoch is built on and enabled by the third. Both the creator economy and metaverses fit the bill: yes, some creators can make a go of it on the web, but I’ll be the first to say that that is only possible because of social networks. What seems more likely is that creators will emerge on platforms built to accommodate them, and those platforms themselves will sit on top of mobile. That is even more likely to be the case when it comes to metaverses, which are more likely to deliver superior experiences as native apps than as web apps.3

Tech Epochs

The problem is the App Store: if Apple takes 30% of every transaction, and the platform owner takes its own share for having created the opportunity and toolset for the creator, then creators need that many more fans to make a living, reducing the number who are successful. It’s the same story for metaverses: Roblox isn’t remotely profitable, even as it pays its developers pennies-on-the-dollar, thanks in large part to Apple taking 30% for the pleasure of existing on iOS. Apple is absolutely right that the App Store created economic opportunity; it is also taking it away from an expanding universe of creators and developers who have no reason to interact with iOS APIs.
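The fan math here is worth making concrete. A minimal sketch, with illustrative numbers of my own (a $50,000 target income and $5/month fans are assumptions, not figures from this article): each intermediary’s cut compounds on what remains, so stacking Apple’s 30% on top of a hypothetical platform’s 30% roughly doubles the number of paying fans a creator needs.

```python
import math

def fans_needed(target_income, price_per_fan, fee_rates):
    """Fans required to reach target_income after stacked revenue cuts.

    fee_rates: the fraction each intermediary keeps, applied in sequence
    (e.g. Apple's 30%, then a creator platform's 30% of the remainder).
    """
    net_per_fan = price_per_fan
    for rate in fee_rates:
        net_per_fan *= (1 - rate)  # each cut applies to what's left
    return math.ceil(target_income / net_per_fan)

# $50,000/year at $5/month ($60/year) per fan:
direct = fans_needed(50_000, 60, [])             # no intermediaries: 834 fans
stacked = fans_needed(50_000, 60, [0.30, 0.30])  # App Store + platform: 1,701 fans
```

The point of the sketch is the compounding: the creator keeps only 49 cents of each dollar after two stacked 30% cuts, so the fan threshold for making a living roughly doubles.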

What is particularly frustrating about this state of affairs is that it is not as if Apple is making things easier for creator platforms. Look no further than Twitter’s new super-follows feature, which is easy to understand from a user perspective (pay X dollars to get access to subscriber-only tweets), but only because Twitter is creating in-app subscriptions by hand for every super-tweeter, and even then it is limited to 10,000 in-app purchase options. Other creator platforms like Twitch create convoluted token-based subscription schemes to get around Apple’s in-app purchase limitations; these obfuscate prices and result in worse outcomes for creators in terms of retaining customers, while customers pay more in the app than they do on the web to cover Apple’s fees. These fees, quite clearly, exist because of the company’s total control of apps, not because any value is being provided.
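The “pay more in the app” claim is simple gross-up arithmetic. A minimal sketch, with illustrative prices I chose for the example (not Twitch’s or Twitter’s actual tiers): to net the same amount from an in-app purchase after a 30% store commission, the in-app sticker price must be the web price divided by 0.70.

```python
def in_app_price(web_price, commission=0.30):
    """In-app sticker price that nets the seller the same as web_price.

    The seller receives in_app_price * (1 - commission), so the sticker
    price is grossed up by dividing by (1 - commission).
    """
    return round(web_price / (1 - commission), 2)

# A $4.99 web subscription needs roughly $7.13 in-app to net the same;
# $10.00 on the web becomes about $14.29 in-app.
web_tier = in_app_price(4.99)
```

This is why in-app prices that cover the platform fee are more than 30% higher than their web equivalents: the markup on the customer-facing price is 1/0.7, about 43%.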

Of course these are big well-known companies fighting with the biggest, most well-known company; the question, as always, is about the companies that aren’t formed, the creators that aren’t empowered, the metaverses that die on the vine because developers couldn’t make money, or the platform creator couldn’t justify the risk. Looking back it’s easy to see how Microsoft and Windows could have stifled the Internet epoch; Apple (and Google!) ought not hold back the full potential of the app-platform epoch.

Epoch Bridges

I’ve written a lot about the App Store over the entire course of Stratechery generally, and over the last year specifically. Yes, things are hopefully changing, and that is a reason for analysis, but I could see the argument that emerging technologies like crypto are much more interesting.

To that point, though, it’s worth noting that there is one additional reason, beyond greed or control of the customer experience, why Apple and Google might not be motivated to loosen their control of apps: keeping a grip on whatever comes next. If crypto is tech’s fifth epoch — and there is a very good chance that is the case — then it is very much in the crypto industry’s interest to pay attention to and weigh in on these App Store battles. Remember that the Internet provided the bridge from the PC to mobile; in a well-functioning market, apps and platform-level APIs would provide the bridge from mobile to crypto. Just think about all of the obstacles to making crypto applications user-friendly and accessible to general users, and how much more would be possible if mobile were as open and configurable as the PC.

And, of course, it is worth considering how bad that might be in the long run for Apple and Google. If those in power are primarily concerned with protecting their position then perhaps it is inevitable that innovation is a casualty.

I wrote a follow-up to this Article in this Daily Update.

  1. Also in response to Gruber

  2. “Reader” apps are defined in the App Store Guidelines as apps that “allow a user to access previously purchased content or content subscriptions (specifically: magazines, newspapers, books, audio, music, and video).” 

  3. The question as to whether Apple is handicapping mobile Safari is separate but related.