DALL-E, the Metaverse, and Zero Marginal Content

Last week OpenAI released DALL-E 2, which produces (or edits) images based on textual prompts; this Twitter thread from @BecomingCritter has a whole host of example output, including Teddy bears working on new AI research on the moon in the 1980s:

Teddy bears working on new AI research on the moon in the 1980s

A photo of a quaint flower shop storefront with a pastel green and clean white facade and open door and big window:

A photo of a quaint flower shop storefront with a pastel green and clean white facade and open door and big window

And, in the most on-the-nose example possible, A human basking in the sun of AGI utopia:

A human basking in the sun of AGI utopia

OpenAI has a video describing DALL-E on its website:

While the video does mention a couple of DALL-E’s shortcomings, it is quite upbeat about the possibilities; some excerpts:

DALL-E 2 is a new AI system from OpenAI that can take simple text descriptions like “A koala dunking a basketball” and turn them into photorealistic images that have never existed before. DALL-E 2 can also realistically edit and re-touch photos…

DALL-E was created by training a neural network on images and their text descriptions. Through deep learning it not only understands individual objects like koala bears and motorcycles, but learns from relationships between objects, and when you ask DALL-E for an image of a “koala bear riding a motorcycle”, it knows how to create that or anything else with a relationship to another object or action.

The DALL-E research has three main outcomes: first, it can help people express themselves visually in ways they may not have been able to before. Second, an AI-generated image can tell us a lot about whether the system understands us, or is just repeating what it’s been taught. Third, DALL-E helps humans understand how AI systems see and understand our world. This is a critical part of developing AI that’s useful and safe…

What’s exciting about the approach used to train DALL-E is that it can take what it learned from a variety of other labeled images and then apply it to a new image. Given a picture of a monkey, DALL-E can infer what it would look like doing something it has never done before, like paying its taxes while wearing a funny hat. DALL-E is an example of how imaginative humans and clever systems can work together to make new things, amplifying our creative potential.

That last line may raise some eyebrows: at first glance DALL-E looks poised to compete with artists and illustrators; there is another point of view, though, where DALL-E points towards a major missing piece in a metaverse future.

Games and Medium Evolution

Games have long been on the forefront of technological development, and that is certainly the case in terms of medium. The first computer games were little more than text:

A screenshot from Oregon Trail

Images followed, usually of the bitmap variety; I remember playing a lot of “Where in the World is Carmen Sandiego?” at the library:

A screenshot from "Where in the World is Carmen Sandiego?"

Soon games included motion as you navigated a sprite through a 2D world; 3D followed, and most of the last 25 years has been about making 3D games ever more realistic. Nearly all of those games, though, are 3D images on 2D screens; virtual reality offers the illusion of being inside the game itself.

Still, this evolution has had challenges: creating ever more realistic 3D games means creating ever more realistic image textures to decorate all of those polygons; this problem is only magnified in virtual reality. This is one of the reasons even open-world games are ultimately limited in scope, and gameplay is largely deterministic: it is through knowing where you are going, and all of your options to get there, that developers can create all of the assets necessary to deliver an immersive experience.

That’s not to say that games can’t have random elements, above and beyond roguelike games that are procedurally generated: the most obvious way to deliver an element of unpredictability is for humans to play each other, albeit in well-defined and controlled environments.

Social and User-Generated Content

Social networking has undergone a medium evolution similar to that of games, with a two-decade delay. The earliest forms of social networking on the web were text-based bulletin boards and USENET groups; then came widespread e-mail, AOL chatrooms, and forums. Facebook arrived on the scene in the mid-2000s; one of the things that helped it explode in popularity was the addition of images. Instagram was an image-only social network that soon added video, which is all that TikTok is. And, over the last couple of years in particular, video conferencing through apps like Zoom or FaceTime has delivered 3D images on 2D screens.

Still, medium has always mattered less for social networking, just because the social part of it was so inherently interesting. Humans like communicating with other humans, even if that requires dialing up a random BBS to download messages, composing a reply, and dialing back in to send it. Games may be mostly deterministic, but humans are full of surprises.

Moreover, this means that social networking is much cheaper: instead of the platform having to generate all of the content, users generate all of the content themselves. This makes it harder to get a new platform off of the ground, because you need users to attract users, but it also makes said platform far stickier than any game (or, to put it another way, the stickiest games have a network effect of their own).

Feeds and Algorithms

The first iterations of social networking had no particular algorithmic component other than time: newer posts were at the top (or bottom). That changed with Facebook’s introduction of the News Feed in 2006. Now instead of visiting all of your friends’ pages you could simply browse the feed, which from the very beginning made decisions about what content to include, and in what order.

Over time the News Feed evolved from a relatively straightforward algorithm to one driven by machine learning, with results so inscrutable that it took Facebook six months to fix a recent rankings bug. The impact has been massive: not just Facebook but also Instagram saw huge increases in engagement and increased growth the better their algorithmically-driven feeds became; it was also great for monetization, as the same sort of signals that decided what content you saw also influenced what ads you were presented.

However, the reason why this discussion of algorithmically-driven feeds is in a different section than social networking is because the ultimate example of their power isn’t a social network at all: it’s TikTok. TikTok, of course, is all user-generated content, but the crucial distinction from Facebook is that you aren’t limited to content from your network: TikTok pulls in the videos it thinks you specifically are most interested in from across its entire network. I explained why this was a blindspot for Facebook in 2020:

What is interesting to point out is why it was inevitable that Facebook missed this: first, Facebook views itself first-and-foremost as a social network, so it is disinclined to see that as a liability. Second, that view was reinforced by the way in which Facebook took on Snapchat. The point of The Audacity of Copying Well is that Facebook leveraged Instagram’s social network to halt Snapchat’s growth, which only reinforced that the network was Facebook’s greatest asset, making the TikTok blindspot even larger.

TikTok combines the zero cost nature of user-generated content with a purely algorithmic feed that is divorced from your network; there is a network effect, in that TikTok needs lots of content to choose from, but it doesn’t need your specific network.
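The distinction can be made concrete with a toy sketch (all creators, videos, and numbers below are hypothetical): a network-bound feed filters to content from your connections before ranking, while a TikTok-style feed scores every item on the network against your inferred interests.

```python
# Toy illustration (hypothetical data): ranking a feed restricted to a
# follow graph versus ranking the entire corpus by predicted interest.

videos = [
    {"id": "v1", "creator": "alice", "features": [0.9, 0.1]},  # e.g. cooking
    {"id": "v2", "creator": "bob",   "features": [0.2, 0.8]},  # e.g. gaming
    {"id": "v3", "creator": "carol", "features": [0.7, 0.3]},
]
user_interests = [0.8, 0.2]   # inferred from watch history, not declared
follows = {"bob"}             # the user's explicit social graph

def score(video):
    # Predicted interest as a simple dot product of feature vectors
    return sum(f * w for f, w in zip(video["features"], user_interests))

# Facebook-style: only content from the user's network is eligible
network_feed = sorted((v for v in videos if v["creator"] in follows),
                      key=score, reverse=True)

# TikTok-style: the entire corpus is eligible, ranked purely by score
global_feed = sorted(videos, key=score, reverse=True)

print([v["id"] for v in network_feed])  # ['v2']
print([v["id"] for v in global_feed])   # ['v1', 'v3', 'v2']
```

The point of the sketch is the eligibility set, not the scoring function: the network-bound feed can only ever surface `v2`, however poorly it matches the user's interests, while the global feed surfaces the best match from anywhere on the network.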

The Machine Learning Metaverse

I get that metaverses were so 2021, but it strikes me that the examples from science fiction, including Snow Crash and Ready Player One, were very game-like in their implementation. Their virtual worlds were created by visionary corporations or, in the case of the latter, a visionary developer who also included a deterministic game for ultimate ownership of the virtual world. Yes, third parties could and did build experiences with strong social components, most famously Da5id’s Black Sun club in Snow Crash, but the core mechanic — and the core economics — were closer to a multi-player game than anything else.

That, though, is exceptionally challenging in the real world: remember, creating games, particularly their art, is expensive, and the expense increases the more immersive the experience is. Social media, on the other hand, is cheap because it uses user-generated content, but that content is generally stuck on more basic mediums — text, pictures, and only recently video. Of course that content doesn’t necessarily need to be limited to your network — an algorithm can deliver anything on the network to any user.

What is fascinating about DALL-E is that it points to a future where these three trends can be combined. DALL-E, at the end of the day, is ultimately a product of human-generated content, just like its GPT-3 cousin. The latter, of course, is about text, while DALL-E is about images. Notice, though, that progression from text to images; it follows that machine learning-generated video is next. This will likely take several years, of course; video is a much more difficult problem, and responsive 3D environments more difficult yet, but this is a path the industry has trod before:

  • Game developers pushed the limits on text, then images, then video, then 3D
  • Social media drives content creation costs to zero first on text, then images, then video
  • Machine learning models can now create text and images for zero marginal cost

In the very long run this points to a metaverse vision that is much less deterministic than your typical video game, yet much richer than what is generated on social media. Imagine environments that are not drawn by artists but rather created by AI: this not only increases the possibilities, but crucially, decreases the costs.

Zero Marginal Content

There is another way to think about DALL-E and GPT and similar machine learning models, and it goes back to my longstanding contention that the Internet is a transformational technology matched only by the printing press. What made the latter revolutionary was that it drastically reduced the marginal cost of consumption; from The Internet and the Third Estate:

Meanwhile, the economics of printing books was fundamentally different from the economics of copying by hand. The latter was purely an operational expense: output was strictly determined by the input of labor. The former, though, was mostly a capital expense: first, to construct the printing press, and second, to set the type for a book. The best way to pay for these significant up-front expenses was to produce as many copies of a particular book as could be sold.

How, then, to maximize the number of copies that could be sold? The answer was to print using the most widely used dialect of a particular language, which in turn incentivized people to adopt that dialect, standardizing languages across Europe. That, by extension, deepened the affinities between city-states with shared languages, particularly over decades as a shared culture developed around books and later newspapers. This consolidation occurred at varying rates — England and France several hundred years before Germany and Italy — but in nearly every case the First Estate became not the clergy of the Catholic Church but a national monarch, even as the monarch gave up power to a new kind of meritocratic nobility epitomized by Burke.

The Internet has had two effects: the first is to bring the marginal cost of consumption down to zero. Even with the printing press you still needed to print a physical object and distribute it, and that costs money; meanwhile it costs effectively nothing to send this post to anyone in the world who is interested. This has completely upended the publishing industry and destroyed the power of gatekeepers.
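The cost structures at play can be sketched numerically (the dollar figures below are invented purely for illustration): with a large fixed cost and a per-copy marginal cost, average cost falls as volume rises; with zero marginal cost, it falls toward zero.

```python
# Hypothetical numbers: average cost per copy under three cost structures.

def avg_cost(fixed, marginal, copies):
    # Total cost spread across the number of copies produced
    return (fixed + marginal * copies) / copies

scribe   = avg_cost(fixed=0,      marginal=50.0, copies=1_000)     # labor only
printer  = avg_cost(fixed=10_000, marginal=1.0,  copies=1_000)
internet = avg_cost(fixed=10_000, marginal=0.0,  copies=1_000_000)

print(scribe)    # 50.0  -- cost never falls with scale
print(printer)   # 11.0  -- fixed cost amortized across the print run
print(internet)  # 0.01  -- fixed cost spread over effectively unlimited copies
```

The scribe's cost per copy is flat no matter the volume; the printer's falls with every additional copy sold, which is exactly the incentive to maximize the print run described above; the Internet's approaches zero.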

The other impact, though, has been on the production side; I wrote about TikTok in Mistakes and Memes:

That phrase, “Facebook is compelling for the content it surfaces, regardless of who surfaces it”, is oh-so-close to describing TikTok; the error is that the latter is compelling for the content it surfaces, regardless of who creates it…To put it another way, I was too focused on demand — the key to Aggregation Theory — and didn’t think deeply enough about the evolution of supply. User-generated content didn’t have to be simply pictures of pets and political rants from people in one’s network; it could be the foundation of a new kind of network, where the payoff from Metcalfe’s Law is not the number of connections available to any one node, but rather the number of inputs into a customized feed.

Machine learning generated content is just the next step beyond TikTok: instead of pulling content from anywhere on the network, GPT and DALL-E and other similar models generate new content from content, at zero marginal cost. This is how the economics of the metaverse will ultimately make sense: virtual worlds need virtual content created at virtually zero cost, fully customizable to the individual.

Of course there are many other issues raised by DALL-E, many of them philosophical in nature; there has already been a lot of discussion of that over the last week, and there should be a lot more. Still, the economic implications matter as well, and after last week’s announcement the future of the Internet is closer, and weirder, than ever.

Why Netflix Should Sell Ads

The Information reported over the weekend that Netflix executives have told employees to keep an eye on the bottom line:

In two separate meetings over the past few weeks, Netflix executives cautioned employees to be more mindful about spending and hiring, according to three people familiar with the discussions. The comments, made at an employee town hall on Monday as well as during a management offsite held last month in Anaheim, Calif., come as the streaming giant grapples with sharply slowing subscriber growth…

Netflix has also been pondering steps that could help offset the revenue impact of the subscriber slowdown, including cracking down on people sharing the passwords to their accounts. While Netflix has long allowed such password sharing, it has become more common in the U.S. and other parts of the world than executives anticipated, the people said. This effort has been underway for about a year, however, well before the slowdown became apparent.

These are presented as two different issues, but there is a connection between them: Netflix should be hiring more people — a lot of them — and those people should be building a product that increases subscriber numbers and revenue. That product is advertising.

Netflix’s Business Model: Subscriptions

Netflix is, incredibly enough, 24 years old, and a subscription model has served the company well. Not that Netflix had much choice when it started: the company briefly sold DVDs online, before focusing exclusively on renting them; neither approach offered much surface area for advertising, and besides, the subscription model was revolutionary in its own right.

DVDs-by-mail was, from a certain perspective, inconvenient: you couldn’t simply drive to your local Blockbuster and peruse the selection; on the other hand, Netflix’s model gave you access to nearly every movie ever released, not just those in stock at your local store. The real innovation, though, was that business model: instead of paying to rent a DVD and being gouged with late fees, you could pay a set amount each month and keep the DVDs Netflix mailed to you as long as you wanted; send one back to get the next one in your queue.

Consumers loved it, and Netflix has stuck with the model even as the shift to streaming flipped its value proposition on its head: streaming is even more convenient than hopping in your car, but only a subset of content (ever-expanding, to be sure) is on Netflix. That has been more than enough to fuel Netflix’s growth; the service had 222 million subscribers at the end of 2021.

Still, as The Information noted, that number isn’t increasing as quickly as it used to. Netflix sported over 20% year-over-year subscriber growth for years, but hasn’t broken that mark since Q4 2020; growth for the last three quarters was in the single digits. Some of that is likely due to growth that was pulled forward by the pandemic:

Netflix subscriber additions by year

The bigger problem, though, is saturation: Netflix has 75 million subscribers in the US and Canada, where there are around 132 million households. That is nearly as many subscribers as linear TV (84 million), and once you consider shared passwords, penetration may be higher. Other markets like India have more room to grow, but much lower household incomes, and Netflix’s relatively high prices have been an obstacle.

Netflix has ways to grow other than subscribers, most obviously by raising prices. The company has done just that on a mostly annual basis for eight years: in the U.S. the price of a Standard subscription (HD, 2 screens) has increased from $7.99 to $15.49. Netflix executives argue that customers don’t mind because Netflix keeps increasing the amount of content they find compelling; it’s an argument that is easier to accept when subscriber growth is up-and-to-the-right. Now the task is to keep raising prices while ensuring subscriber numbers don’t start going in the opposite direction.
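For context, the pace of those increases is modest on an annualized basis; the sketch below computes the implied compound annual growth rate from the figures above.

```python
# Implied annual rate of Netflix's Standard-plan increase, $7.99 -> $15.49,
# over roughly eight years of mostly annual price hikes.
start, end, years = 7.99, 15.49, 8
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")  # roughly 8.6% per year
```

In other words, while the headline price has nearly doubled, the annual increase has been in the high single digits, which helps explain why subscribers have largely tolerated it.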

Netflix’s New Initiative: Gaming

To accomplish this Netflix is not only continuing to invest in original programming, but also branching out into new kinds of content, including games. This may seem an odd idea at first: sure, Netflix is generating some new IP, but it would generally be much easier to license that IP than to become proficient at gaming. Netflix, though, believes it has a unique advantage when it comes to gaming: its business model. Chief Product Officer Greg Peters said in the company’s Q2 2021 earnings interview:

Our subscription model yields some opportunities to focus on a set of game experiences that are currently underserved by the sort of dominant monetization models in games. We don’t have to think about ads. We don’t have to think about in-game purchases or other monetization. We don’t have to think about per-title purchases. Really, we can do what we’ve been doing on the movie and series side, which is just hyper laser-focused on delivering the most entertaining game experiences that we can. So we’re finding that many game developers really like that concept and that focus and this idea of being able to put all of their creative energy into just great gameplay and not having to worry about those other considerations that they have typically had to trade off with just making compelling games.

Netflix’s gaming efforts to date have been fairly limited; the company launched with five titles in November, but the fact the company has bought three gaming studios suggests a strong appetite for more — at least amongst Netflix executives.

But what about consumers?

Netflix’s Job: TV

Consumers don’t care so much about business models; they have jobs that they want to get done, and the traditional cable bundle used to do a whole bunch of jobs: information gathering, education, sports, story-telling, escapism, background noise, and more. As I noted in The Great Unbundling, these jobs are increasingly done by completely different services: we get news on the Internet, education from YouTube, story-telling from streaming services, etc.

Netflix is obviously one of those streaming services, but the company is also investing in movies (escapism), and is increasingly the default choice when it comes to the under-appreciated “background noise” category: the service has oceans of low-brow content ready to be streamed while you are barely paying attention. This is a big reason why for many people their choice of streaming services is a matter of which service do they subscribe to in addition to Netflix.

Still, all of these jobs are about passively consuming content; from a consumer perspective gaming is something different, in that you are an active participant. To that end, it’s not clear to me why consumers would even think to consider Netflix when it comes to gaming: that’s not what the service’s job is, nor was it the job of the linear TV bundle that Netflix is helping replace.

Netflix’s Market: Attention

Then again, as founder and co-CEO Reed Hastings likes to say, Netflix’s competition is much broader than TV; Hastings wrote in the company’s Q4 letter to shareholders:

In the US, we earn around 10% of television screen time and less than that of mobile screen time. In other countries, we earn a lower percentage of screen time due to lower penetration of our service. We earn consumer screen time, both mobile and television, away from a very broad set of competitors. We compete with (and lose to) Fortnite more than HBO. When YouTube went down globally for a few minutes in October, our viewing and signups spiked for that time. Hulu is small compared to YouTube for viewing time, and they are successful in the US, but non-existent in Canada, which creates a comparison point: our penetration in the two countries is pretty similar. There are thousands of competitors in this highly-fragmented market vying to entertain consumers and low barriers to entry for those with great experiences. Our growth is based on how good our experience is, compared to all the other screen time experiences from which consumers choose. Our focus is not on Disney+, Amazon or others, but on how we can improve our experience for our members.

Hastings’ point was that analysts should not be overly focused on the threat posed by other streaming services; Netflix has been fighting for attention for years. This is correct, by the way: thanks to the Internet everything from television to social networking to gaming can be delivered at zero marginal cost; the only scarce resource is time, which means attention is the only thing that needs to be competed for.

Well, that and money: companies competing for customer money need a way to communicate to customers what they have to sell and why it is compelling; that means advertising, and advertising requires attention. It follows, then, that the most effective business model in the attention economy is advertising: if customers rely on Google or Facebook to navigate the abundance of content that is the result of zero marginal costs, then it is Google and Facebook that are the best-placed to sell effective ads.

Notice, though, the trouble this Internet reality presents to Netflix: if content is abundant and attention is scarce, it’s easier to sell attention than content; Netflix’s business model, though, is the exact opposite.

Netflix’s Differentiation: Unique Content

Netflix, of course, sees this as a differentiator, and for a long time it was: linear TV had commercials, while Netflix had none. Linear TV made you wait for your favorite show, while Netflix gave you entire seasons at once. This was particularly compelling when Netflix had similar content to linear TV: why would you put up with commercials and TV schedules when you could just stream what you wanted to?

However, as more and more content has moved away from TV and to competing streaming services, differentiation is no longer based on the user experience, but rather uniqueness; on-demand no-commercials is no longer unique, but Stranger Things can only be found on Netflix.

Here Netflix’s biggest advantage is the sheer size of its subscriber base: Netflix can, on an absolute basis, pay more than its streaming competitors for the content it wants, even as its per-subscriber cost basis is lower. This advantage is only accentuated the larger Netflix’s subscriber base gets, and the more revenue it makes per subscriber; the user experience of getting to that unique content doesn’t really matter.

All of these factors make a compelling case for Netflix to start building an advertising business.

First, an advertising-supported or subsidized tier would expand Netflix’s subscriber base, which is not only good for the company’s long-term growth prospects, but also competitive position when it comes to acquiring content. This also applies to the company’s recent attempts to crack down on password sharing, and struggles in the developing world: an advertising-based tier is a much more accessible alternative.

Second, advertising would make it easier for Netflix to continue to raise prices: on one hand, it would provide an alternative for marginal customers who might otherwise churn, and on the other hand, it would create a new benefit for those willing to pay (i.e. no advertising for the highest tiers).

Third, advertising is a natural fit for the jobs Netflix does. Sure, customers enjoy watching shows without ads — and again, they can continue to pay for that — but filler TV, which Netflix also specializes in, is just as easily filled with ads.

Above all, though, is the fact that advertising is a great opportunity that aligns with Netflix’s business: while the company once won with a differentiated user experience worth paying for, today Netflix demands scarce attention because of its investment in unique content. That attention can be sold, and should be, particularly as it increases Netflix’s ability to invest in more unique content, and/or charge higher prices to its user base.

This, I will note, is an about-face for me; I’ve long been skeptical that Netflix would ever sell advertising, or that they should. The former may still be warranted, particularly in light of Netflix’s gaming initiative. This feels like solipsism: Netflix’s executives think a lot about their business model, so they are looking for growth opportunities that seem to leverage said business model; I’m not convinced, though, that customers appreciate or care about the differentiation that Netflix claims to be leveraging in gaming, whereas they would appreciate lower prices for streaming, and already have the expectation for ads on TV.

Meanwhile, subscriber growth has stalled, even as the advertising market has proven to be much larger than even Google or Facebook can cover. Moreover, the post-ATT world is freeing up more money for the sort of top-of-funnel advertising that would probably be the norm on a Netflix advertising service. In short, the opportunity is there, the product is right, and the business need is pressing in a way it wasn’t previously.

Of course this would be a lot of work, and a big shift in Netflix’s well-defined value proposition; Netflix, though, has made big shifts before: the entire reason why advertising is a possibility is because Netflix is a streamer, not a DVD mailer. In that view a new (additional) business model is just another rung on Netflix’s ladder.

I wrote a follow-up to this Article in this Daily Update.

An Interview with Nvidia CEO Jensen Huang about Manufacturing Intelligence

It took a few moments to realize what was striking about the opening video for Nvidia’s GTC conference: the complete absence of humans.

That the video ended with Jensen Huang, the founder and CEO of Nvidia, is the exception that accentuates the takeaway. On the one hand, the theme of Huang’s keynote was the idea of AI creating AI via machine learning; he called the idea “intelligence manufacturing”:

None of these capabilities were remotely possible a decade ago. Accelerated computing, at data center scale, and combined with machine learning, has sped up computing by a million-x. Accelerated computing has enabled revolutionary AI models like the transformer, and made self-supervised learning possible. AI has fundamentally changed what software can make, and how you make software. Companies are processing and refining their data, making AI software, becoming intelligence manufacturers. Their data centers are becoming AI factories. The first wave of AI learned perception and inference, like recognizing images, understanding speech, recommending a video, or an item to buy. The next wave of AI is robotics: AI planning actions. Digital robots, avatars, and physical robots will perceive, plan, and act, and just as AI frameworks like TensorFlow and PyTorch have become integral to AI software, Omniverse will be essential to making robotics software. Omniverse will enable the next wave of AI.

We will talk about the next million-x, and other dynamics shaping our industry, this GTC. Over the past decade, Nvidia-accelerated computing delivered a million-x speed-up in AI, and started the modern AI revolution. Now AI will revolutionize all industries. The CUDA libraries, the Nvidia SDKs, are at the heart of accelerated computing. With each new SDK, new science, new applications, and new industries can tap into the power of Nvidia computing. These SDKs tackle the immense complexity at the intersection of computing, algorithms, and science. The compound effect of Nvidia’s full-stack approach resulted in a million-x speed-up. Today, Nvidia accelerates millions of developers, and tens of thousands of companies and startups. GTC is for all of you.

The core idea behind machine learning is that computers, presented with massive amounts of data, can extract insights and ideas from that data that no human ever could; to put it another way, the development of not just insights but, going forward, software itself, is an emergent process. Nvidia’s role is making massively parallel computing platforms that do the calculations necessary for this emergent process far more quickly than was ever possible with general purpose computing platforms like those undergirding the PC or smartphone.

What is so striking about Nvidia generally and Huang in particular, though, is the extent to which this capability is the result of the precise opposite of an emergent process: Nvidia the company feels like a deliberate design, nearly 29 years in the making. The company started accelerating defined graphical functions, then invented the shader, which made it possible to program the hardware doing that acceleration. This new approach to processing, though, required new tools, so Nvidia invented them, and has been building on their fully integrated stack ever since.

The deliberateness of Nvidia’s vision is one of the core themes I explored in this interview with Huang recorded shortly after his GTC keynote. We also touch on Huang’s background, including immigrating to the United States as a child, Nvidia’s failed ARM acquisition, and more. One particularly striking takeaway for me came at the end of the interview, where Huang said:

Intelligence is the ability to recognize patterns, recognize relationships, reason about it and make a prediction or plan an action. That’s what intelligence is. It has nothing to do with general intelligence, intelligence is just solving problems. We now have the ability to write software, we now have the ability to partner with computers to write software, that can solve many types of intelligence, make many types of predictions at scales and at levels that no humans can.

For example, we know that there are a trillion things on the Internet and the number of things on the Internet is large and expanding incredibly fast, and yet we have this little tiny personal computer called a phone, how do we possibly figure out of the trillion things on the Internet what we want to see on our little tiny phone? Well, there needs to be a filter in between, what people call the personalized internet, but basically an AI, a recommender system. A recommender that figures out based on the nature of the content, the characteristics of the content, the features of the content, based on your explicit and implicit preferences, find a way through all of that to predict what you would like to see. I mean, that’s a miracle! That’s really quite a miracle to be able to do that at scale for everything from movies and books and music and news and videos and you name it, products and things like that. To be able to predict what Ben would want to see, predict what you would want to click on, predict what is useful to you. I’m talking about things that are consumer oriented stuff, but in the future it’ll be predict what is the best financial strategy for you, predict what is the best medical therapy for you, predict what is the best health regimen for you, what’s the best vacation plan for you. All of these things are going to be possible with AI.

As I note in the interview, this should ring a bell for Stratechery readers: what Huang is describing is the computing functionality that undergirds Aggregation Theory, wherein value in a world of abundance accrues to those entities geared towards discovery, providing the means of navigating a world that is fundamentally disconnected from the constraints of physical goods and geography. Nvidia’s role in this world is to provide the hardware capability for Aggregation, to be the Intel to Aggregators’ Windows. That, needless to say, is an attractive position to be in; like many such attractive positions, it is one that was built not in months or years, but in decades.
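For the technically inclined, the kind of recommender Huang describes can be sketched in a few lines. This is a toy content-based example in which every item name, feature vector, and weight is an illustrative assumption of mine, not anything from Nvidia or the interview: blend a user’s explicit (stated) and implicit (behavioral) preferences into a single taste vector, then rank content by similarity to it.

```python
from math import sqrt

# Hypothetical catalog: each piece of content is a vector of features
# (think genre or topic signals). Names and numbers are made up.
ITEMS = {
    "news":  [0.9, 0.1, 0.0],
    "movie": [0.1, 0.8, 0.3],
    "music": [0.0, 0.3, 0.9],
}

def _cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def recommend(explicit, implicit, alpha=0.7):
    """Blend explicit and implicit preference vectors into one taste
    vector, then rank every item by its similarity to that taste."""
    taste = [alpha * e + (1 - alpha) * i for e, i in zip(explicit, implicit)]
    return sorted(ITEMS, key=lambda name: _cosine(ITEMS[name], taste), reverse=True)

# A user who says they like topic 1, but whose clicks lean toward topic 3:
ranking = recommend(explicit=[1.0, 0.0, 0.0], implicit=[0.2, 0.1, 0.7])
```

Production recommenders are of course learned from data at vast scale, but the shape is the same: content features on one side, a modeled user on the other, and a similarity-driven ranking in between.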

Read the full interview with Huang here.

The Current Thing

One of the most amazing things about the Internet is how it provides a level playing field for everyone: this post that you are reading was written by a single person, and it is just as accessible as an article written by the New York Times, or a proclamation issued by the President of the United States.

It used to be that media organizations had a big advantage by virtue of owning printing presses and delivery trucks, or broadcast licenses; celebrities and politicians would have their proclamations carried across those same mediums by virtue of their popularity or power. The same advantages applied to other areas of the economy, like retail and consumer packaged goods: building physical stores was a big barrier to entry in the former, and having a large and popular set of products gave big companies access to those retail channels in the latter.

What was common to both examples was the importance of controlling physical space, but that control came with inherent limitations: a paper newspaper could not be delivered everywhere, and TV broadcasts were limited by the signal strength of broadcast towers. Stores had to be built, and packaged goods had to be stocked on shelves.

The Internet changes all of that: now articles and videos are simply digital bits, easily created and easily transmitted anywhere on the globe, effectively for free. Physical goods still need to be made, but they can be sold to anyone by anyone, and shelf space has been replaced by the commoditized cardboard box.

This first order reality, though, has had a multitude of second order effects. Newspapers, for example, were amongst the first online sites, and it seemed like a massive boon: now an article that was only accessible by those within a limited geographic area delineated by the reach of delivery trucks could be read by anyone in the world. The problem is that that same reach was available to everyone; back in 2014 I wrote in Economic Power in the Age of Abundance:1

One of the great paradoxes for newspapers today is that their financial prospects are inversely correlated to their addressable market. Even as advertising revenues have fallen off a cliff — adjusted for inflation, ad revenues are at the same level as the 1950s — newspapers are able to reach audiences not just in their hometowns but literally all over the world.

A drawing of The Internet has Created Unlimited Reach

The problem for publishers, though, is that the free distribution provided by the Internet is not an exclusive. It’s available to every other newspaper as well. Moreover, it’s also available to publishers of any type, even bloggers like myself.

A city view of Stratechery's readers in 2014

To be clear, this is absolutely a boon, particularly for readers, but also for any writer looking to have a broad impact. For your typical newspaper, though, the competitive environment is diametrically opposed to what they are used to: instead of there being a scarce amount of published material, there is an overwhelming abundance. More importantly, this shift in the competitive environment has fundamentally changed just who has economic power.

That article was one of the first articulations of the concepts undergirding Aggregation Theory, which is downstream from the shift from geographic-driven scarcity to Internet-driven abundance: now the most valuable companies in the world were those that helped users navigate abundance, whether that be via search (Google), contacts (Facebook), or retail (Amazon).

The Current Thing Meme

Most of my discussion of Aggregation Theory has been about economics and concepts like zero marginal costs; just as it doesn’t cost anything to publish, it doesn’t cost Google anything (on a marginal basis) to help every person in the world find the specific piece of content they are looking for. This, by extension, motivates publishers to work well with Google, motivates users to use Google more, and gives Google the best possible opportunity to show ads, attracting more and more advertising.

In other words, centralization is a second order effect of decentralization: when all constraints on content are removed, more power than ever accrues to the entity that is the preferred choice for navigating that content; moreover, that power compounds on itself in a virtuous feedback loop.
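That feedback loop can be made concrete with a toy preferential-attachment simulation; everything here, from the number of services to the choice rule, is an illustrative assumption of mine rather than a model from this Article. Each new user picks a service with probability proportional to its existing share (everyone defaults to the most convenient option), and even from a perfectly equal start, shares concentrate.

```python
import random

def simulate(n_services=10, n_users=10_000, seed=0):
    """Each arriving user joins a service with probability proportional
    to that service's current share; return the final shares, sorted."""
    rng = random.Random(seed)
    share = [1.0] * n_services  # every service starts on equal footing
    for _ in range(n_users):
        total = sum(share)
        r = rng.uniform(0, total)
        cum = 0.0
        for i, s in enumerate(share):
            cum += s
            if r <= cum:
                share[i] += 1  # the chosen service gets one more user
                break
    total = sum(share)
    return sorted(s / total for s in share)

shares = simulate()
```

This is the classic rich-get-richer mechanism (a Pólya urn): the leading service typically ends up with a share far beyond the one-tenth an even split would imply, even though no service is "better" in any intrinsic sense, which is the point about convenience-driven centralization.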

This dynamic, though, goes beyond economics; consider the meme that inspired the title of this Article:

This meme has, for rather obvious reasons, made a fair number of people upset, particularly to the extent it suggests that support for a country fighting for its existence in the face of a brutal invasion is somehow inauthentic. I think, though, that interpretation is too literal; after all, the meme can be extended in lots of different ways:

What I think is captured here is orthogonal to the actual issue at hand (in the case of Musk’s version, Ukraine); the entire point of the generic labeling (“The Current Thing”) is that there is a dynamic that exists independent of the issue being critiqued, and my contention in this Article is that said dynamic is Aggregation Theory for ideas.

Aggregating Ideas

Go back to the point about the explosion of content on the Internet: the first order implication is that there is an explosion of ideas; after all, anyone can publish anything. Presumably this means that there are far more categories of thought than ever before! And, if you dig deep enough into the Internet, this is true.

Most people, though, don’t dig that deep, just as they don’t dig that deep for content or contacts or commerce: it’s just far easier and more convenient to rely on Google or Facebook or Amazon. Why wouldn’t this same dynamic apply to ideas? Being informed about everything happening in the world is hard if not impossible: humans evolved to care intensely about what happened in their local environment; however, first mass media, and then the Internet, brought news from everywhere to our immediate attention.

Given that, it seems entirely reasonable — expected even — that we all outsource our intuition for what events matter, and what our position on those events should be, to the most convenient option, especially if that option has obvious moral valence. Police brutality against people of color is obviously bad; people dying from COVID is obviously bad; Russia invading Ukraine is obviously bad; why wouldn’t each of us snap into opposition to obviously bad things?

This dynamic is exactly what the meme highlights: sure, the Internet makes possible a wide range of viewpoints — you can absolutely find critics of Black Lives Matter, COVID policies, or pro-Ukraine policies — but the Internet, thanks to its lack of friction and instant feedback loops, also makes nearly every position but the dominant one untenable. If everyone believes one thing, the costs of believing something else increase dramatically, making the consensus opinion the only viable option; this is the same dynamic in which publishers become dependent on Google or Facebook, or retailers on Amazon, just because that is where money can be made.

Again, to be very clear, that does not mean the opinion is wrong; as I noted, I think the resonance of this meme is orthogonal to the rightness of the position it is critiquing, and is instead concerned with the sense that there is something unique about the depth of sentiment surrounding issues that don’t necessarily apply in any real-life way to the people feeling said sentiment.

Righteousness and Dissent

Here I think it is useful to go back to economics. The more that an entity becomes dependent on an Aggregator, the more perilous the economic outlook for said entity. If you depend on Google or Facebook for traffic, or Amazon for sales, the more liable you are to have your margin consumed by said entities. A truly sustainable business model depends on being able to connect to your customers on your own terms, not an Aggregator’s.

A similar critique can be made of ideas; I thought this tweet was very well-stated:

It is very counter-intuitive to see how “bad” ideas are in fact extremely valuable: not only do they highlight why the good ideas are better, but they also sometimes show that the “good” ideas are in fact wrong. Arguing that the earth was not the center of the universe was once a “bad” idea; it was also correct. At the same time, to think that the Catholic Church of 500 years ago represents the only instance in which the dominant mode of thinking clearly missed the mark seems exceptionally arrogant; we rightly believe that allowing room for dissidents was, in the past, a good thing. It seems clear to me that doing the same today is likely to prove more valuable than not.

Here is the problem: it turns out it was much easier to believe in the value of dissidents in a world of meaningful marginal costs for the propagation of ideas. Most people never encountered contrary opinions when spreading said opinions entailed publishing them on paper and spreading them in the physical world; on the Internet, on the other hand, bad ideas are only a search away. Moreover, the means by which to suppress those opinions are far more obvious: instead of having to shut down a printing press, one only needs to pressure those same centralized Aggregators that arose for economic reasons to suppress “wrong” speech.

The end result is a world where the ability for anyone to post any idea has, paradoxically, meant far greater mass adoption of popular ideas and far more effective suppression of “bad” ideas. That is invigorating when one feels the dominant idea is righteous; it seems reasonable to worry about the potential of said sense of righteousness overwhelming the consideration of whether particular courses of action are actually good or bad.

Moderation Frameworks

In 2019 I wrote an Article entitled A Framework for Moderation, which argued for a finely-tuned examination of the Internet stack as a driver of moderation decisions:

It makes sense to think about these positions of the stack very differently: the top of the stack is about broadcasting — reaching as many people as possible — and while you may have the right to say anything you want, there is no right to be heard. Internet service providers, though, are about access — having the opportunity to speak or hear in the first place. In other words, the further down the stack, the more legality should be the sole criteria for moderation; the further up the more discretion and even responsibility there should be for content:

A drawing of The Position In the Stack Matters for Moderation

Note the implications for Facebook and YouTube in particular: their moderation decisions should not be viewed in the context of free speech, but rather as discretionary decisions made by managers seeking to attract the broadest customer base; the appropriate regulatory response, if one is appropriate, should be to push for more competition so that those dissatisfied with Facebook or Google’s moderation policies can go elsewhere.

In this view the decision of Cogent and Lumen to cut off backbone capacity to Russia feels like a mistake. Both companies are the very definition of infrastructure, with no user-facing presence; it follows that they should not be making any decisions based on political considerations (with Carl von Clausewitz’s observation that “war is simply the continuation of political intercourse with the addition of other means” in mind).

And yet they have cut off Russia all the same, along with a whole host of Western companies. To be very clear, I get it: what Russia is doing to Ukraine is wrong, above and beyond the significant economic challenges in serving a country hit with the most comprehensive set of sanctions in history.

At the same time, I can’t help but worry about a world where every level of the Internet stack feels empowered to act based on political considerations, and it makes me think that my Framework for Moderation was wrong. In a world of idea aggregation the push to go along with the current thing is irresistible, making any sort of sober consideration of one’s position in the stack irrelevant. The only effective counter is a blanket policy of not censoring or cutting off service under any circumstance: it’s easier to appeal to consistency than it is to make a nuanced decision that runs counter to the current thing.

That’s the thing about aggregation: one can understand how it works, and yet be powerless to resist its incentives. It seems foolhardy to think that this might be true for economics and not true for ideas, even — especially! — if we are sure they are correct.

  1. The image in the excerpt is from 2014; the updated view of the last thirty days is broadly similar, but there has been a big relative increase in Washington DC, Los Angeles, India, and Singapore. 

Tech and War

While it has been only 11 days since Russia invaded Ukraine, it is already clear that the long-term impact on the tech industry is going to be substantial. The goal of this Article is to explore what those implications might be.

Let me start with some caveats:

  • First, while I presume it goes without saying, I condemn Russia’s invasion of Ukraine in the strongest possible terms.
  • Second, the situation is obviously extremely fluid. My goal is to write about impacts that seem likely to endure, but some issues, particularly those involving China, could shift considerably.
  • Third, the long-term is inherently difficult to predict. Nearly every major event that has happened over the last several years, from Donald Trump’s election, to COVID, to this invasion, was not only not anticipated by most people, but was in fact dismissed even after there were signs in place that it might occur. So take all of this with the appropriate grain of salt.

The most important thing to make clear about this Article, though, is that much of it is focused on capabilities, not intentions. In much of our daily life we rely on the good intentions of others, even if they have dangerous capabilities. One mundane example is traffic on a two-way street: oncoming cars have the capability of swerving into my lane and hitting me head-on; I trust that they do not intend to do so. There are a whole host of similar examples, for good reason: societies that trust each other’s intentions function much more smoothly and efficiently; no one wants every single street to be built with concrete dividers between traffic.

In an ideal world international relations would work the same way, and there is an argument that much of the prosperity of the last few decades has been driven by the sort of increased trust and interconnectedness that comes from assuming the good intentions of other countries — or at a minimum enlightened self-interest — leading to increased economic efficiency for everyone engaged in global trade. In this arena, though, the question of capabilities is never far from the surface: what can one country do to another, should the intentions of the first country change, and what must the second country do to ameliorate that risk? And here there is very much a tech angle.

Public Versus Private Sanctions

In response to the invasion Western governments unleashed an unprecedented set of sanctions on Russia; these sanctions were primarily financial in nature, and included:

  • Disconnecting sanctioned Russian banks from the SWIFT international payment system
  • Cutting off the Russian Central Bank from foreign currency reserves held in the West
  • Identifying and freezing the assets of sanctioned Russian individuals

The sanctions, which were announced last weekend, led to the crashing of the ruble and the ongoing closure of the Russian stock market, and are expected to wreak havoc on the Russian economy; now the U.S. and E.U. are discussing banning imports of Russian oil.

This Article is not about those public sanctions, by which I mean sanctions coming from governments (Noah Smith has a useful overview of their impact here); what is interesting to me is the extent to which these public sanctions have been accompanied by private sanctions by companies, including:

This is an incomplete list! The key thing to note, though, is that few if any of these actions were required by law; they were decisions made by individual companies.

This, though, is where the intentions versus capabilities distinction arises, in two different respects:

  • First, the public/private distinction that I just noted may not be so apparent to people outside of the U.S. or the West generally; one could certainly understand how other countries might interpret this collection of public and private sanctions as being different parts of a single whole. To that end, this collection of actions demonstrates the capability of effectively wiping an economy off of the map.
  • Second, to the extent that the public/private distinction is understood, it highlights the capability of private companies to impose sanctions, and their willingness to do so in pursuit of political goals — even if those political goals are to stop an unjust invasion and save lives.

I suspect that both of these interpretations matter and will have long-reaching effects, in part because they are not a new trend, but a continuation of an ongoing one.

Internet 3.0 and the Rise of Politics

Last January I wrote an Article entitled Internet 3.0 and the Beginning of (Tech) History that argued that technology broadly has passed through two eras: 1.0 was the technological era, and 2.0 was the economic era.

The technological era was defined by the creation of the technical building blocks and protocols that undergird the Internet; there were few economic incentives beyond building products that people might want to buy, in part because few thought there was any money to be made on the Internet. That changed during the 2000s, as it became increasingly clear that the Internet provided massive returns to scale in a way that benefited both Aggregators and their customers. I wrote:

Google was founded in 1998, in the middle of the dot-com bubble, but it was the company’s IPO in 2004 that, to my mind, marked the beginning of Internet 2.0. This period of the Internet was about the economics of zero friction; specifically, unlike the assumptions that undergird Internet 1.0, it turned out that the Internet does not disperse economic power but in fact centralizes it. This is what undergirds Aggregation Theory: when services compete without the constraints of geography or marginal costs, dominance is achieved by controlling demand, not supply, and winners take most.

Aggregators like Google and Facebook weren’t the only winners though; the smartphone market was so large that it could sustain a duopoly of two platforms with multi-sided networks of developers, users, and OEMs (in the case of Android; Apple was both OEM and platform provider for iOS). Meanwhile, public cloud providers could provide back-end servers for companies of all types, with scale economics that not only lowered costs and increased flexibility, but which also justified far more investments in R&D that were immediately deployable by said companies.

There is no economic reason to ever leave this era, which leads many to assume we never will; services that are centralized work better for more people more cheaply, leaving no obvious product vector on which non-centralized alternatives are better. The exception is politics, and the point of that Article was to argue that we were entering a new era: the political era.

Go back to the two points I raised above:

  • If a country, corporation, or individual assumes that the tech platforms of another country are acting in concert with their enemy, they are highly motivated to pursue alternatives to those tech platforms even if those platforms work better, are more popular, are cheaper, etc.
  • If a country, corporation, or individual assumes that tech platforms are themselves engaged in political action, they are highly motivated to pursue alternatives to those tech platforms even if those platforms work better, are more popular, are cheaper, etc.

Again, just to be crystal clear, these takeaways are true even if the intentions are pure and the actions just, because the question at hand is not about intentions but about capabilities. And while I get that it can be hard to appreciate that distinction in the case of a situation like Ukraine, it’s worth noting that similar takeaways could be drawn from the de-platforming controversies after January 6 and the attempts to control misinformation during COVID. If anything, the fact that recent history offers multiple object lessons in the willingness of platforms to act both in concert with governments and of their own volition emphasizes that, from a realist perspective, capabilities matter more than intentions, because the willingness to exercise those capabilities (to widely varying degrees, to be sure) has not been confined to a single case.

India and Sanctions

The two countries where these questions are likely to loom largest are China and India.

Start with the latter: India is widely considered the most important long-term growth market for a whole host of tech companies, thanks to its massive population that is only just now coming online, combined with a growing economy that, to the extent it can follow a similar path to China, promises more opportunity than anywhere else in the world. In the economic era it has made perfect sense for India to be a core market for Google, Facebook, Amazon, etc.

It was India, though, that raised some of the most strident objections to Twitter and Facebook’s decision to take down President Trump’s accounts after January 6, with several politicians pointing out that tech executives in San Francisco could do the same to them; in the case of the Ukraine invasion India is staying neutral, thanks in part to its significantly longer-term relationship with Russia, particularly from a military perspective. That makes it all the more likely that the aforementioned private sanctions are being interpreted in terms of capabilities, not intentions, clouding the long-term prospects of those tech companies counting on India for growth.

It’s important to note that this isn’t an abstract idea for India: the country’s nuclear program was started in response to India’s defeat in the 1962 Sino-Indian War, but the country’s first nuclear test in 1974 led to sanctions from the United States, as did far more extensive tests in 1998. The United States also sailed a fleet into the Bay of Bengal during a conflict with Pakistan in 1971, shortly after India signed a treaty with the USSR, and the fleet was there to oppose India, not to support it. This matters not because it excuses India’s neutrality in the current conflict, but to explain why these private sanctions from U.S. tech companies may have different interpretations and unintended consequences in a market they were counting on.

China is in a very different position, thanks to the long-run effects of the Great Firewall: U.S. consumer services companies obviously can’t sanction China, because China has already blocked them and built its own alternatives (one does wonder to what extent Moscow and perhaps even New Delhi look at the Great Firewall with jealousy). China’s problem — and potentially the West’s opportunity — lies with a far more fundamental piece of technology: semiconductors.

Semiconductors and China

China’s leading semiconductor foundry is the Semiconductor Manufacturing International Corporation — SMIC for short. While the majority of SMIC’s volume is on older 55nm and 65nm process nodes, the company has a sizable and growing business at the extremely popular 28nm node. The company has also recently started mass production of 14nm and has demonstrated the ability to build 7nm chips. Even so, the most cutting edge companies in China have long been used to buying their chips abroad, whether that be Intel chips for servers or contracting with TSMC for everything else.

The Trump administration took square aim at both vectors: in the case of the latter, all American chip companies and companies that relied on American technology — which is to say, all of them, including TSMC — were barred from selling to Huawei, effectively killing the company’s smartphone business and severely damaging its telecom business. SMIC, meanwhile, has been barred from acquiring ASML’s cutting-edge extreme ultraviolet (EUV) lithography machines, which are essential for building 7nm and below chips cost-effectively.

What is notable in terms of this conflict is that China has given every appearance of supporting Russia (although the country, like India, abstained from the United Nations motion to condemn Russia’s invasion). The big question in terms of Russian sanctions is just how far this support will go: working with Russia risks sanctions in the West, which is a much larger market for China; that is a particularly big deterrent for SMIC, which has an opportunity to undercut TSMC on price at trailing-edge nodes. Seizing that opportunity, though, means going along with sanctions on Russia; from Bloomberg:

Washington is expected to lean on major Chinese companies from Semiconductor Manufacturing International Corp. to Lenovo Group Ltd. to join U.S.-led sanctions against Russia, aiming to cripple the country’s ability to buy key technologies and components. China is Russia’s biggest supplier of electronics, accounting for a third of its semiconductor imports and more than half of its computers and smartphones. Beijing has opposed the increasingly severe measures that the U.S. has taken to restrict Russia’s trade and economy in response to its invasion of Ukraine, however U.S. officials expect tech suppliers such as SMIC to uphold the new rules and curtail trade of sensitive technology with American origin, especially as it relates to Russia’s defense sector.

Any items produced with certain U.S. inputs, including American software and designs, are subject to the ban, even if they are made overseas, a U.S. official told Bloomberg News on Monday. Companies that attempt to evade these new controls would face the prospect of themselves being cut off from U.S.-origin technology and corporate executives risk going to jail for violations…Beijing has made self-sufficiency in the semiconductor sector a national priority, but for now its tech companies still rely heavily on U.S. designs and technology. SMIC continues to use chipmaking equipment from American vendors including Applied Materials Inc. even after it got blacklisted by the U.S. in 2020. If the company fails to comply with U.S. sanctions, it could face tightening of restrictions that may make it more difficult or impossible to secure licenses for repair parts and new equipment.

China, though, may be tempted by the prospect of resource-rich Russia being dependent on Beijing for a functioning economy, as well as the longer-term project of building economic and technical systems that are independent of the West. That could entail pushing SMIC to send chips to Russia in defiance of Western sanctions, with the thought being that short-term pain is worth the long-term gain. The risks of this approach are huge though: even if SMIC can’t get EUV, it can still get pretty far with deep ultraviolet (DUV), but the Biden administration is already pushing to cut China off from any more of those machines as well:

The chip maker, SMIC, a year ago had been added to the entity list, which restricts companies from exporting U.S.-origin technology without a license. That, however, has proven ineffective in keeping many manufacturing tools used to make semiconductors out of SMIC’s hands, the people familiar said. Under the current designation, SMIC is restricted from buying U.S. tools “uniquely required” to build chips with 10-nanometer circuits and smaller, which is close to the leading edge of semiconductor manufacturing technology. Since many manufacturing tools can be used to produce chips at a variety of sizes, exporters took the view that they were still able to sell tools that could be adjusted to produce the smaller chips and the restriction “became effectively language that means nothing,” one of the people said…

The Defense Department, with the support of officials at the State and Energy Departments, as well as the National Security Council, wants to change the wording to restrict SMIC’s access to items “capable of” producing semiconductors with 14-nanometer circuits and smaller, the people familiar said, broadening the list of items SMIC won’t be able to get.

This is context for what may be the single biggest strategic question confronting the Biden administration:

  • The U.S. has already damaged Huawei and constrained SMIC’s long-term prospects on the cutting edge, and there is a credible threat that the U.S. could further damage SMIC’s current capabilities.
  • The U.S. doesn’t simply want SMIC to not sell to Russia, it also wants broader support from China for sanctions against Russia, particularly since China almost certainly has more influence over Russian President Vladimir Putin than any other country.

The strategic choice is this:

  • The U.S. could relax sanctions on SMIC and address China’s broader semiconductor needs in exchange for cooperation on Russia, at the medium-term risk of increasing China’s technological capability (albeit with the upside of helping U.S. firms that undergird much of the semiconductor industry).
  • Alternatively, the U.S. could simply pressure China to not sell to Russia, or even ratchet up pressure on SMIC, at the short-term risk of China taking a hit to its technological industry in exchange for supporting Russia and building an alternative to the U.S.-dominated world order.

This is not an easy question, particularly in the heat of the current moment. China, not Russia, is the U.S.’s long-term strategic rival; more than that, though, is another long-term issue that very much has a semiconductor component: Taiwan.

Taiwan and Deterrence

While China has framed its refusal to condemn Russia mostly in terms of NATO expansion, it’s not hard to draw the obvious parallel to Taiwan: given that Beijing sees Taiwan as a part of China that it has the right to take back, by military means if necessary, it’s understandable why China might view Russian rhetoric about Ukraine’s historical ties to Russia with sympathy. Indeed, it’s possible that China is going to support Russia no matter what. Moreover, this parallel raises questions about the wisdom of enhancing China’s technological capabilities with American-derived technology, given the high likelihood that said enhancement will go towards increased military capabilities.

At the same time, cutting China off from TSMC has brought its own risks; I wrote in the context of Huawei in 2020:

Should the United States and China ever actually go to war, it would likely be because of Taiwan. In this, TSMC specifically, and the Taiwan manufacturing base generally, are a significant deterrent: both China and the U.S. need access to the best chip maker in the world, along with a host of other high-precision pieces of the global electronics supply chain. That means that a hot war, which would almost certainly result in some amount of destruction to these capabilities, would, as the Wall Street Journal notes, be devastating:

Taiwan, China and South Korea “represent a triad of dependency for the entire U.S. digital economy,” said an influential 2019 Pentagon report on national-security considerations regarding the supply chain for microelectronics. “Taiwan, in particular, represents a single point-of-failure for most of the United States’ largest, most important technology companies,” said the report, written by Rick Switzer, who served as a senior foreign-policy adviser to an Air Force unit. He concluded that the U.S. needs to strengthen its industrial policies to address the situation.

It’s the same for China, as I noted in that Daily Update about Huawei; one of the risks of cutting China off from TSMC is that the deterrent value of TSMC’s operations is diminished. At the same time, though, Taiwan — and South Korea, for that matter, where Samsung’s most advanced fabs are located — are a whole lot closer to China than they are to the U.S., and the location of land masses is not changing, at least on a time scale that is significant to this discussion!

This point applies to semiconductors broadly: as long as China needs U.S. technology or TSMC manufacturing, it is heavily incentivized to not take action against Taiwan; when and if China develops its own technology, whether now or many years from now, that deterrence is no longer a factor. In other words, the short-term and longer-term are in opposition to the medium-term:

  • The short-term upside of relaxing sanctions against China in semiconductors in exchange for supporting sanctions against Russia is a potentially earlier end to the conflict in Ukraine.
  • The medium-term risk of giving China access to Western technology is that China develops more advanced products that could be used by its military.
  • The long-term risk of cutting China off is the development of an alternative to the West that is completely unconstrained by sanctions, public or private.

There is no obvious answer, and it's worth noting that the historical pattern — i.e. the Cold War — is a complete separation of trade and technology. That is one possible path, one we may fall into by default. It's worth remembering, though, that dividers in the street are no way to live, and while most U.S. tech companies have flexed their capabilities, the most impressive tech of all is attractive enough and irreplaceable enough that it could still create dependencies that lead to squabbles but not another war.

Tech Power

The most powerful takeaway from the past ten days, though, at least from a tech perspective, is related to the nuclear question. To return to India, from the Nuclear Weapons Archive:

A most telling (and often quoted) exchange between [India Prime Minister Inder Kumar] Gujral and Pres. Clinton occurred on 22 September 1997 at the occasion of the U.N. General Assembly session in New York. Gujral later recounted telling Clinton that an old Indian saying holds that Indians have a third eye. “I told President Clinton that when my third eye looks at the door of the Security Council chamber it sees a little sign that says ‘only those with economic power or nuclear weapons allowed.’ I said to him, ‘it is very difficult to achieve economic wealth’.”

The implication is that nuclear capability was a more attainable route to great nation status than was economic dominance; what, then, to make of an industry that can, via private sanction, destroy economic wealth above and beyond government action? The capability wielded by the tech industry is incredible; it is easy to cheer when it is being used in the service of intentions that are so clearly good. It’s equally easy to understand how much fear that capability may generate in the long run.

Shopify’s Evolution

Tobi Lütke, who famously started Shopify when he realized that the software he built to run his snowboard shop was a much bigger opportunity than the shop itself, was reminiscing on Twitter about how cheap it used to be to run digital advertising:

This isn’t just a fun story: it’s a critical insight into the conditions that enabled Shopify to become the company that it is today; understanding how those conditions have changed gives insight into what Shopify needs to become.

Shopify’s Evolution

Back in 2004 a lot of the pieces that were necessary to run an e-commerce site existed, albeit in rudimentary and hard-to-use forms. One could, with a bit of trouble, open a merchant account and accept credit cards; 3PL warehouses could hold inventory; UPS and FedEx could deliver your goods. And, of course, you could run really cheap ads on Google. What was missing was software to tie all of those pieces together, which is exactly what Lütke built for Snowdevil, his snowboard shop, and in 2006 opened up to other merchants; the software was called Shopify:

Shopify started as the center of multiple third party services

This idea of Shopify as the hub for an e-commerce shop is one that has persisted to this day, but over the ensuing years Shopify has added on platform components as well; a platform looks like this (from The Bill Gates Line):

A drawing of Platform Businesses Attract Customers by Third Parties

The first platform was the Shopify App Store, launched in 2009, where developers could access the Shopify API and create new plugins to deliver specific functionality that merchants might need. For example, if you want to offer a product on a subscription basis you might install Recharge Subscriptions; if you want help managing your shipments you might install ShipStation. Shopify itself delivers additional functionality through the Shopify App Store, like its Facebook Channel plugin, which lets you easily sync your products to Facebook and manage your advertising.

A year later Shopify launched the Shopify Theme Store, where merchants could buy a professional theme to make their site their own; now the hub looked like this:

Shopify added two platforms for apps and themes

At the same time Shopify also vertically integrated to incorporate features it once left to partners; the most important of these integrations was Shopify Payments, which launched in 2013 and was rebranded as Shop Pay in 2020. Yes, you could still use a clunky merchant account, but it was far easier to simply use the built-in Shop Pay functionality. Shop Pay was also critical in that it was the first part of the Shopify stack to build a consumer brand: users presented with a myriad of payment options know that if they click the purple Shop Pay button all of their payment and delivery information will be pre-populated, making it possible to buy with just one additional tap.

Shopify integrated into payments with Shop Pay

Even with this toehold in the consumer space, though, Shopify has remained a company that is focused first-and-foremost on its merchants and its mission to “help people achieve independence by making it easier to start, run, and grow a business.” That independence doesn’t just mean one-person entrepreneurs either: good-size brands like Gymshark, Rebecca Minkoff, KKW Beauty, Kylie Cosmetics, and FIGS leverage Shopify to build brands that are independent of Amazon in particular.

Apple and Facebook

In 2020’s Apple and Facebook I explained the symbiotic relationship between the two companies when it came to the App Store:

Facebook’s early stumbles on mobile are well-documented: the company bet on web-based apps that just didn’t work very well; the company completely rewrote its iOS app even as it was going public, which meant it had a stagnating app at the exact time mobile was exploding, threatening the desktop advertising product and platform that were the basis of the company’s S-1.

The re-write turned out to be not just a company-saving move — the native mobile app had the exact same user-facing features as the web-centric one, with the rather important detail that it actually worked — but in fact an industry-transformational one: one of the first new products enabled by the company’s new app was app install ads. From TechCrunch in 2012:

Facebook is making a big bet on the app economy, and wants to be the top source of discovery outside of the app stores. The mobile app install ads let developers buy tiles that promote their apps in the Facebook mobile news feed. When tapped, these instantly open the Apple App Store or Google Play market where users can download apps.

The ads are working already. One client TinyCo saw a 50% higher click through rate and higher conversion rates compared to other install channels. Facebook’s ads also brought in more engaged users. Ads tech startup Nanigans clients attained 8-10X more reach than traditional mobile ad buys when it purchased Facebook mobile app install ads. AdParlor racked up a consistent 1-2% click through rate.

Facebook’s App Install product quickly became the most important channel for acquiring users, particularly for games that monetized with Apple’s in-app purchase API: the combination of Facebook data with developer’s sophisticated understanding of expected value per app install led to an explosion in App Store revenue. 

It’s worth underlining this point: the App Store would not be nearly the juggernaut it is today, nor would Apple’s “Services Narrative” be so compelling, were it not for the work that Facebook put in to build out the best customer acquisition engine in the industry (much to the company’s financial benefit, to be clear); Apple and Facebook’s relationship looked like this:

Apple and Facebook's symbiotic relationship

Facebook was by far the best and most efficient way to acquire new users, while Apple was able to sit back and harvest 30% of the revenue earned from those new users. Yes, some number of users came in via the App Store, but the primary discovery mechanism in the App Store is search, which relies on a user knowing what they want; Facebook showed users apps they never knew existed.

Facebook and Shopify

Facebook plays a similar role for e-commerce, particularly the independent sellers that exist on Shopify:

Shopify's dependence on Facebook

What makes Facebook’s approach so effective is that its advertising is a platform in its own right. Just as every app on a smartphone or every piece of software on a PC shares the same resources and API, every advertiser on Facebook, from app maker to e-commerce seller and everyone in-between, uses the same set of APIs that Facebook provides. What makes this so effective, though, is that the shared resources are not computing power but data, especially conversion data; I explained how this worked in 2020’s Privacy Labels and Lookalike Audiences, but briefly:

  • Facebook shows a user an ad, and records the unique identifier provided by their phone (IDFA, Identifier for Advertisers, on iOS; GAID, Google Advertising Identifier, on Android).
  • A user downloads an app, or makes an e-commerce purchase; Facebook’s SDK, which is embedded in the app or e-commerce site, again records the IDFA or notes the referral code that led the user to the site, and charges the advertiser for a successful conversion.
  • The details around this conversion, whether it be which creative was used in the ad, what was purchased, how much was spent, etc., are added to the profile of the user who saw the ad.
  • Advertisers take out new ads on Facebook asking the company to find users who are similar to users who have purchased from them before (Facebook knows this from past purchases seen by its SDK, or because an advertiser uploads a list of past customers).
  • Facebook repeats this process, further refining its understanding of customers, apps, and e-commerce offerings in the process, including the esoteric ways (only discoverable by machine learning) in which they relate to each other.

The critical thing to understand about this process is that no one app or e-commerce seller stands alone; everyone has collectively deputized Facebook to hold all of the pertinent user data and to figure out how all of the pieces fit together in a way that lets each individual app maker or e-commerce retailer acquire new customers for a price less than what that customer is worth to them in lifetime value.
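The pooling dynamic is easier to see in code; here is a minimal sketch, with entirely hypothetical names and data, and with simple set intersection standing in for the machine learning a real ad network would use:

```python
from collections import defaultdict

class AdNetwork:
    """Toy model of a pooled conversion store shared by all advertisers."""

    def __init__(self):
        # device_id -> set of (advertiser, product) purchases seen by the SDK
        self.profiles = defaultdict(set)

    def record_conversion(self, device_id, advertiser, product):
        # SDK callback: attribute a purchase to the user's shared profile
        self.profiles[device_id].add((advertiser, product))

    def lookalike_audience(self, seed_ids, min_overlap=1):
        # Find users whose purchase history overlaps the seed customers'
        seed_interests = set()
        for d in seed_ids:
            seed_interests |= self.profiles[d]
        return [
            d for d, interests in self.profiles.items()
            if d not in seed_ids and len(interests & seed_interests) >= min_overlap
        ]

network = AdNetwork()
network.record_conversion("idfa-1", "shop-a", "sneakers")
network.record_conversion("idfa-1", "shop-b", "socks")
network.record_conversion("idfa-2", "shop-b", "socks")
network.record_conversion("idfa-3", "shop-c", "books")

# shop-a seeds with its known customer; the network finds similar users
audience = network.lookalike_audience({"idfa-1"})  # → ["idfa-2"]
```

The point the sketch illustrates is the collectivization: `shop-a` discovers `idfa-2` only because `shop-b`’s conversions live in the same shared store, which is exactly the cross-advertiser pooling that no individual merchant could replicate alone.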

This, by extension, means that Shopify doesn’t stand alone either: the company is even more dependent on Facebook to drive e-commerce than Apple ever was to drive app installs.1 That’s why it’s not a surprise that Facebook’s recent plunge in value was preceded (and then followed) by Shopify’s own:

Shopify and Facebook's faltering stocks

Part of Shopify’s decline is likely related to the fact that it is another pandemic stock: the sort of growth the company saw while customers were stuck at home with nothing to do other than shop online couldn’t go on forever. Moreover, the company announced big increases in spending (more on this in a moment). However, the major headwind the company shares with Facebook is Apple.

ATT’s Impact

I have been writing regularly about Apple’s App Tracking Transparency (ATT) initiative since it was announced two years ago, so I won’t belabor the point; the key thing to understand is that ATT broke the Facebook advertising collective. On the app install side this was done by technical means: Apple made the IDFA an opt-in behind a scary warning about tracking, which most users declined.

The e-commerce side is more interesting: while Apple can’t technically limit what Facebook collects via its Pixel on a retailer’s website, ATT bans said broad collection all the same. To that end Facebook originally announced plans to not show the ATT prompt and abandon the IDFA; a few months later the company did an about-face announcing that it would indeed show the ATT prompt, and also limit what it collected in its in-app browser via the Facebook Pixel.

It’s unclear what happened to change Facebook’s mind; had they continued on their original path then their app advertising business would have suffered from a loss of data, but the e-commerce advertising would have been relatively unaffected (the loss of IDFA-related app install data would have decreased the amount of data available for that machine learning-driven targeting). What seems likely — and to be clear, this is pure speculation — is that Apple threatened to kick Facebook and its apps out of the App Store if it didn’t abide by ATT’s policies, even the parts that were technically unenforceable.

Regardless, the net impact is that it was suddenly impossible for Facebook to tie together all of the various pieces of that virtuous cycle I described above deterministically. Ads were hard to tie to conversions, conversions were hard to tie to users, which meant that users and advertisers were hard to tie to each other, resulting in less relevant ads for the former that cost more money for the latter.2 The monetary impact is massive: Facebook forecast a $10 billion hit to 2022 revenue, and as noted, its market value has been cut by a third.
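A back-of-the-envelope illustration, with made-up numbers, of why less deterministic targeting costs advertisers more: if degraded targeting halves the conversion rate, the cost per conversion doubles even before rising demand for impressions pushes auction prices up further.

```python
# Hypothetical numbers only: the effect of degraded targeting on
# an advertiser's effective cost per conversion.
CPM = 10.0  # assumed price per 1,000 impressions

def cost_per_conversion(cpm, conversion_rate):
    # impressions needed per conversion = 1 / conversion_rate
    return (cpm / 1000) / conversion_rate

before_att = cost_per_conversion(CPM, 0.02)  # precise targeting
after_att = cost_per_conversion(CPM, 0.01)   # targeting degraded by ATT
# before_att: $0.50 per conversion; after_att: $1.00 per conversion
```

And because every advertiser now needs more impressions per conversion, aggregate demand for inventory rises as well, pushing CPMs higher still.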

Note, however, that ATT didn’t hurt all advertisers: companies like Google and Amazon are doing great; I explained why in Digital Advertising in 2022:

Amazon’s advertising business has three big advantages relative to Facebook’s.

  1. Search advertising is the best and most profitable form of advertising. This goes back to the point I made above: the more certain you are that you are showing advertising to a receptive customer, the more advertisers are willing to bid for that ad slot, and text in a search box will always be more accurate than the best targeting.
  2. Amazon faces no data restrictions. That noted, Amazon also has data on its users, and it is free to collect as much of it as it likes, and leverage it however it wishes when it comes to selling ads. This is because all of Amazon’s data collection, ad targeting, and conversion happen on the same platform — Amazon.com, or the Amazon app. ATT only restricts third party data sharing, which means it doesn’t affect Amazon at all.
  3. Amazon benefits from ATT spillover. That is not to say that ATT didn’t have an effect on Amazon: I noted above that Snap’s business did better than expected in part because its business wasn’t dominated by direct advertising to the extent that Facebook’s was, and that more advertising money flowed into other types of advertising. This almost certainly made a difference for Amazon as well: one of the most affected areas of Facebook advertising was e-commerce; if you are an e-commerce seller whose Shopify store powered by Facebook ads was suddenly under-performing thanks to ATT, then the natural response is to shift products and advertising spend to Amazon.

All of these advantages will persist: search advertising will always be effective, and Amazon can always leverage data, and while some degree of ATT-related pullback was likely due to both uncertainty and the fact that Facebook hasn’t built back its advertising stack for a post-ATT world, the fact that said future stack will never be quite as good as the old one means that there is more e-commerce share to be grabbed than there might have been otherwise.

This last point is not set in stone: Shopify is already making major investments to compete with Amazon; it has the opportunity to do even more.

The Shopify Fulfillment Network

In 2019 I wrote about Shopify and its orthogonal competition with Amazon in Shopify and the Power of Platforms:

The difference is that Shopify is a platform: instead of interfacing with customers directly, 820,000 3rd-party merchants sit on top of Shopify and are responsible for acquiring all of those customers on their own.

A drawing of The Shopify Platform


This is how Shopify can both in the long run be the biggest competitor to Amazon even as it is a company that Amazon can’t compete with: Amazon is pursuing customers and bringing suppliers and merchants onto its platform on its own terms; Shopify is giving merchants an opportunity to differentiate themselves while bearing no risk if they fail.

The context of that Article was Shopify’s announcement of yet another platform initiative: the Shopify Fulfillment Network.

From the company’s blog:

Customers want their online purchases fast, with free shipping. It’s now expected, thanks to the recent standard set by the largest companies in the world. Working with third-party logistics companies can be tedious. And finding a partner that won’t obscure your customer data or hide your brand with packaging is a challenge.

This is why we’re building Shopify Fulfillment Network—a geographically dispersed network of fulfillment centers with smart inventory-allocation technology. We use machine learning to predict the best places to store and ship your products, so they can get to your customers as fast as possible.

We’ve negotiated low rates with a growing network of warehouse and logistic providers, and then passed on those savings to you. We support multiple channels, custom packaging and branding, and returns and exchanges. And it’s all managed in Shopify.

The first paragraph explains why the Shopify Fulfillment Network was a necessary step for Shopify: Amazon may commoditize suppliers, hiding their brand from website to box, but if its offering is truly superior, suppliers don’t have much choice. That was increasingly the case with regards to fulfillment, particularly for the small-scale sellers that are important to Shopify not necessarily for short-term revenue generation but for long-run upside. Amazon was simply easier for merchants and more reliable for customers.

Notice, though, that Shopify is not doing everything on their own: there is an entire world of third-party logistics companies (known as “3PLs”) that offer warehousing and shipping services. What Shopify is doing is what platforms do best: act as an interface between two modularized pieces of a value chain.

A drawing of Shopify as an Interface

On one side are all of Shopify’s hundreds of thousands of merchants: interfacing with all of them on an individual basis is not scalable for those 3PL companies; now, though, they only need to interface with Shopify. The same benefit applies in the opposite direction: merchants don’t have the means to negotiate with multiple 3PLs such that their inventory is optimally placed to offer fast and inexpensive delivery to customers; worse, the small-scale sellers I discussed above often can’t even get an audience with these logistics companies. Now, though, Shopify customers need only interface with Shopify.

Over the intervening three years, though, Shopify has moved away from this vision: Shopify Fulfillment Network (SFN) is not going to be a platform like the Shopify App Store but rather an integrated part of Shopify’s core offering like Shop Pay. President Harley Finkelstein explained on the company’s recent earnings call:

We are consolidating our network to larger facilities. We’ll operate more of them ourselves, and we’ll unify the warehouse management software that we’ve been building and testing over the past 18 months. We expect that these changes will enable us to deliver packages in 2 days or less to more than 90% of the U.S. population, while minimizing the inventory investment for SFN merchants. While Amy will go into more detail as to what our evolved vision looks like from a financial perspective, I can tell you, from a merchant’s perspective, Shopify Fulfillment can be life-changing for their businesses. We hear from merchants that fulfillment is only something you think about when it isn’t working well, and they are thrilled that they now never have to think about it. The stockouts and pre-orders that took the shine off strong demand for new releases are, I think, largely a thing of the past with Shopify Fulfillment. And just recently, I heard from a merchant who tells me that he sleeps even better because Shopify Fulfillment just works. Comments like these fuel our ambition, and we’ll continue to explore opportunities to give merchants more visibility and control over their most important assets.

Building and managing warehouses itself is a major commitment: Shopify is going to spend a billion dollars in capital expenditures in 2023 and 2024 building out the Shopify Fulfillment Network, and it seems safe to assume that that spending will only increase over time. I think, though, that this makes sense: Shopify learned from Shop Pay that it can decrease complexity for merchants and deliver a better experience for customers by doing essential functionality itself, and those same needs exist in logistics as well, particularly given Amazon’s massive investment in its own integrated operations.

Keep in mind, though, logistics isn’t the only advantage that Amazon has.

Shopify Advertising Services

Remember the fundamental challenge that ATT presents to Facebook: the company can no longer pool the conversion and targeting data of all of its advertisers such that the sum of effectiveness vastly exceeds what any one of those advertisers could accomplish on their own. The response of the gaming market has been a wave of consolidation to better pool and leverage data. Amazon, as noted, is well ahead of the curve here: because the company’s third-party merchant ecosystem lives within the Amazon.com website and app, Amazon has full knowledge of conversions and the ability to target consumers without any interference from Apple.

Shopify is halfway there: a massive number of e-commerce retailers are on Shopify, but today Shopify mostly treats them all as individual entities, having left the pooling of data for advertising to Facebook. Now that Facebook is handicapped by Apple, Shopify should step up to provide substitute functionality and build its own advertising network.

This advertising network, though, would look a bit different than what you might expect. First, Shopify doesn’t have any major customer-facing properties on which to display ads; it could potentially build some cross-shop advertising, but that doesn’t seem ideal for either merchants or customers. The reality is that Shopify merchants still need to find customers on other sites like social networks; the challenge is doing so without knowing who is actually seeing the ads.

Here Shopify’s ability to act on behalf of the entire Shopify network provides an opening: instead of being an advertising seller at scale, like Facebook, Shopify the company would become an advertising buyer at scale. Armed with its perfect knowledge of conversions it could run probabilistically-targeted campaigns that are much more precise than anyone else’s, using every possible parameter available to advertisers on Facebook or anywhere else, and over time build sophisticated cohorts that map to certain types of products and purchase patterns. No single Shopify merchant could do this on their own with a similar level of sophistication, which Facebook indirectly admitted on its recent earnings call; COO Sheryl Sandberg said:

On measurement, there were 2 key areas within measurement, which were impacted as a result of Apple’s iOS changes. And I talked about this on the call last quarter as you referenced. The first is the underreporting gap. And what’s happening here is that advertisers worry they’re not getting the ROI they’re actually getting. On this part, we’ve made real progress on that underreporting gap since last quarter, and we believe we’ll continue to make more progress in the years ahead. I do want to caution that it’s easier to address this with large campaigns and harder with small campaigns, which means that part will take longer, and it also means that Apple’s changes continue to hurt small businesses more.

Sandberg’s comment was primarily about the sheer amount of data produced by larger campaigns, but the same principle applies to the sheer number of campaigns as well: any one advertiser is, thanks to ATT, limited in the data points they can get from Facebook, making it more difficult to run multiple campaigns to better understand what works and what doesn’t. However, Shopify could in theory run campaigns for each of its individual merchants and collate the data on the back-end; this is the inverse of Amazon’s advantage of being one website, because in this case Shopify benefits from having a hand in such a huge number of them.
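A rough sketch of why collating matters statistically: the noise in a conversion-rate estimate shrinks with the square root of the number of impressions observed, so pooling a thousand small campaigns makes the measurement dramatically more precise. The numbers below are hypothetical:

```python
import math

def conversion_se(p, n):
    # Standard error of a conversion-rate estimate from n impressions,
    # treating each impression as an independent Bernoulli trial
    return math.sqrt(p * (1 - p) / n)

p = 0.02                               # assumed true conversion rate
single = conversion_se(p, 500)         # one merchant's small campaign
pooled = conversion_se(p, 500 * 1000)  # 1,000 merchants' campaigns collated
# the pooled estimate is sqrt(1000) ≈ 32x less noisy than any single campaign
```

This is the sense in which Shopify’s position is the inverse of Amazon’s: Amazon gets this precision from one giant property, while Shopify could get it by aggregating across a huge number of small ones.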

I suspect the response of many close Shopify watchers is that such an initiative is not in the company’s DNA; that, though, is why the evolution of the Shopify Fulfillment Network is so notable: building and operating warehouses wasn’t really in the company’s DNA either, but it is the right thing to do if the company is going to continue to power The Anti-Amazon Alliance. The same principle applies to this theoretical ad network, particularly given that it is Amazon who is a big winner from Apple’s changes.

What is interesting is that it appears that Shopify is already flirting with this idea:3 Last summer the company quietly introduced the concept of Shopify Audiences during an invite-only presentation; Tanay Jaipuria fleshed out the concept on Substack:

Shopify Audiences is a data exchange network, which uses aggregated conversion data (i.e., data around which people bought a merchant’s product) across all opted-in merchants on Shopify to generate a custom audience for a given merchant’s product. This audience is essentially a set of people Shopify believes are likely to be interested in your product given the data around all transactions that have taken place across all opted-in merchants on Shopify.

Merchants can then use these audiences when advertising on FB, Snap, Twitter, and other ad platforms either as custom audiences or lookalike audiences which should result in higher-performing ads and lower cost per conversion to acquire customers/sales.

This is a big step, and is very valuable for Facebook advertising in particular; however, it doesn’t address the Apple issue, because ATT bans custom and lookalike audiences from external data sources. That means that this data can’t be used in any campaign targeting iOS users. That is why Shopify Audiences is only the first step: to make this work Shopify Audiences needs to become Shopify Advertising Services, where Shopify doesn’t just collect the targets but buys the ads to target them as well, in a way no one else in the world can.

The Conservation of Attractive Profits

Shopify Advertising Services, Shopify Fulfillment Network, and Shop Pay would, without question, result in a very different looking company than the one I sketched out at the beginning of this Article:

Shopify with integrated payments, fulfillment, and advertising

This sort of monolith, though, makes sense not only because of the specifics of what is happening in the market, but from a theoretical perspective as well. I wrote another article about Shopify a year ago called Market-Making on the Internet, highlighting how major consumer-facing platforms were increasingly incorporating Shopify into their sites and apps:

What makes the Shopify platform so fascinating is that over time more and more of the e-commerce it enables happens somewhere other than a Shopify website. Shopify, for example, can help you sell on Amazon, and in what will be an increasingly important channel, Facebook Shops. In the latter case Facebook and Shopify are partnering to create a fully-integrated market: Facebook’s userbase and advertising tools on one side, and Shopify’s e-commerce management and seller base on the other. The broader takeaway, though, is that Shopify’s real value proposition is working across markets, not creating an exclusive one.

Facebook’s motivations are clear: conversions in Facebook Shops can be tracked in a way conversions on websites no longer can be, which will result in more effective advertising; it is to Shopify’s credit that they are seen as such an important and credible partner that Facebook is going as far as incorporating Shop Pay as well. Even so, this is very much an example of Facebook integrating and Shopify, as it must, modularizing to accommodate them. That is a recipe for long run commoditization.

That is why it is a good thing that Shopify is integrating elsewhere in its business: profits in a value chain follow integration, and the more that Shopify is intertwined with the biggest players the more it needs to find other ways to differentiate. Shop Pay is already a massive win, and fulfillment has the chance to be another one; advertising shouldn’t be far behind.

I wrote a follow-up to this Article in this Daily Update.

  1. While you can make purchases from brands in the Shop Pay app, it’s an insignificant channel that isn’t at all comparable to Apple’s own direct route to customers via the App Store. 

  2. Facebook’s advertising is sold on an auction basis, and advertisers often bid against desired outcomes, like conversions; the more difficult it is to target users the more users there are who need to be shown an ad, which increases demand for inventory, increasing prices. 

  3. Thanks to Eric Seufert for tipping me off to this. 

Digital Advertising in 2022

Six years ago tomorrow, in The Reality of Missing Out, I wrote that the digital advertising market was settled, and Google and Facebook won:

I have been arguing for a while that in the aggregate the tech sector is fine, and the state of advertising-based services is a perfect example of what I mean: taken as a basket the six companies in this article (Google, Facebook, Yahoo, Twitter, LinkedIn, and Yelp) are up 19% over the last year, even though the latter four companies are down a collective 53%; the fact that Google and Facebook are up a combined 31% more than makes up for it.

This makes sense: while advertising as a whole is a zero-sum game, there is a secular shift from not just print but also radio and TV to digital, which is why this basket of digital advertising companies is up. Digital, though, is subject to the effects of Aggregation Theory, a key component of which is winner-take-all dynamics, and Facebook and Google are indeed taking it all.

The Article was prescient for a time; Yahoo has been passed around for peanuts, LinkedIn was bought by Microsoft a few months later, and while Yelp’s and Twitter’s stocks have more-or-less doubled since then,1 those gains pale in comparison to those of Google and Facebook:

Google, Facebook, Twitter, and Yelp's market gains over the last six years

That chart, though, only runs through last Wednesday; here is the new chart post-Facebook earnings:

Google, Facebook, Twitter, and Yelp's market gains over the last six years, including Facebook's recent earnings

It turns out that, for now anyways, buying $TWTR on the day I wrote that article would have provided a better return than $FB.

Direct Response and the Collapse of the Funnel

In that Article I painted an idealized picture of the traditional marketing funnel and how Google and Facebook’s advertising products mapped onto it:

What Sandberg is detailing here is really quite extraordinary: Facebook helped Shop Direct move customers through every part of the funnel: from awareness through Instagram video ads to consideration through retargeting and finally to conversion with dynamic product ads on Facebook (and, in the not too distant future, a direct customer relationship to build loyalty via Messenger).

A drawing of Digital Advertising 2.0

Google is promising something similar: awareness via properties like YouTube, consideration via DoubleClick, and conversion via AdSense. Just as importantly, both companies are promising that leveraging their respective platforms will provide benefits on both sides of the ROI equation: the return will be better given the two companies’ superior targeting capabilities and ability to measure conversion, and the investment will be smaller because you can manage your entire funnel from a single ad-buying interface.

Herein lies the first thing that I got wrong: the traditional marketing funnel made sense in a world where different parts of the customer journey happened in different places — literally. You might see an advertisement on TV, then a coupon in the newspaper, and finally the product on an end cap in a store. Every one of those exposures was a discrete advertising event that culminated in the customer picking up the product in question and putting it in their (literal) shopping cart.

On the Internet, though, that journey is increasingly compressed into a single impression: you see an ad on Instagram, you click on it to find out more, you log in with Shop Pay, and then you wonder what you were thinking when it shows up at your door a few days later. The loop for apps is even tighter: you see an ad, click an ‘Install’ button, and are playing a level just seconds later. Sure, there are things like re-targeting or list building, but by-and-large Internet advertising, particularly when it comes to Facebook, is almost all direct response.

This can make for an exceptionally resilient business model: because the return-on-investment (ROI) of direct response advertising is measurable to a far greater degree than that of traditional advertising, advertisers can spend right up to the value they place on a particular customer or transaction; Facebook, of course, is willing to help them do that as easily as possible, squeezing out margin in the process. Moreover, because these ads are sold at auction, the company is insulated from events like COVID or boycotts; I explained in 2020’s Apple and Facebook:

This explains why the news about large CPG companies boycotting Facebook is, from a financial perspective, simply not a big deal. Unilever’s $11.8 million in U.S. ad spend, to take one example, is replaced with the same automated efficiency that Facebook’s timeline ensures you never run out of content. Moreover, while Facebook loses some top-line revenue — in an auction-based system, less demand corresponds to lower prices — the companies that are the most likely to take advantage of those lower prices are those that would not exist without Facebook, like the direct-to-consumer companies trying to steal customers from massive conglomerates like Unilever.

In this way Facebook has a degree of anti-fragility that even Google lacks: so much of its business comes from the long tail of Internet-native companies that are built around Facebook from first principles, that any disruption to traditional advertisers — like the coronavirus crisis or the current boycotts — actually serves to strengthen the Facebook ecosystem at the expense of the TV-centric ecosystem of which these CPG companies are a part.
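The auction dynamic in that excerpt — less demand corresponds to lower prices, but the inventory still clears — can be sketched with a toy second-price auction. All the bid values below are made up purely for illustration:

```python
import random

def clearing_price(bids):
    """Second-price auction: the winner pays the second-highest bid."""
    top_two = sorted(bids, reverse=True)[:2]
    return top_two[-1]

random.seed(42)

# Fifty hypothetical advertiser bids (dollars per thousand impressions).
bids = [round(random.uniform(2.0, 10.0), 2) for _ in range(50)]

full_demand = clearing_price(bids)

# A boycott removes some bidders; the auction still clears, just at a
# (potentially) lower price, and the cheaper inventory is bought up by
# the advertisers who remain.
after_boycott = clearing_price(random.sample(bids, 35))

print(full_demand, after_boycott)
assert after_boycott <= full_demand  # fewer bidders never raise the price
```

The second-price mechanism is a simplification of real ad auctions, but it captures the key property: removing bidders can only lower the clearing price, never raise it, which is exactly why a boycott subsidizes the advertisers who stay.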

The problem for Meta is in the title of that article: Apple. The latter’s App Tracking Transparency (ATT) initiative severed the connection amongst e-commerce sellers, app developers, and Facebook by which Facebook achieved that ROI, and while the company is better positioned than anyone else to build a replacement, it is important to note that the impairment entailed in probabilistically measuring ad effectiveness instead of deterministically is a permanent one.

This isn’t just a Facebook problem: Snap said on its earnings call:

Our advertising partners who prefer to leverage lower-funnel goals such as in-app purchases, have been most impacted by [ATT]. We are seeing these advertisers migrate to mid-funnel goals where they have greater visibility such as install or click. Advertisers who optimize via web-based goal-based bids or GBBs have been less impacted, given that many of them have adopted the Snap pixel.

Snap’s direct response business is not nearly as good as Facebook’s, and is a much smaller business both in absolute terms and as a share of the company’s revenue; that left a lot more cushion to absorb ATT, both because Snap had less performance business to lose, and because the company’s brand business could help pick up the slack. This, paradoxically, led many investors to overfit Facebook’s disappointing forecast to Snap’s outlook; the reality is that advertising dollars will find a way to be spent, and the alternatives to direct response were more impactful to Snap’s bottom line than they are to Facebook’s, in part because the latter was so dominant in direct response until now.

The Amazon Advertising IPO

What made the Facebook model tick was the way in which the company could convert conversion tracking to targeting: because Facebook knew a lot about someone who saw an ad and then converted, they could easily find other people who were similar — lookalike audiences — and show them similar ads, continually optimizing their targeting and increasing their understanding along the way.

Google Search, though, has a built-in advantage: Google doesn’t have to figure out what you are interested in because you do the company the favor of telling it by searching for it. The odds that you want a hotel in San Francisco are rather high if you search for “San Francisco hotels”; it’s the same thing with life insurance or car mechanics or e-commerce.

Google is not the only search engine that monetizes e-commerce effectively. Back in 2015 I described the breakout of Amazon Web Services’ financials as The AWS IPO:

This is why Amazon’s latest earnings were such a big deal: for the first time the company broke out AWS into its own line item, revealing not just its revenue (which could be teased out previously) but also its profitability. And, to many people’s surprise, and despite all the price cuts, AWS is very profitable: $265 million in profit on $1.57 billion in sales last quarter alone, for an impressive (for Amazon!) 17% net margin.

Those numbers pale in comparison to what I guess we might call the Amazon Advertising IPO, given that the company broke out its advertising for the first time this quarter, revealing $9.7 billion in revenue, a 32% increase year-over-year (Amazon did not break out the unit’s profitability). While that is still a fraction of Google’s $61.2 billion last quarter, or Facebook’s $32.6 billion, it is a larger fraction than you might expect, and several multiples of Snap’s $1.3 billion in revenue. Indeed, given the fact that Amazon is closer in revenue to Facebook than Facebook is to Google it seems fair to characterize the advertising market as dominated not by a big two but a big three.

Amazon’s advertising business has three big advantages relative to Facebook’s.

  1. Search advertising is the best and most profitable form of advertising. This goes back to the point I made above: the more certain you are that you are showing advertising to a receptive customer, the more advertisers are willing to bid for that ad slot, and text in a search box will always be more accurate than the best targeting.

  2. Amazon faces no data restrictions. Amazon has extensive data on its users, and it is free to collect as much of it as it likes, and leverage it however it wishes when it comes to selling ads. This is because all of Amazon’s data collection, ad targeting, and conversion happen on the same platform — Amazon.com, or the Amazon app. ATT only restricts third-party data sharing, which means it doesn’t affect Amazon at all.

  3. Amazon benefits from ATT spillover. That is not to say that ATT didn’t have an effect on Amazon: I noted above that Snap’s business did better than expected in part because its business wasn’t dominated by direct response advertising to the extent that Facebook’s was, and that more advertising money flowed into other types of advertising. This almost certainly made a difference for Amazon as well: one of the most affected areas of Facebook advertising was e-commerce; if you are an e-commerce seller whose Shopify store powered by Facebook ads was suddenly under-performing thanks to ATT, then the natural response is to shift products and advertising spend to Amazon.

All of these advantages will persist: search advertising will always be effective, and Amazon can always leverage data, and while some degree of ATT-related pullback was likely due to both uncertainty and the fact that Facebook hasn’t built back its advertising stack for a post-ATT world, the fact that said future stack will never be quite as good as the old one means that there is more e-commerce share to be grabbed than there might have been otherwise.

Google’s Dominance

Of course you could just as easily make an argument that when it comes to digital advertising there is Google and everyone else. Google clearly faces competition from Amazon for e-commerce search advertising — the European Commission’s Google Shopping case is only surpassed by the FTC’s Facebook lawsuit when it comes to overly narrow market definitions that ignore reality — but is dominant in terms of nearly every other vertical. Moreover, that dominance is shored up by the same factors favoring Amazon, at least in part.

The first one is obvious: search advertising works great, and Google is the best at it.

The second one, about data collection, is more interesting, particularly in the context of ATT. Facebook CFO Dave Wehner groused on the company’s recent earnings call:

We believe the impact of iOS overall as a headwind on our business in 2022 is on the order of $10 billion, so it’s a pretty significant headwind for our business. And we’re seeing that impact in a number of verticals. E-commerce was an area where we saw a meaningful slowdown in growth in Q4. And similarly, we’ve seen other areas like gaming be challenged. But on e-commerce, it’s quite notable that Google called out, seeing strength in that very same vertical. And so given that we know that e-commerce is one of the most impacted verticals from iOS restrictions, it makes sense that those restrictions are probably part of the explanation for the difference between what they were seeing and what we were seeing.

And if you look at it, we believe those restrictions from Apple are designed in a way that carves out browsers from the tracking prompts Apple requires for apps. And so what that means is that search ads could have access to far more third-party data for measurement and optimization purposes than app-based ad platforms like ours. So when it comes to using data, that it’s not really apples-to-apples for us. And as a result, we believe Google’s search ads business could have benefited relative to services like ours that face a different set of restrictions from Apple. And given that Apple continues to take billions of dollars a year from Google Search ads, the incentive clearly exists for this policy discrepancy to continue.

Apple, it should be noted, has always treated the browser as a carve-out from its App Store restrictions (not that it has any choice: in contrast to the App Store, Apple doesn’t have any points of leverage over the open web), so it is fair to dismiss Wehner’s conspiratorial musings about the iPhone maker’s motivations.

At the same time, the broader observation is a smart one: Google, thanks to the combination of being the default search engine on Safari and having a business built on the web, basically has first-party privileges on the iPhone when it comes to data. It can show ads to iPhone users on the default browser and track how those ads perform on third-party websites to a much greater extent than an app like Facebook directing users to the exact same third-party websites can.

In terms of ATT, it is notable that the only part of Google’s business that fell short of Wall Street expectations was YouTube; I suspect it is not a coincidence that YouTube has a significant app-install business of its own, and ATT’s restrictions on what those installed apps can report back to Google may have hurt business a bit. At the same time, the same dynamics that drove advertising to other parts of Snap’s business and to Amazon advertising likely benefited Google as well, including Android.

Facebook Risk

There is no question that Facebook has been significantly impaired, but the company is by no means doomed, in large part because while search is very effective at finding what you want, there remains the need to make you aware of what you didn’t know existed. This is what Facebook excels at more than any other platform: by knowing who you are and what you have liked or purchased in the past, Facebook can place ads for products or apps you have never heard of in the Feed, in Stories, or, going forward, in Reels.

This, in my opinion, is actually a far more important form of advertising than search ads: yes, there are scenarios where a firm can surface something that fits exactly what you are searching for, but oftentimes search ads feel like a rake on organic results that would have given you what you were looking for anyways. Facebook-style display advertising, on the other hand, is the foundation upon which an entirely new host of Internet-only businesses are built. These niche-focused companies are only possible when the entire world is your market, but they would founder without a way to find the customers who are looking for exactly what they have to offer; Facebook ads solve that problem.

That discovery mechanism, though, doesn’t just depend on data; it also depends on attention. This is where the TikTok challenge looms large: Apple and ATT may have had the largest financial impact on Facebook, but TikTok and the loss of attention are the more existential risk.

Illustrating the Ad Market

Still, Facebook’s forecast, disappointing as it was to investors, was for $27-29 billion in revenue this quarter; the company is still a major player in an advertising market dominated by the three companies mentioned in this article, with one looming dark horse. To illustrate the market — and with the caveat that this is a massive oversimplification of what is a large and varied opportunity — consider a two-by-two defined by apps and commerce (both physical and digital) on one axis, and search and display on the other:

A two-by-two graph with search and display on one axis, and apps and commerce on the other

This is what that market looked like back in 2016:

The 2x2 graph in 2016, when Google and Facebook were dominant

This is what the market looks like in 2022:

The 2x2 graph in 2022, with challenges from Amazon and Apple

First off, note that this illustration doesn’t include a huge part of Google’s market, which is basically search for anything other than e-commerce. It also doesn’t include the still substantial market for brand advertising. Direct response advertising, though, is the truly Internet-native advertising form, and while Google and Facebook are important, note the two new entrants who have substantial advantages:


Amazon has the best fulfillment and logistics operation in e-commerce,2 which it uses to not only drive its own first-party retail but also third-party merchant services. Indeed, this is another way to think about how Amazon is insulated from ATT: it’s not that the company doesn’t have a multitude of third-party merchants on its platform, it is that by taking on the role of an Aggregator instead of a platform it gets to fold all of those third party merchants into its app and website, beyond the limitations enforceable by Apple. Then it effectively gives those third party merchants no choice but to buy ads if they want to be noticed by customers.


Apple launched its App Store advertising business in the fall of 2016, starting with the most obvious place: search. Apple hasn’t disclosed how much it makes in advertising, but there are analyst estimates of $5 billion annually. Not all of this is search — Apple has since added inventory in the App Store’s “Suggested” section as well as owned-and-operated apps like Apple News — but most of it is; Apple is confined to the top right corner…for now.

One of the biggest questions about the advertising landscape going forward is if Apple is going to move down to the “Apps + Discovery” quadrant that remains Facebook’s purview. If the company did they would have an unbeatable advantage: remember, Apple has made clear through its App Store policies and testimony in the Epic case that it views apps on the App Store as first party for Apple (this is how the company justifies its anti-steering provisions, likening links to websites to putting up signs in its own store for another, even though the signs in question are in the app and not the App Store). It follows, then, that Apple would see no inconsistency in denying Facebook the ability to have knowledge about installation and conversions derived from a Facebook ad, even as Apple has perfect knowledge of those installations and conversions from its own ads.

This isn’t a hypothetical! Apple’s Advertising & Privacy page states:

We may use information such as the following to assign you to segments:

  • Account Information: Your name, address, age, gender, and devices registered to your Apple ID account. Information such as your first name in your Apple ID registration page or salutation in your Apple ID account may be used to derive your gender. You can update your account information on the Apple ID website.
  • Downloads, Purchases & Subscriptions: The music, movies, books, TV shows, and apps you download, as well as any in-app purchases and subscriptions. We don’t allow targeting based on downloads of a specific app or purchases within a specific app (including subscriptions) from the App Store, unless the targeting is done by that app’s developer.
  • Apple News and Stocks: The topics and categories of the stories you read and the publications you follow, subscribe to, or enable notifications from.
  • Advertising: Your interactions with ads delivered by Apple’s advertising platform.

When selecting which ad to display from multiple ads for which you are eligible, we may use some of the above-mentioned information, as well as your App Store searches and browsing activity, to determine which ad is likely to be most relevant to you. App Store browsing activity includes the content and apps you tap and view while browsing the App Store. This information is aggregated across users so that it does not identify you. We may also use local, on-device processing to select which ad to display, using information stored on your device, such as the apps you frequently open.

As you can see, Apple does not currently allow developers to target downloads or purchases from within a specific app the developer does not own,3 but that doesn’t mean Apple cannot; again, the company has made clear it sees every app on the iPhone — especially their purchases — as Apple data, and this document makes very clear that Apple only sees data collection as problematic when it involves third parties. To that end, Apple could set up an auction-based advertising network that monetized on a per-install basis and run those ads within an Apple-controlled network that is available to third-party apps. It would basically be a better version of Facebook — well, in theory, Apple has admittedly never been really good at this sort of thing — but since only Apple sees the data (just as only Facebook sees the data from third-party apps), Apple gets to pat itself on the back all of the way to the bank.

This would, needless to say, be a breathtaking example of anti-competitive behavior; kneecapping your competitor via platform control and then taking over their business is what one would think antitrust law would be designed to stop. But then you look up and Apple has gotten away with its App Store policies for years, and Facebook is getting sued for limiting competition even as it faces an existential threat from TikTok, and who knows, maybe it would work.

I hinted at one of the objections to this happening above: Apple has tried to do advertising before, and failed miserably. As any Apple aficionado will tell you, ads aren’t in their nature, products are. But then again, had you told those same aficionados that Apple would be facing developer ire, antitrust lawsuits, and regulatory obstacles all over the world because of its insistence that it is owed 15-30% of all digital content consumed on the iPhone, they probably would have said that was impossible too. What is clear is that the $10 billion in revenue that Facebook won’t see this year will go somewhere, and Apple’s Services Narrative has never felt like a bigger opportunity.

There is a broader takeaway to this discussion; I wrote in the conclusion of 2020’s The End of the Beginning:

In other words, today’s cloud and mobile companies — Amazon, Microsoft, Apple, and Google — may very well be the GM, Ford, and Chrysler of the 21st century. The beginning era of technology, where new challengers were started every year, has come to an end; however, that does not mean the impact of technology is somehow diminished: it in fact means the impact is only getting started.

There was one company conspicuous in its absence, and that was Facebook. Real power in technology comes from rooting the digital in something physical: for Amazon that is its fulfillment centers and logistics on the e-commerce side, and its data centers on the cloud side. For Microsoft it is its data centers and its global sales organization and multi-year relationships with basically every enterprise on earth. For Apple it is the iPhone, and for Google it is Android and its mutually beneficial relationship with Apple (the latter is less secure than Android, but that is why Google is paying an estimated $15 billion annually — and growing — to keep its position). Facebook benefited tremendously from being just an app, but the freedom of movement that entailed meant taking a dependency on iOS and Android, and Apple has exploited that dependency in part, if not yet in full.

This, more than anything, is the way to understand the Meta bet, and why it matters so much to CEO Mark Zuckerberg. Investors may want the company to focus on what it is best at; Zuckerberg wants to build a company that is truly independent of anyone.

I wrote a follow-up to this Article in this Daily Update.

  1. Twitter did have a large run-up a year ago that has since disappeared 

  2. ex-China, anyways 

  3. This is why I have been intrigued by AppLovin’s approach

Gaming the Smiling Curve

Another week, another gaming acquisition. First Take-Two acquired Zynga, then Microsoft acquired Activision-Blizzard, and now Sony just announced the acquisition of Bungie.1 Each of these acquisitions is interesting in its own right, but taken as a set they paint a picture of industry evolution that extends far beyond gaming.

Take-Two and Mobile Consolidation

The straightforward explanation for Take-Two’s acquisition of Zynga is the fact that mobile captures more than 50% of gaming industry revenue and is growing much faster (7% last year) than PC and console gaming (the gaming industry grew 1.4% as a whole); that is a problem for Take-Two given that nearly all of the company’s revenue comes from PC and console series like Grand Theft Auto, NBA 2K, Red Dead, Borderlands, and more.

Zynga, meanwhile, was among the least prepared of the major mobile gaming companies for the changes wrought by Apple’s App Tracking Transparency (ATT) policy, which was introduced with iOS 14 and rolled out over the first half of 2021. In the pre-ATT world everyone from e-commerce sellers to app developers could effectively offload the collection and analysis of conversion data and subsequent targeting of advertising to Facebook, to the benefit of everyone involved: individual developers and retailers did not need to bear the risk or expense of collecting and analyzing data, and could instead collectively outsource that job to the Facebook data factory, which had the benefit of making Facebook advertising that much more effective, not only to the benefit of Facebook’s bottom line but also to that of those that relied on its advertising platform.

ATT, meanwhile, didn’t ban data collection or analysis or targeting or any of the other aspects of advertising that many of its supporters object to; what it targeted was doing so collectively. That means that the policy has been a huge boon for fully integrated advertisers (i.e. advertisers that collect data, target, and show advertisements) like Google and Amazon. In this world the natural response has been consolidation; compare Zynga’s stock price over the last year to a company like AppLovin which was ahead of the curve in buying up multiple parts of the ad stack and combining them with its own titles to maximize the value of first party data:

Zynga versus AppLovin stock performance over the last year

AppLovin is down a bit with the general market drawdown, but it’s not an accident that the company increased in price last fall while Zynga plummeted (Zynga’s stock is up from its depths because of the not-yet-closed acquisition): one of the keys to Zynga’s turnaround was buying small independent studios and letting them stay independent; that’s no longer a viable approach in a post-ATT world, and Take-Two will have to centralize Zynga and only then leverage Zynga’s expertise to bring its valuable IP to mobile platforms in a much more complete way than it has previously.

Microsoft and Xbox Game Pass

Take-Two wasn’t the only company taking advantage of a company with a plummeting stock price; Microsoft did the same with Activision Blizzard:

Activision's stock price over the last year

Activision Blizzard does own King Digital, which still provides over $2 billion in revenue from Candy Crush and its various spin-offs, but in this case the stock price decline was primarily due to major issues regarding Activision Blizzard’s internal culture, including a lawsuit by the state of California. Microsoft, though, was well positioned to take advantage of Activision Blizzard’s troubles thanks to Xbox Game Pass.

Whenever a major platform acquires a developer on that platform, the first question users ask is if the platform owner will make the developer’s content exclusive. It’s an obvious question — why else would the platform owner buy it, given that they can still collect revenue from the developer, both directly via platform fees and indirectly via licensing fees? — but the answer is not always straightforward, thanks to the nature of software.

Software, including games, entails a massive investment in upfront development costs. You have to build (or license and adapt) a game engine, write and build out the story, draw and develop the assets, etc. All of this work is both expensive and only needs to be done once; it follows, then, that it is in the software developer’s economic interest to make the game available as widely as possible: every additional copy of a game has zero marginal cost, which means that every additional copy sold provides nothing but leverage on those fixed costs, and, once they are covered, pure profit (that noted, there are significant costs associated with supporting multiple platforms — I have seen estimates of 25-40% in extra costs, depending on the game — so going exclusive is not an entirely deadweight cost).

This makes the math around developer acquisitions a bit tricky for platform acquirers: to buy a gaming studio in order to make its games exclusive to the platform entails destroying a significant part of the game studio’s economic value; after all, you are acquiring a property like Call of Duty based on the revenue it grosses from PC, Xbox, and PlayStation — cutting off the latter means you overpaid. This is why it is not a surprise that Microsoft has already committed to keeping Activision Blizzard’s and ZeniMax’s most popular cross-platform games on PlayStation.

What makes Microsoft’s acquisition spree particularly compelling, though, is that Microsoft is trying to create a new business model for gaming: for $15/month you can play all of the games Microsoft owns, and those of any third-party developer who wishes to join up, on any platform that supports them. This includes not only the Xbox console and Windows PCs,2 but also the nascent Xbox streaming service, which makes console-level games available on mobile and PC and, soon enough, smart TVs or a potential Xbox streaming stick. This is a business model that not only makes sense given the evolution of technology towards cloud-centric services, but is also in-line with Microsoft’s core competency and fundamental nature.

More importantly, at least in the context of this Article, is the freedom of movement this gives Microsoft when it comes to acquisitions: all of ZeniMax’s titles, and many of Activision Blizzard’s titles in the future, are still available on PlayStation and Steam as individual purchases; if that is how you want to pay then Microsoft will accept your money, and avoid the loss of cutting you off. All of those games, though, are also available on Xbox Game Pass, because Microsoft is betting that many gamers will realize it’s a pretty good deal, even as it aligns with the company’s corporate goal of creating lifelong customers paying via subscription.

Sony and Exclusives

Major deals like Sony’s acquisition of Bungie, the once-makers of Halo (while owned by Microsoft) and current developers of Destiny (for which they negotiated their freedom), obviously don’t come together in two weeks. It’s tempting, though, to view it as defensive: if Microsoft were to withdraw a property like the afore-mentioned Call of Duty, well, Sony can always take away Destiny.

The truth, though, is that this is simply the most visible of a long line of acquisitions, going back two decades to Sony’s 2001 acquisition of Naughty Dog, of studios that make console content worth switching platforms for. Naughty Dog has made Crash Bandicoot, Uncharted, and The Last of Us; Incognito made the Twisted Metal series; Guerrilla Games made the Killzone and Horizon series; Sucker Punch made Sly Cooper, Infamous, and Ghost of Tsushima; Insomniac made Ratchet and Clank and Spider-Man; Bluepoint Games made Shadow of the Colossus and Demon’s Souls; all were PlayStation exclusives, credited with Sony’s dominance over the last two console generations.

Sony, rather than chasing Microsoft, appears set for a more Nintendo-like trajectory: customers will buy their consoles because they have exclusive games that you can’t get anywhere else; unlike Nintendo, Sony consoles are on the cutting edge technologically, which means that the remaining 3rd-party publishers like EA will continue to support them. Sure, Microsoft has a good subscription, but Sony has games you can’t get anywhere else (and, rumors suggest, a new subscription service of its own, although it may not have the best titles available immediately, which makes sense given Sony’s strategy).

That is why I expect Bungie to continue to support Destiny across multiple platforms (PlayStation, Xbox, and PC), while new content beyond the upcoming Witch Queen expansion is probably going to come out on PlayStation first; whatever title lies beyond that, meanwhile, has a good chance of being PlayStation only (if you haven’t spent the money on supporting multiple platforms, it is an easier choice to be an exclusive).

Update: Bungie made clear in an FAQ that future Destiny content would be available on day one on all platforms, which makes sense given that Destiny supports cross-play. Moreover, the same FAQ states that any IP beyond Destiny will also be cross-platform; this doesn’t detract from the general point about Sony and exclusives, but it certainly suggests the point doesn’t apply in this case. This was an error on my part and I apologize.

Gaming and the Smiling Curve

The Smiling Curve, as I first explained in this 2014 Article, was a concept created by Acer founder Stan Shih to explain where the profits were in technological manufacturing; from Wikipedia:

A smiling curve is an illustration of value-adding potentials of different components of the value chain in an IT-related manufacturing industry…According to Shih’s observation, in the personal computer industry, both ends of the value chain command higher values added to the product than the middle part of the value chain. If this phenomenon is presented in a graph with a Y-axis for value-added and an X-axis for value chain (stage of production), the resulting curve appears like a “smile”.

I argued at the time that this framework was applicable to the publishing industry:

When people follow a link on Facebook (or Google or Twitter or even in an email), the page view that results is not generated because the viewer has any particular affinity for the publication that is hosting the link, and it is uncertain at best whether or not their affinity will increase once they’ve read the article. If anything, the reader is likely to ascribe any positive feelings to the author, perhaps taking a peek at their archives or Twitter feed.

Over time, as this cycle repeats itself and as people grow increasingly accustomed to getting most of their “news” from Facebook (or Google or Twitter), value moves to the ends, just like it did in the IT manufacturing industry or smartphone industry:

The Publishing Smiling Curve

I think this framework is also the best way to think about all of these acquisitions. To generalize the concept, the top right of the curve are companies that have a direct connection with customers, including the Aggregators; the top left are highly differentiated content makers:

The Smiling Curve in gaming
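The shape of the curve is easy to make concrete with a few lines of code; the stage names and value figures below are invented purely for illustration, not drawn from Shih or the Article:

```python
# Toy sketch of the Smiling Curve: value-added is highest at both ends
# of the value chain and lowest in the middle. All names and numbers
# here are invented for illustration only.
stages = [
    "differentiated content",    # top left of the smile
    "licensing",
    "undifferentiated middle",   # bottom of the smile
    "distribution",
    "customer relationship",     # top right (the Aggregators)
]

def value_added(i, n):
    """A simple U-shape: squared distance from the middle of the chain."""
    mid = (n - 1) / 2
    return (i - mid) ** 2

values = [value_added(i, len(stages)) for i in range(len(stages))]
print(values)  # [4.0, 1.0, 0.0, 1.0, 4.0]
```

Plotted with value-added on the Y-axis and stage of production on the X-axis, these points trace the “smile” Shih described.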

When it comes to mobile gaming, the dominant Aggregators on the top right side of the curve are Apple and Google and their respective App Stores; the most cynical interpretation of ATT is that Facebook was superseding both to become the most important way in which people discovered apps, and Apple, thanks to its OS-level control, cut them off at the knees, pushing Facebook down to the middle. The response from content makers, then, has been to consolidate and increase the leverage that comes from differentiated content.

Xbox Game Pass, meanwhile, is an attempt to build a position as an Aggregator; the initiative will be successful to the extent that gamers play games because they are in Game Pass, and increasingly shun games that have to be purchased individually (incentivizing holdouts to join Microsoft’s subscription). Microsoft is kick-starting this effort by buying its own differentiated content and overlaying its long-term incentives over any individual game studio’s incentive to maximize short-term revenue by selling a game individually.

Sony is pursuing a similar strategy, but with a different business model: whereas Microsoft is increasingly device-agnostic (of course it helps that they sell both Xbox consoles and Windows), Sony is doubling down on the integration of hardware and software. Their best content is designed to not only make money in its own right but to also persuade customers to buy PlayStation consoles; the more PlayStation consoles there are the more attractive the platform is to 3rd-party developers.

What is much less viable is anything in the middle. The original PlayStation was almost completely dependent on 3rd-party games, relying on technical superiority to build its user base and attract developers; that approach reached its limit with the relative disappointment that was the PlayStation 3, and Sony has been focused on exclusives ever since. Content developers like Zynga, meanwhile, can’t depend on companies in the middle either: Apple’s rules have ensured that anyone who is not an Aggregator has to figure out how to make money on their own. There is still a market for 3rd-party developers on consoles and PCs — Steam is a major Aggregator on the latter, challenged by not just Microsoft but also Epic, all of whom are competing for the best developers — but it is increasingly important that content be highly differentiated and costs tightly contained.

The Upside of Acquisitions

There are a lot of seemingly scary implications in this analysis, including concepts like exclusives, lock-in, and the sense that the big are getting bigger. I think there is a strong argument, though, that the overall impact on consumers is on balance a positive one. Gaming, to a much greater extent than many other industries, is a zero-sum game: time spent playing one title is time not spent playing another one. Moreover, the total cost of ownership for any particular gaming platform, relative to the time spent playing games, is a very favorable one. With regard to the latter point, the price of having access to everything is not an overwhelming one; with regard to the former, the incentives to make a game that is truly exceptional, and thus truly differentiated, are higher than ever.

Phil Spencer, CEO of Microsoft Gaming, made an argument along these lines when I challenged him in a Stratechery interview about a potential loss of competition entailed in the company’s Activision Blizzard acquisition:

Phil Spencer: Ah. I mean, that’s maybe where we’ll differ in opinion. Some of this just comes down to the teams that become part of our team and our cultural journey with them that starts long ago. This will sound a little bit like a kind of gaming person, but I’ll say the thing that I have found that drives the teams internally to our organization is they want to do things they’ve never been able to do before. They want to reach more players of their creations than they’ve ever been able to create before. And I might argue the opposite, that the churn of “I need another holiday release next year” and then the year after and then the year after can be more stifle on creativity than the freedom that we’re able to give that says, “It’s not about one business model that works for us, it’s actually about multiple business models. It’s not about one screen that people will consume your game on. You pick the screen that’s right for you. And the input, if you want keyboard or mouse, you want touch, you want controller, you pick. You pick the subject matter that you want to work on.”

And frankly, when I look at the portfolio of games that we’ve been shipped over the last two or three years and some of the subject matter that our creators have decided to tackle, not always in the thinnest definition of what’s marketable at that time, I think those innovations and that kind of risk taking comes from having amazing teams that are thinking about what’s possible or even what’s not possible and how our tools and distribution can help them in creating things that they’ve never been able to create before. That’s what I feel.

Today, what I did after we announced this morning is I got to sit down with our studio leaders. The amount of energy they had for learning from other teams, because creators can get isolated because they’re so focused on the thing they’re doing right now. Now we get to sit there at the broadest level and have discussions about what people are thinking about, whether they’re challenging each other with sharing the learning that they have, what they aspire to go do. The energy in the room was awesome, it was a virtual room in a Teams call, but it was still awesome. I would say that that freedom to innovate, to try new things, because it’s not just down to one business model or one screen or even one device that somebody might buy is the thing that I found is most liberating for the teams here.

Of course that’s the answer I would expect.

PS: (laughing)

I think that one of the early questions about this deal is basically — the big company doesn’t think competition is particularly useful or valuable. “We want to give people freedom to explore.”

PS: Well, let me hit on that one. Sorry, I didn’t mean to interrupt, but let me hit on competition really quick, because I see competition as a little bit different. There’s a ton of competition in the games business. If I rewind 30 years ago, the video game business was dictated by who had shelf space at Egghead because there was such a constriction on distribution and funding and marketing of games that the portfolio of games I could choose from was so limited. I love the fact that when I look at the top 10 games that are being played, how many come from traditional places versus how many now are coming from creators that didn’t even exist 10 years ago. I love that there’s that creative turnover or just the diversity in where great games come from.

And then when I think about the platform side, the largest platforms for playing games are mobile devices, and distribution on those devices is controlled by two companies. So for us, it’s how do we go invest in content and community so that we can actually have our distribution through our own content engagement that we have because the competition is out there and it’s so strong? I think the competition you talk about between individual teams and the competition to make the next paycheck, I understand maybe that’s motivating to certain teams, I’ve just found with our teams that they do much better work when our motivation is more about how many customers can we reach.

Forgive the extended excerpt, but I think this is essential, particularly the second part: so much of our thinking about competition is rooted in the analog world, a world of scarcity where there really was limited shelf space or limited telephone lines or limited railroad access; that just isn’t the case on the Internet, where anyone has access to everyone. This has dramatically increased the power of creators, who can not only go direct, but also play Aggregators off against each other — that is the realm of competition that matters. If we must accept a world where platforms like the App Store have total power within their domains, then the answer is to build up alternative Aggregators that have compelling content of their own, waging a proper fight for the only scarce resource there is on the Internet: time.

  1. I will save the Wordle acquisition for another day! 

  2. Each of which has a standalone $10/month plan 

The Intel Split

Intel’s earnings are not until next Wednesday, but whatever it is that CEO Pat Gelsinger plans to discuss, it seems to me the real news about Intel came from the earnings of another company: TSMC.

From the Wall Street Journal:

Taiwan Semiconductor Manufacturing Co., the world’s largest contract chip maker, said it would increase its investment to boost production capacity by up to 47% this year from a year earlier as demand continues to surge amid a global chip crunch. TSMC said Thursday that it has set this year’s capital expenditure budget at $40 billion to $44 billion, a record high, compared with last year’s $30 billion.

Tim Culpan at Bloomberg described the massive capex figure as a “warning” to fellow chipmakers Intel and Samsung:

From a technology perspective, Samsung is the nearest rival. Yet a comparison is skewed by the fact that the South Korean company also makes display screens and puts most of its semiconductor spending toward commodity memory chips that TSMC doesn’t even bother to make.

Then there’s Intel, the U.S. would-be challenger that’s decided to join the foundry fray. In addition to manufacturing chips under its own brand, Intel Chief Executive Officer Pat Gelsinger last year decided he wants to take on TSMC and Samsung — and a handful of others — by offering to make them for external clients.

But Intel trails both of them in technology prowess, forcing the California company into the ironic position of relying on TSMC to produce its best chips. Gelsinger is confident that he can catch up. Maybe he will, but there’s no way the firm will be able to expand capacity and economies of scale to the point of being financially competitive.

It’s worse than that, actually: by becoming TSMC’s customer Intel is not only denying itself the scale of its own manufacturing needs, but also giving that scale to TSMC, improving the economics of their competitor in the process.

Gelsinger’s Design Tools

One of my favorite quotes from Michael Malone’s The Intel Trinity is about how “Moore’s Law” — the observation by Intel co-founder and second CEO Gordon Moore that transistor counts for integrated circuits doubled every two years — was not a law, but a choice:

[Moore’s Law] is a social compact, an agreement between the semiconductor industry and the rest of the world that the former will continue to strive to maintain the trajectory of the law as long as possible, and the latter will pay for the fruits of this breakneck pace. Moore’s Law has worked not because it is intrinsic to semiconductor technology. On the contrary, if tomorrow morning the world’s great chip companies were to agree to stop advancing the technology, Moore’s Law would be repealed by tomorrow evening, leaving the next few decades with the task of mopping up all of its implications.

Moore made that observation in 1965, and for the next 50 years that choice fell to Intel to make. One of the chief decision-makers was a young man in his 20s named Patrick Gelsinger. Gelsinger joined Intel straight out of high school, and worked on the team developing the 286 processor while studying electrical engineering at Stanford; he was the 4th lead for the 386 while completing his Masters. After he graduated Gelsinger became the lead of the 486 project; he was only 25.

The Intel 486 die

Intel was, at this time, a fully integrated device manufacturer (IDM); while that term today refers to a company that designs and fabricates its own chips (in contrast to a company like Nvidia, which designs its own chips but doesn’t manufacture them, or TSMC, which manufactures chips but doesn’t design them), the level of integration has decreased over time as other companies have come to specialize in different parts of the manufacturing process. Back in the 1980s, though, Intel still had to figure out a lot of things for the first time, including how to actually design ever more microscopic chips. Gelsinger, along with three co-authors, described the problem in a 2012 paper entitled Coping with the Complexity of Microprocessor Design at Intel — a CAD History:

In his original 1965 paper, Gordon Moore expressed a concern that the growth rate he predicted may not be sustainable, because the requirement to define and design products at such a rapidly-growing complexity may not keep up with his predicted growth rate. However, the highly competitive business environment drove to fully exploit technology scaling. The number of available transistors doubled with every generation of process technology, which occurred roughly every two years. As shown in Table I, major architecture changes in microprocessors were occurring with a 4X increase of transistor count, approximately every second process generation. Intel’s microprocessor design teams had to come up with ways to keep pace with the size and scope of every new project.

Processor     Intro Date   Process   Transistors   Frequency
4004          1971         10 um     2,300         108 KHz
8080          1974         6 um      6,000         2 MHz
8086          1978         3 um      29,000        10 MHz
80286         1982         1.5 um    134,000       12 MHz
80386         1985         1.5 um    275,000       16 MHz
Intel486 DX   1989         1 um      1.2 M         33 MHz
Pentium       1993         0.8 um    3.1 M         60 MHz

This incredible growth rate could not be achieved by hiring an exponentially-growing number of design engineers. It was fulfilled by adopting new design methodologies and by introducing innovative design automation software at every processor generation. These methodologies and tools always applied principles of raising design abstraction, becoming increasingly precise in terms of circuit and parasitic modeling while simultaneously using ever-increasing levels of hierarchy, regularity, and automatic synthesis. As a rule, whenever a task became too painful to perform using the old methods, a new method and associated tool were conceived for solving the problem. This way, tools and design practices were evolving, always addressing the most labor-intensive task at hand. Naturally, the evolution of tools occurred bottom-up, from layout tools to circuit, logic, and architecture. Typically, at each abstraction level the verification problem was most painful, hence it was addressed first. The synthesis problem at that level was addressed much later.
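Moore’s cadence is easy to sanity-check against the table above: start from the 4004’s 2,300 transistors in 1971 and double every two years. This back-of-the-envelope extrapolation is mine, not the paper’s:

```python
# Back-of-the-envelope check of "transistor counts double every ~2 years"
# against the table above, extrapolating from the 4004 (2,300 transistors, 1971).
base_year, base_count = 1971, 2_300

def projected(year, doubling_period=2):
    """Transistor count implied by doubling every `doubling_period` years."""
    return base_count * 2 ** ((year - base_year) / doubling_period)

# Intel486 DX (1989): the table says 1.2M; the projection is almost exact.
print(round(projected(1989)))  # 1177600
# Pentium (1993): the table says 3.1M; the projection overshoots to ~4.7M,
# a reminder that the "law" was a cadence the industry chose, not physics.
print(round(projected(1993)))  # 4710400
```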

This feedback loop between design and implementation is exactly what is necessary at the cutting edge of innovation. Clayton Christensen explained in The Innovator’s Solution:

When there is a performance gap — when product functionality and reliability are not yet good enough to address the needs of customers in a given tier of the market — companies must compete by making the best possible products. In the race to do this, firms that build their products around proprietary, interdependent architectures enjoy an important competitive advantage against competitors whose product architectures are modular, because the standardization inherent in modularity takes too many degrees of design freedom away from engineers, and they cannot optimize performance.

This is what Gelsinger did with the 486; from the afore-linked paper:

While the 386 design heavily leveraged the logic design of the 286, the 486 was a more radical departure with the move to a fully pipelined design, the integration of a large floating point unit, and the introduction of the first on-chip cache – a whopping 8K byte cache which was a write through cache used for both code and data. Given that substantially less of the design was leveraged from prior designs and with the 4X increase in transistor counts, there was enormous pressure for yet another leap in design productivity. While we could have pursued simple increases in manpower, there were questions of the ability to afford them, find them, train them and then effectively manage a team that would have needed to be much greater than 100 people that eventually made up the 486 design team…For executing this visionary design flow, we needed to put together a CAD system which did not exist yet.

To make a new chip, Intel needed to make new tools, as part of an overall integrated effort that ran from design to manufacturing.

Intel’s Ossification

Fast forward three decades and Intel is no longer on the cutting edge; instead the leading chip manufacturer in the world is TSMC, a company built on the idea that it does not do design; Morris Chang told the Computer History Museum:

When I was at TI and General Instrument, I saw a lot of IC [Integrated Circuit] designers wanting to leave and set up their own business, but the only thing, or the biggest thing that stopped them from leaving those companies was that they couldn’t raise enough money to form their own company. Because at that time, it was thought that every company needed manufacturing, needed wafer manufacturing, and that was the most capital intensive part of a semiconductor company, of an IC company. And I saw all those people wanting to leave, but being stopped by the lack of ability to raise a lot of money to build a wafer fab. So I thought that maybe TSMC, a pure-play foundry, could remedy that. And as a result of us being able to remedy that then those designers would successfully form their own companies, and they will become our customers, and they will constitute a stable and growing market for us.

This was skating to Christensen’s puck; again from The Innovator’s Solution:

Overshooting does not mean that customers will no longer pay for improvements. It just means that the type of improvement for which they will pay a premium price will change. Once their requirements for functionality and reliability have been met, customers begin to redefine what is not good enough. What becomes not good enough is that customers can’t get exactly what they want exactly when they need it, as conveniently as possible. Customers become willing to pay premium prices for improved performance along this new trajectory of innovation in speed, convenience, and customization. When this happens, we say that the basis of competition in a tier of the market has changed.

TSMC was willing to build any chip that these new fabless companies could come up with; what is notable is that it was Gelsinger and Intel who made this modular approach possible, thanks to the tools they built for the 486. Again from that paper:

The combination of all these tools was stitched together into a system called RLS1 which was the first RTL2 to layout system ever employed in a major microprocessor development program…RLS succeeded because it combined the power of three essential ingredients:

  • CMOS (which enabled the use of a cell library)
  • A Hardware Description Language (providing a convenient input mechanism to capture design intent)
  • Synthesis (which provided the automatic conversion from RTL to gates and layout)

This was the “magic and powerful triumvirate”. Each one of these elements alone could not revolutionize design productivity. A combination of all three was necessary! These three elements were later standardized and integrated by the EDA3 industry. This kind of system became the basis for all of the ASIC4 industry, and the common interface for the fabless semiconductor industry.
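To make the triumvirate concrete, here is a deliberately tiny sketch of what synthesis does; this is not RLS or any real EDA tool, just an illustration of design intent (the “RTL”) being mechanically lowered to a netlist built from a cell library:

```python
# A toy "RTL to gates" lowering, illustrating the synthesis idea only:
# a behavioral description (design intent) is converted into a netlist
# built from a small cell library, here a single 2-input NAND cell.

def nand(a, b):                     # the sole cell in our toy library
    return 1 - (a & b)

# "RTL": the designer states intent behaviorally -- a half adder.
def half_adder_rtl(a, b):
    return a ^ b, a & b             # (sum, carry)

# "Synthesized netlist": the same function expressed only in NAND cells.
def half_adder_gates(a, b):
    n1 = nand(a, b)
    s = nand(nand(a, n1), nand(b, n1))  # XOR built from four NANDs
    c = nand(n1, n1)                    # AND = NAND followed by an inverter
    return s, c

# Verification: exhaustively check the netlist against the intent.
for a in (0, 1):
    for b in (0, 1):
        assert half_adder_gates(a, b) == half_adder_rtl(a, b)
```

The exhaustive check at the end mirrors the paper’s observation that at each abstraction level verification was “most painful, hence it was addressed first.”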

The problem is that Intel, used to inventing its own tools and processes, gradually fell behind the curve on standardization; yes, the company had partnerships with EDA companies like Synopsys and Cadence, but most of the company’s work was done on its own homegrown tools, tuned to its own fabs. This made it very difficult to be an Intel Custom Foundry customer; worse, Intel itself was wasting time building tools that once were differentiators, but now were commodities.

This bit about tools isn’t new; Gelsinger announced Intel’s support for Synopsys and Cadence at last March’s announcement of Intel Foundry Services (IFS), and of course they did: Intel can’t expect to be a full-service foundry if it can’t use industry-standard design tools and IP libraries.

What I have come to appreciate, though, is exactly why Gelsinger announced IFS as only one part of Intel’s broader “IDM 2.0” strategy:

  • Part 1 was Intel’s internal manufacturing of its own designs; this was basically IDM 1.0.
  • Part 2 was Intel’s plan to use 3rd-party manufacturers like TSMC for its cutting edge products.
  • Part 3 was Intel Foundry Services.

I had been calling for something like IFS since the beginning of Stratechery, and had even gone so far as to advocate a split of the company a month before Gelsinger’s presentation. “IDM 2.0” suggested that Intel wasn’t going to go quite that far — and understandably so, given the history of Part 1 — but the more I think about Part 2, and how it connects to the other pieces, I wonder if I was closer to the mark than I realized.

Microsoft and Intel

In 2018, when I traced the remarkable turnaround Satya Nadella had led at Microsoft in an article entitled The End of Windows, I noted that Nadella’s first public event was to announce Office on iPad. The effort had been started years previously under former CEO Steve Ballmer, but had been held back because the Windows Touch version wasn’t ready. That, though, was precisely why Nadella’s launch was meaningful: he was signaling to the rest of the company that Windows would no longer be the imperative for Office; that same week Nadella renamed Windows Azure to Microsoft Azure, sending the exact same message.

I thought of this timing when Gelsinger spoke about TSMC making chips for Intel; that too was an initiative launched under a predecessor (Bob Swan). The assumption made by nearly everyone, though, was that Intel’s partnership with TSMC would only be a stopgap while the company got its manufacturing house in order, such that it could compete directly; indeed, that is the assumption underlying the opening of this Article.

This, though, is why TSMC’s announcement about its increased capital expenditure was such a big deal: a major driver of that increase appears to be Intel, for whom TSMC is reportedly building a custom fab. From DigiTimes:

TSMC plans to have its new production site in the Baoshan area in Hsinchu, northern Taiwan make 3nm chips for Intel, according to industry sources. TSMC will have part of the site converted to 3nm process manufacturing, the sources said. The facilities of the site, dubbed P8 and P9, were originally designed for an R&D center for sub-3nm process technologies. The P8 and P9 of TSMC’s Baoshan site will be capable of each processing 20,000 wafers monthly, and will be dedicated to fulfilling Intel’s orders, the sources indicated.

TSMC intends to differentiate its chip production for Intel from that for Apple, and has therefore decided to separate its 3nm process fabrication lines dedicated to fulfilling orders from these two major clients, the sources noted. The moves are also to protect the customers’ respective confidential products, the sources said. Intel’s demand could be huge enough to persuade TSMC to modify the pure-play foundry’s manufacturing blueprints, the sources indicated. The pair’s partnership is also likely to be a long-term one, the sources said.

Intel and Microsoft are bound by history, of course, but obviously their businesses are at the opposite ends of the computing spectrum: Intel deals in atoms and Microsoft in bits. What was common to both, though, was an unshakeable belief in the foundations of their business model: for Microsoft, it was the leverage over not just the desktop, but also productivity and enterprise servers, delivered by Windows; for Intel it was the superiority of their manufacturing. Nadella, before he could change tactics or even strategy, had to shake the Windows hangover that had corrupted the culture; Gelsinger needed to do the same to Intel, which meant taking on his own factories.

Think about the EDA issue I explained above: it must have been a slog to cajole Intel’s engineers into abandoning their homegrown solutions in favor of industry standards when the only beneficiaries were potential foundry customers — that is one of the big reasons why Intel Custom Foundry failed previously. However, if Intel were to manufacture its chips with TSMC, then it would have no choice but to use industry standards. Moreover, just as Windows needed to learn to compete on its own merits, instead of expecting Office or Azure to prop it up, Intel’s factories, denied monopoly access to cutting edge x86 chips, will now have to compete with TSMC to earn not just 3rd-party business, but business from Intel’s own design team.

The Intel Split

When I wrote that Intel should be broken up I focused on incentives:

This is why Intel needs to be split in two. Yes, integrating design and manufacturing was the foundation of Intel’s moat for decades, but that integration has become a strait-jacket for both sides of the business. Intel’s designs are held back by the company’s struggles in manufacturing, while its manufacturing has an incentive problem.

The key thing to understand about chips is that design has much higher margins; Nvidia, for example, has gross margins between 60% and 65%, while TSMC, which makes Nvidia’s chips, has gross margins closer to 50%. Intel has, as I noted above, traditionally had margins closer to Nvidia, thanks to its integration, which is why Intel’s own chips will always be a priority for its manufacturing arm. That will mean worse service for prospective customers, and less willingness to change its manufacturing approach to both accommodate customers and incorporate best-of-breed suppliers (lowering margins even further). There is also the matter of trust: would companies that compete with Intel be willing to share their designs with their competitor, particularly if that competitor is incentivized to prioritize its own business?

The only way to fix this incentive problem is to spin off Intel’s manufacturing business. Yes, it will take time to build out the customer service components necessary to work with third parties, not to mention the huge library of IP building blocks that make working with a company like TSMC (relatively) easy. But a standalone manufacturing business will have the most powerful incentive possible to make this transformation happen: the need to survive.

Intel is obviously not splitting up, but this TSMC investment sure makes it seem like Gelsinger recognizes the straitjacket Intel was in, and is doing everything possible to get out of it. To that end, it seems increasingly clear that the goal is to de-integrate Intel: Intel the design company is basically going fabless, giving its business to the best foundry in the world, whether or not that foundry is Intel; Intel the manufacturing company, meanwhile, has to earn its way (with exclusive access to x86 IP blocks as a carrot for hyperscalers building their own chips), including with Intel’s own CPUs.
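The margin dynamics here are worth making concrete; the dollar figures in this sketch are invented, chosen only to mimic the TSMC-like and Nvidia-like gross margins cited earlier:

```python
# Toy arithmetic on chip-industry margins (all dollar figures invented).
# A fabless designer buys finished chips from a foundry; each layer
# of the chain takes its own gross margin.

def gross_margin(revenue, cost):
    return (revenue - cost) / revenue

fab_cost = 20.0        # foundry's cost to manufacture the chip
foundry_price = 40.0   # what the foundry charges the designer
chip_price = 100.0     # what the designer charges the customer

print(gross_margin(foundry_price, fab_cost))    # 0.5 -> TSMC-like 50%
print(gross_margin(chip_price, foundry_price))  # 0.6 -> Nvidia-like 60%

# An integrated firm keeps both layers' margin on its own chips, which is
# why an in-house foundry will always prioritize in-house designs:
print(gross_margin(chip_price, fab_cost))       # 0.8
```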

Gelsinger’s Grovian Moment

It’s not clear that this will work, of course; indeed, it is incredibly risky, given just how expensive fabs are to build, and how critical it is that they operate at full capacity. Moreover, Intel is making TSMC stronger (while TSMC benefits from having another tentpole customer to compete with Apple). Given Intel’s performance over the last decade, though, it might have been more risky to stick with the status quo, in which Intel’s floundering fabs take down Intel’s design as well. In that regard it makes sense; in fact, one is reminded of the famous story of Moore and Gelsinger’s mentor, Andy Grove, deciding to get out of memory, which I wrote about in 2016:

Intel was founded as a memory company, and the company made its name by pioneering metal-oxide semiconductor technology in first SRAM and then in the first commercially available DRAM. It was memory that drove all of Intel’s initial revenue and profits, and the best employees and best manufacturing facilities were devoted to memory in adherence to Intel’s belief that memory was their “technology driver”, the product that made everything else — including their fledgling microprocessors — possible. As Grove wrote in Only the Paranoid Survive, “Our priorities were formed by our identity; after all, memories were us.”

The problem is that by the mid-1980s Japanese competitors were producing more reliable memory at lower costs (allegedly) backed by unlimited funding from the Japanese government, and Intel was struggling to compete…

Grove explained what happened next in Only the Paranoid Survive:

I remember a time in the middle of 1985, after this aimless wandering had been going on for almost a year. I was in my office with Intel’s chairman and CEO, Gordon Moore, and we were discussing our quandary. Our mood was downbeat. I looked out the window at the Ferris Wheel of the Great America amusement park revolving in the distance, then I turned back to Gordon and asked, “If we got kicked out and the board brought in a new CEO, what do you think he would do?” Gordon answered without hesitation, “He would get us out of memories.” I stared at him, numb, then said, “Why don’t you and I walk out the door, come back in and do it ourselves?”

Gelsinger was once thought to be next-in-line to be Intel’s CEO; he literally walked out the door in 2009 and for a decade Intel floundered under business types who couldn’t have dreamed of building the 486 or the tools that made it possible; now he has come back home, and is doing what must be done if Intel is to be both a great design company and a great manufacturing company: split them up.

I wrote a follow-up to this Article in this Daily Update.

  1. RTL to Layout Synthesis 

  2. Register-Transfer Level 

  3. Electronic Design Automation 

  4. Application-specific Integrated Circuit 

OpenSea, Web3, and Aggregation Theory

This was originally sent as a subscriber-only Update.

From Eric Newcomer:

The NFT-marketplace OpenSea is in talks to raise at a $13 billion valuation in a deal led by Coatue, sources tell me. Paradigm will also co-lead the $300 million funding round, according to a spokesperson for the firm. Kathryn Haun’s new crypto fund, which is currently operating under Haun’s initials “KRH,” is also participating in the funding round, sources tell me. Dan Rose at Coatue is spearheading the round and may take a board observer seat.

OpenSea confirmed the news on their blog:

In 2021, we saw the world awaken to the idea that NFTs represent the basic building blocks for brand new peer-to-peer economies. They give users greater freedom and ownership over digital goods, and allow developers to build powerful, interoperable applications that provide real economic value and utility to users. OpenSea’s vision is to become the core destination for these new open digital economies to thrive, building the world’s friendliest and most trusted NFT marketplace with the best selection.

This is, of course, a story about NFTs, at least in part, and by extension, a story about the so-called Web 3 née crypto economy that its fiercest advocates say is the future. But there are two other parts of this story that are very much at home on the Internet as it exists in the present: $13 billion, and “core destination.”

OpenSea’s Value

First, two more OpenSea stories from over the break. From Be In Crypto:

NFT marketplace OpenSea has frozen $2.2 million worth of Bored Ape (BAYC) NFTs after they were reported as being stolen. The NFTs on the marketplace now have a warning saying that it is “reported for suspicious activity.” Buying and selling of such items are suspended…

Meanwhile, there is a bit of a squabble happening over OpenSea over the Phunky Ape Yacht Club (PAYC). The NFT platform banned this NFT series because it was based on the Bored Ape Yacht Club NFTs. PAYC is virtually identical to BAYC, except for the fact it is mirrored.

This excerpt isn’t technically complete: buying and selling of the stolen NFTs — or of the PAYC NFTs — are suspended on OpenSea. The fact of the matter is that (references to) NFTs are, famously, stored on the blockchain (Ethereum in this case), and once those BAYC NFTs were transferred, by consent of their previous owner or not, the transaction cannot be undone without the consent of their new owner; said owner can buy or sell the NFTs to someone else, but not on OpenSea. It’s the same thing with the BAYC rip-offs: they exist on the blockchain, whether or not OpenSea lists them for sale.
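The distinction here — ownership recorded on-chain, listings controlled by the marketplace — can be sketched in a few lines of Python. This is a hypothetical illustration, not OpenSea’s or Ethereum’s actual code; the `NFTRegistry` and `Marketplace` classes and all names in them are invented for the example:

```python
class NFTRegistry:
    """Stands in for an ERC-721-style contract: ownership lives here,
    and only the current owner can move a token."""
    def __init__(self):
        self.owner_of = {}  # token_id -> owner address

    def transfer(self, token_id, from_addr, to_addr):
        # A completed transfer cannot be reversed without the
        # new owner's consent; there is no admin override.
        assert self.owner_of[token_id] == from_addr, "not the owner"
        self.owner_of[token_id] = to_addr


class Marketplace:
    """Stands in for OpenSea: it controls its own listings,
    not the underlying ownership records."""
    def __init__(self, registry):
        self.registry = registry
        self.listings = set()

    def list_for_sale(self, token_id):
        self.listings.add(token_id)

    def delist(self, token_id):
        # A marketplace "freeze" only removes the listing...
        self.listings.discard(token_id)


registry = NFTRegistry()
registry.owner_of["BAYC#1"] = "alice"

opensea = Marketplace(registry)
opensea.list_for_sale("BAYC#1")
opensea.delist("BAYC#1")  # the marketplace suspends trading...

# ...but an on-chain transfer still goes through regardless:
registry.transfer("BAYC#1", "alice", "bob")

print(registry.owner_of["BAYC#1"])   # bob
print("BAYC#1" in opensea.listings)  # False
```

The point of the sketch is that the `delist` call and the `transfer` call operate on entirely separate state: OpenSea can make a token harder to sell, but it cannot touch who owns it.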

This, according to crypto advocates, is evidence of the allure of Web 3: because the blockchain is open and accessible by anyone, the stolen BAYC NFTs and the PAYC rip-offs can be sold on another market, or if one cannot be found, in a private transaction (leave aside, for the sake of argument and the brevity of this update, the question as to whether the fact that these transactions are irreversible is a feature or a bug).

Here’s the thing, though: this isn’t a new concept. What is the first answer given to anyone who is banned from Twitter, or demonetized on YouTube — two of the go-to examples Web 3 advocates give about the problem of centralized power on the Internet today? Start your own Twitter, or start your own blog, or set up a Substack. These answers are frustrating because they are true: the web is open.

Indeed, if this frustration sounds familiar, it is because it is the frustration of the regulator insisting that Aggregators are monopolies, that Google is somehow forcing users to not use Bing like some sort of railroad baron extorting farmers simply seeking to move grain to market, or that Facebook has a monopoly on social networking, ignoring the fact that we have far more ways to communicate than ever before.

In fact, what gives Aggregators their power is not their control of supply: they are not the only way to find websites, or to post your opinions online; rather, it is their control of demand. People are used to Google, or it is the default, so sites and advertisers don’t want to spend their time and money on alternatives; people want other people to see what they have to say, so they don’t want to risk writing a blog that no one reads, or spending time on a social network that, lacking a network, offers nothing social at all.

This is why regulations focused on undoing the control of supply are ineffective: the zero marginal cost nature of computing and the zero distribution cost of the Internet made it viable for the first time — and far more profitable — to control demand, not by forcing people to act against their will, but by making it easy for them to accomplish whatever it is they wished to do, whether that be find a website, buy a good, talk to their friends, or give their opinion. And now, to buy or sell NFTs.

This, then, is the reason that OpenSea received its $13 billion valuation: it is by far the dominant market for NFTs; should the market exist in the long run, the most likely entryway for end users will be OpenSea. This is a very profitable position to be in, even if alternatives are only a click away; it’s not as if a-click-away alternatives have reduced the profitability of a Google or a Facebook.

It is also why OpenSea’s bans have some amount of teeth to them: as I noted, you can still buy and sell these stolen and rip-off NFTs, just as you can still go to a website that is not listed in Google, communicate with a friend kicked off of Facebook, or state your opinions somewhere other than Twitter. The reduced demand, though, lowers the price, whether that price be traffic, convenience, or attention. Or, in the case of NFTs, ETH: not having access to OpenSea means there is less demand for these NFTs, and less demand means lower prices.

In short, OpenSea has power not because it controls the NFTs in question, but because it controls the vast majority of demand.

Crypto’s Aggregators

One of the reasons that crypto is so interesting, at least in a theoretical sense, is that it seems like a natural antidote to Aggregators; I’ve suggested as much. After all, Aggregators are a product of abundance; scarcity is the opposite. The OpenSea example, though, is a reminder that I have forgotten one of my own arguments about Aggregators: demand matters more than supply.

To that end, which side of the equation is impacted by the blockchain? The answer, quite obviously, is supply. Indeed, one need only be tangentially aware of crypto to realize that the primary goal of so many advocates is to convert non-believers, the better to increase demand. This has the inverse impact of OpenSea’s ban: increased demand increases prices for scarce supply, which is to say, in terms that are familiar to any Web 3 advocate, that the incentives of Web 3’s most ardent evangelists are very much aligned.

The most valuable assets in crypto remain tokens; Bitcoin and Ethereum lead the way with market caps of $874 billion and $451 billion, respectively. What is striking, though, is that the primary way that most users interact with Web 3 is via centralized companies like Coinbase and FTX on the exchange side, Discord for communication and community, and OpenSea for NFTs. It is also not a surprise: centralized companies deliver a better user experience, which encompasses everything from UI to security to at least knocking down the value of your stolen assets on your behalf; a better user experience leads to more users, which increases power over supply, further enhancing the user experience, in the virtuous cycle described by Aggregation Theory.

That Aggregation Theory applies to Web 3 is not some sort of condemnation of the idea; it is, perhaps, a challenge to the insistence that crypto is something fundamentally different from the web. That’s fine — as I wrote before the break, the Internet is already pretty great, and its full value is only just starting to be exploited. And, as I argued in The Great Bifurcation, the most likely outcome is that crypto provides a useful layer on what already exists, as opposed to replacing it.

Moreover, as I explained in a follow-up to The Great Bifurcation, this view of crypto’s role relative to the web places it firmly in an ongoing progression away from technical lock-in and towards network effects:

Technical lock-in has decreased while network lock-in has increased

This is a dramatic simplification, to be clear, but I think it is directionally correct; the long-term trend, all of the hysteria around tech notwithstanding, is towards more openness and less lock-in. At the same time, this doesn’t mean that companies are any less dominant; rather, their means of dominance has shifted from the technical to the sociological.

Crypto, still largely valued on nothing more than the collective belief of its users, is the ultimate example to date of the power of a network, in every sense of the word. That that collective belief is a point of leverage for companies that can aggregate believers is the most natural outcome imaginable, even if it means that the lack of technical lock-in will likely prove to be more of an occasionally invoked escape hatch (and a welcome one at that) as opposed to the defining characteristic for the majority of users.