The Idea Adoption Curve

Matthew Yglesias, in his new book One Billion Americans, admits:

The One Billion Americans agenda — tripling the American population — is a radical suggestion that lies well outside the boundaries of conventional political arguments.

Ezra Klein, in his new book Why We’re Polarized, promises:

What I am trying to develop here isn’t so much an answer for the problems of American politics as a framework for understanding them. If I’ve done my job well, this book will offer a model that helps make sense of an era in American politics that can seem senseless.

Klein’s promise is very similar to the brand promise of Vox, the digital media site he co-founded with Yglesias and Melissa Bell: “Explain the news”. Yglesias, on the other hand, seems more invested in creating the news that a future Vox might one day explain. I think the distinction is meaningful even with the news that both Yglesias and Klein are leaving the company they founded: in fact, I think the distinction explains their destinations.

Crossing the Chasm

In the introduction of his classic tech marketing book, Crossing the Chasm, Geoffrey A. Moore writes:

This book is unabashedly about and written specifically for marketing within high-tech enterprises. But high tech can be viewed as a microcosm of larger industrial sectors. In this context, the relationship between an early market and a mainstream market is not unlike the relationship between a fad and a trend. Marketing has long known how to exploit fads and how to develop trends. The problem, since these techniques are antithetical to each other, is that you need to decide which one—fad or trend—you are dealing with before you start. It would be much better if you could start with a fad, exploit it for all it was worth, and then turn it into a trend.

That may seem like a miracle, but that is in essence what high-tech marketing is all about. Every truly innovative high-tech product starts out as a fad—something with no known market value or purpose but with “great properties” that generate a lot of enthusiasm within an “in crowd” of early adopters. That’s the early market. Then comes a period during which the rest of the world watches to see if anything can be made of this; that is the chasm. If in fact something does come out of it—if a value proposition is discovered that can be predictably delivered to a targetable set of customers at a reasonable price—then a new mainstream market segment forms, typically with a rapidity that allows its initial leaders to become very, very successful.

The key in all this is crossing the chasm—performing the acts that allow the first shoots of that mainstream market to emerge. This is a do-or-die proposition for high-tech enterprises; hence it is logical that they be the crucible in which “chasm theory” is formed. But the principles can be generalized to other forms of marketing, so for the general reader who can bear with all the high-tech examples in this book, useful lessons may be learned.

Moore divided tech markets into five parts, which I summarized in 2015’s The End of Trickle-Down Technology:

The Technology Adoption Curve

  • Technology Enthusiasts love tech first and foremost, and are always looking to be on the cutting edge; they are the first to try a new product
  • Visionaries love new products as well, but they also have an eye on how those new products or technologies can be applied. They are the most price-insensitive part of the market
  • Pragmatists are a much larger segment of the market; they are open to new products, but they need evidence they will work and be worth the trouble, and they are much more price conscious
  • Conservatives are much more hesitant to accept change; they are inherently suspicious of any new technology and often only adopt new products when doing so is the only way to keep up. Because they don’t highly value technology, they aren’t willing to pay a lot
  • Skeptics are not just hesitant but actively hostile to technology

Allow me to take Moore at his word, and apply this model to something rather different than tech B2B marketing: ideas. It seems to me that Yglesias and Klein are focused on different parts of this adoption cycle. Yglesias is somewhere between an enthusiast and a visionary; the core concept of his book is closer to the former, and the policy prescriptions closer to the latter. Klein, meanwhile, is focused more on pragmatism, or even conservatism: explaining, instead of creating.

This is, to be sure, a crude simplification of two writers with nearly 40 years of material between them, but I don’t think it is an accident that Yglesias has set out on his own on Substack, whereas Klein is joining the New York Times.

The Idea Adoption Curve

I wrote last month about the New York Times’s traditional role in setting the national news agenda; headlines in the New York Times in the morning were lead stories on national newscasts in the evening, and headlines in regional papers the following day. If you map this dynamic to Moore’s model, it might look like this:

The Idea Adoption Curve

Where, though, did the New York Times get its ideas? Obviously a lot of stories came from its own reporting, or that of its peers, but the “enthusiast” part of the curve was mostly centered in academia. Visionaries, meanwhile, were a collection of think tanks, journals, and specialty magazines that operated at a loss, which was fine because making money was never the point: getting ideas into publications like the New York Times was.

Idea generation in the analog age

What happened in the 2000s, when Yglesias and Klein first burst on the scene as part of the original generation of political bloggers, was the development of a new genre of “enthusiasts” who were creating and debating new ideas mostly for free. Sure, most of these bloggers found work with publications like American Prospect (Yglesias) or Washington Monthly (Klein), but those publications were political projects, not economic ones, with the goal of influencing the mass market, not monetizing it.

Vox, on the other hand, has been something much different, both in terms of mission statement and business model. “Explaining the news” is, from a certain perspective, about crossing the chasm on the idea curve; enthusiasts have created new ideas, and visionaries have refined them, and now the challenge is to spread those ideas to the population generally. Vox’s business model, meanwhile, is firmly on the right-hand side of the curve. Advertising is all about scale, and the vast majority of the market falls to the right of the chasm.

The problem with this approach, though, is that publications simply aren’t as good at advertising as Facebook and Google. I wrote five years ago in Popping the Publishing Bubble:

Publishers and ad networks are locked in a dysfunctional relationship that doesn’t serve readers or advertisers, and it’s only a matter of time until advertisers — which again, care only about reaching potential customers, wherever they may be — desert the whole mess entirely for new, more efficient and effective advertising options that put them directly in front of the people they care about. That, first and foremost, is Facebook, but other social networks like Twitter, Snapchat, Instagram, Pinterest, and others will benefit as well:

A drawing of Facebook As a More Efficient Advertising Option

I don’t know the specifics of how Vox’s business is doing, although it is notable that the site’s previous ad inventory is now mostly filled with requests for donations; meanwhile, there is no confusion about the business models of either Substack or the New York Times.

Visionaries and Substack

Start with Yglesias, and Substack; this is obviously a business model near and dear to my heart, given that Stratechery was perhaps the first site built around the idea of a one-person publication supported by subscriptions at scale.1 I don’t have any illusions about reaching the mass market; Stratechery is very much predicated on capturing the “Visionary” part of the curve. Read again the explanation I excerpted above:

Visionaries love new products as well, but they also have an eye on how those new products or technologies can be applied. They are the most price-insensitive part of the market

The “price-insensitive” part is key: Stratechery and Substack publications like Slow Boring may not be that expensive, but they are, relative to text on the Internet, shockingly pricey. Subscribers, though, don’t mind, as long as what they are getting is consistently unique and provocative; look no further than One Billion Americans, or Yglesias’s Twitter account, to see why he ended up on Substack.

What is neat about this model is that it is far more sustainable and accessible than the old model of corporations and donors subsidizing think tanks, journals, and specialty magazines, and far more ideologically diverse than academia. It is important, though, to be honest about the model’s limitations: while I remain very bullish about the potential of subscription-based local news entities, it is difficult to envision a future where most people pay for news or analysis directly.

One way to understand why is to map the Idea Adoption Curve against a “Willingness-to-Pay Curve”; I suspect it looks something like this:

Willingness-to-pay on the Idea Adoption Curve

Enthusiasts — now on Twitter, for the most part — generate and debate ideas for free, while the real money, at least from an average revenue per user perspective, is with the visionaries. This is where the Substack model makes the most sense. What is notable, though, is that the chasm of ideas is also a chasm of monetization.

Push Versus Pull

How do ideas cross the chasm into popular acceptance and public policy? Vox tried to pull them over, with, I think, limited success; the New York Times, on the other hand, has been positioned to cross the chasm for decades. What has changed is that while the New York Times used to be mostly supported by advertising, which incentivized it to remain planted on the right side of the divide, pulling ideas from the fringes into the mainstream, its shift to subscriptions has pushed it to the left part of the curve, incentivizing the newspaper of record to more actively generate and push new ideas.

This has, as I noted in Never-Ending Niches, changed the New York Times:

To the extent that the New York Times has been successful online — and the company has been very successful indeed! — it follows that the company is well-placed in terms of both focus and quality, and in that order.

In this view, the fact that deeply reported articles about Chinese disinformation on Twitter are held as being low quality by the Chinese government is immaterial; what matters is that the New York Times’s audience, which is mostly in the United States, finds it of high quality (I certainly do).

That’s an easy example, but there are ones that hit closer to home; for example, I thought this 2018 story that claimed that Facebook Gave Data Access to Chinese Firm Flagged by U.S. Intelligence was, as I wrote at the time, “deeply flawed at best, and willfully mendacious at worst.” It turns out, though, that I am not particularly interested in the “Everything tech does is bad” niche; that story was very high quality for much of the New York Times’s audience.

These two stories were in the News section, not Opinion, but the point is that the distinction matters less than ever before; the pursuit of subscription revenue pushes the New York Times to the left side of the chasm, which means that to the extent it still fulfills its role of conveying ideas across the chasm, it is more focused on pushing particular points of view into the broader public, as opposed to pulling ideas the broader public may favor. This certainly makes the New York Times an attractive landing place for Klein, who, for all of his focus on “explaining the news”, has never been shy about his political preferences and desire to influence policy. Now he can do so from a platform that has long been defined by its ability to cross the chasm.

BuzzFeed Versus the New York Times

Yglesias and Klein’s departures from Vox weren’t the only media news of the week; BuzzFeed acquired HuffPost (and some cash from Verizon). In an interview on Recode Media, BuzzFeed CEO and founder — of both BuzzFeed and HuffPost! — Jonah Peretti took aim at the New York Times’s shift; from Recode:

The Times, Peretti allowed, has since refined a very good subscription business model, which has allowed it to make better journalism by hiring more and better talent. This is not a controversial opinion. But the next part may be: The New York Times, Peretti argued, can’t really be called “the paper of record” anymore — because of that same subscription model.

“A subscription business model leads towards being a paper for a particular group and a particular audience and not for the broadest public,” Peretti said. He’s alluding, in part, to the theory that the Times’s subscriber base wants to read a certain kind of news and opinion — middle/left of center, critical of Donald Trump, etc. — and that straying from that can cost it subscribers. But he’s also simply arguing that the act of requiring readers to pay to read cuts the Times off from a big audience.

Peretti’s solution to that problem, it turns out, sounds a whole lot like a combined BuzzFeed/HuffPost — publications that are widely distributed, supported by advertising, and free:

“Will a subscription newspaper that is read by a subset of society have as big an impact as it could on voters, on the broad public, on young people, on the more diverse rising generation of millennials and Gen Z?” he argued. “I think there’s a huge opportunity to serve those consumers. And not all of them are going to be subscribers to any publication.”

BuzzFeed is firmly planted on the right side of the Idea Adoption Curve; most of the publication’s content is about anything other than news or ideas, and its primary distribution network is Facebook (Twitter is for the left side of the curve, and Facebook the right). It also has a kitchen-sink advertising business model, with a combination of premium advertising, programmatic advertising, affiliate marketing, e-commerce, etc., all of which only make sense at scale.

The Implications of Abundance

At the same time, it is worth noting that the New York Times has, contrary to Peretti’s implication, never been a newspaper for the masses. Sure, its subscription model is by default exclusionary, but only being available in printed form, mostly in New York, was far more exclusionary. The point about subscriptions driving a particular point of view is a valid one, but then again, it is not as if BuzzFeed has been shy about its political preferences either. The reality of the Internet is that ideas are in abundance, and people will seek out what they already agree with, as opposed to accepting what is delivered to them.

This explains the newly prominent role of the right side of the Idea Adoption Curve; I noted while summarizing the Technology Adoption Curve:

Skeptics are not just hesitant but actively hostile to technology

This, translated to ideas, explains the prominence of conspiracy theories and misinformation. These strains of belief have always existed, in America in particular, but one implication of the Internet leveling the information playing field is that these alternative views of reality can both spread further than ever before, even as they are far more visible to everyone else. This is both reason for alarm, and for skepticism about said alarm: yes, the misinformation problem is worse than before, but much of our fear is rooted in newfound awareness of an old problem, and there is a strong case to be made that the emergence of valuable information makes up for the propagation of misinformation that mostly feeds confirmation biases.

What is indisputable, though, is that the nature of information and its spread has been fundamentally altered in a way unseen since the printing press. It affects Yglesias and Substack, Klein and the New York Times, and, one increasingly suspects, the very fabric of society and the foundation of our political institutions and organizing principles. And, if this is right, we are only now at the end of the beginning.

  1. Yes, subscription-based newsletters existed long before Stratechery, particularly on Wall Street, but at much higher price points

Playing on Hard Mode

One way to understand how the Internet is different is to not only examine what business models work, but also the history of how those business models came to be. Start with text and images, long the province of newspapers: the first attempts at website monetization placed ads alongside article text; after all, that is how advertising was done.

Incredibly enough, it was a mere eight years ago that Facebook IPO’d with this as its business model: content that was important to you was in the center of the webpage, and ads were on the side (mobile didn’t monetize at all, which was why growth in mobile usage was listed as a risk factor in Facebook’s S-1); the company was optimistic that the Facebook Platform would provide a more traditional-to-tech means of monetization that could augment its ads business.

That, though, was the other “problem” with mobile: it made Facebook just an app, not a platform. This turned out to be the best possible thing that could have happened to the social media company: freed to be “just an app,” the company doubled down on the News Feed, which already delivered personalized content, as the primary means of delivering personalized ads.

The rest, as they say, is history:

Facebook's stock price since IPO

Today would-be experts talk about Facebook’s business model as if it were always inevitable, that of course the same mechanism would work for Instagram, not to mention the company’s ever increasing number of competitors like Snapchat and TikTok; I presume they all purchased the company’s stock when it was down 50% from its IPO price in the fall of 2012, five months after the Instagram purchase. For the rest of us, though, including Facebook, it wasn’t obvious at all: succeeding on the Internet didn’t simply mean making a digital product, but also finding a business model that was native as well.

Easy Mode

And yet, for all of Facebook’s initial challenges, the truth is that the company was, relatively speaking, playing on easy mode. Yes, the company practically invented the modern growth hacking discipline and feed advertising, but its core product was about digitizing offline relationships that already existed, and the means by which it did that — text and photos, at least at the beginning — were native to the Internet. To the extent monetization was difficult to figure out, it was because creating advertising inventory was so easy that said inventory was effectively infinite.

You can make a similar argument about Google: yes, Larry Page and Sergey Brin created something truly superior with PageRank and the Google search engine, but once deployed Google instantly had access to an entire universe of web pages seemingly tailor-made to make Google better at giving you the results you need. If anything the ease with which Google came to dominate the web has hindered the company in adjacent markets where skills like marketing and sales make a difference.

Twitter and Snapchat, in contrast to Facebook, had to create networks in Facebook’s shadow; Twitter focused on the interest graph, while Snapchat defined itself by being the anti-Facebook for a new generation. This was a more difficult path, but one still defined by zero marginal costs in terms of distribution and monetization. Google’s vertical search competitors faced a similar challenge: build something unique and differentiated in Google’s shadow, acquiring not just demand but also supply along the way. Still, like Facebook’s challengers, all of these companies are safely cocooned in the virtual world.

Amazon, in contrast, has played on a much higher difficulty setting from the beginning, selling and shipping physical items, with all of the marginal costs that entails. If anything the company has doubled down on the physical world, investing billions to deliver items in one day; I don’t think it is a coincidence it is Amazon that is Google’s true competitor.

OTAs and Pizza

Perhaps the easiest mode of all, though, was layering the Internet on top of real world business models. Consider OTAs — “Online Travel Agents” — the name gives it away! Instead of calling up a travel agent and being inherently limited to their knowledge and connections (and paying their commission), customers could access search engines that aggregated every flight and every hotel, displaying them in a way that was easy to compare and contrast. From a customer perspective it was a better experience in nearly every way: both more comprehensive and cheaper as well.

Of course, like most suppliers in an Aggregator-based value chain, hotels weren’t too pleased, but given that demand was increasingly concentrated on the likes of Booking.com they had no choice but to come onto the platform on the OTAs’ terms. Their response was, rationally, to consolidate and focus on loyalty programs and repeat customers. The OTAs, meanwhile, could simply take a skim off of all of the bookings they made, without needing to build their own hotels on one hand, or worry about infinite inventory depressing prices on the other.

There was a similar dynamic in an industry like pizza delivery: a company like Domino’s existed for decades relying on phone calls for delivery; with the advent of the smartphone, though, the company quickly pivoted to mobile ordering, augmenting that capability with innovative apps and tracking services that let you make the exact pizza you wanted whenever you wanted and trace its route to your front door. The company’s success has been extraordinary, much like the OTAs, and for similar reasons: the Internet made an existing real world business model better, even as the real world constraints ensured the money-making opportunity existed.

Airbnb and Trust

There will be time over the next few days and weeks to get into the particulars of Airbnb and DoorDash’s businesses, but I thought this observation from FinTwit regular @modestproposal1 was notable:

This is, in a vacuum, a valid point; frankly, the biggest takeaway from my perspective is that Booking was drastically undervalued circa 2011 — the stock market certainly agrees:

Booking.com's stock price over time

The truth is that, as I just explained, the company was playing on easy mode: OTAs were an obvious business, with real world constraints that brought digital’s advantages to bear without its commoditizing downsides. At the same time, notice how BKNG’s share price has leveled out: over the last few years in particular, Google, the Super-Aggregator, has been extracting an ever greater share of OTA margins. Indeed, that’s the downside to having a business built on easy mode: anyone else can play the game just as easily.

Airbnb, on the other hand, has been building something truly unique; the company explains in its S-1:

Travel is one of the world’s largest industries, and its approach has become commoditized. The travel industry has scaled by offering standardized accommodations in crowded hotel districts and frequently-visited landmarks and attractions. This one-size-fits-all approach has limited how much of the world a person can access, and as a result, guests are often left feeling like outsiders in the places they visit.

Airbnb has enabled home sharing at a global scale and created a new category of travel. Instead of traveling like tourists and feeling like outsiders, guests on Airbnb can stay in neighborhoods where people live, have authentic experiences, live like locals, and spend time with locals in approximately 100,000 cities around the world. In our early days, we described this new type of travel with the tagline “Travel like a human.” Today, people simply refer to it with a single word: “Airbnb.”

Unsurprisingly Airbnb frames the commoditization of hotels as a negative, but it was precisely this commoditization that unlocked the OTAs, even as the OTAs accelerated said commoditization in a way that benefited customers with low prices and wide selections; it also, as noted, left the OTAs susceptible to Google. Airbnb’s relationship with Google, though, is different:

We focus on unpaid channels such as SEO. SEO involves developing our platform in a way that enables a search engine to rank our platform prominently for search queries for which our platform’s content may be relevant.

The company explained in its “Key Factors Affecting Our Performance” section:

We grow GBV by attracting new guests to book stays and experiences on our platform and through past guests who return to our platform to make new bookings. We attract most guests to Airbnb directly or through unpaid channels. During the nine months ended September 30, 2020, approximately 91% of all traffic to Airbnb came organically through direct or unpaid channels, reflecting the strength of our brand. We have also used paid performance marketing, for example on search terms including “Airbnb,” to attract guests. Our strategy is to increase brand marketing and use the strength of our brand to attract more guests via direct or unpaid channels and to decrease our performance marketing spend relative to 2019.

Airbnb did not, as far as I could see, specify the exact split between brand and performance marketing, but it makes intuitive sense that the company would be less dependent on Google search ads than other OTAs: its supply is unique, and its brand is a verb.

This is, to be sure, a far more difficult path to building a business than the OTAs on one hand, which simply layered digital onto real world business models, and search engines and social networks on the other, which created new business models with supply that was inherently digital. Airbnb created an entirely new sort of supply that previously didn’t exist. As the company notes in its S-1 introduction, the key was trust:

In 2008, Nate, a software engineer, joined Brian and Joe, and together the three founders took on a bigger design problem: how do you make strangers feel comfortable enough to stay in each other’s homes? The key was trust. The solution they designed combined host and guest profiles, integrated messaging, two-way reviews, and secure payments built on a technology platform that unlocked trust, and eventually led to hosting at a global scale that was unimaginable at the time.

I wrote about Airbnb and trust back in 2015 in Airbnb and the Internet Revolution:

In the interest of full disclosure, I’m actually writing this post while sitting in an apartment rented through Airbnb. The pictures were ok, but the plethora of reviews were effusive in their praise of this surprisingly large one-bedroom apartment with easy access to the train, so I took the plunge. Indeed, the reviews were spot-on: the apartment is beautiful, and I couldn’t be happier with my choice. One more thing — my family and I are working really hard to keep the place as pristine as it was when we moved in. After all, while I trusted the ratings over the pictures, future Airbnb sublessors will surely care greatly about my rating as well.

There isn’t the sort of community that Chesky promised; I haven’t met our sublessor in person, and likely never will. I don’t know his favorite coffee shops or taco places (or ramen joints for that matter), and I very much feel not at home. But despite that fact, some of the most important trappings of community do exist: the shared mores, and common accountability. My sublessor is incentivized to provide a great place, and I’m incentivized to keep it that way, and that more than anything is what makes Airbnb work. And, by extension, one of the big advantages of hotels — the trust instilled first by the concept and reinforced by the brand — begins to erode.

The commoditization of trust is far more injurious to hotels than you might think: it’s not simply that Airbnb is more competitive on one particular vector; rather, the “trust” vector was by far the biggest priority for both travelers and hosts. Hotels could be infinitely more inconvenient, expensive, or sterile relative to your typical homestay and it wouldn’t matter. In the pre-Airbnb days travelers — and sublessors — justifiably prioritized trust above all else. In other words, the implication of Airbnb building a platform of trust is not that a homestay is now more trustworthy than a hotel; rather, it’s that the trust advantage of a hotel has been neutralized, allowing homestays to compete on new vectors, including convenience, cost, and environmental factors. It turns out homestays are quite competitive indeed: to return to my personal anecdote, I am living in a beautiful, remodeled one bedroom apartment in one of the best neighborhoods in this city, and paying a fraction of the cost of a mid-tier hotel for the privilege.

This is what it takes to succeed in hard mode: Airbnb took a core differentiator of hotels — trust, a differentiator that OTAs depended on — and digitized it. But, critically, that digitization and resultant commoditization happened only on Airbnb, and was thus captured exclusively by the company. This, by extension, is what the comparison to OTAs misses: Airbnb is not riding the same wave that Booking et al did a decade ago; it is instead undertaking something far more ambitious, creating its own wave where none previously existed.

DoorDash and Selection

DoorDash has been playing on hard mode as well: while a company like Domino’s created its own standardized, commoditized product designed for delivery, now with tech on top, DoorDash has undertaken the more Herculean task of creating a three-sided market of restaurants, drivers, and customers. This is the ultimate example of seeking to “make it up in volume”; the company explains in its S-1:

Our local logistics platform benefits from three powerful virtuous cycles:

  • Local Network Effects: Our ability to attract more merchants, including local favorites and national brands, creates more selection in our Marketplace, driving more consumer engagement, and in turn, more sales for merchants on our platform. Our strong national merchant footprint enables us to launch new markets and quickly establish a critical mass of merchants and Dashers, driving strong consumer adoption.

  • Economies of Scale: As more consumers join our local logistics platform and their engagement increases, our entire platform benefits from higher order volume, which means more revenue for local businesses and more opportunities for Dashers to work and increase their earnings. This, in turn, attracts Dashers to our local logistics platform, which allows for faster and more efficient fulfillment of orders for consumers.

  • Increasing Brand Affinity: Both our local network effects and economies of scale lead to more merchants, consumers, and Dashers that utilize our local logistics platform. As we scale, we continue to invest in improving our offerings for merchants, selection, experience, and value for consumers, and earnings opportunities for Dashers. By improving the benefits of our local logistics platform for each of our three constituencies, our network continues to grow and we benefit from increased brand awareness and positive brand affinity. With increased brand affinity, we expect that we will enjoy lower acquisition costs for all three constituencies in the long term.

DoorDash's flywheel

We have been successful in becoming the category leader in U.S. local food delivery logistics because of the value we create for merchants, consumers, and Dashers. DoorDash only works if it works for merchants, consumers, and Dashers, and we continually strive to improve how we serve all constituents.

DoorDash’s success relative to its competitors, particularly UberEats, is noteworthy:

We believe that the value we deliver to merchants, consumers, and Dashers is a key reason why we have become the largest and fastest growing business in the U.S. local food delivery logistics category, with 50% U.S. category share and 58% category share in suburban markets.

DoorDash versus the competition

What made DoorDash different from UberEats is that the former focused on maximum merchant selection and suburban markets, while the latter initially prioritized efficient delivery in urban areas. The problem for UberEats, though, is that it was not competing only with DoorDash, but also with local delivery networks and the shop with carryout right down the street. DoorDash, meanwhile, was creating an entirely new market in places filled with little else other than the aforementioned Domino’s, which would always be far more efficient, with far less choice.

To put it another way, whereas Airbnb digitized trust, DoorDash digitized the urban experience of a wide selection of options and relative convenience for a suburban population that had the added benefit of large order sizes and convenient parking. And now, given the fact that both restaurants and drivers can multi-home, DoorDash can increasingly rely on its dominant share of customers to drive the other two sides of its market.


This isn’t all there is to say about these two companies: both deserve deeper dives into their financials on one hand, and a consideration of their broader societal impact on the other.

What both companies represent, though, is what it means to play on hard mode. Neither lodging nor logistics is inherently digital; both companies had to make them so, creating new markets that didn’t previously exist. That both Airbnb and DoorDash have done so to a sufficient degree to go public is not only impressive, but will increasingly be a roadmap for new startups, and a model for how the Internet will transform more and more components of the “real” world.

Apple’s Shifting Differentiation

If you ask Apple — or watch their seemingly never-ending series of events — they will happily tell you exactly what the company’s differentiation is based on: the integration of hardware and software.

This integration is at the core of Apple’s incredibly successful business model: the company makes the majority of its money by selling hardware; other manufacturers can, at least in theory, create similar hardware, which should lead to commoditization, but only Apple’s hardware runs its proprietary operating systems.

Of course software is even more commoditizable than hardware: once written, software can be duplicated endlessly, which means its marginal cost of production is zero. This is why many software-based companies are focused on serving as large a market as possible, the better to leverage their investments in creating the software in the first place. However, zero marginal cost is not the only inherent quality of software: it is also infinitely customizable, which means that Apple can create something truly unique, and by tying said software to its hardware, make its hardware equally unique as well, allowing it to charge a sustainable premium.

This is, to be sure, a simplistic view of Apple: many aspects of its software are commoditized, often to Apple’s benefit, while many aspects of its hardware are differentiated. What is fascinating is that while modern Apple is indeed characterized by the integration of hardware and software, the balance of which differentiates the other has shifted over time, culminating in yesterday’s announcement of new Macs powered by Apple Silicon.

Apple 1.0: Software Over Hardware

When Steve Jobs returned to Apple in 1997, the company was famously in terrible financial shape; unsurprisingly the company’s computer lineup was in terrible shape as well: too many models that were too unremarkable. The only differences from PCs were an operating system that was technically obsolete, PowerPC processors that were falling behind x86, and higher prices. Not exactly a winning combination!

Jobs made a number of changes in short order: he killed off the Macintosh clone market, re-asserting Apple’s integrated business model; he dramatically simplified the product lineup; and, having found a promising young designer already working at Apple named Jony Ive, he put all of the company’s efforts behind the iMac. This was truly a product where the hardware carried the software; the iMac was a cultural phenomenon, not because of Classic Mac OS’s ease-of-use, and certainly not because of its lack of memory protection, but simply because the hardware was so simple and so adorable.

Foxtrot and the iMac

OS X brought software to the forefront, delivering not simply a technically sound operating system, but one that was based on Unix, making it particularly attractive to developers. And, on the consumer side, Apple released iLife, a suite of applications that made a Mac useful for normal users. I myself bought my first Mac in this era because I wanted to use GarageBand; 16 years on and my musical ambitions are abandoned, but my Mac usage remains.

By that point I was buying a Mac despite its hardware: while my iBook was attractive enough, its processor was a Motorola G4 that was not remotely competitive with Intel’s x86 processors; the following year Jobs made the then-shocking-but-in-retrospect-obvious decision to shift Macs to Intel processors. In this case having the same hardware as everyone else in the industry would be a big win for Apple, the better to let their burgeoning software differentiation shine.

Apple 2.0: The Apex of Integration

Meanwhile, Apple had an explosive hit on its hands with the iPod, which combined beautiful hardware and superior storage capacity with iTunes, software that offloaded the complexity of managing your music to your far more capable Mac and, starting in 2003, your PC; notably Apple avoided the trap of integrating hardware (the iPod) with hardware (the Mac), which would have handicapped the former to prop up the latter. Instead the company took advantage of the flexibility of software to port iTunes to Windows.

The iPhone followed the path blazed by the iPod: while the first few versions of the iPhone were remarkably limited in their user-facing software capabilities, that was acceptable because much of that complexity was offloaded to the PC or Mac you plugged it into. To that point, much of the software work had gone into making the iPhone usable on hardware that was barely good enough; RIM famously thought Jobs was lying about the iPhone’s capabilities at launch.

Over time the iPhone would gradually wean itself off of iTunes and the need to sync with a PC or Mac, making itself a standalone computer in its own right; it was also on its way to being the most valuable product in history. This was the ultimate in integration, both in terms of how the product functioned, and also in the business model that integration unlocked.

Apple 3.0: Hardware Over Software

Fifteen years on from the PowerPC-to-Intel transition, and Apple’s software differentiation is the smallest it has been since the dawn of OS X. Windows has a Subsystem for Linux, which, combined with the company’s laser focus on developers, makes Microsoft products increasingly attractive for software development. Meanwhile, most customers use web apps on their computers, PC or Mac. There has been an explosion in creativity, but that explosion has occurred on smartphones, and is centered around distribution channels, not one’s personal photo or movie library.

Those distribution channels and the various apps customers use to create and consume are available on both leading platforms, iOS and Android. I personally feel that the iPhone retains an advantage in the smoothness of its interface and quality of its apps, but Android is more flexible and well-suited to power users, and much better integrated with Google’s superior web services; there are strong arguments to be made for both ecosystems.

Where the iPhone is truly differentiated is in hardware: Apple has — for now — the best camera system, and has had for years the best system-on-a-chip. These two differentiators are related: smartphone cameras are not simply about lenses and sensors, but also about how the resultant image is processed; that involves both software and the processor, and what is notable about smartphone cameras is that Google’s photo-processing software is generally thought to be superior. What makes the iPhone a better camera, though, is its chip.

Apple Silicon and Sketch

It is difficult to overstate just how far ahead Apple’s A-series of smartphone chips is relative to the competition; AnandTech found that the A14 delivered nearly double the performance of its closest competitors for the same amount of power — indeed, the A14’s only true competitor was last year’s A13. At least, that is, as far as mobile is concerned; the most noteworthy graph from that AnandTech article is about how the A14 stacks up against those same Intel chips that power Macs:

AnandTech charts A-series chips versus Intel chips

Whilst in the past 5 years Intel has managed to increase their best single-thread performance by about 28%, Apple has managed to improve their designs by 198%, or 2.98x (let’s call it 3x) the performance of the Apple A9 of late 2015.

Apple’s performance trajectory and unquestioned execution over these years is what has made Apple Silicon a reality today. Anybody looking at the absurdness of that graph will realise that there simply was no other choice but for Apple to ditch Intel and x86 in favour of their own in-house microarchitecture – staying par for the course would have meant stagnation and worse consumer products.

Today’s announcements only covered Apple’s laptop-class Apple Silicon, whilst we don’t know the details at time of writing as to what Apple will be presenting, Apple’s enormous power efficiency advantage means that the new chip will be able to offer either vastly increased battery life, and/or, vastly increased performance, compared to the current Intel MacBook line-up.
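To put AnandTech’s figures in annualized terms (my own arithmetic, using only the numbers quoted above): a 28% improvement over five years versus a 198% improvement implies

$$1.28^{1/5} \approx 1.05 \qquad \text{versus} \qquad 2.98^{1/5} \approx 1.24$$

That is, roughly 5% single-threaded performance growth per year for Intel against roughly 24% per year for Apple; compounding that gap for five years is what produced the crossover.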

What makes the timing of this move ideal from Apple’s perspective is not simply that this is the year that A-series chips are surpassing Intel’s, but also the Mac’s slipping software differentiation. Sketch, makers of the eponymous vector graphics app, wrote, on the occasion of their 10th anniversary, a paean to Mac apps:

Ten years after the first release of Sketch, a lot has changed. The design tools space has grown. Our amazing community has, too. Even macOS itself has evolved. But one thing has remained the same: our love for developing a truly native Mac app. Native apps bring so many benefits — from personalization and performance to familiarity and flexibility. And while we’re always working hard to make Cloud an amazing space to collaborate, we still believe the Mac is the perfect place to let your ideas and imagination flourish.

The fly in Sketch’s celebratory ointment is that phrase “even macOS itself has evolved”; the truth is that most of the macOS changes over Sketch’s lifetime — which started with Snow Leopard, regarded by many (including yours truly) as the best version of OS X — have been at best cosmetic, at worst clumsy attempts to protect novice users that often got in the way of power users.

Meanwhile, it is the cloud that is the real problem facing Sketch: Figma, which is built from the ground-up as a collaborative web app, is taking the design world by storm, because rock-solid collaboration with good enough web apps is more important for teams than tacked-on collaboration with native software built for the platform.

Sketch, to be sure, bears the most responsibility for its struggles; frankly, that native app piece reads like a refusal to face its fate. Apple, though, shares a lot of the blame: imagine if instead of effectively forcing Sketch out of the App Store with its zealous approach to security, Apple had evolved AppKit, macOS’s framework for building applications, to provide built-in support for collaboration and live-editing.

Instead the future is web apps, with all of the performance hurdles they entail, which is why, from Apple’s perspective, the A-series is arriving just in time. Figma in Electron may destroy your battery, but that destruction will take twice as long, if not more, with an A-series chip inside!

Integration Wins Again

This isn’t the first time I have noted that Apple is inclined to fix ecosystem problems with hardware; five years ago, after the launch of the iPad Pro, I wrote in From Products to Platforms:

Note that phrase: “How could we take the iPad even further?” Cook’s assumption is that the iPad problem is Apple’s problem, and given that Apple is a company that makes hardware products, Cook’s solution is, well, a new product.

My contention, though, is that when it comes to the iPad Apple’s product development hammer is not enough. Cook described the iPad as “A simple multi-touch piece of glass that instantly transforms into virtually anything that you want it to be”; the transformation of glass is what happens when you open an app. One moment your iPad is a music studio, the next a canvas, the next a spreadsheet, the next a game. The vast majority of these apps, though, are made by 3rd-party developers, which means, by extension, 3rd-party developers are even more important to the success of the iPad than Apple is: Apple provides the glass, developers provide the experience.

The iPad has since recovered from its 2017 nadir in sales, but seems locked in at around 8% of Apple’s revenue, a far cry from the 20% share it had in its first year, when it looked set to rival the iPhone; I remain convinced that the lack of a thriving productivity software market that treated the iPad like the unique device Jobs thought it was, instead of a laptop replacement, is the biggest reason why.

Perhaps Apple Silicon in Macs will turn out better: it is possible that Apple’s chip team is so far ahead of the competition, not just in 2020, but particularly as it develops even more powerful versions of Apple Silicon, that the commoditization of software inherent in web apps will work in Apple’s favor, just as its move to Intel commoditized hardware, highlighting Apple’s then-software advantage in the 00s.

Apple is pricing these new Macs as if that is the case: the M1 probably costs around $75 (an educated guess), which is less than the Intel chips it replaces, but Apple is mostly holding the line on prices (the new Mac Mini is $100 cheaper, but also has significantly less I/O). That suggests the company believes it can take both share and margin, and it’s a reasonable bet from my perspective. The company has the best chips in the world, and you have to buy the entire integrated widget to get them.

I wrote a follow-up to this article in this Daily Update.

Is the Internet Different?

Tim Wu has a new piece up on Medium called Ben Thompson’s “Stratechery”; the subtitle, I think, is more descriptive of Wu’s premise:

Smart, but a little too much Kool-Aid

Said premise is in the second paragraph:

Thompson has more recently begun to pronounce and analyze in the field of tech antitrust, and here he is on less solid ground. I appreciate that deep industry expertise is important in his area, especially, say, when designing remedies that make sense. Lacking a background in law or economics is not disqualifying. Nonetheless, I’d say Thompson’s readers are at risk of being misled if they rely too much on what he has to say about tech antitrust. For, as we shall see, his analysis relies too much on an idiosyncratic “digital markets are fundamentally different” thesis that really doesn’t hold up too well. Stated simply, I’d say he’s inducing his readers to drink too much of his “aggregation theory” Kool-Aid, as opposed to encouraging them to think more broadly or read more deeply to understand a slightly messier reality than he presents.

I appreciate Wu’s article, in terms of its specifics (which I disagree with), its goals (which are implied), and its means (which I value). Time to drink some Kool-Aid!

(Orange-flavored, of course).

Wu’s Argument

Wu’s primary focus is my recent piece United States v. Google:

According to Thompson, the Google case needs be understood primarily through what he calls aggregation theory, which is something of a specialized version of what economists call a two-sided markets theory. His theory asserts that 1) the quality of the user experience, rather than control over distribution, is what determines the winners in digital markets; and 2) a lead based on quality is self-reenforcing, because either more suppliers are attracted or the winner, with more customers, gets more feedback on what makes for a better product. (For those with a background in economics, Thompson’s aggregation theory is basically a mixture of a two-sided market theory with some positive feedback loop stuff thrown in.) Thompson says that “aggregators” (platforms, in economic, if not technological, parlance) are in this manner different than traditional monopolists, for they “win by building ever better products for consumers”…

The problem is that his aggregation theory isn’t aspirational. Instead, it is presented as a description of how the internet has “fundamentally changed the plane of competition” in a world where “on the internet everything is just zero marginal bits.” It also takes as its assumptions: “Zero distribution costs. Zero marginal costs. Zero transactions.” In that, in some ways, it is like the older economic models from the 1960s, except that they were at least billed as models, not depictions of reality.

Wu appears to be serious in his statement that assertions about the importance of zero marginal costs are best understood by looking to the past; his analogy for Aggregation Theory reaches back to the 1920s:

Here’s my send-up of aggregation theory: Imagine this is the 1920s and we were speaking of the invention of brand advertising, and someone says, “whichever brand has the most people attracting it will create a buzz that further favors the winner. Hence, traditional metrics of competition are out the window.” I think we’d all agree that brand matters, and indeed the invention of powerful brands did change competition. But it might be a little too easy to think competition actually has changed forever. And we can see Thompson falling into the novelty trap by asserting things like “the internet has made transaction costs zero” — a sentence that would make any serious economist howl with laughter.

I can’t speak for serious economists — I’ve generally enjoyed my interactions with them, and have been treated with nothing but respect while presenting the idea of Aggregation Theory — but the only thing I find humorous here is the idea that the Internet has not had a massive impact on transactions costs.

Transactions Costs

Consider various iterations of two-sided markets, of which Wu believes the companies I call Aggregators are a not particularly special version. At the most basic level you might have something like a neighborhood flea market: on one side the market is a place for people to sell things, and on the other a place to buy things. Ultimately, though, every transaction entails sourcing supply, acquiring a customer, and executing the transaction itself. All three of those activities are costly.

Over the course of the 20th century, larger and larger firms became more and more efficient at managing these transaction costs. The Great Atlantic & Pacific Tea Company, more commonly known as A&P, was the best example of this. The company expanded rapidly in the late 1800s, which was critical in acquiring ever more customers (the company was also a pioneer in advertising and using low prices on select items to get customers into stores); meanwhile, A&P built a back-end operation to match, vertically integrating into being a wholesaler, particularly of its own private label goods, which, combined with A&P’s scale, helped it deliver on those low prices. By 1930 the company had 16,000 stores doing nearly $3 billion in sales, accounting for a 10% share of nationwide grocery sales.

By 1950 A&P’s market share peaked at 15%, although by that time A&P had transitioned to fewer, larger stores (around 4,500); the company was also facing an antitrust lawsuit, which it would eventually settle with a favorable consent decree. What ultimately doomed A&P, though, was its inability to adjust to a grocery market increasingly dominated by national brands advertised on television, along with uncompetitive labor costs and a failure to expand from city centers to the exploding suburbs. Each of these entailed higher transaction costs, and A&P couldn’t bear them.

A few decades later Walmart would follow in A&P’s footsteps as far as dominance is concerned, although its strategy started with exurbs and suburbs and worked backwards; the fundamental limitations of needing to open stores to acquire customers, build out logistical networks to acquire and distribute goods at scale, and actually stock shelves and check out customers remained, though. Walmart too has reached about 15% market share (albeit of a larger general merchandise market).

Amazon, meanwhile, has leveraged the Internet to dramatically decrease its customer acquisition costs in particular: the fundamental insight driving the retailer is that on the Internet shelf space is both infinite and available to anyone with an Internet connection; the company is still smaller than Walmart — 5% of general merchandise last year, although that number surely made a huge leap because of the pandemic — but it got there much more quickly: Amazon is only 26 years old, while Walmart is 58.

Still, as Wu notes, Amazon has plenty of transaction costs of its own:

Here is the danger: If you think competition is all about flavor and buzz (in the 1890s) or Thompson’s aggregation theory (right now), you might end up overlooking all of the other strategies and factors that could also lead to a lasting advantage. Consider Amazon. Thompson says that “the internet has made distribution (of digital goods) free.” But, as implied, that hasn’t made the distribution of physical goods free. And that is why a company like Amazon can, and has, gained a major advantage by building up a large physical infrastructure (warehouses), not unlike a steel producer in the 20th century, and strongly relying on a loyalty program (Prime). So, it turns out Amazon’s competitive advantage isn’t all about the fact that “on the internet everything is just zero marginal bits.”

I completely agree; that’s why I have stated — contra Wu’s assertion — that Amazon is not an Aggregator. I mentioned Amazon specifically in 2017’s Defining Aggregators:

Aggregators have all three of the following characteristics; the absence of any one of them can result in a very successful business (in the case of Apple, arguably the most successful business in history), but it means said company is not an Aggregator.

Direct Relationship with Users

This point is straight-forward, yet the linchpin on which everything else rests: Aggregators have a direct relationship with users. This may be a payment-based relationship, an account-based one, or simply one based on regular usage (think Google and non-logged in users).

Zero Marginal Costs For Serving Users

Companies traditionally have had to incur (up to) three types of marginal costs when it comes to serving users/customers directly.

  • The cost of goods sold (COGS), that is, the cost of producing an item or providing a service
  • Distribution costs, that is the cost of getting an item to the customer (usually via retail) or facilitating the provision of a service (usually via real estate)
  • Transaction costs, that is the cost of executing a transaction for a good or service, providing customer service, etc.

Aggregators incur none of these costs:

  • The goods “sold” by an Aggregator are digital and thus have zero marginal costs (they may, of course, have significant fixed costs)
  • These digital goods are delivered via the Internet, which results in zero distribution costs
  • Transactions are handled automatically through automatic account management, credit card payments, etc.

This characteristic means that businesses like Apple hardware and Amazon’s traditional retail operations are not Aggregators; both bear significant costs in serving the marginal customer (and, in the case of Amazon in particular, have achieved such scale that the service’s relative cost of distribution is actually a moat).

Demand-driven Multi-sided Networks with Decreasing Acquisition Costs

Because Aggregators deal with digital goods, there is an abundance of supply; that means users reap value through discovery and curation, and most Aggregators get started by delivering superior discovery.

Then, once an Aggregator has gained some number of end users, suppliers will come onto the Aggregator’s platform on the Aggregator’s terms, effectively commoditizing and modularizing themselves. Those additional suppliers then make the Aggregator more attractive to more users, which in turn draws more suppliers, in a virtuous cycle.

This means that for Aggregators, customer acquisition costs decrease over time; marginal customers are attracted to the platform by virtue of the increasing number of suppliers. This further means that Aggregators enjoy winner-take-all effects: since the value of an Aggregator to end users is continually increasing it is exceedingly difficult for competitors to take away users or win new ones.

This is in contrast to non-Aggregator and non-platform companies that face increasing customer acquisition costs as their user base grows. That is because the initial customers are often a perfect product-market fit; each marginal customer fits a bit less well, though, which means the surplus value from the product decreases as well and eventually turns negative. Generally speaking, any business that creates its customer value in-house is not an Aggregator, because eventually its customer acquisition costs will limit its growth potential.
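
These cost dynamics are straightforward to sketch numerically. The toy model below, with entirely invented parameters (none of these figures describe any real company), shows why a business that pays to serve each marginal customer behaves so differently at scale from one whose costs are effectively all fixed:

```python
# Toy model (invented parameters) of the cost structures described above:
# a physical retailer pays COGS, distribution, and transaction costs for
# every marginal customer, while an Aggregator's costs are all fixed.

def retailer_cost(customers: int) -> float:
    fixed = 10_000_000                                 # stores, warehouses, etc.
    cogs, distribution, transaction = 20.0, 5.0, 2.0   # per-customer costs
    return fixed + customers * (cogs + distribution + transaction)

def aggregator_cost(customers: int) -> float:
    return 500_000_000                                 # data centers: massive, but fixed

for n in (10_000, 1_000_000, 100_000_000):
    print(f"{n:>11,} customers | retailer cost/customer: ${retailer_cost(n) / n:>8,.2f}"
          f" | aggregator cost/customer: ${aggregator_cost(n) / n:>9,.2f}")
```

The retailer’s cost per customer can never fall below its marginal cost ($27 in this sketch), while the Aggregator’s cost per customer falls towards zero as its userbase grows (the point footnote 2 below makes about fixed costs per user being infinitesimal).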

Google is the canonical example of this definition, and the difference from Amazon, much less non-Internet two-sided markets, is significant. Start with the transaction costs: while scaling an Internet service is a profoundly difficult thing to do, requiring tremendous ingenuity, invention, and investment, the marginal transaction costs for serving one additional customer are zero. That is why Google, from the moment it launched, could be used by anyone in the world. Like Amazon, the company didn’t need to build out physical stores, but unlike Amazon, the company didn’t need to build out delivery infrastructure either. And unlike every retailer in existence it didn’t need to pay for supply. And — this is the part that makes Google truly unique — it didn’t even need to generate supply. The web already existed!

This is why Google can achieve 88 percent market share in the U.S. search market (according to the Department of Justice lawsuit), and a similar level of share around the world. The company’s scalability is effectively infinite, because serving additional customers is a function of fixed costs, not transaction costs; it really is not comparable to Amazon at all in this regard, as the companies’ respective market shares demonstrate.

The same reality applies to Google’s marginal costs (including distribution); while Google incurs tremendous fixed costs for its data centers and networking, any one search is “free”, including Google accepting the search term, computing the result, and delivering it to the user. Moreover, this same principle applies to Google’s advertising business: the vast majority of advertising on Google is acquired via self-service portals that price ads automatically via real-time auctions. Yes, the infrastructure necessary to enable this business requires substantial investment, but the only transaction costs on any one specific advertising purchase are credit card fees.

This is a business that requires more analysis than calling it a “two-sided market…with some positive feedback loop stuff thrown in”; for one, I would argue that all two-sided markets have positive feedback loops. Any market that touches the physical world, though, accumulates an ever-increasing number of tiny costs along the way, whether that be labor costs, shipping costs, rent costs, etc.; moreover, the logistical challenges entailed in managing those costs incur their own cost in managerial complexity, and every investment made to overcome those challenges becomes a sunk cost, making it difficult to adjust when the market changes (this is particularly acute because these investments take time to make).

Google, on the other hand, faces none of these natural drags on scale. More search means better search, thanks to the ongoing feedback of billions of users ranking every Google search result (by clicking on the best result); more advertisers means better advertising results, for the same reason. This matters greatly for antitrust because at some point you need a theory of harm: how exactly is Google making things worse for users or advertisers?

Switching Costs

This is also why I disagree with Wu’s characterization of switching costs:

Whether this is core to his theory or not, Thompson also takes a highly anti-empirical approach to switching costs. He endorses the old 1990s idea that “competition is just one click away,” which may have been true in 1999, but that can’t be taken seriously now — if what he means is that the costs of leaving Google or Amazon or Facebook are close to zero. The real question is whether there are, for the average person, costs to switching from Facebook or Google to use something else — leaving behind Gmail, friends, and so on. The assertion that those costs are near zero is magical thinking. Indeed, one of Google’s most important strategies over this decade — its tell — has been to increase those switching costs, those barriers to entry.

Consider Google specifically: the company’s core product, Search, has for its supply the open web. Indeed, that is what let the company operate at scale, instantly, in a way no other company ever has. It follows, though, that every other search engine — including Bing, DuckDuckGo, etc. — has access to the exact same supply.

Admittedly, Google does have its own local results in particular; Yelp has an entire website complaining about this fact, arguing that the search engine should be forced to include third-party content in its local search results because those results are better for consumers. That, though, is a mark in the competition’s favor: is Wu’s position that Google’s allegedly inferior local search results are lock-in?

Meanwhile, I am puzzled by the reference to Gmail; it is unclear to me why it is a competition concern when a user chooses to use a free email service that is quite obviously locked to the service provider. The relevant market here is not Gmail, it is email, and not only is there a huge amount of competition for hosted email, it is fairly simple to set up your own email server. Critically, there is absolutely nothing Google can do to stop you from doing so, even if they wanted to.
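
To make that concrete: where a domain’s email is delivered is determined by DNS records that the domain owner, and no one else, controls. The sketch below uses the third-party dnspython package (and gmail.com purely as an example domain) to look up those records; pointing your own domain at a different provider, or at a self-hosted server, is just a matter of changing them:

```python
# Minimal illustration: email routing is just a DNS MX record that the
# domain owner controls, not something locked to any one provider.
# Requires the third-party dnspython package: pip install dnspython
import dns.resolver

answers = dns.resolver.resolve("gmail.com", "MX")
for record in sorted(answers, key=lambda r: r.preference):
    print(f"preference {record.preference}: mail handled by {record.exchange}")
```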

This drives at the biggest reason why I believe a distinct definition for “Aggregators” is important; while Wu casually conflates Aggregators and platforms (“in economic, if not technological, parlance”), I believe the distinction is substantial and crucial. I explore that distinction at length in A Framework for Regulating Competition on the Internet, but the critical point goes back to the email example:

  • Platforms are essential to their value chains. You can’t have a Windows app, for example, without the Windows API.
  • Aggregators, in contrast, are not essential, but they are convenient. You can go to a website directly — just type in its URL — but for most consumers, for most pieces of information, it is far easier to search.

What is fascinating about many of Google’s most ardent critics is that they themselves aspire to be Aggregators. Yelp, for example, doesn’t operate any local businesses. It doesn’t prepare the food, or cut the hair, or teach the class: its business model is to aggregate so many users that local businesses feel compelled to be on Yelp, the better to reach more customers than they could on their own. The same applies to Trip Advisor, or Expedia, or any other vertical search company. All of those sites are only a click away.

And, critically, so are the entities that actually provide the services or information that the user is seeking. Airlines and hotels invest heavily in loyalty programs, for example, because they want their best customers to come to them directly, not via an Aggregator, whether that Aggregator be Google or Booking.com. That not only sounds like competition, it also sounds like an exceptionally customer-friendly outcome.

The Missing Intersection

Where I think Wu and many tech critics go wrong is missing how the questions of zero marginal costs and zero switching costs intersect. First, because Wu does not believe that Google is unique as far as scalability is concerned, he appears to assume that the company must be doing something nefarious to command such market share. And, by the same token, there must be some sort of unfair lock-in, because again, companies ought not be so dominant. This is a sure recipe for lazy arguments that end up criminalizing the basics of business. Wu, for his part, frames the legal question this way:

How does all this relate to antitrust? Antitrust should be dealing with the reality of anticompetitive behavior in markets, not ideals of how companies work. And it is the difficult job of the law to determine which of these durable advantages just described are part of fair competition (for example, a better user experience) and which are not (for example, buying out dangerous rivals, or exclusionary deals that keep out competitors)…

We may summarize the problem for Thompson this way: Why, exactly, did Google pay Apple billions to gain control over distribution rights? And why, to bring the law into it, hasn’t Google settled the case? If aggregation theory is right — if competition has changed in the digital market and the best user experience wins — then Google doesn’t need to spend that money.

I honestly don’t think this is too complicated: defaults do matter, and given the fact that Google makes somewhere north of $250 in revenue per U.S. user, it is well worth sharing some of that revenue to ensure it gets as many users as possible. Consider iOS specifically:

  • According to the Department of Justice, “This agreement covers roughly 36 percent of all general search queries in the United States”; assuming that share of search queries is in line with share of revenue (which is almost certainly not the case, given the relative spending power of Apple’s userbase), the agreement covers $26.9 billion worth of revenue.
  • Further assume that, were Google not the default, the company would lose 25% of the Apple device searches it would otherwise capture; this equates to a revenue loss of $6.7 billion for the U.S. alone (see the sketch below), and again, this number is almost certainly conservative given the relative spending power of Apple users.
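
The arithmetic in those two bullets is worth spelling out. Here is the back-of-the-envelope version; every input is a rough assumption from the text, not a reported figure:

```python
# Back-of-the-envelope math for the two bullets above; every input is a
# rough assumption from the text, not a reported figure.
us_search_revenue = 74.7e9   # implied by "north of $250" per user across ~300M Americans
apple_query_share = 0.36     # DOJ: share of U.S. general search queries under the agreement
covered_revenue = apple_query_share * us_search_revenue
print(f"Revenue covered by the agreement: ${covered_revenue / 1e9:.1f}B")   # ~$26.9B

lost_share = 0.25            # assumed share of Apple-device searches lost without default status
print(f"Revenue at risk in the U.S.:      ${lost_share * covered_revenue / 1e9:.1f}B")  # ~$6.7B
```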

Moreover, those searches would go to Google’s competitors, not only giving valuable data that would help make their search engines better, but also increasing the efficiency and relative attractiveness of their ad products. I do still believe that Google would continue to win on the merits, but it would be more costly, not less.

Which, of course, is why I support the Department of Justice’s lawsuit, and why I have been outspoken about acquisitions by Aggregators. Aggregators already have intrinsic advantages given the nature of costs on the Internet; I don’t believe they should be able to augment those advantages with contracts or by acquiring customers (I do not, however, favor a ban on other types of acquisitions).

The difference I have with Wu, as far as I can tell, is that I see these agreements and acquisitions as frosting on an Aggregation cake, as opposed to the fundamental drivers of their dominance. Google isn’t dominant because it broke the law; it is (arguably) breaking the law well after its dominance was established, and that distinction matters when it comes to crafting remedies and regulations that actually work.

Differentiation and Gatekeepers

The most disappointing part of Wu’s essay, though, at least on a personal level, is the conclusion:

This may be too much for some readers, but a last problem with aggregation theory is that its “winner take all” assertion assumes away the importance of differentiated user preferences. In other words, it tends to assume that there is one “user experience” that is preferred by everyone, and by depending on feedback, the product can be improved to match that…

Perhaps Thompson has addressed this somewhere, but I thought it important to point out. The model only works well, I think, either when consumers have identical preferences or when they want the greatest number of suppliers for some reason, or maybe when consumers value convenience even over what they’d call their own stated preferences (the so-called tyranny of convenience).

The very premise of this site is that the Internet takes preference differentiation to the extreme, resulting in never-ending niches that can be profitably filled. Moreover, this isn’t simply about small websites: just last month I wrote about how Disney is pursuing an integration strategy that is an antidote to Aggregators like Google and Facebook, and how that can be a model for other media companies. The Article included this distinction:

Aggregators are content agnostic. Integrators are predicated on differentiation.

Facebook reduces all content to similarly sized rectangles in your feed: a deeply reported investigative report is given the same prominence and visual presentation as photos of your classmate’s baby; all that Facebook cares about is keeping you engaged. Content created by Disney, on the other hand, must be unique to Disney, and memorable, as it is the linchpin for their entire business.

So yes, I have “addressed this somewhere” — I addressed it three weeks ago. Contrary to Wu’s caricature of me, I don’t believe that Aggregation Theory applies to everything, but the things to which it does apply — like when “consumers have identical preferences or when they want the greatest number of suppliers” for something like search results — it matters a great deal. To that end, it seems rich to criticize me for “false confidence” and an unwillingness to “think more broadly or read more deeply” when one can’t be bothered to scroll down my home page.

Then again, perhaps an honest debate isn’t the goal. Wu foreshadowed his essay on Twitter:

Wu added, in a tweet he seems to have deleted, that my writing is “quack medicine”:

Tim Wu's tweet calling Stratechery "Quack Medicine"

This casts Wu’s comment that “Lacking a background in law or economics is not disqualifying” in a different, more backhanded, light. For decades antitrust was indeed limited to those with a background in law or economics; that was the only way politicians, attorneys general, judges, general counsels, or the media would pay attention to what you had to say. The media in particular had a monopoly on the dissemination of information, and credentials were the way to get into their channel. It’s hard to escape the sense that a breakdown in gatekeepers bothers Wu.

And, by the same token, perhaps this is why I do believe that the Internet is something profoundly different; it has certainly made my career far different than it might have been were I a decade or two older. And, by extension, perhaps this is why I can see the benefit of Aggregators: Google and Facebook and Twitter may have been terrible for traditional media companies whose business models depended on controlling distribution, but they are fantastic for giving consumers exactly what they want, including a perhaps heretical view of antitrust.

Truth Seeking

This does raise the question as to why you should believe me, or anything else you read on the Internet. This interaction arose in response to Wu’s initial tweet:

My response:

This is why, whatever Wu’s motivations, I appreciated his piece. I do disagree, in part because I don’t think Wu understands my argument, but that’s ok: it’s an opportunity to make my argument in a different way, as I tried to do today, and perhaps change his mind, or yours. Or perhaps I failed, and you agree with Wu that I have created a theory where one ought not exist. Again, that’s fine: now you know what you believe better than you did before this piece, and perhaps you will view my other writing with increased skepticism. That’s a good thing!

More broadly, the pre-Internet world, governed as it was by gatekeepers, was certainly a more unified one, at least as far as conventional wisdom was concerned — this applied to law and economics just as much as anything else. At the same time, that does not mean the pre-Internet world had a better overarching grasp on the truth, given how much more difficult it was for dissenting voices to gain distribution. If I happen to be correct about Aggregation Theory, and that the way to understand Google depends on more than “two-sided market theory with some positive feedback loop stuff thrown in”, then I would like to think the fact that Stratechery can exist, without anyone’s permission, is a good thing.

After all, I am concerned about just how powerful these companies are. I am not intrinsically opposed to regulation — I believe in democratic oversight — but the risk of getting regulation wrong is that companies like Google become more entrenched, not less. This is why I objected to GDPR before it came into force, and why I spend time writing about antitrust (which, I might add, is not a traffic driver!). Getting the law and the economics right is important, particularly if the Internet challenges the fundamental assumptions underlying them.

  1. Please forgive me for the long excerpt, but despite Wu’s exhortation that one needs to read deeply to understand these issues, it appears he has not done me the same honor
  2. And yes, in the very long run, all fixed costs are marginal costs; that said, while the amount of capital costs for Aggregators is massive, their userbase is so large that even over the long run the fixed costs per user are infinitesimal, particularly relative to revenue generated
  3. In terms of the marginal customer; in aggregate there are of course significant bandwidth costs, but see the previous footnote
  4. Credit card fees are a significant transaction cost that does limit some types of businesses, but will generally be ignored in this analysis

United States v. Google

So it finally happened: the U.S. Department of Justice has filed a lawsuit against Google, alleging anticompetitive behavior under Section 2 of the Sherman Antitrust Act. And, as far as I can tell, everyone is disappointed in the DOJ’s case; Nilay Patel’s tweets were representative:

I can’t speak to Barr’s motivations for launching this lawsuit now, although it has been frustrating to see the degree to which antitrust seems to have been politicized, particularly the AT&T-Time Warner case; I can understand the instinctual skepticism and suspicion that this was timed for the election.

That noted, I think the conventional wisdom about the specifics of this lawsuit is mistaken: I believe the particulars of the Justice Department’s complaint have been foreshadowed for a long time, and make for a case stronger than most of Europe’s; if the lawsuit fails in court — as it very well may — it also points to where Congress should act to restrain the largest companies in the world.

Delrahim’s Preview

On June 11, 2019, Assistant Attorney General Makan Delrahim gave a speech to the Antitrust New Frontiers Conference in Israel that laid out the conceptual framework within which this lawsuit fits (Delrahim recused himself from this case specifically as he previously worked on Google’s DoubleClick acquisition). The most interesting part of the speech was focused on the history of antitrust, starting with Standard Oil:

Standard Oil acquired many refineries in the late 19th century. Refiners that would not sell were underpriced and driven out of the market. Price-cutting is the essence of competition, of course, but the Standard Oil case and later Supreme Court cases helped establish what would become settled law: there are some things that a monopolist cannot do. A company does not ordinarily violate the antitrust laws for merely exercising legitimately gained market power. But even if a company achieves monopoly position through legitimate means, it cannot take actions that do not advance plausible business goals but rather are designed to make it harder for competitors to catch up.

Later in the speech Delrahim specifically called out exclusivity agreements as an example of this illegal behavior:

Exclusivity is another important category of potentially anticompetitive conduct. The Antitrust Division has had a long history of analyzing exclusive conduct in traditional industries under both Sections 1 and 2 of the Sherman Act. Generally speaking, an exclusivity agreement is an agreement in which a firm requires its customers to buy exclusively from it, or its suppliers to sell exclusively to it. There are variations of this restraint, such as requirements contracts or volume discounts.

To be sure, in some circumstances, these can be procompetitive, especially where they enable OEMs and retailers to maximize output and overcome free-riding by contractual partners. In digital markets, they can be beneficial to new entrants, particularly in markets characterized by network effects and a dominant incumbent. They also can be anticompetitive, however, where a dominant firm uses exclusive dealing to prevent entry or diminish the ability of rivals to achieve necessary scale, thereby substantially foreclosing competition. This is true in digital markets as well.

As I noted at the time, this speech was clearly a very big deal:

Stepping back, the first and most important takeaway from this speech is that this focus on tech companies does not feel like a top-down directive from President Trump to focus on political enemies (like the AT&T case did). This is a substantive and far-reaching overview of why tech is worthy of being investigated, and my estimate as to whether an antitrust case will happen has increased considerably.

That case was filed yesterday: contrary to the complaints of many Google critics and competitors, including the European Union in the Google Shopping case, it is not focused on the Search Engine Results Pages (SERP), and, contrary to the (still ongoing) investigation from 50 state and territory attorneys general, it is not focused on Google’s ad business. The focus is narrow, and in line with Delrahim’s framework: Google may have earned its position honestly, but it is maintaining it illegally, in large part by paying off distributors.

The DOJ’s Case, and Google’s Response

The core of the DOJ’s argument is at the beginning of the complaint; this excerpt is a bit long, but that is because these five paragraphs contain basically the entire case:

For a general search engine, by far the most effective means of distribution is to be the preset default general search engine for mobile and computer search access points. Even where users can change the default, they rarely do. This leaves the preset default general search engine with de facto exclusivity. As Google itself has recognized, this is particularly true on mobile devices, where defaults are especially sticky.

For years, Google has entered into exclusionary agreements, including tying arrangements, and engaged in anticompetitive conduct to lock up distribution channels and block rivals. Google pays billions of dollars each year to distributors — including popular-device manufacturers such as Apple, LG, Motorola, and Samsung; major U.S. wireless carriers such as AT&T, T-Mobile, and Verizon; and browser developers such as Mozilla, Opera, and UCWeb — to secure default status for its general search engine and, in many cases, to specifically prohibit Google’s counterparties from dealing with Google’s competitors. Some of these agreements also require distributors to take a bundle of Google apps, including its search apps, and feature them on devices in prime positions where consumers are most likely to start their internet searches.

Google has thus foreclosed competition for internet search. General search engine competitors are denied vital distribution, scale, and product recognition—ensuring they have no real chance to challenge Google. Google is so dominant that “Google” is not only a noun to identify the company and the Google search engine but also a verb that means to search the internet.

Google monetizes this search monopoly in the markets for search advertising and general search text advertising, both of which Google has also monopolized for many years. Google uses consumer search queries and consumer information to sell advertising. In the United States, advertisers pay about $40 billion annually to place ads on Google’s search engine results page (SERP). It is these search advertising monopoly revenues that Google “shares” with distributors in return for commitments to favor Google’s search engine. These enormous payments create a strong disincentive for distributors to switch. The payments also raise barriers to entry for rivals—particularly for small, innovative search companies that cannot afford to pay a multi-billion-dollar entry fee. Through these exclusionary payoffs, and the other anticompetitive conduct described below, Google has created continuous and self-reinforcing monopolies in multiple markets.

Google’s anticompetitive practices are especially pernicious because they deny rivals scale to compete effectively. General search services, search advertising, and general search text advertising require complex algorithms that are constantly learning which organic results and ads best respond to user queries; the volume, variety, and velocity of data accelerates the automated learning of search and search advertising algorithms. When asked to name Google’s biggest strength in search, Google’s former CEO explained: “Scale is the key. We just have so much scale in terms of the data we can bring to bear.” By using distribution agreements to lock up scale for itself and deny it to others, Google unlawfully maintains its monopolies.

Google argues that this is “deeply flawed”; from the company’s blog:

The Department’s complaint relies on dubious antitrust arguments to criticize our efforts to make Google Search easily available to people.

Yes, like countless other businesses, we pay to promote our services, just like a cereal brand might pay a supermarket to stock its products at the end of a row or on a shelf at eye level. For digital services, when you first buy a device, it has a kind of home screen “eye level shelf.” On mobile, that shelf is controlled by Apple, as well as companies like AT&T, Verizon, Samsung and LG. On desktop computers, that shelf space is overwhelmingly controlled by Microsoft.

So, we negotiate agreements with many of those companies for eye-level shelf space. But let’s be clear—our competitors are readily available too, if you want to use them…

The bigger point is that people don’t use Google because they have to, they use it because they choose to. This isn’t the dial-up 1990s, when changing services was slow and difficult, and often required you to buy and install software with a CD-ROM. Today, you can easily download your choice of apps or change your default settings in a matter of seconds—faster than you can walk to another aisle in the grocery store.

Or, as Google founder Larry Page was fond of saying, “Competition is only a click away.”

Aggregation Theory

The problem with the vast majority of antitrust complaints about big tech generally, and online services specifically, is that Page is right. You may only have one choice of cable company or phone service or any number of physical goods and real-world services, but on the Internet everything is just zero marginal bits.

That, though, means there is an abundance of data, and Google helps consumers manage that abundance better than anyone. This, in turn, leads Google’s suppliers to work to make Google better — what is SEO but a collective effort by basically the entire Internet to ensure that Google’s search engine is as good as possible? — which attracts more consumers, which drives suppliers to work even harder in a virtuous cycle. Meanwhile, Google is collecting information from all of those consumers, particularly what results they click on for which searches, to continuously hone its accuracy and relevance, making the product that much better, attracting that many more end users, in another virtuous cycle:

Google benefits from two virtuous cycles
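
Those two loops are easy to caricature in code. The toy model below (all parameters invented for illustration) gives two search engines a small initial difference in users, lets usage data improve quality, and lets users disproportionately choose the better engine:

```python
# Toy model of the virtuous cycles pictured above (all parameters invented):
# usage generates data, data improves quality, and users disproportionately
# flock to the higher-quality engine.
shares = {"A": 55.0, "B": 45.0}    # initial split of 100 users; A starts slightly ahead
quality = {"A": 1.0, "B": 1.0}

for year in range(10):
    for engine in shares:
        quality[engine] += 0.01 * shares[engine]   # more queries -> more data -> better results
    denom = sum(q ** 4 for q in quality.values())  # exponent > 1: users prefer the better engine
    for engine in shares:
        shares[engine] = 100 * quality[engine] ** 4 / denom

print({k: round(v, 1) for k, v in shares.items()})  # A’s small edge compounds to ~98% share
```

The specific numbers are meaningless; the shape is the point: a small initial edge compounds into a dominant share, which is precisely the winner-take-all dynamic described above.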

One of the central ongoing projects of this site has been to argue that this phenomenon, which I call Aggregation Theory, is endemic to digital markets. From the original Aggregation Theory Article:

The value chain for any given consumer market is divided into three parts: suppliers, distributors, and consumers/users. The best way to make outsize profits in any of these markets is to either gain a horizontal monopoly in one of the three parts or to integrate two of the parts such that you have a competitive advantage in delivering a vertical solution. In the pre-Internet era the latter depended on controlling distribution…Note how the distributors in all of these industries integrated backwards into supply: there have always been far more users/consumers than suppliers, which means that in a world where transactions are costly owning the supplier relationship provides significantly more leverage.

The fundamental disruption of the Internet has been to turn this dynamic on its head. First, the Internet has made distribution (of digital goods) free, neutralizing the advantage that pre-Internet distributors leveraged to integrate with suppliers. Secondly, the Internet has made transaction costs zero, making it viable for a distributor to integrate forward with end users/consumers at scale.

This has fundamentally changed the plane of competition: no longer do distributors compete based upon exclusive supplier relationships, with consumers/users an afterthought. Instead, suppliers can be commoditized leaving consumers/users as a first order priority. By extension, this means that the most important factor determining success is the user experience: the best distributors/aggregators/market-makers win by providing the best experience, which earns them the most consumers/users, which attracts the most suppliers, which enhances the user experience in a virtuous cycle.

In short, increased digitization leads to increased centralization (the opposite of what many originally assumed about the Internet). It also provides a lot of consumer benefit — again, Aggregators win by building ever better products for consumers — which is why Aggregators are broadly popular in a way that traditional monopolists are not.

Unfortunately, too many antitrust-focused critiques of tech have missed this essential difference. I wrote about this mistake in Where Warren’s Wrong:

Perhaps it is best for Senator Warren’s argument that her article never does explain how these companies became so big, because the reason cuts at the core of her argument: Google, Facebook, Amazon, and Apple dominate because consumers like them. Each of them leveraged technology to solve unique user needs, acquired users, then leveraged those users to attract suppliers onto their platforms by choice, which attracted more users, creating a virtuous cycle that I have christened Aggregation Theory.

Aggregation Theory is the reason why all of these companies have escaped antitrust scrutiny to date in the U.S.: here antitrust law rests on the consumer welfare standard, and the entire reason why these companies succeed is because they deliver consumer benefit.

The European Union does have a different standard, rooted in a drive to preserve competition; given that the virtuous cycle described by Aggregation Theory does tend towards winner-take-all effects, it is not a surprise that Google in particular has faced multiple antitrust actions from the European Commission. Even the EU standard, though, struggles with the real consumer benefits delivered by Aggregators.

Consider the Google Shopping case: Google was found guilty of antitrust violations in a case that originated with a complaint from a shopping comparison site called Foundem, which objected to its site being buried when consumers were searching for items to buy. This complaint made no sense, as I explained in Ends, Means, and Antitrust:

If I search for a specific product, why would I not want to be shown that specific product? It frankly seems bizarre to argue that I would prefer to see links to shopping comparison sites; if that is what I wanted I would search for “Shopping Comparison Sites”, a request that Google is more than happy to fulfill:

Screenshot: Google search results for “Shopping Comparison Sites”

The European Commission is effectively arguing that Google is wrong by virtue of fulfilling my search request explicitly; apparently they should read my mind and serve up an answer (a shopping comparison site) that is in fact different from what I am requesting (a product)?

There is certainly an argument to be made that Google, not only in Shopping but also in verticals like local search, is choking off the websites on which Search relies by increasingly offering its own results. At the same time, there is absolutely nothing stopping customers from visiting those websites directly, or downloading their apps, bypassing Google completely. That consumers choose not to is not because Google is somehow restricting them — that is impossible! — but because they don’t want to. Is it really the purview of regulators to correct consumer choices willingly made?

Not only is that answer “no” for philosophical reasons, it should be “no” for pragmatic reasons, as the ongoing Google Shopping saga in Europe demonstrates. As I noted last December, the European Commission keeps changing its mind about remedies in that case, not because Google is being impertinent, but because seeking to undo an Aggregator by changing consumer preferences is like pushing on a string.

Regulating Aggregators

The solution, to be clear, is not simply throwing one’s hands up in the air and despairing that nothing can be done. It is nearly impossible to break up an Aggregator’s virtuous cycle once it is spinning, both because there isn’t a good legal case to do so (again, consumers are benefitting!), and because the cycle itself is so strong.

What regulators can do, though, is prevent Aggregators from artificially enhancing their natural advantages. From A Framework for Regulating Competition on the Internet:

Aggregators are different. Yes, they provide value to end users and to third-parties, at least for a time, but the incentives are warped from the beginning: 3rd-parties are not actually incentivized to serve users well, but rather to make the Aggregator happy. The implication from a societal perspective is that the economic impact of an Aggregator is much more self-contained than a platform, which means there is correspondingly less of a concern about limiting Aggregator growth.

Given that Aggregator power comes from controlling demand, regulators should look at the acquisition of other potential Aggregators with extreme skepticism. At the same time, whatever an Aggregator chooses to do on its own site or app is less important, because users and third parties can always go elsewhere, and if they don’t, that is because they are satisfied.

Here Facebook is a useful example: the company’s competitive position would be considerably shakier — and the consumer ad-supported ecosystem considerably healthier — if it had not acquired Instagram and WhatsApp, two other consumer-facing apps. At the same time, Facebook’s specific policies around what does or does not appear on its apps, or how it organizes its feed, have no reason to be a regulatory concern; I would argue the same thing when it comes to Google’s search results.

This same principle applies to contracts; from Where Warren’s Wrong:

Aggregators already have massive structural advantages in their value chains; to that end, there should be significantly more attention paid to market restrictions that are enforced by contracts.

Go back to Microsoft: in my estimation the most egregious antitrust violations committed by Microsoft were the restrictions placed on OEMs, both to ensure the installation of Internet Explorer as well as to suppress alternative operating systems. These were not violations rooted in market dominance, at least not directly, but rather contracts that OEMs could not afford to say ‘No’ to.

This is an area where the European Commission has gotten it right with regard to Google: as a condition of access to Google apps, most critically the Play Store, OEMs were prohibited from selling any phones with Android forks. This is a restriction on competition produced not by market dominance, at least not directly, but rather contracts that OEMs could not afford to say ‘No’ to.

This is also the issue with Apple’s App Store: the restriction on linking to a website for purchasing an ebook or subscribing to a streaming service is not rooted in any sort of technical limitation; rather, it is an arbitrary rule in the App Developer Agreement enforced by Apple’s App Review team. It has nothing to do with consumer security, and everything to do with Apple’s bottom line.

This is an area ripe for enhanced antitrust enforcement: these large tech companies have enough advantages, most of them earned through delivering what customers want, and abetted by the fundamental nature of zero marginal costs. Seeking to augment those advantages through contracts that suppliers can’t say ‘No’ to should be viewed with extreme skepticism.

This is exactly why I am so pleased to see how narrowly focused the Justice Department’s lawsuit is: instead of trying to argue that Google should not make search results better, the Justice Department is arguing that Google, given its inherent advantages as a monopoly, should have to win on the merits of its product, not the inevitably larger size of its revenue share agreements. In other words, Google can enjoy the natural fruits of being an Aggregator, it just can’t use artificial means — in this case contracts — to extend that inherent advantage.

Google’s Defense

Of course the fact that the Justice Department is focused on the correct issue with Google Search does not mean this lawsuit will be successful. Google’s defense is fairly straightforward.

First, Google will argue that its deals do not represent lock-in; users can change the default search option for the vast majority of touch points that Google pays for, and the fact they choose not to is because Google’s search is better. This last point is why the Justice Department took pains to emphasize the importance of data in improving search engine performance, because to the extent that is true, it means it is impossible for alternative search engines to catch up.

Second, Google will argue that its deals on non-Google platforms (i.e. Apple platforms and rival browsers) were because Google was willing to pay more than the competition, which is not only the open market at work, but also means that the associated products can be offered at a lower price or even for free. Here again the Justice Department is relying on a scale argument: Google’s monopoly in search means it has a monopoly in search advertising, which means it can afford to outbid all of its rivals.

Third, Google will argue that its deals for Android distribution, and the tying of search defaults to Google Play Services (including the Play Store), are not only pro-consumer in their benefits, including free services, less fragmentation, and a larger market for apps, but are also Google’s just reward for having invested in the creation of Android. This last argument didn’t work in front of the European Commission, but it may be more effective before a U.S. judge. The Justice Department, meanwhile, probably has the strongest case on this point: sure, Google created Android, but it also made the choice to open source it, and if its attempts to re-seize control through blatant tying aren’t illegal then it’s hard to imagine what could be.

Google and Apple

There is, though, one more complicating factor: the weird dynamics of Google’s relationship with Apple. From the lawsuit:

Google has had a series of search distribution agreements with Apple, effectively locking up one of the most significant distribution channels for general search engines. Apple operates a tightly controlled ecosystem and produces both the hardware and the operating system for its popular products. Apple does not license its operating systems to third-party manufacturers and controls preinstallation of all apps on its products. The Safari browser is the preinstalled default browser on Apple computer and mobile devices. Apple devices account for roughly 60 percent of mobile device usage in the United States. Apple’s Mac OS accounts for approximately 25 percent of the computer usage in the United States…

Apple has not developed and does not offer its own general search engine. Under the current agreement between Apple and Google, which has a multi-year term, Apple must make Google’s search engine the default for Safari, and use Google for Siri and Spotlight in response to general search queries. In exchange for this privileged access to Apple’s massive consumer base, Google pays Apple billions of dollars in advertising revenue each year, with public estimates ranging around $8–12 billion. The revenues Google shares with Apple make up approximately 15–20 percent of Apple’s worldwide net income.

Although it is possible to change the search default on Safari from Google to a competing general search engine, few people do, making Google the de facto exclusive general search engine. That is why Google pays Apple billions on a yearly basis for default status. Indeed, Google’s documents recognize that “Safari default is a significant revenue channel” and that losing the deal would fundamentally harm Google’s bottom line. Thus, Google views the prospect of losing default status on Apple devices as a “Code Red” scenario. In short, Google pays Apple billions to be the default search provider, in part, because Google knows the agreement increases the company’s valuable scale; this simultaneously denies that scale to rivals.
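
As a quick sanity check, the complaint’s two figures are mutually consistent; paired, they imply a net income range that brackets Apple’s actual results (roughly $55 billion in fiscal 2019):

```python
# Consistency check on the complaint's figures: $8-12B in payments that
# represent 15-20% of Apple's worldwide net income imply net income of
# $40B-$80B, which brackets Apple's reported ~$55B for fiscal 2019.
low_payment, high_payment = 8e9, 12e9
low_share, high_share = 0.15, 0.20
print(f"Implied net income: ${low_payment / high_share / 1e9:.0f}B "
      f"to ${high_payment / low_share / 1e9:.0f}B")   # $40B to $80B
```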

What is fascinating about this relationship is the number of ways in which this can be interpreted. From one perspective, the fact that Google has to pay so much to Apple is evidence that there is competition in the market. From another perspective, the fact that Apple can extract so much money from Google is evidence that it is Apple that has monopoly-like power over its value chain. A third perspective — surely the one endorsed by the Justice Department — is that the fact that Google values the default position so highly is ipso facto evidence that default position matters.

I suspect the true answer is a mixture of all three, with a dash of collusion. The lawsuit notes:

Apple’s RSA incentivizes Apple to push more and more search traffic to Google and accommodate Google’s strategy of denying scale to rivals. For example, in 2018, Apple’s and Google’s CEOs met to discuss how the companies could work together to drive search revenue growth. After the 2018 meeting, a senior Apple employee wrote to a Google counterpart: “Our vision is that we work as if we are one company.”

This gets at a larger problem in many tech markets: the tendency towards duopoly, which often lets one company cover for the other acting anti-competitively. In the case of Apple and Google:

  • Android’s presence in the market means that Apple can act anticompetitively with its App Store policies (which Google is happy to ape).
  • Apple’s privacy focus justifies decisions like limiting trackers, restricting cookies, and cutting off in-app analytics; Google happily follows Apple’s lead, which impacts Google’s advertising rivals far more than it does Google itself, improving the company’s relative competitive position.
  • Apple earns billions of dollars giving its customers the best default search experience, even as that ensures that Google will remain the best search engine (and raises questions about the sincerity of Apple’s privacy rhetoric).

This isn’t the only duopoly: Google and Facebook jointly dominate digital advertising, Microsoft and Google jointly dominate productivity applications, Microsoft and Amazon jointly dominate the public cloud, and Amazon and Google jointly dominate shopping searches. And, while all of these companies compete, those competitive forces have set nearly all of these duopolies into fairly stable positions that justify cooperation of the sort documented between Apple and Google, even as any one company alone is able to use its rival as justification for avoiding antitrust scrutiny.

Anti-Aggregation

After the House Antitrust Committee released its report on the tech industry I wrote that it was important to distinguish between Anti-monopoly and Antitrust:

That, though, is why it is a mistake to read the report as some sort of technocratic document. There are, to be sure, a lot of interesting facts that were dug up by the committee, and some bad behavior, which may or may not be anticompetitive in the legal sense. Certainly the companies would prefer to have a legalistic antitrust debate, for good reason: it is exceptionally difficult to make the case that any of these companies are causing consumer harm, which is the de facto standard for antitrust in the United States. Indeed, what makes Google’s contention that “The competition is only a click away” so infuriating is the fact it is true.

What matters more is the context laid out by Letwin: there is a strain of political thought in America, independent of political party (although traditionally associated with Democrats), that is inherently allergic to concentrated power — monopoly in the populist sense, if not the legal one.

Hatred of monopoly is one of the oldest American political habits and like most profound traditions, it consisted of an essentially permanent idea expressed differently at different times. “Monopoly”, as the word was used in America, meant at first a special legal privilege granted by the state; later it came more often to mean exclusive control that a few persons achieved by their own efforts; but it always meant some sort of unjustified power, especially one that raised obstacles to equality of opportunity.

In other words, this subcommittee report is simply a new expression of an old idea; the details matter less than the fact it exists.

This is going to be a critical sentiment to keep in mind as this case unfolds. If I had to bet on an outcome, I would bet on Google winning. Apple and everyone else are free to enter into whatever contracts they wish, and consumers are free to undo the defaults that flow from those contracts. Where is the harm?

Google, of course, wants the conversation to stop there: as long as the argument is a legal one, or even an economic one, Aggregators have powerful justifications for their dominance. That, though, is why the real question is a political one: are we as a society comfortable with a few big companies having such an outsized role in our lives? If the answer is no, the ultimate answer will not be through the courts, but through new laws for a new era. Anti-aggregation, not antitrust.

Twitter, Responsibility, and Accountability

On September 8, 2004, longtime CBS anchor Dan Rather presented four documents about then-President George W. Bush’s Texas Air National Guard service in the early 1970s. Bush’s service had been a matter of some controversy throughout the 2004 presidential campaign, particularly given the rather suspect way in which his military records were released: on multiple occasions records deemed to have been lost were mysteriously found weeks or months later, and only the Bush campaign had managed to produce records placing the President in Alabama in 1972, when the Democratic National Committee accused him of being AWOL.

Rather was certain he had a smoking gun: documents from Bush’s squadron commander that detailed top-down pressure to make Bush’s record appear more favorable than it actually was. The problem for Rather and CBS, though, was that the then-thriving blogosphere quickly demonstrated that the memos were almost certainly created in Microsoft Word with the default settings; the source for the memos, meanwhile, claimed to have burned them after faxing them to CBS, which, contra Rather’s claims, had not authenticated the documents.

CBS took the mistake seriously: the network commissioned a 234-page report by former U.S. Attorney General Dick Thornburgh and former Associated Press CEO Louis D. Boccardi that concluded:

The stated goal of CBS News is to have a reputation for journalism of the highest quality and unimpeachable integrity. To meet this objective, CBS News expects its personnel to adhere to published internal Standards based on two core principles: accuracy and fairness. The Panel finds that both the September 8 Segment itself and the statements and news reports by CBS News that followed the Segment failed to meet either of these core principles…

While the focus of the Panel’s investigation at the outset was on the Killian documents, the investigation quickly identified considerable and fundamental deficiencies relating to the reporting and production of the September 8 Segment and the statements and news reports during the Aftermath. These problems were caused primarily by a myopic zeal to be the first news organization to broadcast what was believed to be a new story about President Bush’s TexANG service, and the rigid and blind defense of the Segment after it aired despite numerous indications of its shortcomings.

CBS responded to the report by apologizing to viewers and firing the producer who obtained the forged documents; Rather, meanwhile, retired (it is widely assumed, but not confirmed, that his retirement was because of the controversy).

What is notable about this episode is that it was in many respects the pinnacle of how the Internet could make traditional media better: CBS got duped, both because it wanted to be first and also, one suspects, because of confirmation bias (Rather continued to argue that the story was true even if the documents were false), but in this case, there were many more outlets than the big three news networks, and those outlets, in a classic example of “more speech” leading to the truth, corrected the misinformation. And CBS, to its credit, corrected the record.

Of course bloggers are particularly well-suited to identifying the finer points of Times New Roman; they are not so strong at international reporting, particularly when it comes to a regime like Saddam Hussein’s. And, in the case of Judith Miller’s reporting about Iraq’s alleged weapons of mass destruction in the New York Times, this bit of misinformation worked to Bush’s benefit. As I wrote in 2016’s Fake News, this was particularly damaging:

Looking back, it’s impossible to say with certainty what role Miller’s stories played in the U.S.’s ill-fated decision to invade Iraq in 2003; the same sources feeding Miller were well-connected with the George W. Bush administration’s foreign policy team. Still, it meant something to have the New York Times backing them up, particularly for Democrats who may have been inclined to push back against Bush more aggressively. After all, the New York Times was not some fly-by-night operation, it was the preeminent newspaper in the country, and one generally thought to lean towards the left. Miller’s stories had a certain resonance by virtue of where they were published.

After Miller’s stories were shown to be bogus, New York Times editors wrote, in an unsigned editorial:

We have found a number of instances of coverage that was not as rigorous as it should have been. In some cases, information that was controversial then, and seems questionable now, was insufficiently qualified or allowed to stand unchallenged. Looking back, we wish we had been more aggressive in re-examining the claims as new evidence emerged — or failed to emerge…Complicating matters for journalists, the accounts of these exiles were often eagerly confirmed by United States officials convinced of the need to intervene in Iraq. Administration officials now acknowledge that they sometimes fell for misinformation from these exile sources. So did many news organizations — in particular, this one.

Miller would leave the paper a year later, and while the New York Times didn’t quite live up to CBS’s standard of accountability, at least the editors were honest that they had screwed up.

What Happened in 2016?

On October 6, 2020, Greg Bensinger, a member of the New York Times editorial board, exhorted people to Take a Social Media Break Until You’ve Voted:

Social media is a cesspool, and it’s getting worse by the day. In the past few months, outright lies about miracle coronavirus cures, mail-in voting fraud and Senator Kamala Harris’s eligibility for the vice presidency have gone viral on Facebook, Twitter, YouTube and elsewhere. People believe them. The platforms, however, aren’t taking the threat of spreading misinformation seriously enough ahead of the election. Again. That’s why I urge Americans to take a bold step: Stay off social media at least until you’ve voted.

This fits the New York Times’ preferred narrative of the 2016 election, in which social media generally, and Facebook specifically, was to blame for President Trump’s victory; the editorial board wrote in 2017:

Chastened by criticism that Facebook had turned a blind eye to Russia’s manipulation of the social network to interfere in the 2016 election, the company’s executives now acknowledge a need to do better and have promised to be more transparent about who is paying for political ads. That’s a good start, but more is required — of Facebook, of social media giants generally and of Congress.

Missing from the list was the New York Times itself, which, as the Columbia Journalism Review argued in 2017, led the way in making the 2016 election about anything but the issues:

In light of the stark policy choices facing voters in the 2016 election, it seems incredible that only five out of 150 front-page articles that The New York Times ran over the last, most critical months of the election, attempted to compare the candidate’s policies, while only 10 described the policies of either candidate in any detail.

In this context, 10 is an interesting figure because it is also the number of front-page stories the Times ran on the Hillary Clinton email scandal in just six days, from October 29 (the day after FBI Director James Comey announced his decision to reopen his investigation of possible wrongdoing by Clinton) through November 3, just five days before the election. When compared with the Times’s overall coverage of the campaign, the intensity of focus on this one issue is extraordinary. To reiterate, in just six days, The New York Times ran as many cover stories about Hillary Clinton’s emails as they did about all policy issues combined in the 69 days leading up to the election (and that does not include the three additional articles on October 18, and November 6 and 7, or the two articles on the emails taken from John Podesta). This intense focus on the email scandal cannot be written off as inconsequential: The Comey incident and its subsequent impact on Clinton’s approval rating among undecided voters could very well have tipped the election.

Nate Silver said that is exactly what happened:

Hillary Clinton would probably be president if FBI Director James Comey had not sent a letter to Congress on Oct. 28. The letter, which said the FBI had “learned of the existence of emails that appear to be pertinent to the investigation” into the private email server that Clinton used as secretary of state, upended the news cycle and soon halved Clinton’s lead in the polls, imperiling her position in the Electoral College…

And yet, from almost the moment that Trump won the White House, many mainstream journalists have been in denial about the impact of Comey’s letter. The article that led The New York Times’s website the morning after the election did not mention Comey or “FBI” even once — a bizarre development considering the dramatic headlines that the Times had given to the letter while the campaign was underway…The motivation for this seems fairly clear: If Comey’s letter altered the outcome of the election, the media may have some responsibility for the result. The story dominated news coverage for the better part of a week, drowning out other headlines, whether they were negative for Clinton (such as the news about impending Obamacare premium hikes) or problematic for Trump (such as his alleged ties to Russia). And yet, the story didn’t have a punchline: Two days before the election, Comey disclosed that the emails hadn’t turned up anything new.

The lack of a punchline applies to many of the Facebook controversies since then: the United Kingdom’s Information Commissioner’s Office determined that the only scandal about Cambridge Analytica was the degree to which it oversold its capabilities;1 the afore-linked report from the Columbia Journalism Review highlighted how infinitesimal the scale of Russian interference on the platform was, and research shows that “fake news” makes up a fraction of Americans’ media diet; more recent research about voting-fraud disinformation argued:

Contrary to the focus of most contemporary work on disinformation, our findings suggest that this highly effective disinformation campaign, with potentially profound effects for both participation in and the legitimacy of the 2020 election, was an elite-driven, mass-media led process. Social media played only a secondary and supportive role.

Just like her emails.

Twitter vs. the New York Post

It may seem odd to be re-litigating the 2016 election two weeks before the 2020 one, but last week’s decision by Facebook and Twitter to slow and ban, respectively, a sketchy story about Vice President Joe Biden’s son Hunter cannot be understood without looking back to 2016. The Verge reported on Thursday:

Facebook has reduced the reach of a New York Post story that makes disputed claims about Vice President Joe Biden’s son, Hunter, pending a fact-check review. “While I will intentionally not link to the New York Post, I want be clear that this story is eligible to be fact checked by Facebook’s third-party fact checking partners. In the meantime, we are reducing its distribution on our platform,” tweeted Facebook policy communications manager Andy Stone.

Twitter banned linking to the Post’s report, but it cited a different policy: the site’s rules against posting hacked material. “In line with our hacked materials Policy, as well as our approach to blocking URLs, we are taking action to block any links to or images of the material in question on Twitter,” a spokesperson told The Verge. Clicking existing links will direct users to a landing page that warns them it may violate Twitter guidelines.

Twitter’s actions became even more extreme over the next few days, including banning follow-up stories from the New York Post and locking the newspaper’s Twitter account, even as the company’s explanation for its actions continued to shift; eventually the company unblocked the article, claiming that the story was now widely spread on the Internet.

The story, to be clear, appears to be fabricated, and comically so; I say that because the story has been relentlessly investigated and criticized, both by other media outlets and by people on social media — much as Dan Rather’s George W. Bush story was (both the New York Times and New York Magazine reported that the author of the New York Post story refused to put his name on it because he was so skeptical of the story’s sourcing).

This, to be very clear, does not excuse Twitter’s actions; quite the opposite, in fact. After all, as Twitter itself acknowledged, banning the link did not stop the news of the story from spreading. If anything, Twitter’s actions had the opposite effect: they made the story spread far more widely than it would have otherwise, now with the added suspicion that the powers-that-be must want to hide something. What was overshadowed were all of the stories making the case that the story may have been fabricated.

Stories Versus STORIES

Here’s the thing about social media: simply facilitating the transmission of information doesn’t make a story into a STORY; look no further than the coronavirus. I wrote in Zero Trust Information in March:

It is hard to think of a better example than the last two months and the spread of COVID-19. From January on there has been extensive information about SARS-CoV-2 and COVID-19 shared on Twitter in particular, including supporting blog posts, and links to medical papers published at astounding speed, often in defiance of traditional media. In addition multiple experts including epidemiologists and public health officials have been offering up their opinions directly.

That post was published the morning of March 11, when COVID-19 was still being mostly ignored. That all changed 12 hours later when (1) NBA player Rudy Gobert tested positive, leading the NBA to suspend play, (2) Tom Hanks announced on Twitter that he had tested positive, and (3) President Trump announced that the U.S. was suspending travel from Europe. Suddenly the coronavirus was a STORY that everyone knew about.

What is important to note is that no new facts about the coronavirus had emerged in those 12 hours: what mattered was who was talking about them. The NBA is an institution, Hanks a beloved actor, and Trump the President. Each is capable of creating a STORY in a way you or I cannot.

This, more than any other media entity, has long applied to the New York Times. Maxwell McCombs and Donald Shaw, in their seminal paper The Agenda-Setting Function of Mass Media, documented how stories became STORIES; McCombs noted in a book about the same topic:

The general pattern found is that the agenda-setting influence of the New York Times was greater than that of the local newspaper, which, in turn, was greater than that of the national television news.

This effect has certainly been diminished by the rise of cable news and the Internet, but even then, outlets like Fox News are often defined by their opposition to the New York Times, which is itself a testament to the New York Times’s influence. It also explains why the Clinton email story, much like the Judith Miller stories a decade earlier, was such a big deal: it wasn’t simply that the New York Times was writing about a particular story, but that it was writing about a story that seemed favorable to Republicans and a problem for Democrats. Surely then it must be important and true?

The point of this is not to debate whether the email story, or the Hunter Biden laptop story, was true. Rather, it’s to establish that while social media publishes everything, from mountains of misinformation and conspiracy theories to critical information about an impending pandemic, making something matter requires more than manufacturing zero marginal cost content. The New York Times has that power by default, while Twitter and Facebook only have that power to the extent they do the opposite of what most expect from them (which is to act as a utility for the conveyance of information).

Responsibility and Accountability

Thus the look back to 2016: the reason why I believe that the New York Times was more responsible than Facebook for the election outcome is rooted in a belief that making stories matter is far more important and impactful than making up stories. Unfortunately, the way in which the generally accepted narrative about the election shifted to blaming Facebook led to a crisis of accountability.

The first way in which accountability can go wrong is in the lack of it. Unlike CBS with the Bush papers, or the New York Times with its Iraq reporting, there has been little, if any, self-acknowledgment of the role the media played in the 2016 election outcome. That is self-evidently bad.

What is more subtle, though, are the problems that come from mis-assigning accountability. Making social media the scapegoat for 2016, for example, not only meant that there was no accountability for media coverage, but also led directly to Twitter wielding power it never should have.

It is, to be clear, absolutely outrageous that a communications platform unilaterally decided what was or was not true, outright barring a major publication from accessing its services, even if the story was false. Twitter’s role with regards to the Hunter Biden story should have been to facilitate more information sharing, in this case to disprove the story, not to arbitrarily decide what was or was not true. No, this wasn’t a crime — last week was also a demonstration that no platform has a monopoly on the distribution of information — but it was deeply at odds with the fundamental principles that undergird a liberal democracy.

It was also monumentally stupid: I noted this summer that Republican suspicion of tech was rooted not in traditional economic arguments about antitrust, but rather in political concerns about a traditionally Democratic industry controlling communications. Twitter basically wrote the case study for why, if you are concerned about the political problems entailed by major platforms, tech needs more regulation.

At the same time, I can understand why Twitter acted; Casey Newton applauded the move on Platformer, writing:

In the run-up to the 2020 election, platforms have been preparing for all manner of threats. One that they have warned about with some frequency is the “hack and leak” operation. A hack-and-leak occurs when a bad actor steals sensitive information, manipulates it, and releases it in an effort to influence public opinion. The most famous hack-and-leak is the dissemination of Hillary Clinton’s stolen emails in 2016, which may have affected the outcome of the election.

A hack-and-leak works because it exploits journalists’ natural fondness for writing about secret documents, ensuring that they get wide coverage — sometimes before reporters have a chance to closely examine their provenance. (It turns out that basically all humans have a fondness for reading secret documents, and one reason hack-and-leaks seem particularly threatening in the age of social networks is that platform sharing mechanisms allow these stories to spread around the world more or less instantly.)

Because of the role they play in amplifying big stories, platforms have taken the prospect of a hack-and-leak on the eve of the election quite seriously. And so when the New York Post dropped its story about a laptop of dubious origin containing what purported to be incriminating documents related to Joe Biden and his son, the Spidey senses of platform integrity teams all began to tingle in harmony.

Here is the issue: Newton rightly connected the dots to the Clinton email scandal, but instead of pointing the finger where it belonged — at the journalists and publications that wrote about the emails incessantly — accountability was passed to the platforms, approvingly so:

In the run-up to the election, platforms have accepted two key responsibilities: to reduce the spread of harmful posts, and to reduce that spread quickly…A hack-and-leak operation represents one of the most difficult tests of this commitment — the operation is designed to spread far and wide long before all the real facts can be known.

There are cases in which I think platforms should act even faster than this — three hours is too long to decide whether a single tweet from a high-profile account violates policy, I think. But to identify a potential hack-and-leak operation and restrict it within a few hours, before it hits the Twitter Trending page, deserves some credit.

Again, to be perfectly clear, Twitter’s actions made the story far bigger than it would have been otherwise. To applaud their action is akin to giving a participation trophy to a team that finished in last place: are we cheering the effort, or the actual results? As long as we care more about the former than the latter, we are going to get more screw-ups from platforms looking to make up for mistakes that were never their responsibility in the first place.

This, though, is what I mean about the crisis of accountability. The reality is that tech has absolutely taken the criticisms about its role in 2016 to heart, and often to good effect: it is a good thing that Facebook is taking issues like foreign interference far more seriously than it did previously, and spending billions of dollars — to the point where it is seriously impacting its earnings — on security and moderation; Twitter too has gotten much more serious about policing its platform when it comes to everything from bots to abuse. Those are positives.

The problem is the temptation to go beyond the platform and start policing everyone else because you have been led to believe that you are accountable for everyone else’s failures. In fact, at the end of the day, it is the New York Post that is responsible for what it publishes, and it is a mistake for Twitter to take responsibility for that; the same applies to whatever is published by the New York Times, CBS, and any other media entity. Twitter (and Facebook) have an awesome amount of power, which means that both platforms need to be more cognizant not only of what they should seek to control, but also of what they should not. And, by extension, that is why it was so damaging to shift responsibility in 2016: you can understand why both platforms might feel compelled to control too much.

A Framework for Moderation

The framework I would suggest these platforms follow is based on one I developed last year in A Framework for Moderation:

It makes sense to think about these positions of the stack very differently: the top of the stack is about broadcasting — reaching as many people as possible — and while you may have the right to say anything you want, there is no right to be heard. Internet service providers, though, are about access — having the opportunity to speak or hear in the first place. In other words, the further down the stack, the more legality should be the sole criterion for moderation; the further up, the more discretion and even responsibility there should be for content:

A drawing of The Position In the Stack Matters for Moderation
The further down the stack, the more legality should be the sole criterion for moderation; the further up, the more discretion and even responsibility there should be for content

Note the implications for Facebook and YouTube in particular: their moderation decisions should not be viewed in the context of free speech, but rather as discretionary decisions made by managers seeking to attract the broadest customer base; the appropriate regulatory response, if one is appropriate, should be to push for more competition so that those dissatisfied with Facebook or Google’s moderation policies can go elsewhere.

The addition I would make to this framework is that responsibility accrues to the layer of agency, both as a matter of principle and of practicality. In other words, the New York Post is responsible for what it publishes, and not only should Twitter not decide whether or not that is acceptable as a matter of principle, it needs to recognize that doing so will not kill the story but rather make Twitter’s abuse of power the story. To go in the other direction — to make every part of the stack responsible for everything in the stack — is to inevitably end up in a world of ISP-level control over what we can or cannot see.

This is why I think that Facebook’s decision to slow the spread of this story, as opposed to outright censorship, was a reasonable one; spread is what Facebook has agency over, not necessarily the story itself. This is also why Facebook’s banning of QAnon is acceptable in a way Twitter’s link ban was not: the former happens on Facebook, and is thus Facebook’s responsibility.
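
To make the framework concrete, here is a minimal sketch of the idea in code; the layer names, examples, and rules are my own illustrative assumptions, not anything these companies actually implement:

```python
# A minimal sketch of the moderation framework: the further down the
# stack a service sits, the closer "illegal" should be to the only
# reason to act. Layers, examples, and latitudes are illustrative
# assumptions, not any company's actual policy.

STACK = [
    # (layer, example services, latitude for discretionary moderation)
    ("user-facing platform", "Facebook, YouTube, Twitter", "broad discretion"),
    ("app store",            "App Store, Play Store",      "some discretion"),
    ("infrastructure",       "hosting, CDNs",              "legality plus narrow exceptions"),
    ("internet service",     "ISPs",                       "legality only"),
]

def moderation_basis(layer: str) -> str:
    """Return the suggested basis for moderation at a given layer."""
    for name, _examples, latitude in STACK:
        if name == layer:
            return latitude
    raise ValueError(f"unknown layer: {layer}")

# The top of the stack is about being heard, so discretion is fine;
# the bottom is about access, so only legality should matter.
assert moderation_basis("internet service") == "legality only"
assert moderation_basis("user-facing platform") == "broad discretion"
```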

This also leads to two further points about conservative objections to Twitter’s actions specifically. First, removing Section 230 protections will not solve this problem, but rather make it worse: not only will Twitter and Facebook be more motivated to censor potential misinformation, but so will every level of the Internet stack.

At the same time, suspicion of tech’s power is clearly justified, and not only when it comes to Twitter and Facebook. Notice how Apple’s App Store and Google’s Play Store are increasingly levers for control, both by liberal democracies (as in the case of TikTok and WeChat) and authoritarian regimes (like Belarus or China). The former raises questions as to whether tech executives can be trusted with their power, and the latter as to whether they can resist its being used.

To that end, perhaps Twitter’s actions will ultimately prove to be a good thing, simply for the fact this power was so brazenly displayed on a story that was probably inconsequential.2 The best solution to too much power is to devolve it and increase competition, whether the power in question belongs to national anchors, national newspapers, or national water coolers, and it may be time to figure out what that devolution should look like.

  1. The New York Times has not, as far as I have seen, covered the ICO’s report, although it cited the scandal as evidence that Facebook needed regulation five days after the report’s release.
  2. My point with this phrase was to emphasize that the points I was seeking to make in this Article don’t rest on whether the story in question is true or false; I regret that I was not more precise in my language.

Disney and Integrators Versus Aggregators

Disney just announced that Soul, its next Pixar film, will be released exclusively on Disney+,1 setting it up to be the most idealized piece of Disney content ever.

I’m not referring to the actual movie, which looks great; rather, consider how Disney is poised to make money from Soul:

  • Disney will earn money from Disney+ subscribers, and keep 100% of the margin.
  • Disney will create Soul-derived merchandise, much of which it will sell through its stores and at its theme parks, and keep 100% of the margin.
  • Disney will create Soul-derived features at those theme parks, most of which it fully owns-and-operates, and keep 100% of the margin.

You get the picture. And, at every transaction along the way, Disney will build an ever fuller picture of its customers. Disney, as always, will be selling Disney — it is just getting better and better at it as it more fully integrates its entire value chain.

A World of Aggregators

A central focus of this site has been Aggregators, particularly Google and Facebook. Aggregators don’t make content, because they don’t need to. Rather, by providing functionality consumers value, they become the most efficient way to reach those same consumers, which means that creators bring their content to the Aggregators.

A drawing of The Internet Revolution

Suppose, for example, a publisher commissions a piece of content about bananas. Creating that content costs money, and the publisher is eager to recoup their costs. The publisher could charge for that piece of content, but then the publisher needs to sell it, and selling is difficult and expensive, increasing the payback threshold. What most publishers have done is make the bananas content freely available along with advertising, and then done their best to drive traffic to the content. For example, a publisher might work to ensure the bananas content ranks highly in a Google search result for “bananas”, or make it easy for readers to share the bananas content on social networks. Both of these strategies have the benefit of being free, so why not? And, if the content needs a little boost, advertising on both Google and Facebook is both inexpensive and easily measurable, making it possible to achieve a positive return on investment, at least on the advertising spend.

The problem for our bananas publisher, though, is that every publisher ends up with the same strategy, all competing with each other for both traffic and keywords. It is not long before a positive return on that commissioned content is too high a bar to meet, which means fewer high-quality bananas and a lot more stale banana bread.2
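
To make the economics concrete, here is a back-of-the-envelope sketch; every number in it is a made-up assumption, not data from any actual publisher:

```python
# Back-of-the-envelope economics for the hypothetical bananas article.
# Every figure here is an illustrative assumption.

content_cost = 500.00  # what the publisher paid for the piece
rpm = 10.00            # ad revenue per 1,000 pageviews

def pageviews_to_break_even(cost: float, rpm: float) -> float:
    """Pageviews needed before ad revenue covers the content cost."""
    return cost / rpm * 1000

print(pageviews_to_break_even(content_cost, rpm))  # 50,000 views

# Now suppose every publisher chases the same "bananas" keyword:
# organic traffic gets split, and any paid boost bids up the same
# ad auctions, so per-article returns collapse.
competitors = 10
organic_views = 100_000 / competitors     # 10,000 views each
ad_revenue = organic_views / 1000 * rpm   # $100
print(ad_revenue - content_cost)          # -$400: stale banana bread
```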

Integrating Niches

That leaves the world of Never-ending Niches that I wrote about earlier this year, and the strategy I just skimmed over — going directly to customers:

That left a single alternative: going around Google and Facebook and directly to users. That raises the question of what vectors “destination sites” — those that attract users directly, independent of the Aggregators — compete on. The obvious two candidates are focus and quality:

A drawing of the Vectors on Which Destination Sites Compete
Focus and quality as the determinants of success on the Internet

What is important to note, though, is that while quality is relatively binary, the number of ways to be focused — that is, the number of niches in the world — is effectively infinite; success, in other words, is about delivering superior quality in your niche — the former is defined by the latter.

A drawing of Every Niche Competes on its Own Terms
While quality is relatively binary, the number of ways to be focused — that is, the number of niches in the world — is effectively infinite; success, in other words, is about delivering superior quality in your niche — the former is defined by the latter.

This obviously isn’t a new concept to Stratechery readers — this is the entire strategic rationale of this site. Again, though, the fact that this is a one-person blog doesn’t mean that my competitive situation is any different from that of the New York Times or any other media entity on the Internet. In other words, to the extent that the New York Times has been successful online — and the company has been very successful indeed! — it follows that the company is well-placed in terms of both focus and quality, and in that order.

It’s important to note that simply being focused isn’t enough: companies that want to capture niches in Aggregator-dominated worlds need to pursue strategies that are in almost every respect orthogonal to Aggregators. Specifically, they need to focus on integration, and the preeminent example of this approach is Disney.

Disney and Differentiation

When Disney first unveiled Disney+ in 2019, I wrote in Disney and the Future of TV:

The best way to understand Disney+, which will cost only $6.99/month, starts with the name: this is a service that is not really about television, at least not directly, but rather about Disney itself. This famous chart created by Walt Disney himself remains as pertinent as ever:

Walt Disney's Disney Map

…This is the only appropriate context in which to think about Disney+. While obviously Disney+ will compete with Netflix for consumer attention, the goals of the two services are very different: for Netflix, streaming is its entire business, the sole driver of revenue and profit. Disney, meanwhile, obviously plans for Disney+ to be profitable — the company projects that the service will achieve profitability in 2024, and that includes transfer payments to Disney’s studios — but the larger project is Disney itself.

By controlling distribution of its content and going direct-to-consumer, Disney can deepen its already strong connections with customers in a way that benefits all parts of the business: movies can beget original content on Disney+ which begets new attractions at theme parks which begets merchandising opportunities which begets new movies, all building on each other like a cinematic universe in real life. Indeed, it is a testament to just how lucrative the traditional TV model is that it took so long for Disney to shift to this approach: it is a far better fit for their business in the long run than simply spreading content around to the highest bidder.

Disney+ has been a rare bright spot for Disney during the COVID pandemic, as so many other parts of the company, from cruise ships to theme parks to sports, are predicated on in-person interactions. That Disney+ existed, though, was not simply good fortune: it has been clear for years that the world was headed this way — the primary function of the coronavirus crisis has been to accelerate trends that were already underway — which is why Disney ultimately had no choice but to get into streaming.

Still, there were reasons for optimism even before the company launched Disney+: back in 2015, after Disney’s stock was pummeled when then-CEO Bob Iger acknowledged that cord-cutting was affecting ESPN, I argued that Disney would be OK:

That’s not to say that everything is rosy in pay-TV land. If anything, to fixate on the fate of ESPN is akin to journalism observers only caring about how the New York Times is managing the transition away from print to first the Internet and now mobile. Things aren’t going perfectly but the company is surviving and continues to produce an incredible amount of compelling journalism. However, things are considerably worse for regional papers without the cachet or resources of the Times: publications are going out of business all over the place, and the number of working journalists has been cut nearly in half over the last 25 years.

I suspect a similar shakeout is coming in TV: as the pay TV bundle erodes an entire slew of cable channels will wither away, their targeted content replaced by online video, particularly YouTube. Meanwhile there will be an intense competition waged by a few streaming giants…for consumer attention and dollars. That competition will largely work in the favor of content creators, who ultimately create the differentiation that end users are willing to pay for…Such an outcome should provide hope to content creators of all types: there is a way to escape from the commoditization effect of Aggregation Theory, and that is through differentiation. In other words, the more things change, the more they stay the same.

Capturing that differentiation via integration, though, takes time, and requires a change in culture.

The Page One Meeting

It’s not an accident that the New York Times made another appearance in an article about Disney; they are, from a certain perspective, running the same type of business: differentiated content that both accrues more to a brand than a specific creator and which increasingly monetizes from consumers directly.

By the time Disney faced its 2015 crisis the New York Times was well on its way to escaping its previous dependence on print advertising thanks to its burgeoning subscription business; even then, though, the last thing standing between the New York Times and its future was its own culture and traditions, particularly the obsession with Page One. Nikki Usher wrote in the Columbia Journalism Review in 2015:

The Innovation Report leaked in May detailed how the Times was stuck in a culture dominated by the print newspaper. And in my own research for my book Making News at The New York Times, as well as here on CJR and elsewhere, I’ve chronicled just how ingrained this print-first focus has been.

Even recently, when I was visiting the Times, a junior reporter showed me her Page One story from earlier in the week. She expressed how “amazing” it was that she got it on the front page, and how it would validate her abilities to the masthead editors. And she had a stack of a half dozen papers on her desk to save for posterity.

Part of the problem was that the website and other digital properties had essentially been running of their own accord, out of synch with the news judgment of what stories were considered most important by masthead editors. Instead, as my research revealed, a handful of people were in charge of what went up on the Web, and when. A single homepage editor sat next to another editor, and the discussion about what stories to place when and where on the Web was almost entirely decided by them…The sole discussion at Page One meetings about digital strategy has been a rundown of stories on the Web, with no discussion of timing, analytics, placement, or importance of these stories.

That led executive editor Dean Baquet to take a radical step: the Page One meeting would no longer discuss Page One. The New York Times wrote about itself:

The larger meeting will continue a yearslong evolution away from its front-page focus, responding to the needs of a constant news cycle. In recent years, Mr. Baquet had turned discussion toward story lines and trends, resources for breaking news, and rolling out enterprise and long-form stories in a reasoned way for digital users…But for the new age, Mr. Baquet has gone a step further, declaring that the big meeting — now held around a vast wooden table — will exclusively be a forum for planning coverage and for ranking items for digital display. One focus will be presentations for mobile devices, where more than half of Times readers now obtain their news.

It wasn’t enough to change the business model: Page One had so much gravity and prestige that the New York Times had to change how it worked to ensure it properly valued its future over its past.

Disney’s Reorganization

Yesterday Disney announced what it termed a Strategic Reorganization of Its Media and Entertainment Businesses:

In light of the tremendous success achieved to date in the Company’s direct-to-consumer business and to further accelerate its DTC strategy, The Walt Disney Company today announced a strategic reorganization of its media and entertainment businesses. Under the new structure, Disney’s world-class creative engines will focus on developing and producing original content for the Company’s streaming services, as well as for legacy platforms, while distribution and commercialization activities will be centralized into a single, global Media and Entertainment Distribution organization. The new Media and Entertainment Distribution group will be responsible for all monetization of content—both distribution and ad sales—and will oversee operations of the Company’s streaming services. It will also have sole P&L accountability for Disney’s media and entertainment businesses.

From a “people changing bosses” perspective, this seems like a relatively minor change; Media and Entertainment, which was relatively untouched in Disney’s last reorganization, has been split into three divisions (studios, general entertainment, and sports) that both make sense and, frankly, already existed.

That, though, is why the last line of that excerpt is so important: profit and loss responsibility is the ultimate indicator of control, which means that Media and Entertainment now answers to distribution, which has a clear mandate to emphasize streaming. From the announcement:

“Given the incredible success of Disney+ and our plans to accelerate our direct-to-consumer business, we are strategically positioning our Company to more effectively support our growth strategy and increase shareholder value,” [Disney CEO Bob] Chapek said. “Managing content creation distinct from distribution will allow us to be more effective and nimble in making the content consumers want most, delivered in the way they prefer to consume it. Our creative teams will concentrate on what they do best—making world-class, franchise-based content—while our newly centralized global distribution team will focus on delivering and monetizing that content in the most optimal way across all platforms, including Disney+, Hulu, ESPN+ and the coming Star international streaming service.”

In this view, Disney’s streaming services are NYTimes.com, and the company’s traditional outlets like TV and especially movie theaters are Page One: sure, they represent a past that is rapidly fading away, but it is a past with prestige, and it’s easy to see Disney’s content creators favoring them over streaming, particularly before COVID, when these channels still drove much of the company’s revenue and profits. Chapek, though, is not letting this crisis go to waste, taking the decision about where to display the company’s content away from its creators, as well as the responsibility to maximize short-term revenue and profits.

Disney's Reorganization
By changing profit and loss responsibilities Disney is ensuring the whole company is a beneficiary of its content efforts.

The payoff is in the long run: remember, Disney+ is about Disney as a whole, not just one particular line of business; achieving that singularity of focus across a company like Disney, though, means designing incentives and lines of accountability that reflect that integration focus.

Aggregators Versus Integrators

More broadly, the totality of Disney’s approach demonstrates how an Integrator ought to operate orthogonally to Aggregators in the world of content.

Aggregators are content agnostic. Integrators are predicated on differentiation.

Facebook reduces all content to similarly sized rectangles in your feed: a deeply reported investigative piece is given the same prominence and visual presentation as photos of your classmate’s baby; all that Facebook cares about is keeping you engaged. Content created by Disney, on the other hand, must be unique to Disney, and memorable, as it is the linchpin for their entire business.

Aggregators provide leverage. Integrators capture margin.

Modularized content creators, like our bananas publisher, spend money to create content and then seek to recoup their costs by spreading that content as far and wide as possible. Google and Facebook are the most efficient means of achieving this goal. Disney, though, is increasingly focused on capturing more and more margin from its differentiated content, both when it is created and for decades to come.

Aggregators seek to serve the maximum number of consumers. Integrators seek to monetize consumers to the maximum extent.

Google and Facebook are so attractive to content creators precisely because they reach so many consumers; a few pennies or dollars from billions of people is a tremendous amount of money. Disney, meanwhile, particularly as it restricts its content to its own services, is limiting the size of its addressable market, but increasing the amount of money it can make per user in the market that remains.

Aggregators commoditize creation. Integrators operationalize creation.

Google doesn’t care whence content comes; it simply wants content (this, unsurprisingly, has led to a whole host of businesses primarily predicated on being organic Google search results). Disney, meanwhile, wants to create differentiated content without unduly empowering individual content creators and giving them wholesale transfer pricing power; this leads the company to invest both in animation, which is wholly owned by Disney, and in franchises, which are bigger and more valuable than the actors that bring them to life.

Aggregators avoid internal integration. Integrators avoid internal aggregation.

Aggregators get themselves in strategic trouble when they leverage their horizontal services to differentiate their own attempts at integration (Google made this mistake with Android a decade ago). Integrators, on the other hand, get in trouble when they serve specific audiences that don’t accrue to the whole. This was the mistake the New York Times was making with their focus on the front page, and as long as Disney’s studio and media divisions were responsible for the company’s theater and TV business they would be incentivized to serve those pre-existing audiences instead of Disney’s overall strategic goals.
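
The reach-versus-monetization contrast above is easy to see with stylized numbers; the figures in this sketch are round illustrations chosen for clarity, not actual Google, Facebook, or Disney financials:

```python
# Stylized comparison of Aggregator and Integrator economics.
# All numbers are round, made-up illustrations.

def annual_revenue(users: float, arpu: float) -> float:
    """Revenue is users served times average revenue per user."""
    return users * arpu

aggregator = annual_revenue(users=2_000_000_000, arpu=10)  # pennies-to-dollars from billions
integrator = annual_revenue(users=100_000_000, arpu=200)   # subscriptions + merchandise + parks

print(f"aggregator: ${aggregator / 1e9:.0f}B")  # $20B from reach
print(f"integrator: ${integrator / 1e9:.0f}B")  # $20B from depth

# Same revenue, orthogonal strategies: one maximizes the number of
# users served, the other maximizes revenue per user served.
```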

Indy Integrators

In one of the above excerpts I put Stratechery in the same category as the New York Times; what is fascinating about the sorting effect that the Internet has on business models is that I could do the same thing with regard to Disney: yes, it is perhaps audacious to compare a one-person blog to the largest entertainment company the world has ever seen, but that is only because Aggregators are that much greater.

Just think about it: the success or failure of my business is predicated on differentiated content, high margins, high average revenue per customer, controlling content creation, and not being distracted by short-term money-making opportunities. All of this applies to the New York Times too: differentiated high-margin content, high prices, operationalized creation, and, at least for now, a combination of brand and pocketbook that is attracting and keeping stars at the expense of many other publications.

This model is still in its early days, but there is reason to be excited about the future. So much content in the analog era was predicated on reaching the mass-market consumer with lowest-common-denominator content; after all, there simply weren’t that many choices. Google and Facebook, like junk food purveyors leveraging our evolutionary impulse for high-calorie food, transformed that lowest-common-denominator approach into content strategies that increasingly scraped the barrel in terms of both quality and effort, simply because it was easier to make a living that way, at least for a while.

A long life, though, depends on healthy living, which in this case means building a business that doesn’t just produce differentiated content, but has an entire business model and integrated approach to match. The ultimate winners are the consumers who, yes, pay for the content, but happily so, because it is something they value. There is room for plenty more.

  1. In markets where Disney+ operates; it will be released theatrically at some point in others
  2. And, some would argue, a banana republic.

Anti-Monopoly vs. Antitrust

William Letwin, in Law and Economic Policy in America: The Evolution of the Sherman Antitrust Act, argued that the only way to understand the Sherman Antitrust Act, and by extension antitrust in America, was to understand an ancient strand of American politics:

Hatred of monopoly is one of the oldest American political habits and like most profound traditions, it consisted of an essentially permanent idea expressed differently at different times.

As Letwin notes, American distrust of monopolies had its roots in England and 1624’s Statute of Monopolies, which significantly constrained the ability of the King to grant exclusive privilege; colonial and state legislatures similarly passed laws restricting grants of exclusive power by governments, and while the Bill of Rights did not have an anti-monopoly provision (contra Thomas Jefferson’s wishes), one of the most divisive political questions for the first several decades of the United States was over the existence (or not) of a national bank, in large part because it was a government-granted monopoly.

Corporations were viewed with similar skepticism, for similar reasons. Letwin writes:

In America every corporation, whether or not it had an express monopoly, was considered monopolistic simply because it was a corporation. This was partly because all corporations before the end of the eighteenth century, and most of them before the Civil War, were chartered by special legislation. Each was authorized by a separate act that prescribed its distinctive organization and defined the rights and duties peculiar to it. No group of men could form a corporation unless the state legislature passed a special act in their favor, and those who succeeded were regarded as privileged above their fellows. The mere existence of a corporation was therefore proof that it was a monopoly.

The solution to this corporation-as-monopoly problem was not, as many critics demanded, the unwinding of corporations, or restrictions placed on new ones, but rather the opposite: starting in 1837 a wave of states passed laws allowing anyone to create a corporation, making the entire argument that corporations were by definition monopolies moot. However, Letwin argues that this only redirected America’s inherent anti-monopoly sentiment:

The distrust of corporations did not evaporate with the old complaint. After the middle of the nineteenth century new grounds were found for believing that corporations were monopolistic, and criticisms that had been subordinate now became prominent. They achieved their new importance partly because the fundamental American attitude toward government was changing at this time. The fear of oligarchy, which had been carried over from the colonial period and so carefully expressed in the Constitution, was subsiding. The fear of plutocracy, always present in some degree, grew sharper as Americans recognized the rapid growth of national wealth during and after the Civil War. Reasoning about monopolies accommodated itself to the new disposition: it was less often argued that monopolists would abolish representative government and more often they would use their wealth to make it serve their own interests.

The solution for this new wave of anti-monopolists would have been surprising to their forebears, who were so skeptical of government power:

The chief attacks on monopolies after the Civil War became more specific. They were no longer directed at incorporation itself or corporations in the mass, but more particularly against certain practices — above all, economic abuses — that were attributed to some corporations. No one could by this time reasonably want or hope to solve the problem by abolishing corporations or by making it easier to establish more of them. The idea therefore began to spread that the power and injurious behavior of monopolistic corporations should be controlled by government regulation.

This increased focus on economic abuses, which largely originated with the Granger movement among midwestern farmers upset about railroad rates, coincided with an explosion of trusts designed to circumvent state-specific laws about illegal restraints of trade in the late 1800s. Letwin explains:

More and more attention was devoted to combinations of industrial firms, all of which, however organized, came by the end of the [1880s] to be called trusts. Public prominence was achieved first by the Standard Oil Company, which by 1880 controlled much of the country’s petroleum refining…The Cotton Oil Trust was organized in 1884 and the Linseed Oil Trust in the following year…[1887] saw the formation of the Sugar and Whisky Trusts, which until the end of the century contended for the unpopularity only with Standard Oil. Others, affecting lesser industries or smaller markets, added to the list, which by the end of the year included the Envelope, Salt, Cordage, Oil-Cloth, Paving-Pitch, School-Slate, Chicago Gas, St. Louis Gas, and New York Meat trusts…

These trusts became a new target for an old sentiment:

If we are to credit the judgment of most contemporary observers, the public seems to have had no difficulty in identifying the trusts as the latest version of monopoly and in transferring to them the antipathy which by long usage it had cultivated against all monopolies. The great fervor against trusts in 1888…was simply a familiar feeling raised to a high pitch, intense because the speed with which new trusts were being hatched made it seem that they would overrun everything unless some remedy were found soon…

There were numerous objections to the trusts — complaints of a traditional sort as well as newer ones suited to the character of these particular monopolies. Trusts, it was said, threaten liberty, because they corrupted civil servants and bribed legislators; they enjoyed privileges such as protection by tariffs; they drove out competitors by lowering prices, victimized consumers by raising prices, defrauded investors by watering stocks, put laborers out of work by closing down plants, and somehow or other abused everyone.

This rhetoric may seem familiar. Yesterday the Democratic-controlled House Subcommittee on Antitrust, Commercial and Administrative Law of the Committee on the Judiciary released its majority staff report and recommendations from the subcommittee’s investigation of Google, Facebook, Amazon, and Apple, which stated on the first page:

They not only wield tremendous power, but they also abuse it by charging exorbitant fees, imposing oppressive contract terms, and extracting valuable data from the people and businesses that rely on them…Whether through self-preferencing, predatory pricing, or exclusionary conduct, the dominant platforms have exploited their power in order to become even more dominant.

To put it simply, companies that once were scrappy, underdog startups that challenged the status quo have become the kinds of monopolies we last saw in the era of oil barons and railroad tycoons…The effects of this significant and durable market power are costly. The Subcommittee’s series of hearings produced significant evidence that these firms wield their dominance in ways that erode entrepreneurship, degrade Americans’ privacy online, and undermine the vibrancy of the free and diverse press. The result is less innovation, fewer choices for consumers, and a weakened democracy.

Or, to put it in Letwin’s words, these four companies are “somehow or other abus[ing] everyone.”

The Subcommittee Report

Forgive the exceptionally long introduction to this Article, but it is in part a function of the fact that I have been writing about this topic for a very long time; while going through my archives I found a piece from 2017 entitled Everything is Changing; So Should Antitrust that seems particularly pertinent to many of the topics covered in the report, particularly the discussion of the slumping prospects of newspapers:

For long time Stratechery readers this analysis isn’t that novel; the shift in value chains that result from the Internet enabling zero distribution and zero transactional costs are the foundation of Aggregation Theory…There is another context, though: the increasing appreciation outside of technology of just how dominant companies like Google, Facebook, Amazon, and even Netflix have become, and more and more discussion about whether antitrust is the answer.

Here we are!

The problem is that much of this discussion is rooted in the old value chain, where power came from controlling distribution. What is critical to understand is that that world is fading away; the fundamental nature of the Internet is abundance, and the critical competency is discovery. Moreover, the platform that harnesses discovery also harnesses a virtuous cycle between users and suppliers that leads to a winner-take-all situation inherent in two-sided networks. In other words, to the extent these platforms are monopolies, said monopoly is much more akin to AT&T than it is to Standard Oil.

AT&T, though, at least beyond the network effects, is not a good comparison point either: telephone service required an actual telephone wire, which is to say that AT&T’s monopoly was also rooted in the physical world. What makes what I call Aggregators fundamentally different is the fact that controlling demand matters more than controlling supply.

This matters for three reasons:

  • First, the fact that newspapers, for example, or perhaps one day WPP, are being driven out of business is not a reason for antitrust action; their problem is that their business model is obsolete. The world has changed, and invoking regulation to try to change that reality is a terrible idea.
  • Second, the consumer-friendly approach of these platform companies is no accident: when market power comes from owning demand, then the way to gain power is to create a great experience for consumers. The casual way in which many antitrust crusaders ignore the fact that, for example, Amazon is genuinely beloved by consumers — and for good reason! — is frustrating intellectually and eye-rolling politically.
  • Third, the presence of these platforms creates incredible new opportunities for businesses that were never before possible. I already described how Dollar Shaving Club was enabled by platform companies; Amazon has also enabled a multitude of merchants, Facebook an entire ecosystem of apps and personalized startups, and Google every possible service under the sun.

In a 30-second commercial, of the sort that WPP might have made, drawing clear villains and easy narratives is valuable; the reality of Aggregators is far more complicated. That Google, Facebook, Amazon, and other platforms are as powerful as they are is not due to their having acted illegally but rather to the fundamental nature of the Internet and the way it has reorganized value chains in industry after industry.

This is the point where I thought much of the committee’s report went wrong. Monopolies were asserted with effectively zero evidence, and there was little to no mention of the positive impacts of these companies, even as basic business practices were described in the most sinister terms possible (Facebook and Instagram were accused of colluding?). Moreover, a lot of commentary felt stuck in 2012: Facebook forever competing against MySpace (but Instagram being a bargain was totally predictable!), Amazon against no one (Shopify was mentioned once), Google versus ten blue links, and Apple, well, they are in good shape: despite having arguably the most egregious practices under traditional antitrust law, the iPhone maker was the only company in the Executive Summary to be praised for its impact on society.1 In the committee’s telling, these companies are bad actors that do bad things, case closed.

That, though, is why it is a mistake to read the report as some sort of technocratic document. There are, to be sure, a lot of interesting facts that were dug up by the committee, and some bad behavior, which may or may not be anticompetitive in the legal sense. Certainly the companies would prefer to have a legalistic antitrust debate, for good reason: it is exceptionally difficult to make the case that any of these companies are causing consumer harm, which is the de facto standard for antitrust in the United States. Indeed, what makes Google’s contention that “The competition is only a click away” so infuriating is the fact it is true.

What matters more is the context laid out by Letwin: there is a strain of political thought in America, independent of political party (although traditionally associated with Democrats), that is inherently allergic to concentrated power — monopoly in the populist sense, if not the legal one. To more fully restate the first quote I used from Letwin:

Hatred of monopoly is one of the oldest American political habits and like most profound traditions, it consisted of an essentially permanent idea expressed differently at different times. “Monopoly”, as the word was used in America, meant at first a special legal privilege granted by the state; later it came more often to mean exclusive control that a few persons achieved by their own efforts; but it always meant some sort of unjustified power, especially one that raised obstacles to equality of opportunity.

In other words, this subcommittee report is simply a new expression of an old idea; the details matter less than the fact it exists.

The Future of Anti-Monopoly

This interpretation of the report means different things for different parties.

The committee specifically, and people concerned about tech power generally, should in my estimation spend a lot less time trying to shoehorn today’s tech companies, built to operate in a world of abundance, into antitrust laws built for a world of scarcity; from that earlier Stratechery article:

To that end any antitrust regulation, if it comes, needs a fresh approach rooted in the reality of the Internet. I agree that too much concentrated power has inherent problems; I also believe a structural incentive to provide a great customer experience, along with the potential to create completely new kinds of businesses, is worth preserving. Antitrust crusaders, to whom I am clearly sympathetic, ignore these realities at their political peril.

So much modern antitrust action against tech companies is like pushing on a string: the reason these companies have power is because so many customers choose to use them, and it is both difficult and probably unwise to try to regulate the individual choices of billions of users. At the same time, as I noted, I am sympathetic to the issue of just how much power these companies have: constraining that power, though, needs new laws that start with Internet assumptions, and anti-monopoly advocates would do well to focus on solutions that, instead of retracting privileges, extend them (a la the general incorporation laws of the 1800s).

These aren’t idle words: I have previously laid out ideas for regulating competition on the Internet. One thing that is critical is understanding that not all tech companies are the same: iOS and Android are traditional platforms, relatively well-served by traditional antitrust law (except for the fact that their duopoly helps both escape scrutiny together); Google and Facebook, though, are Aggregators, which require a different approach, one I laid out in 2019’s Where Warren’s Wrong. That approach includes a focus on acquisitions and anticompetitive contracts, which I was glad to see were also a focus of the committee (although I think the focus on small-scale acquisitions missed why acquisitions can be a good thing).

It is tech companies that face the most uncertainty: in the short term, this report will not result in any sort of meaningful legislation — this session of Congress is nearly over. Moreover, I would not be surprised to see tech companies step up their lobbying for privacy regulation, which in nearly all cases ends up being anticompetitive (exporting your list of friends, for example, is forbidden under legislation like GDPR). This isn’t the worst time to have a competition-oriented court case, either, given the current state of antitrust jurisprudence.

The big question is whether the status quo will change: right now the anti-monopolists are still a decided minority, at least as far as tech companies go. These four companies are amongst the most popular in the U.S., and that was before the pandemic, during which the tech industry kept the entire economy afloat for those with the luxury to complain about ads, and provided free entertainment for those without it.

At the same time, the political sands are shifting: most of the anti-monopolists are Democrats, but as I noted after the Committee’s hearing with the relevant CEOs, populist-oriented Republicans are extremely focused on the political power big companies inherently hold; it is not unrealistic to imagine these two populist strains fusing — likely in the Republican party — leaving tech companies flat-footed.

There would, to be sure, be a certain irony in a political realignment being what ultimately endangers these companies that appear entrenched for years to come; after all, it is technology itself that has already upended politics, and the upheaval may only be getting started.

  1. From the report:

    Apple’s mobile ecosystem has produced significant benefits to app developers and consumers. Launched in 2008, the App Store revolutionized software distribution on mobile devices, reducing barriers to entry for app developers and increasing the choices available to consumers.

    Again, there is not a single positive word about the other three companies in the executive summary.

2020 Bundles

If the famous Jim Barksdale quote is to be believed — the one about there only being two ways to make money in business, bundling and unbundling — then I am long past due for a follow-up to 2017’s The Great Unbundling. I wrote in the introduction:

To say that the Internet has changed the media business is so obvious it barely bears writing; the media business, though, is massive in scope, ranging from this site to The Walt Disney Company, with a multitude of formats, categories, and business models in between. And, it turns out that the impact of the Internet — and the outlook for the future — differs considerably depending on what part of the media industry you look at.

That article focused on three types of media: print, music, and TV (both broadcast and cable).

  • Print was completely unbundled and commoditized by the Google and Facebook Super Aggregators.
  • Music labels, thanks to the importance of their back catalogs, have been able to maintain their place — and profits — in the value chain, even as Apple and Spotify have grown their revenues.
  • TV has seen the jobs it has traditionally done become unbundled, with information going to Google, education to YouTube, story-telling to streaming, and escapism to everything from TikTok to video games to Netflix; sports, meanwhile, is well on its way to being the only reason to keep the traditional bundle.

What is critical to understand about Barksdale’s famous quote is that bundling and unbundling happen on different dimensions, and at different points in the value chain, often because of the transformative nature of technology. Netflix is an obvious example:

  • When the constraint on viewing video was the combination of time and having a dedicated cable or satellite dish, the bundle that emerged was a collection of independent networks (which could show different content at the same time) delivered over channels sold by the distributor that owned said cable or dish.
  • Streaming removed the constraint on time, which meant one single “network” — Netflix — could, at least in theory, contain all of the content. Therefore Netflix doesn’t contain different channels or networks, but rather a huge number of shows, some produced in house, some bought, and others “rented” from traditional networks.

In this case Netflix helped unbundle traditional TV, even as it created a new bundle of its own.

Looking at just TV, though, is insufficient when it comes to thinking about entertainment bundles broadly. Just listen to Netflix CEO Reed Hastings, who wrote in the company’s 2018 Q4 letter to shareholders:

In the US, we earn around 10% of television screen time and less than that of mobile screen time. In other countries, we earn a lower percentage of screen time due to lower penetration of our service. We earn consumer screen time, both mobile and television, away from a very broad set of competitors. We compete with (and lose to) Fortnite more than HBO. When YouTube went down globally for a few minutes in October, our viewing and signups spiked for that time. Hulu is small compared to YouTube for viewing time, and they are successful in the US, but non-existent in Canada, which creates a comparison point: our penetration in the two countries is pretty similar. There are thousands of competitors in this highly-fragmented market vying to entertain consumers and low barriers to entry for those with great experiences. Our growth is based on how good our experience is, compared to all the other screen time experiences from which consumers choose. Our focus is not on Disney+, Amazon or others, but on how we can improve our experience for our members.

It is tempting to dismiss Hastings’ assertion as being self-serving, particularly as Netflix is viewed as a dominant force in Hollywood, but what is striking about the bundles that have emerged over the last couple of years — and over the last couple of weeks! — is the degree to which they span different types of media, often with widely varying business goals.

Netflix: Bundle as Business Model

I already explained how Netflix’s bundle is a reorganization of the TV value chain enabled by the removal of linear time as a constraint (I first wrote about this in 2015’s Netflix and the Conservation of Attractive Profits). What is also worth highlighting, particularly for purposes of comparison below, is Netflix’s business model.

This one is quite straightforward: Netflix makes money by selling subscriptions to its bundle. Of course the execution of this strategy is considerably more complex. For example, Netflix focuses on evergreen content because an ever-growing catalog increases the attractiveness of Netflix to new subscribers, reducing customer acquisition costs. Netflix is also focused on quantity as much as quality, recognizing that people sometimes just want something to watch, and that there is value in being the default choice.

For purposes of this article, though, Netflix is straightforward:

  • Netflix bundles individual shows
  • Netflix’s business model is selling subscriptions to that bundle.

Other bundles are much less straightforward.

Disney+: Bundle as CRM

At first glance, Disney+ seems a lot like Netflix: pay a monthly price, and get access to a bunch of different shows. However, differences quickly become apparent:

  • Disney+ only has Disney (and Disney-owned 21st Century Fox) content, whereas Netflix has both its own content and content from other networks.
  • Disney+ is significantly cheaper than Netflix ($6.99 versus $12.99).
  • A Disney+ subscription does not necessarily include everything on Disney+; for example, last month Disney released Mulan on the service for an additional $29.99.

These differences make sense when you realize that Disney+ is not simply about earning subscription revenue; rather, it is a direct-to-consumer touchpoint for Disney’s entire business. I explained in Disney and the Future of TV:

While obviously Disney+ will compete with Netflix for consumer attention, the goals of the two services are very different: for Netflix, streaming is its entire business, the sole driver of revenue and profit. Disney, meanwhile, obviously plans for Disney+ to be profitable — the company projects that the service will achieve profitability in 2024, and that includes transfer payments to Disney’s studios — but the larger project is Disney itself.

By controlling distribution of its content and going direct-to-consumer, Disney can deepen its already strong connections with customers in a way that benefits all parts of the business: movies can beget original content on Disney+ which begets new attractions at theme parks which begets merchandising opportunities which begets new movies, all building on each other like a cinematic universe in real life. Indeed, it is a testament to just how lucrative the traditional TV model is that it took so long for Disney to shift to this approach: it is a far better fit for their business in the long run than simply spreading content around to the highest bidder.

This is also why Disney is comfortable being so aggressive in price: the company could have easily tried charging $9.99/month or Netflix’s $12.99/month — the road to profitability for Disney+ would have surely been shorter. The outcome for Disney as a whole, though, would be worse: a higher price means fewer customers, and given the multitude of ways that Disney has to monetize customers throughout their entire lives that would have been a poor trade-off to make.
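
To make that trade-off concrete, here is a back-of-the-envelope sketch in Python; the two subscription prices are from above, but the subscriber counts, the elasticity, and the off-service spending figure are hypothetical assumptions for illustration, not Disney numbers:

    # Why a cheaper Disney+ can be worth more to Disney overall.
    # All non-price inputs are hypothetical assumptions.

    def annual_value(monthly_price, subscribers, offservice_spend):
        """Subscription revenue plus per-subscriber spending on parks,
        cruises, and merchandise driven by the Disney+ relationship."""
        return subscribers * (monthly_price * 12 + offservice_spend)

    # Assume the lower price doubles the subscriber base, and that each
    # subscriber household spends $100/year on other Disney businesses.
    premium = annual_value(12.99, 25_000_000, 100)  # ~$6.4 billion
    cheaper = annual_value(6.99, 50_000_000, 100)   # ~$9.2 billion
    print(f"${premium / 1e9:.1f}B vs ${cheaper / 1e9:.1f}B")

Under these made-up assumptions the cheaper price wins decisively, and the gap only grows as off-service monetization rises.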

The Mulan strategy fits this model. While Disney’s hand was certainly forced by the COVID pandemic, the company’s overall goal is to maximize revenue per customer via its highly differentiated IP; to that end, just as Disney+ is a way to connect with customers and lure them to Disney World or a Disney Cruise, it is equally effective at serving as a platform for shifting the theater window to customers’ living rooms.

Amazon: Bundle as Churn Management

It took me a long time to figure out what exactly Amazon was trying to accomplish with Amazon Prime Video, its now 14-year-old streaming service. The most obvious explanation is that it was a way to acquire customers for Amazon Prime, the then-2-day shipping service with which it was bundled. But then Amazon started offering Prime Video subscriptions on its own — was it a Netflix competitor? Or was Prime Video a loss leader for Amazon Channels, where Amazon made money selling other streaming services? Meanwhile, Amazon Prime keeps adding on more and more disparate services: delivery, video, music, video games, photo storage, a clothing service, books, magazines — the Prime benefits page has 28 different items listed!

To some extent, the answer is “all of the above”, but it is notable that many customers only find out about many of these features after they are subscribers; Amazon may tell you about the book benefits when you buy an e-book, for example, about Amazon Music when you set up an Echo, or remind you about Prime Video when you check out. The most valuable impact in these cases is giving you yet another reason to not churn.

This makes sense when you remember that the business model for the Amazon Prime bundle is less subscription revenue than it is increasing usage of Amazon.com. Back in 2015 Prime customers were estimated to spend an average of $1,500/year on Amazon, compared to $625 for non-Prime customers; according to eMarketer earlier this year, 80% of Prime subscribers start their product search on Amazon, and only 12% on Google, while that split is 50/50 for non-Prime subscribers.

To that end, simply keeping Amazon Prime subscribers is the biggest possible win for Amazon broadly. Thus the continuous drive to add more and more features; sure, you may find 90% of them useless, along with everyone else, but as anyone who has managed a feature-rich product knows, the 10% that are considered useful vary considerably!

Microsoft: Bundle as Market Expansion

One of the most interesting stories to watch over the coming few years will be the competition between the PS5 and Xbox Series X and Series S. Last Thursday I explained why the companies are heading in very different directions:

In short, Sony is treating the PS5 like a console, and gamers like gamers, just as they did last generation. It is an overall aligned strategy that makes a lot of sense…

Microsoft is seeking to get out of the traditional console business, with its loss-leading hardware and fight over exclusives, and into the services business broadly; that’s why Xbox Game Pass, the cloud streaming service that is available not only on Xbox and PC but also on Android phones (Apple has blocked it from iOS for business model reasons), is included. In Microsoft’s view of the world the Xbox is just a specialized device for accessing their game service, which, if they play their cards right, you will stay subscribed to for years to come.

Sony is following the traditional razor-and-blades model that has long characterized consoles: try not to lose too much money on the consoles, and make up the difference in game licenses, its online service, and in-game purchases. It’s a model that gamers are familiar with, even if it ends up being a pricey one: this generation, games are expected to hit the $70 mark, plus more for additional downloadable content that may or may not be necessary to the core experience.

Microsoft is taking a different approach: with Xbox Game Pass you not only get access to over 100 games, along with all of the other usual online services you might expect, but for an additional $10/month, you can get an Xbox Series S as well ($20/month for the more capable Series X)! Notice the framing there, which is the opposite of how I put it on Thursday: given the fact that consoles have always been an up-front purchase, the natural way to think about Microsoft’s monthly pricing option is that it is a 24-month installment plan for the $299 Series S or the $499 Series X, with Xbox Game Pass added on top. Given that Microsoft’s strategy is all about subscriptions, though, it makes sense to consider the console itself as the bundled benefit.
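
The installment math is worth spelling out; this quick sketch uses the console prices above plus the $15/month Game Pass Ultimate figure referenced later in this article, and the totals are my arithmetic, not Microsoft’s marketing:

    # Xbox All Access as an installment plan: 24 months of the bundle
    # versus buying the console outright and subscribing separately.
    MONTHS = 24
    GAME_PASS = 15  # Game Pass Ultimate, $/month

    def bundle_total(monthly_price):
        return monthly_price * MONTHS

    def outright_total(console_price):
        return console_price + GAME_PASS * MONTHS

    print(bundle_total(25), outright_total(299))  # Series S: 600 vs 659
    print(bundle_total(35), outright_total(499))  # Series X: 840 vs 859

In other words, the bundle is priced like a discounted installment plan, which is why it is reasonable to flip the framing and treat the console itself as the bundled benefit.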

What is compelling about Microsoft’s approach is its potential for expanding the gaming market. Traditional gamers will still be attracted to Sony’s model and its exclusives, but for folks whose last console was the Wii (for which they only ever bought Wii Sports), the $25/month Xbox bundle is a pretty attractive way to not only get a console, but over 100 games; if Microsoft pulls this bundle off, its overall revenue and especially profit could surpass Sony’s more traditional approach, particularly in the long run.

Apple: Bundle as Money-Maker

All of this is a long way of explaining why I was relatively underwhelmed by last week’s announcement of Apple One. Apple One includes Apple Music, Apple TV+, Apple Arcade, and iCloud storage, for either individuals or families; the Apple One Premier plan adds on Apple News+ and Apple Fitness+.

Some of the aforementioned bundle strategies are applicable to Apple One, others less so:

  • While Apple’s primary business model remains selling hardware at a profit, the company has a longstanding goal of increasing services revenue; selling a bundle potentially helps in that regard.
  • Apple doesn’t really need any help connecting to its customers, although iCloud Storage does make for a better overall Apple experience.
  • Apple One may reduce churn, particularly if customers are attached to Apple-only services like Apple Arcade, Apple News+, and Apple Fitness+, but the truth is that Apple’s churn is already quite low.
  • On the flipside, Apple One doesn’t really make an iPhone any more accessible, particularly since Apple is competing against Android, as opposed to non-consumption.

To me the biggest hangup is the first one: the degree to which a bundle is compelling is the degree to which it is integrated with and contributes to a company’s core business model, and, in contrast to these other four companies, it’s a bit of a stretch to see how Apple One really moves the needle when it comes to buying an iPhone or not.

To that end, what would be much more compelling is attaching Apple One to the iPhone explicitly. I thought this might be coming after last year’s iPhone announcement:

It does feel like there is one more shoe yet to drop when it comes to Apple’s strategic shift. The fact that Apple is bundling a for-pay service (Apple TV+) with a product purchase is interesting, but what if Apple started including products with paid subscriptions?

That may be closer than it seems. It seemed strange that yesterday’s keynote included an Apple Retail update at the very end, but I think this slide explained why:

iPhone monthly pricing

Not only can you get a new iPhone for less if you trade in your old one, you can also pay for it on a monthly basis (this applies to phones without a trade-in as well). So, in the case of this slide, you can get an iPhone 11 and Apple TV+ for $17/month.

Apple also adjusted its AppleCare+ terms yesterday: now you can subscribe monthly and AppleCare+ will carry on until you cancel, just as other Apple services like Apple Music or Apple Arcade do. The company already has the iPhone Upgrade Program, which bundles a yearly iPhone and AppleCare+, but this shift for AppleCare+ purchased on its own is another step towards assuming that Apple’s relationship with its customers will be a subscription-based one.

To that end, how long until there is a variant of the iPhone Upgrade Program that is simply an all-up Apple subscription? Pay one monthly fee, and get everything Apple has to offer. Indeed, nothing would show that Apple is a Services company more than making the iPhone itself a service, at least as far as the customer relationship goes.

The problem is that Apple’s financing programs — both the one pictured above, and also the iPhone Upgrade Program — continue to be funded by third parties; Apple is making it easier to buy an iPhone, but is still focused on getting its money right away. And, as long as it sticks with this approach, its Apple One bundle feels more like a money-grab, and less like a strategic driver of the business.1

Microsoft and ZeniMax

Stepping back, what is notable about all of these examples is that only Netflix is a pure content play. Disney, Amazon, Microsoft, and Apple are all using content to differentiate something in physical space. This makes logical sense: content, with its zero marginal costs and zero distribution costs,2 is non-rivalrous, which makes it challenging to monetize directly. Indeed, this is why bundling is so important to Netflix: its value is always having something to watch, as opposed to having the one thing you have to have.

At the same time, content is highly differentiated, while goods in physical space are rivalrous but subject to commoditization. The former, delivered in a bundle with the latter, makes it possible to charge a premium for, or drive increased usage of, the product as a whole, whether that be a theme park or cruise ship, a cardboard box or console.

What is interesting is that the same forces that broke up the old distribution-based bundles and reduced content to a differentiator for physical goods and experiences have also made content directly monetizable through business models like subscriptions. This, in turn, has made content-only bundles that much more difficult to create: the higher the bar for any individual content creator to join a bundle, the more necessary it is to have an orthogonal business model that justifies clearing that bar.
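
The underlying economics deserve a quick illustration. This toy model, with made-up willingness-to-pay numbers, shows why a bundle of non-rivalrous content can out-earn selling the same content à la carte, and why a high direct-monetization bar makes bundles harder to assemble:

    # Two subscribers with opposite tastes; valuations are hypothetical.
    # À la carte, the best the seller can do is charge $9 per show and
    # sell each show once; a $12 bundle sells to both subscribers.

    def best_single_price(valuations):
        """Revenue-maximizing posted price given each buyer's valuation."""
        return max(p * sum(1 for v in valuations if v >= p) for p in valuations)

    shows = {"Drama": [9, 3], "Comedy": [3, 9]}  # per-subscriber valuations
    a_la_carte = sum(best_single_price(v) for v in shows.values())
    bundle = best_single_price([9 + 3, 3 + 9])
    print(a_la_carte, bundle)  # 18 24

The bundle captures value from both subscribers for both shows; the more each show can earn on its own through direct monetization, though, the less its creator needs the bundle.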

This also explains Microsoft’s purchase of ZeniMax, the publisher of The Elder Scrolls, Doom, Fallout, and many other popular games. Going forward Microsoft can ensure those games are available from day one on Xbox Game Pass (and not on PS53), using content to differentiate its service. However, given that ZeniMax’s alternative was continuing to sell games directly, Microsoft has to pay $7.5 billion in cash for the opportunity.

What is even more interesting about the ZeniMax acquisition, though, are the implications for Steam, the PC gaming service. Steam, like Netflix, is a content-only play, although it is a marketplace, not a subscription service. Microsoft will probably leave ZeniMax games on Steam, but if those same games are available on Xbox Game Pass for $15/month, how many folks will be willing to pay full price? In short, Microsoft could potentially wield the content-differentiated bundle it is building across services and devices not only against Sony, its console competitor, but Steam, its services competitor.

This also gives insight as to why Apple’s bundle is underwhelming: PS5 and Steam are real competitors — Microsoft is arguably an underdog to both — which means the Xbox maker has to be creative. Apple, meanwhile, simply wants to make a bit more money on an audience that doesn’t have anywhere else to go. Perhaps the best way to make money is to not need a bundle at all.

  1. Moreover, Apple runs the risk of actually cannibalizing itself, as its most ardent customers happily take a discount for services they would have subscribed to anyways.
  2. On a marginal basis, to be clear; obviously Netflix’s bandwidth bills are massive.
  3. Microsoft will honor current PS5 deals, and handle future games on a case-by-case basis.

Nvidia’s Integration Dreams

Back in 2010, Kyle Conroy wrote a blog post entitled What if I had bought Apple stock instead?:

Currently, Apple’s stock is at an all time high. A share today is worth over 40 times its value seven years ago. So, how much would you have today if you purchased stock instead of an Apple product? See for yourself in the table below.

Conroy kept the post up-to-date until April 1, 2012; at that point, my first Apple computer, a 2003 12″ iBook, which cost $1,099 on October 22, 2003, would have been worth $57,900. Today it would be worth $311,973.

I thought of this meme, which pops up every time Apple’s stock hits a new all-time high, while considering the price Apple paid for P.A. Semi back in 2008; for a mere $278 million the company acquired the talent and IP foundation that would undergird its A-series of chips, which have powered every iPad and every iPhone since 2010, and, before the end of the year, at least one Mac (the rest of the line will follow within two years).

So I was curious: what would $278 million in 2008 Apple stock look like today? The answer is $5.5 billion, which, honestly, is still an absolute bargain, and a reminder that the size of an acquisition is not necessarily correlated with its impact.
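
For the curious, the implied returns are easy to derive; this sketch uses only the figures above, so the annualized rates are approximations:

    # Implied appreciation multiples and annualized returns for Apple
    # stock, computed solely from the dollar figures cited above.

    def annualized(multiple, years):
        return multiple ** (1 / years) - 1

    pa_semi = 5.5e9 / 278e6  # $278M in 2008 -> ~$5.5B in 2020
    print(f"{pa_semi:.1f}x, {annualized(pa_semi, 12):.0%}/year")  # 19.8x, 28%/year

    ibook = 311_973 / 1_099  # $1,099 in 2003 -> $311,973 in 2020
    print(f"{ibook:.0f}x, {annualized(ibook, 17):.0%}/year")      # 284x, 39%/year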

Nvidia Acquires ARM

Over the weekend Nvidia consummated the biggest chip deal in history when it acquired Arm1 from Softbank for around $40 billion in stock and cash. Nvidia founder and CEO Jensen Huang wrote in a letter to Nvidia employees:

We are joining arms with Arm to create the leading computing company for the age of AI. AI is the most powerful technology force of our time. Learning from data, AI supercomputers can write software no human can. Amazingly, AI software can perceive its environment, infer the best plan, and act intelligently. This new form of software will expand computing to every corner of the globe. Someday, trillions of computers running AI will create a new internet — the internet-of-things — thousands of times bigger than today’s internet-of-people. Uniting NVIDIA’s AI computing with the vast reach of Arm’s CPU, we will engage the giant AI opportunity ahead and advance computing from the cloud, smartphones, PCs, self-driving cars, robotics, 5G, and IoT.

These are big ambitions for a big purchase, and Wall Street apparently agrees; yesterday Nvidia’s market cap increased by $17.5 billion, nearly covering the $21.5 billion in shares Nvidia will give Softbank in the deal. Indeed, it is Nvidia’s stock that is probably the single most important factor in this deal. Back in 2016, when Softbank acquired Arm, Nvidia was worth about $34 billion; after yesterday’s run-up, the company’s market cap was $318 billion.

The first takeaway is that selling Arm for $40 billion, four years after buying it for $32 billion, means the company was yet another terrible investment by Softbank; simply buying Nvidia shares — or, for that matter, an S&P 500 index fund, which is up 55% since then — would have provided a much better return than the ~5% annualized return Softbank earned from Arm.

The second takeaway is the inverse: Nvidia is acquiring a company that was its market cap peer four years ago for a relative pittance. Granted, Nvidia’s stock may not stay at its current lofty height — the company has a price-to-earnings ratio of over 67, well above the industry average of 27 — but that is precisely why a majority-stock acquisition makes sense; Nvidia’s stock may retreat, but Arm will still be theirs.

Nvidia’s Integration

Beginning my analysis with stock prices is not normally what I do; I’m generally more concerned with the strategies and business models of which stock price is a result, not a driver. The truth, though, is that once you start digging into the details of Nvidia and ARM, it is rather difficult to see what strategy might be driving this acquisition.

Start with Nvidia: the company is perhaps the shining example of the industry transformation wrought by TSMC; freed of the need to manufacture its own chips, Nvidia was focused from the beginning on graphics. Its TNT cards, released in the late 1990s, provided 3D graphics for games while also powering Windows (previously hardware 3D graphics were only available via add-on cards); its GeForce line, released in 1999, put Nvidia firmly at the forefront of the industry, a position it retains today.

It was in 2001 that Nvidia released the GeForce 3, which had the first programmable pixel shader; instead of a hard-coded GPU that could only execute a pre-defined list of commands, a shader was software, which meant it could be programmed on the fly. This increased level of abstraction meant the underlying processing units could be much simpler, which meant that a graphics chip could have many more of them. The most advanced version of Nvidia’s just-announced GeForce RTX 30 Series, for example, has an incredible 10,496 CUDA cores.

This level of scalability makes sense for video cards because graphics processing is embarrassingly parallel: a screen can be divided up into an arbitrary number of sections, and each section computed individually, all at the same time. This means that performance scales horizontally, which is to say that every additional core increases performance.

It turns out, though, that graphics are not the only embarrassingly parallel problem in computing. Another obvious example is encryption: brute forcing a key entails running the exact same calculation over and over again; the chips doing the calculation don’t need to be complex, they simply need as many cores as possible (this is why graphics cards are very popular for blockchain applications: mining entails endlessly grinding through hashes, the same sort of brute-force workload).
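
A sketch makes the “embarrassingly parallel” property concrete. This Python example fans the rows of a hypothetical frame out across CPU cores; the function names and frame dimensions are illustrative, but the structure, in which no task ever reads another task’s results, is exactly why thousands of simple GPU cores help:

    # Each row of the frame is computed independently, so throughput
    # scales roughly linearly with the number of workers.
    from multiprocessing import Pool

    WIDTH, HEIGHT = 1920, 1080

    def shade_row(y):
        # Stand-in for a pixel shader: compute every pixel in row y
        # without depending on any other row.
        return [(x * y) % 256 for x in range(WIDTH)]

    if __name__ == "__main__":
        with Pool() as pool:  # one worker per CPU core by default
            frame = pool.map(shade_row, range(HEIGHT))
        print(len(frame), len(frame[0]))  # 1080 1920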

What is most enticing for Nvidia, though, is machine learning. Training on large datasets is an embarrassingly parallel problem, which means it is well-suited for graphics cards. The trick, though, is in decomposing a machine learning algorithm into pieces that can be run in parallel; graphics cards were designed for, well, graphics, which meant that programmers had to work through graphics APIs like OpenGL.

This is why Nvidia transformed itself from a modular component maker to an integrated maker of hardware and software; the former were its video cards, and the latter was a platform called CUDA. The CUDA platform allows programmers to access the parallel processing power of Nvidia’s video cards via a wide range of languages, without needing to understand how to program graphics.
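
CUDA itself is a C/C++ dialect, but the shape of the abstraction can be sketched from Python via Numba’s CUDA bindings, assuming an Nvidia GPU and the numba package; the kernel below is a generic illustration, not Nvidia sample code:

    # A minimal data-parallel "kernel": each GPU thread computes one
    # element, with no graphics programming involved.
    import numpy as np
    from numba import cuda

    @cuda.jit
    def saxpy(out, x, y, a):
        i = cuda.grid(1)  # this thread's global index
        if i < out.size:
            out[i] = a * x[i] + y[i]

    n = 1 << 20
    x = np.ones(n, dtype=np.float32)
    y = np.full(n, 2.0, dtype=np.float32)
    d_x, d_y = cuda.to_device(x), cuda.to_device(y)
    d_out = cuda.device_array_like(x)

    threads = 256
    blocks = (n + threads - 1) // threads  # enough blocks to cover n
    saxpy[blocks, threads](d_out, d_x, d_y, np.float32(3.0))
    print(d_out.copy_to_host()[:3])  # [5. 5. 5.]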

Here’s the kicker: CUDA is free, but that is because the integration is so tight. CUDA only works with Nvidia video cards, in large part because many of the routines are hand-tuned and optimized. It is a tremendous investment that has paid off in a major way: CUDA is dominant in machine learning, and Nvidia graphics cards cost hundreds of dollars ($1,500 in the case of the aforementioned RTX 3090). Apple isn’t the only company that understands the power of differentiating premium hardware with software.

Arm’s Neutrality

Arm’s business model could not be more different. The company, founded in 1990 as a joint venture between Acorn Computers, Apple, and VLSI Technology, doesn’t sell any chips of its own; rather, it licenses chip designs to companies which actually manufacture ARM chips. Except even that isn’t quite right: most ARM licensees actually contract with manufacturers like TSMC to make physical chips, which are then sold to OEMs. The entire ecosystem is extremely modular; consider an Oppo smartphone, with a MediaTek chip:

The modular smartphone ecosystem

Arm chips appear in many more devices than smartphones — most micro-controllers in embedded systems are Arm designs — and Arm designs more than CPUs; the company’s catalog includes everything from GPUs to AI accelerator chips. It also licenses less than full designs: Apple, for example, designs its own chips, but uses the ARM Instruction Set Architecture (ISA) to communicate with them. The ARM ISA is the platform that ties this entire ecosystem together; programs written for one ARM chip will run on all ARM chips, and each of those chips results in a licensing fee for Arm.

What makes Arm’s privileged position viable is the same thing that undergirds TSMC’s: neutrality. I wrote about the latter in Intel and the Danger of Integration:

In 1987, Morris Chang founded Taiwan Semiconductor Manufacturing Company (TSMC) promising “Integrity, commitment, innovation, and customer trust”. Integrity and customer trust referred to Chang’s commitment that TSMC would never compete with its customers with its own designs: the company would focus on nothing but manufacturing.

This was a completely novel idea: at that time all chip manufacturing was integrated a la Intel; the few firms that were only focused on chip design had to scrap for excess capacity at Integrated Device Manufacturers (IDMs) who were liable to steal designs and cut off production in favor of their own chips if demand rose. Now TSMC offered a much more attractive alternative, even if their manufacturing capabilities were behind.

In time, though, TSMC got better, in large part because it had no choice: soon its manufacturing capabilities were only one step behind industry standards, and within a decade it had caught up (although Intel remained ahead of everyone). Meanwhile, the fact that TSMC existed created the conditions for an explosion in “fabless” chip companies that focused on nothing but design.

Integrated Intel was competing with a competitive modular ecosystem

For example, in the late 1990s there was an explosion in companies focused on dedicated graphics chips: nearly all of them were manufactured by TSMC. And, all along, the increased business let TSMC invest even more in its manufacturing capabilities.

That article was about TSMC overtaking Intel in fabrication, but a similar story can be told about Arm overtaking Intel in mobile. Intel was relentlessly focused on performance, but smartphones needed to balance performance with battery concerns. Arm, which had been spending years designing highly efficient processors for embedded applications, had both the experience and the business model flexibility to make mobile a priority.

The end result made everyone a winner (except Intel): nearly every smartphone in the world runs on an ARM-derived chip (either directly or, in the case of companies like Apple, the ARM ISA), which is to say that Arm makes money when everyone else in the mobile ecosystem makes money.

The Nvidia-ARM Mismatch

Notice that an ARM license, unlike the CUDA platform, is not free. That makes sense, though: CUDA is a complement to Nvidia’s proprietary graphics cards, which command huge margins. ARM license fees, on the other hand, can be and are paid by everyone in the ecosystem, and in return everyone in the ecosystem gets equal access to Arm’s designs and ISA. It’s not free, but it is neutral.

That neutrality is gone under Nvidia ownership, at least in theory: now Nvidia has early access to ARM designs, and the ability to push changes in the ARM ISA; to put it another way, Nvidia is now a supplier for many of the companies it competes with, which is a particular problem given Nvidia’s reputation for both pushing up prices and being difficult to partner with. Here again Apple works as an analogy: the iPhone maker is notorious for holding the line on margins, prioritizing its own interests, and being litigious about intellectual property; Nvidia has the same sort of reputation. So does Intel, for that matter; the common characteristic is being vertically integrated.

Of course Nvidia is insistent that ARM licensees have nothing to worry about. Huang noted in that letter to Nvidia employees:

Arm’s business model is brilliant. We will maintain its open-licensing model and customer neutrality, serving customers in any industry, across the world, and further expand Arm’s IP licensing portfolio with NVIDIA’s world-leading GPU and AI technology.

Notice that last bit: Huang is not only arguing that Nvidia will serve Arm customers neutrally, but that Nvidia itself will adopt Arm’s business model, licensing its IP to competitive chip-makers. It’s as if this is an acquisition in reverse: the $318 billion acquirer is fitting itself into a world defined by its $40 billion acquisition.

Color me skeptical; not only is Nvidia’s entire business predicated on selling high margin chips differentiated by highly integrated software, but Nvidia’s entire approach to the market is about doing what is best for Nvidia, without much concern for partners or, frankly, customers. It is a luxury afforded those that are clearly best in class, which by extension means that sharing is anathema; why trade high margins at the top of the market for low margins and the headache of serving everyone?

In short, this deal feels like the inverse of the P.A. Semi deal not simply in terms of the price tag, but in its overall impact on the acquirer. I have a hard time believing that Nvidia is going to change its approach.

Or maybe that’s the entire point.

Huang’s Dream

By far the best articulation of the upside of this deal came, unsurprisingly, from Huang. What was notable about said articulation, though, was that it came 46 minutes into the investor call about the acquisition, and only then in response to a fairly obvious question: why does Nvidia need to own ARM, instead of simply licensing it (like Apple, which has a perpetual license to the ARM ISA, and is not affected by this acquisition)?

What was so striking about Huang’s answer was not simply its expansiveness — I’ve transcribed the entire answer below — but also the way in which he delivered it; unlike the rest of the call, Huang’s voice was halting and uncertain, as if he were scared of his own ambition. I know this excerpt is long, but it’s essential:

We were delightful licensees of ARM. As you know we used ARM in one of our most important new initiatives, the BlueField DPU. We used it for the Nintendo Switch — it’s going to be the most popular and successful game console in the history of game consoles. So we are enthusiastic ARM licensees.

There are three reasons why we should buy this company, and we should buy it as soon as we can.

Number one is this: as you know, we would love to take Nvidia’s IP through ARM’s network. Unless we were one company, I think the ability for us to do that and to do that with all of our might, is very challenging. I don’t take other people’s products through my channel! I don’t expose my ecosystem to other companies’ products. The ecosystem is hard-earned — it took 30 years for Arm to get here — and so we have an opportunity to offer that whole network, that vast ecosystem of partners and customers, Nvidia’s IP. You can do some simple math and the economics there should be very exciting.

Number two, we would like to lean in very hard into the ARM CPU datacenter platform. There’s a fundamental difference between a datacenter CPU core and a datacenter CPU chip and a datacenter CPU platform. We last year decided we would adopt and support the ARM architecture for the full Nvidia stack, and that was a giant commitment. The day we decided to do that we realized this was for as long as we shall live. The reason for that is that once you start supporting the ecosystem you can’t back out. For all the same reasons, when you’re a computing platform company, people depend on you, you have to support them for as long as you shall live, and we do, and we take that promise very seriously.

And so we are about to put the entire might of our company behind this architecture, from the CPU core, to the CPU chips from all of these different customers, all of these different partners, from Ampere or Marvell or Amazon or Fujitsu, the number of companies out there that are considering building ARM CPUs out of their ARM CPU cores is really exciting. The investments that Simon and the team have made in the last four years, while they were out of the public market, has proven to be incredibly valuable, and now we want to lean hard into that, and make ARM a first-class data center platform, from the chips to the GPUs to the DPUs to the software stack, system stack, to all the application stack on top, we want to make it a full out first-class data center platform.

Well, before we do that, it would be great to own it. We’re going to accrue so much value to this architecture in the world of data centers, before we make that gigantic investment and gigantic focus, why don’t we own it. That’s the second reason.

Third reason, we want to go invent the future of cloud to edge. The future of computing where all of these autonomous systems are powered by AI and powered by accelerated computing, all of the things we have been talking about, that future is being invented as we speak, and there are so many great opportunities there. Edge data centers — 5G edge data centers — autonomous machines of all sizes and shapes, autonomous factories, Nvidia has built a lot of software as you guys have seen — Metropolis, Clara, Isaac, Drive, Jarvis, Aerial — all of these platforms are built on top of ARM, and before we go and see the inflection point, wouldn’t it be great if we were one company.

And so the timing is really quite important. We’ve invested so much across all of these different areas, that we felt that we really had to take the opportunity to own the company and collaborate deeply as we invent the future. That’s the answer.

It turns out this is very much an Nvidia vision after all. Nvidia is not setting out to be a partner, someone that gets along with everyone in exchange for a couple of pennies in licensing fees. Quite the opposite: Huang wants to own it all.

In this vision Nvidia’s IP is the CUDA to its graphics chips — the complement to its grander ambitions. Huang has his sights set firmly on Intel, but while Intel has leveraged its integration of design and manufacturing, Nvidia is going to leverage its integration of chip design and software. Huang’s argument is that it is the lack of software — a platform, as opposed to simply a chip or a core — that is limiting ARM in the data center, and that Nvidia intends to build that software.

On one hand, this is exciting for ARM licensees, particularly companies like Amazon that have invested in ARM chips for the data center; note, though, that Nvidia isn’t doing this out of charity. Huang twice mentioned the importance of capturing the upside he believes Nvidia will generate, which ultimately means increased license fees. Sure, Nvidia will be able to make more changes to ARM to suit the data center than it could have as a licensee, but the real goal is to tie ARM into an Nvidia software platform until licensees have no choice but to pay what will undoubtedly be ever-increasing licensing fees (which, it should be noted, will still result in chips that are less expensive than Intel’s).

I don’t know if it will work; data centers are about the density of processing power, which is related to but still different than performance-per-watt, ARM’s traditional advantage relative to Intel, and there is a huge number of third parties involved in such a transition. There is a lot about this vision that is out of Nvidia’s control — it’s more of a dream. What is comforting in a way, though, is just how true this dream is to what makes Nvidia unique: this isn’t about adopting ARM’s approach, it’s about co-opting it for a vision of integration that makes Nvidia an object of inevitability, not affection.

And, to return to the beginning, it is a relatively free bet. If Nvidia’s stock is over-priced, then it is buying Arm for an even bigger discount than it seems; the vision Huang laid out, though, is a reason to believe Nvidia’s stock price is actually correct. Might as well roll the dice on a P.A. Semi-type outcome.


Three additional notes about this transaction:

  • As I noted above, Apple has a perpetual license to ARM. The specific details of this license are unknown — we now know that Apple can extend the ISA for its own uses — but my understanding is that the terms are locked in. That is why Apple didn’t feel any motivation to acquire ARM itself, even if Nvidia, a company that Apple does not get along with, was the alternative suitor.
  • This vision of Arm’s future is in many ways incompatible with ARM’s neutral past, but the truth is Arm is already facing disruption of its own. RISC-V is an open-source ISA that is increasingly popular for embedded controllers in particular, in large part because it not only gets rid of Arm control, but also Arm license fees. I would expect investment in RISC-V to accelerate on this news, but it’s worth noting that it is just that — an acceleration of what was inevitable in the long run.
  • One of the biggest regulatory questions around this acquisition is China. On one hand, China has reason to fear an American company — which is subject to U.S. export controls — acquiring more processor technology. On the other hand, Arm China is actually a joint venture, the CEO of which has gone rogue; it’s not clear if Arm is actually in control. It’s possible that this acquisition happens without China’s approval and without ARM China, which is 20% of Arm’s sales. Huang’s dream, though, is perhaps enough to justify this nightmare.
  1. Throughout this article I will write “Arm” when I am referring to the company, and “ARM” when I am referring to said company’s IP.