Rights, Laws, and Google

The first and most important takeaway from Kashmir Hill’s excellent article in the New York Times about Mark, the man flagged by Google as a purveyor of Child Sexual Abuse Material (CSAM) for taking pictures of his son’s penis and sending them to their family doctor, and who subsequently lost nearly every aspect of his digital life when Google deleted his account, concerns the tremendous trade-offs entailed in the indiscriminate scanning of users’ cloud data.

On one hand, it seems like an incredible violation of privacy to have a private corporation effectively looking through every photo you upload, particularly when those uploads happen as part of the expected way in which your smartphone operates (users technically agree to this scanning, but as part of an endless End User License Agreement that is both ridiculously long and, more pertinently, inescapable if you want to use your phone as it was intended). Moreover, Google doesn’t simply scan for CSAM photos that are already known to exist via the PhotoDNA database of photos of exploited children; the company also leverages machine learning to look for new CSAM that hasn’t yet been identified as such.
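To make the two scanning mechanisms concrete before weighing the other side of the ledger, here is a minimal sketch of what such a pipeline might look like. PhotoDNA and Google’s actual systems are proprietary, so perceptual_hash and classifier_score below are hypothetical placeholders, not real APIs:

```python
from dataclasses import dataclass
from typing import Callable, Set

@dataclass
class ScanResult:
    matched_known_hash: bool   # hit against a database of hashes of known material
    classifier_flagged: bool   # flagged by a machine learning model as potentially new material

def scan_upload(
    image_bytes: bytes,
    known_hashes: Set[str],
    perceptual_hash: Callable[[bytes], str],     # hypothetical PhotoDNA-style hash function
    classifier_score: Callable[[bytes], float],  # hypothetical ML model returning a 0..1 score
    threshold: float = 0.9,
) -> ScanResult:
    # Mechanism 1: match the upload against hashes of already-identified material.
    matched = perceptual_hash(image_bytes) in known_hashes

    # Mechanism 2: a learned classifier that can flag never-before-seen material.
    flagged = classifier_score(image_bytes) >= threshold

    return ScanResult(matched_known_hash=matched, classifier_flagged=flagged)
```

It is the second path, flagging material that no hash database has ever seen, that caught Mark’s photos; it is also the path where false positives like his can enter the pipeline, which is why, per Hill’s account, such hits go to a human reviewer before being reported.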

On the other hand, as horrific as the material in the PhotoDNA database is, much of it has been floating around the Internet for years, which is to say the abuse depicted happened long ago; Google’s approach has the potential to discover abuse as it is happening, making it possible for the authorities to intercede and rescue the child in question. Hill’s story noted that in 2021 the CyberTipline at the National Center for Missing and Exploited Children, the only entity legally allowed to hold CSAM (NCMEC also manages the PhotoDNA database), “alerted authorities to ‘over 4,260 potential new child victims'”. We don’t know how many of those children were subsequently rescued, but a question worth posing to anyone unilaterally opposed to Google’s approach is how big that number would have to be to have made it worthwhile?

But, to return to the original hand, one of those 4,260 potential new child victims was Mark’s son (and another was the son of Cassio, a second father Hill found caught in the same predicament, for the same reasons): the question for those applauding Google’s approach is how big the number of false positives would have to be to shut the whole thing down?

It was the exploration of these trade-offs that was at the heart of the Update I wrote about Hill’s story last week; as I noted there are no easy answers:

Nearly every aspect of this story is incredibly complex, and I understand and respect arguments on both sides: should there be scanning of cloud-related content? Should machine learning be leveraged to find new photos? Is it reasonable to obliterate someone’s digital life — except for what you give the police — given the possibility that they may be committing horrific crimes? These are incredibly difficult questions, particularly in the absence of data, because the trade-offs are so massive.

However, it seemed to me that one aspect of the case was very clear:

There is, though, one part of the story that is black-and-white. Google is unquestionably wrong to not restore the accounts in question. In fact, I am stunned by the company’s approach in these cases. Even if you grant the arguments that this awesome exercise of surveillance is warranted, given the trade-offs in question, that makes it all the more essential that the utmost care be taken in case the process gets it wrong. Google ought to be terrified it has this power, and be on the highest alert for false positives; instead the company has gone in the opposite direction, setting itself as judge, jury, and executioner, even when the people we have collectively entrusted to lock up criminals ascertain there was no crime. It is beyond arrogant, and gives me great unease about the company generally, and its long-term investments in AI in particular.

Not that it matters, one may argue: Google can do what they want, because they are a private company. That is an argument that may ring familiar.

Tech and Liberty

In 2019 I discussed the distinction between public and private restrictions on speech in Tech and Liberty:

Alexander Hamilton was against the Bill of Rights, particularly the First Amendment. This famous xkcd comic explains why:

Free Speech by xkcd

According to Randall Munroe, the author, the “Right to Free Speech” is granted by the First Amendment, which was precisely the outcome Hamilton feared in Federalist No. 84:

I go further, and affirm that bills of rights, in the sense and to the extent in which they are contended for, are not only unnecessary in the proposed Constitution, but would even be dangerous. They would contain various exceptions to powers not granted; and, on this very account, would afford a colorable pretext to claim more than were granted. For why declare that things shall not be done which there is no power to do? Why, for instance, should it be said that the liberty of the press shall not be restrained, when no power is given by which restrictions may be imposed? I will not contend that such a provision would confer a regulating power; but it is evident that it would furnish, to men disposed to usurp, a plausible pretense for claiming that power. They might urge with a semblance of reason, that the Constitution ought not to be charged with the absurdity of providing against the abuse of an authority which was not given, and that the provision against restraining the liberty of the press afforded a clear implication, that a power to prescribe proper regulations concerning it was intended to be vested in the national government. This may serve as a specimen of the numerous handles which would be given to the doctrine of constructive powers, by the indulgence of an injudicious zeal for bills of rights.

Hamilton’s argument is that because the U.S. Constitution was created not as a shield from tyrannical kings and princes, but rather by independent states, all essential liberties were secured by the preamble (emphasis original):

WE, THE PEOPLE of the United States, to secure the blessings of liberty to ourselves and our posterity, do ORDAIN and ESTABLISH this Constitution for the United States of America.

Hamilton added:

Here, in strictness, the people surrender nothing; and as they retain every thing they have no need of particular reservations.

Munroe, though, assumes the opposite: liberty, in this case the freedom of speech, is an artifact of law, only stretching as far as government action, and no further. Pat Kerr, who wrote a critique of this comic on Medium in 2016, argued that this was the exact wrong way to think about free speech:

Coherent definitions of free speech are actually rather hard to come by, but I would personally suggest that it’s something along the lines of “the ability to voluntarily express (and receive) opinions without suffering excessive penalties for doing so”. This is a liberal principle of tolerance towards others. It’s not an absolute, it isn’t comprehensive, it isn’t rigorously defined, and it isn’t a law.

What it is is a culture.

The context of that 2019 Article was the differing decisions by Facebook and Twitter about allowing political ads on their platforms; over the ensuing three years the lengths to which these and other large tech platforms have been willing to go to police speech have expanded dramatically, even as the certainty that private censorship is ‘good actually’ has become conventional wisdom. I found this paragraph in a New York Times article about Elon Musk’s attempts to buy Twitter striking:

The plan jibes with Mr. Musk’s, Mr. Dorsey’s and Mr. Agrawal’s beliefs in unfettered free speech. Mr. Musk has criticized Twitter for moderating its platform too restrictively and has said more speech should be allowed. Mr. Dorsey, too, grappled with the decision to boot former President Donald J. Trump off the service last year, saying he did not “celebrate or feel pride” in the move. Mr. Agrawal has said that public conversation provides an inherent good for society. Their positions have increasingly become outliers in a global debate over free speech online, as more people have questioned whether too much free speech has enabled the spread of misinformation and divisive content.

In other words, the culture has changed; the law persists, but it does not and, according to the New York Times, ought not apply to private companies.

Scienter

The Google case is not about the First Amendment, either legally or culturally. The First Amendment is not absolute, and CSAM is an obvious example. In 1957’s Roth v. United States the Supreme Court held that obscene speech was not protected by the First Amendment; Justice William Brennan Jr. wrote:

All ideas having even the slightest redeeming social importance — unorthodox ideas, controversial ideas, even ideas hateful to the prevailing climate of opinion — have the full protection of the guaranties, unless excludable because they encroach upon the limited area of more important interests. But implicit in the history of the First Amendment is the rejection of obscenity as utterly without redeeming social importance. This rejection for that reason is mirrored in the universal judgment that obscenity should be restrained, reflected in the international agreement of over 50 nations, in the obscenity laws of all of the 48 States, and in the 20 obscenity laws enacted by the Congress from 1842 to 1956.

This reasoning is a reminder that laws ultimately stem from culture; still, the law being the law, definitions were needed, which the Supreme Court provided in 1973’s Miller v. California. Obscene works (1) appeal to the prurient interest in sex, (2) portray, in a patently offensive way, sexual conduct specifically defined by applicable state law, and (3) lack serious literary, artistic, political, or scientific value. The Supreme Court went further in terms of CSAM in 1982’s New York v. Ferber, holding that the harm inflicted on children is sufficient reason to make all forms of CSAM illegal, above and beyond the standards set forth by Miller. Justice Byron White wrote:

Recognizing and classifying child pornography as a category of material outside the protection of the First Amendment is not incompatible with our earlier decisions. “The question whether speech is, or is not, protected by the First Amendment often depends on the content of the speech”…

The test for child pornography is separate from the obscenity standard enunciated in Miller, but may be compared to it for the purpose of clarity. The Miller formulation is adjusted in the following respects: a trier of fact need not find that the material appeals to the prurient interest of the average person; it is not required that sexual conduct portrayed be done so in a patently offensive manner; and the material at issue need not be considered as a whole. We note that the distribution of descriptions or other depictions of sexual conduct, not otherwise obscene, which do not involve live performance or photographic or other visual reproduction of live performances, retains First Amendment protection. As with obscenity laws, criminal responsibility may not be imposed without some element of scienter on the part of the defendant.

“Scienter”, the “knowledge of the nature of one’s act”, is what ties this judicial history back to the original discussion of Google’s actions against Mark. As Hill explained in the New York Times:

I have seen the photos that Mark took of his son. The decision to flag them was understandable: They are explicit photos of a child’s genitalia. But the context matters: They were taken by a parent worried about a sick child.

The problem in this case comes from who is determining scienter.

Google and the Bill of Rights

Quite clearly Mark did not intend for the pictures he took for his son’s telemedicine appointment to be used for pornographic purposes. The San Francisco Police Department, which had been notified by Google after a human reviewer confirmed the machine learning-driven discovery of Mark’s photos of his son, agreed. From Hill’s story:

In December 2021, Mark received a manila envelope in the mail from the San Francisco Police Department. It contained a letter informing him that he had been investigated as well as copies of the search warrants served on Google and his internet service provider. An investigator, whose contact information was provided, had asked for everything in Mark’s Google account: his internet searches, his location history, his messages and any document, photo and video he’d stored with the company.

The search, related to “child exploitation videos,” had taken place in February, within a week of his taking the photos of his son. Mark called the investigator, Nicholas Hillard, who said the case was closed. Mr. Hillard had tried to get in touch with Mark but his phone number and email address hadn’t worked. “I determined that the incident did not meet the elements of a crime and that no crime occurred,” Mr. Hillard wrote in his report. The police had access to all the information Google had on Mark and decided it did not constitute child abuse or exploitation.

Mark asked if Mr. Hillard could tell Google that he was innocent so he could get his account back. “You have to talk to Google,” Mr. Hillard said, according to Mark. “There’s nothing I can do.” Mark appealed his case to Google again, providing the police report, but to no avail…A Google spokeswoman said the company stands by its decisions, even though law enforcement cleared the two men.

In short, the questions about Google’s behavior are not about free speech; they do, though, touch on other Amendments in the Bill of Rights. For example:

  • The Fourth Amendment bars “unreasonable searches and seizures”; while you can make the case that search warrants were justified once the photos in question were discovered, said photos were only discovered because Mark’s photo library was indiscriminately searched in the first place.
  • The Fifth Amendment says no person shall be deprived of life, liberty, or property, without due process of law; Mark lost all of his data, email account, phone number, and everything else Google touched forever with no due process at all.
  • The Sixth Amendment is about the right to a trial; Mark was not accused of any crime in the real world, but when it came to his digital life Google was, as I noted, “judge, jury, and executioner” (the Seventh Amendment is, relatedly, about the right to a jury trial for all controversies exceeding $20).

Again, Google is not covered by the Bill of Rights; all of these Amendments, just like the First, only apply to the government. The reason why this case is useful, though, is it is a reminder that specific legal definitions are distinct from questions of right or wrong.

Working backwards, Google isn’t legally compelled to give Mark a hearing about his digital life (Sixth Amendment); they are wrong not to. Google isn’t legally compelled to give Mark due process before permanently deleting his digital life (Fifth Amendment); they are wrong not to. Google isn’t legally compelled to not search all of the photographs uploaded to Google (by default, if you click through all of the EULAs); they are…well, this is where it gets complicated.

I started out this Article discussing the impossible trade-offs presented by questions of CSAM. People can and do make the case that to not search for this vileness, particularly if there is a chance that it can lead to the rescue of an abused child, is its own wrong. Resolving this trade-off in this way, though — that is, to violate the spirit and culture of the Fourth Amendment — makes it all the more essential to honor the spirit and culture of the Fifth and Sixth.

Paper Barriers

James Madison answered Hamilton’s objections in a speech to Congress introducing the Bill of Rights. What is interesting is that while Hamilton took it as a given that people would know and value their rights, Madison assumed the culture would run in the opposite direction, making an articulation of those rights important not just to restrain the government, but to remind the majority to not trample the rights of the minority:

But I confess that I do conceive, that in a Government modified like this of the United States, the great danger lies rather in the abuse of the community than in the Legislative body. The prescriptions in favor of liberty ought to be levelled against that quarter where the greatest danger lies, namely, that which possesses the highest prerogative of power. But this is not found in either the Executive or Legislative departments of Government, but in the body of the people, operating by the majority against the minority.

It may be thought that all paper barriers against the power of the community are too weak to be worthy of attention. I am sensible they are not so strong as to satisfy gentlemen of every description who have seen and examined thoroughly the texture of such a defence; yet, as they have a tendency to impress some degree of respect for them, to establish the public opinion in their favor, and rouse the attention of the whole community, it may be one means to control the majority from those acts to which they might be otherwise inclined.

This Article is a manifestation of Madison’s hope. Start with the reality that it seems quaint in retrospect to think that any of the Bill of Rights would be preserved absent the force of law. This is one of the great lessons of the Internet and the rise of Aggregators: when suppressing speech entailed physically disrupting printing presses or arresting pamphleteers, then restricting government, which retains a monopoly on real world violence, was sufficient to preserve speech. Along the same lines, there was no need to demand due process or a restriction on search and seizure on any entity but the government, because only the government could take your property or send you to jail.

Aggregators, though, make private action much more possible and powerful than ever before: yes, if you are kicked off of Twitter or Facebook, you can still say whatever you want on a street corner; similarly, if you lose all of your data and phone and email, you are still not in prison — and thank goodness that is the case! At the same time, it seems silly to argue that getting banned from a social media platform isn’t an infringement on individual free speech rights, even if it is the corporations’ own free speech rights that enable them to do just that legally, just as it is silly to argue that losing your entire digital life without recourse isn’t a loss of property without due process. The big Internet companies are manifesting Madison’s fears of the majority operating against the minority, and there is nothing the Bill of Rights can do about it.

What remains are those paper barriers, and what respect they might still engender, if it is possible to “rouse the attention of the whole community.” Rights are larger than laws, and Google has violated the former, even if it is not bound by the latter. The company ought not only change its policy with regard to Mark and Cassio, but fundamentally re-evaluate the balance it has struck between its unprecedented power over people’s lives and the processes it has in place to ensure that power is not abused. If it doesn’t, the people ought to, with what power they still retain, do it for them.

I wrote a follow-up to this Article in this Daily Update.

Instagram, TikTok, and the Three Trends

Back in 2010, during my first year of Business School, I helped give a presentation entitled “Twitter 101”:

The introductory slide from a Twitter 101 presentation in business school

My section was “The Twitter Value Proposition”, and after admitting that yes, you can find out what people are eating for lunch on Twitter, I stated “The truth is you can find anything you want on Twitter, and that’s a good thing.” The Twitter value proposition was that you could “See exactly what you need to see, in real-time, in one place, and nothing more”; I illustrated this by showing people how they could unfollow me:

A slide noting that Twitter is what you make of it

The point was that Twitter required active management of your feed, but if you put in the effort, you could get something uniquely interesting to you that was incredibly valuable.

Most of the audience didn’t take me up on it.

Facebook Versus Instagram

If there is one axiom that governs the consumer Internet — consumer anything, really — it is that convenience matters more than anything. That was the problem with Twitter: it just wasn’t convenient for nearly enough people to figure out how to follow the right people. It was Facebook, which digitized offline relationships, that dominated the social media space.

Facebook’s social graph was the ultimate growth hack: from the moment you created an account Facebook worked assiduously to connect you with everyone you knew or wished you knew from high school, college, your hometown, your workplace; name an offline network and Facebook digitized it. Of course this meant that there were far too many updates and photos to keep track of, so Facebook ranked them, and presented them in a feed that you could scroll endlessly.

Users famously hated the News Feed when it was first launched: Facebook had protesters outside its doors in Palo Alto, and far more online; most were, ironically enough, organized on Facebook. CEO Mark Zuckerberg penned an apology:

We really messed this one up. When we launched News Feed and Mini-Feed we were trying to provide you with a stream of information about your social world. Instead, we did a bad job of explaining what the new features were and an even worse job of giving you control of them. I’d like to try to correct those errors now…

The corrections amounted to better controls over what might be shared; Facebook did not give users what they claimed to want, which was abolishing the News Feed completely. That’s because the company correctly intuited a significant gap between its users’ stated preference — no News Feed — and their revealed preference, which was that they liked the News Feed quite a bit. The next fifteen years would prove the company right.

It was hard not to think of that non-apology apology while watching Adam Mosseri’s Instagram update three weeks ago; Mosseri was clear that video was going to be an ever greater part of the Instagram experience, along with recommended posts. Zuckerberg reiterated the point on Facebook’s earnings call, noting that recommended posts in both Facebook and Instagram would continue to increase. A day later Mosseri told Casey Newton on Platformer that Instagram would scale back recommended posts, but was clear that the pullback was temporary:

“When you discover something in your feed that you didn’t follow before, there should be a high bar — it should just be great,” Mosseri said. “You should be delighted to see it. And I don’t think that’s happening enough right now. So I think we need to take a step back, in terms of the percentage of feed that are recommendations, get better at ranking and recommendations, and then — if and when we do — we can start to grow again.” (“I’m confident we will,” he added.)

Michael Mignano calls this recommendation media in an article entitled The End of Social Media:

In recommendation media, content is not distributed to networks of connected people as the primary means of distribution. Instead, the main mechanism for the distribution of content is through opaque, platform-defined algorithms that favor maximum attention and engagement from consumers. The exact type of attention these recommendations seek is always defined by the platform and often tailored specifically to the user who is consuming content. For example, if the platform determines that someone loves movies, that person will likely see a lot of movie related content because that’s what captures that person’s attention best. This means platforms can also decide what consumers won’t see, such as problematic or polarizing content.

It’s ultimately up to the platform to decide what type of content gets recommended, not the social graph of the person producing the content. In contrast to social media, recommendation media is not a competition based on popularity; instead, it is a competition based on the absolute best content. Through this lens, it’s no wonder why Kylie Jenner opposes this change; her more than 360 million followers are simply worth less in a version of media dominated by algorithms and not followers.

Sam Lessin, a former Facebook executive, traced this evolution from the analog days to what is next in a Twitter screenshot entitled “The Digital Media ‘Attention’ Food Chain in Progress”:

Lessin’s five steps:

  1. The Pre-Internet ‘People Magazine’ Era
  2. Content from ‘your friends’ kills People Magazine
  3. Kardashians/Professional ‘friends’ kill real friends
  4. Algorithmic everyone kills Kardashians
  5. Next is pure-AI content which beats ‘algorithmic everyone’

This is a meta observation and, to make a cheap play on words, the first reason why it made sense for Facebook to change its name: Facebook the app is eternally stuck on Step 2 in terms of entertainment (the app has evolved to become much more of a utility, with a focus on groups, marketplace, etc.). It’s Instagram that is barreling forward. I wrote last summer about Instagram’s Evolution:

The reality, though, is that this is what Instagram is best at. When Mosseri said that Instagram was no longer a photo-sharing app — particularly a “square photo-sharing app” — he was not making a forward-looking pronouncement, but simply stating what has been true for many years now. More broadly, Instagram from the very beginning — including under former CEO Kevin Systrom — has been marked first and foremost by evolution.

To put this in Lessin’s framework, Instagram started out as a utility for adding filters to photos put on other social networks, then it developed into a social network in its own right. What always made Instagram different than Facebook, though, is the fact that its content was default-public; this gave the space for the rise of brands, meme and highlight accounts, and the Instagram influencer. Sure, some number of people continued to use Instagram primarily as a social network, but Meta, more than anyone, had an understanding of how Instagram usage had evolved over time.

Kylie Jenner and Kim Kardashian asking Instagram to be Instagram

In other words, when Kylie Jenner posts a petition demanding that Meta “Make Instagram Instagram again”, the honest answer is that changing Instagram is the most Instagram-like behavior possible.

Three Trends

Still, it’s understandable why Instagram did back off, at least for now: the company is attempting to navigate three distinct trends, all at the same time.

The first trend is the shift towards ever more immersive mediums. Facebook, for example, started with text but exploded with the addition of photos. Instagram started with photos and expanded into video. Gaming was the first to make this progression, and is well into the 3D era. The next step is full immersion — virtual reality — and while the format has yet to penetrate the mainstream this progression in mediums is perhaps the most obvious reason to be bullish about the possibility.

The trend in mediums online

The second trend is the increase in artificial intelligence. I’m using the term colloquially to refer to the overall trend of computers getting smarter and more useful, even if those smarts are a function of simple algorithms, machine learning, or, perhaps someday, something approaching general intelligence. To go back to Facebook, the original site didn’t have any smarts at all: it was just a collection of profile pages. Twitter came along and had the timeline, but the only smarts involved was the ability to read a time stamp: all of the content was presented in chronological order. What made Facebook’s News Feed work was the application of ranking: from the very beginning the company tried to present users the content from their network that it thought they might be most interested in, mostly using simple signals and weights. Over time this ranking algorithm has evolved into a machine learning-driven model that is constantly iterating based on every click and linger, but still only on the limited set of content constrained by who you follow. Recommendations are the step beyond ranking: now the pool is not who you follow but all of the content on the entire network; it is a computational challenge that is many orders of magnitude beyond mere ranking (and AI-created content is another massive step-change beyond that).

The trend in AI and content online
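To illustrate why that shift is such a computational leap, here is a rough sketch; the score function stands in for whatever engagement-prediction model a platform might use, and everything here is illustrative rather than a description of Meta’s actual systems:

```python
from typing import Callable, Dict, List, Set

def rank_feed(
    viewer: str,
    follows: Dict[str, Set[str]],
    posts_by_author: Dict[str, List[str]],
    score: Callable[[str, str], float],  # hypothetical engagement-prediction model
) -> List[str]:
    # Ranking: the candidate pool is limited to content from accounts the viewer follows.
    candidates = [post
                  for author in follows.get(viewer, set())
                  for post in posts_by_author.get(author, [])]
    return sorted(candidates, key=lambda post: score(viewer, post), reverse=True)

def recommend_feed(
    viewer: str,
    posts_by_author: Dict[str, List[str]],
    score: Callable[[str, str], float],
) -> List[str]:
    # Recommendation: the candidate pool is, in principle, every post on the network.
    # Real systems insert a retrieval stage (e.g. approximate nearest-neighbor search
    # over embeddings) before scoring; the brute-force loop here is only for clarity.
    candidates = [post for posts in posts_by_author.values() for post in posts]
    return sorted(candidates, key=lambda post: score(viewer, post), reverse=True)
```

The sorting is the same in both cases; what changes is the size of the candidate pool, which is the orders-of-magnitude difference described above.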

The third trend is the change in interaction models from user-directed to computer-controlled. The first version of Facebook relied on users clicking on links to visit different profiles; the News Feed changed the interaction model to scrolling. Stories reduced that to tapping, and Reels/TikTok is about swiping. YouTube has gone further than anyone here: Autoplay simply plays the next video without any interaction required at all.

The trend in UI online

One of the reasons Instagram got itself in trouble over the last few months is by introducing changes along all of these vectors at the same time. The company introduced more video into the feed (Trend 1), increased the percentage of recommended posts (Trend 2), and rolled out a new version of the app that was effectively a re-skinned TikTok to a limited set of users (Trend 3). It stands to reason that the company would have been better off doing one at a time.

That, though, would only be a temporary solution: it seems likely that all of these trends are inextricably intertwined.

Medium, Computing, and Interaction Models

Start with medium: text is easy, which is why it was the original medium of the Internet; effectively anyone can create it. The first implication is that there is far more text on the Internet than anything else; it also follows that the amount of high quality text is correspondingly high as well (a small fraction of a large number is still very large). The second implication has to do with AI: it is easier to process and glean insight from text. Text, meanwhile, takes focus and the application of an acquired skill for humans to interpret, not dissimilar to the deliberate movement of a mouse to interact with a link.

Photos used to be more difficult: digital cameras came along around the same time as the web, but even then you needed to have a dedicated device, move those photos to your computer, then upload them to a network. What is striking about the impact of smartphones is that not only did they make the device used to take pictures the same device used to upload and consume them, but they actually made it easier to take a picture than to write text. Still, it took time for AI to catch up: at first photos were ranked using the metadata surrounding them; only over the last few years has it become possible for services to understand what the photo actually is. The most reliable indicator of quality — beyond a like — remains the photo that you stop at while scrolling.

The ease of making a video followed a similar path to photos, but more extreme: making and uploading your own videos before the smartphone was even more difficult than photos; today the mechanics are just as easy, and it’s arguably even easier to make something interesting, given the amount of information conveyed by a video relative to photos, much less a text. Still, videos require more of a commitment than text or photos, because consuming them takes time; this is where the user interaction layer really matters. Lessin again, in another Twitter screenshot:

I saw someone recently complaining that Facebook was recommending to them…a very crass but probably pretty hilarious video. Their indignant response [was that] “the ranking must be broken.” Here is the thing: the ranking probably isn’t broken. He probably would love that video, but the fact that in order to engage with it he would have to go proactively click makes him feel bad. He doesn’t want to see himself as the type of person that clicks on things like that, even if he would enjoy it.

This is the brilliance of TikTok and Facebook/Instagram’s challenge: TikTok’s interface eliminates the key problem of what people want to view themselves as wanting to follow/see versus what they actually want to see…it isn’t really about some big algorithm upgrade, it is about releasing emotional inner tension for people who show up to be entertained.

This is the same tension between stated and revealed preference that Facebook encountered so many years ago, and it’s exactly why I fully expect the company to, after this pullback, continue to push forward with all three of the Instagram changes it is exploring.

Instagram’s Risk

Still, there is considerably more risk this time around: when Facebook pushed forward with the News Feed it was the young upstart moving aside incumbents like MySpace; it’s not as if its userbase was going to go backwards. This case is the opposite: Instagram is clearly aping TikTok, which is the young upstart in the space. It’s possible its users decide that if they must experience TikTok, they might as well go for the genuine article.

This also highlights why TikTok is a much more serious challenge than Snapchat was: in that case Instagram’s network was the sword used to cut Snapchat off at the knees. I wrote in The Audacity of Copying Well:

For all of Snapchat’s explosive growth, Instagram is still more than double the size, with far more penetration across multiple demographics and international users. Rather than launch a “Stories” app without the network that is the most fundamental feature of any app built on sharing, Facebook is leveraging one of their most valuable assets: Instagram’s 500 million users…Instagram and Facebook are smart enough to know that Instagram Stories are not going to displace Snapchat’s place in its users’ lives. What Instagram Stories can do, though, is remove the motivation for the hundreds of millions of users on Instagram to even give Snapchat a shot.

Instagram has no such power over TikTok, beyond inertia; in fact, the competitive situation is the opposite: if the goal is not to rank content from your network, but to recommend videos from the best creators anywhere, then it follows that TikTok is in the stronger relative position. Indeed, this is why Mosseri spent so much time talking about “small creators” with Newton:

I think one of the most important things is that we help new talent find an audience. I care a lot about large creators; I would like to do better than we have historically by smaller creators. I think we’ve done pretty well by large creators overall — I’m sure some people will disagree, but in general, that’s what the data suggests. I don’t think we’ve done nearly as well helping new talent break. And I think that’s super important. If we want to be a place where people push culture forward, to help realize the promise of the internet, which was to push power into the hands of more people, I think that we need to get better at that.

There is the old Internet AMA question as to whether you would rather fight a horse-sized duck or 100 duck-sized horses. The analogy here is that in a world of ranking a horse-sized duck that everyone follows is valuable; in a world of recommendations 100 duck-sized horses are much more valuable, and Instagram is willing to sacrifice the former for the latter.

Meta’s Reward

The payoff, though, will not be “power” for these small creators: the implication of entertainment being dictated by recommendations and AI instead of reputation and ranking is that all of the power accrues to the platform doing the recommending. Indeed, this is where the potential reward comes in: this power isn’t only based on the sort of Aggregator dynamics underpinning dominant platforms today, but also on the absolutely massive amounts of investment in the computing necessary to power the AI that makes all of this possible.

In fact, you can make the case that if Meta survives the TikTok challenge, it will be on its way to the sort of moat enjoyed by the likes of Apple, Amazon, Google, and Microsoft, all of which have real world aspects to their differentiation. There is lots of talk about the $10 billion the company is spending on the Metaverse, but that is R&D; the more important number for this moat is the $30 billion this year in capital expenditures, most of which is going to servers for AI. That AI is doing recommendations now, but Meta’s moat will only deepen if Lessin is right about a future where creators can be taken out of the equation entirely, in favor of artificially-generated content.

What is noteworthy is that AI content will be an essential part of any sort of Metaverse future; I wrote earlier this year in DALL-E, the Metaverse, and Zero Marginal Content:

What is fascinating about DALL-E is that it points to a future where these three trends can be combined. DALL-E, at the end of the day, is ultimately a product of human-generated content, just like its GPT-3 cousin. The latter, of course, is about text, while DALL-E is about images. Notice, though, that progression from text to images; it follows that machine learning-generated video is next. This will likely take several years, of course; video is a much more difficult problem, and responsive 3D environments more difficult yet, but this is a path the industry has trod before:

  • Game developers pushed the limits on text, then images, then video, then 3D
  • Social media drives content creation costs to zero first on text, then images, then video
  • Machine learning models can now create text and images for zero marginal cost

In the very long run this points to a metaverse vision that is much less deterministic than your typical video game, yet much richer than what is generated on social media. Imagine environments that are not drawn by artists but rather created by AI: this not only increases the possibilities, but crucially, decreases the costs.

These AI challenges, I would add, apply to monetization as well: one of the outcomes of Apple’s App Tracking Transparency changes is that advertising needs to shift from a deterministic model to a probabilistic one; the companies with the most data and the greatest amount of computing resources are going to make that shift more quickly and effectively, and I expect Meta to be top of the list.

None of this matters, though, without engagement. Instagram is following the medium trend to video, and Meta’s resources give it the long-term advantage in AI; the big question is which service users choose to interact with. To put it another way, Facebook’s next two decades are coming into sharper focus than ever; it is how well it navigates the TikTok minefield over the next two years that will determine if that long-term vision becomes a reality.

I wrote a follow-up to this Article in this Daily Update.

Political Chips

Last Friday AMD surpassed Intel in market capitalization:

Intel vs AMD market caps

This was the second time in history this happened — the first was earlier this year — and it may stick this time; AMD, in stark contrast to Intel, had stellar quarterly results. Both stocks are down in the face of a PC slump, but that is much worse news for Intel, given that they make worse chips.

It’s also not a fair comparison: AMD, thirteen years on from its spinout of Global Foundries, only designs chips; Intel both designs and manufactures them. It’s when you include AMD’s current manufacturing partner, TSMC, that Intel’s relative decline becomes particularly apparent:

Intel, AMD, and TSMC market caps

Of course an Intel partisan might argue that this comparison is unfair as well, because TSMC manufactures chips for a whole host of companies beyond AMD. That, though, is precisely Intel’s problem.

Intel’s Stumble

The late Clay Christensen, in his 2004 book Seeing What’s Next, predicted trouble for Intel:

Intel’s well-honed processes — which are almost unassailable competitive strengths in fights for undershot customers hungering for performance increases — might inhibit its ability to fight for customers clamoring for customized products. Its exacting manufacturing process could hamper its ability to deliver customized products. Its sales force could have difficulty adapting to a very different sales cycle. It would have to radically alter its marketing process. The VCE model predicts that operating “fast fabs” will be an attractively profitable point in the value chain in the future. The good news for IDMs such as IBM and Intel is that they own fabs. The bad news is that their fabs aren’t fast. Entrants without legacy processes could quite conceivably develop better proprietary processes that can rapidly deliver custom processors.

This sounds an awful lot like what happened over the ensuing years: one of TSMC’s big advantages is its customer service. Because the company was built as a pure-play foundry, it has developed processes and off-the-shelf building blocks that make it easy for partners to build custom chips. This was tremendously valuable, even if the resultant chips were slower than Intel’s.

What Christensen didn’t foresee was that Intel would lose the performance crown; rather, he assumed that performance would cease to be an important differentiator:

If history is any guide, motivated innovators will continue to do the seemingly impossible and find unanticipated ways to extend the life of Moore’s Law. Although there is much consternation that at some point Moore’s Law will run into intractable physical limits, the only thing we can predict for certain is that innovators will be motivated to figure out solutions.

But this does not address whether meeting Moore’s Law will continue to be paramount to success. Everyone always hopes for the emergence of new, unimagined applications. But the weight of history suggests the unimagined often remains just that; ultimately ever more demanding applications will stop appearing or will emerge much more slowly than anticipated. But even if new, high-end applications emerge, rocketing toward the technological frontier almost always leaves customers behind. And it is in those overshot tiers that disruptions take root.

How can we tell if customers are overshot? One signal is customers not using all of a product’s functionality. Can we see this? There are ever-growing populations of users who couldn’t care less about increases in processing power. The vast majority of consumers use their computers for word processing and e-mail. For this majority, high-end microprocessors such as Intel’s Itanium and Pentium 4 and AMD’s Athlon are clearly overkill. Windows XP runs just fine on a Pentium III microprocessor, which is roughly half as fast as the Pentium 4. This is a sign that customers may be overshot.

Obviously Christensen was wrong about a Pentium III being good enough, and not just because web pages suck; rather, the infinite malleability of software really has made it possible to not just create new kinds of applications but to also substantially rework previous analog solutions. Moreover, the need for more performance is actually accelerating with the rise of machine-learning based artificial intelligence.

Intel, despite being a chip manufacturer, understood the importance of software better than anyone. I explained in a Daily Update earlier this year how Pat Gelsinger, then a graduate student at Stanford, convinced Intel to stick with a CISC architecture design because that gave the company a software advantage; from an oral history at the Computer History Museum:

Gelsinger: We had a mutual friend that found out that we had Mr. CISC working as a student of Mr. RISC, the commercial versus the university, the old versus the new, teacher versus student. We had public debates of John and Pat. And Bear Stearns had a big investor conference, a couple thousand people in the audience, and there was a public debate of RISC versus CISC at the time, of John versus Pat.

And I start laying out the dogma of instruction set compatibility, architectural coherence, how software always becomes the determinant of any computer architecture being developed. “Software follows instruction set. Instruction set follows Moore’s Law. And unless you’re 10X better and John, you’re not 10X better, you’re lucky if you’re 2X better, Moore’s Law will just swamp you over time because architectural compatibility becomes so dominant in the adoption of any new computer platform.” And this is when x86– there was no server x86. There’s no clouds at this point in time. And John and I got into this big public debate and it was so popular.

Brock: So the claim wasn’t that the CISC could beat the RISC or keep up to what exactly but the other overwhelming factors would make it the winner in the end.

Gelsinger: Exactly. The argument was based on three fundamental tenets. One is that the gap was dramatically overstated and it wasn’t an asymptotic gap. There was a complexity gap associated with it but you’re going to make it leap up and that the CISC architecture could continue to benefit from Moore’s Law. And that Moore’s Law would continue to carry that forward based on simple ones, number of transistors to attack the CISC problems, frequency of transistors. You’ve got performance for free. And if that gap was in a reasonable frame, you know, if it’s less than 2x, hey, in a Moore’s Law’s term that’s less than a process generation. And the process generation is two years long. So how long does it take you to develop new software, porting operating systems, creating optimized compilers? If it’s less than five years you’re doing extraordinary in building new software systems. So if that gap is less than five years I’m going to crush you John because you cannot possibly establish a new architectural framework for which I’m not going to beat you just based on Moore’s Law, and the natural aggregation of the computer architecture benefits that I can bring in a compatible machine. And, of course, I was right and he was wrong.

That last sentence needs a caveat: Gelsinger was right when it came to computers and servers, but not smartphones. There, performance wasn’t free, because manufacturers had to be cognizant of power consumption. More than cognizant, in fact — power usage was the overriding concern. Tony Fadell, who created the iPod and led the development of the first three generations of the iPhone, told me in an interview earlier this year:

You have to have that point of view of that every nanocoulomb is sacred and compatibility doesn’t matter, we’re going to use the best bits, but we’re not going to make sure it has to be the same look and feel. It doesn’t have to have the same principles that is designed for a laptop or a standalone desktop computer, and then bring those down to something that’s smaller form factor, and works within a certain envelope. You have to rethink all the principles. You might use the bits around, and put them together in different ways and use them differently. That’s okay. But your top concept has to be very, very different about what you’re building, why you’re building it, what you’re solving, and the needs of that new environment, which is mobile, and mobile at least for a day or longer for that battery life.

The key phrase there is “compatibility doesn’t matter”; Gelsinger’s argument for CISC over RISC rested on the idea that by the time you remade all of the software created for CISC, Intel would have long since overcome the performance delta between different architectures via its superior manufacturing, which would allow compatibility to trump the competition. Smartphones, though, provided a reason to build up the software layer from scratch, with efficiency, not performance, as the paramount goal.1
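As a toy illustration of the arithmetic behind Gelsinger’s argument, assuming, as he did, that performance roughly doubles with each two-year process generation:

```python
import math

def years_to_close_gap(performance_gap: float, doubling_period_years: float = 2.0) -> float:
    # If performance doubles every process generation, a compatible architecture
    # closes a gap of `performance_gap` in log2(gap) generations.
    return math.log2(performance_gap) * doubling_period_years

# Gelsinger's framing: a ~2x RISC advantage disappears in about one generation,
# far less than the ~5 years he cites for standing up a new software ecosystem.
print(years_to_close_gap(2.0))  # 2.0 years
print(years_to_close_gap(1.5))  # ~1.2 years
```

The smartphone broke the argument not by closing the gap faster, but by making the software-ecosystem question moot: a new platform was being built from scratch anyway, with efficiency as the goal.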

All of this still fit in Christensen’s paradigm, I would note: foundries like TSMC and Samsung could accommodate new chip designs that prioritized efficiency over performance, just as Christensen predicted. What he didn’t foresee in 2004 was just how large the smartphone market would be. While there are a host of reasons why TSMC took the performance crown from Intel over the last five years, a major factor is scale: TSMC was making so many chips that it had the money and motivation to invest in Moore’s Law.

The most important decision was shifting to extreme ultraviolet lithography at a time when Intel thought it was much too expensive and difficult to implement; TSMC, backed by Apple’s commitment to buy the best chips it could make, committed to EUV in 2014, and delivered the first EUV-derived chips in 2019 for the iPhone.

Those EUV machines are made by one company, ASML, which is worth more than Intel too (and Intel is a customer):

Intel, AMD, TSMC, and ASML market caps

The Dutch company, to an even greater degree than TSMC, is the only lithography maker that can afford to invest in the absolute cutting edge.

From Technology to Economics

In 2021’s Internet 3.0 and the Beginning of (Tech) History, I posited that the first era of the Internet was defined by technology, i.e. figuring out what was possible. Much of this technology, including standards like TCP/IP, DNS, HTTP, etc. was developed decades ago; this era culminated in the dot com bubble.

The second era of the Internet was about economics, specifically the unprecedented scale possible in a world of zero distribution costs.

Unlike the assumptions that undergird Internet 1.0, it turned out that the Internet does not disperse economic power but in fact centralizes it. This is what undergirds Aggregation Theory: when services compete without the constraints of geography or marginal costs, dominance is achieved by controlling demand, not supply, and winners take most.

Aggregators like Google and Facebook weren’t the only winners though; the smartphone market was so large that it could sustain a duopoly of two platforms with multi-sided networks of developers, users, and OEMs (in the case of Android; Apple was both OEM and platform provider for iOS). Meanwhile, public cloud providers could provide back-end servers for companies of all types, with scale economics that not only lowered costs and increased flexibility, but which also justified far more investments in R&D that were immediately deployable by said companies.

Chip manufacturing obviously has marginal costs, but the fixed costs are so much larger that the economics are not that dissimilar to software (indeed, this is why the venture capital industry, which originated to support semiconductor startups, so seamlessly transitioned to software); today TSMC et al invest billions of dollars into a single fab that generates millions of chips for decades.
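As a back-of-the-envelope illustration of that point, with deliberately made-up numbers (a leading-edge fab is assumed here to cost on the order of $20 billion):

```python
def cost_per_chip(fixed_cost: float, marginal_cost: float, volume: int) -> float:
    # Amortized fixed cost per chip plus the (comparatively small) marginal cost;
    # as volume grows, the fixed cost matters less and less per unit.
    return fixed_cost / volume + marginal_cost

# Hypothetical figures: a $20B fab, $40 marginal cost per chip, at increasing volumes.
for volume in (10_000_000, 100_000_000, 1_000_000_000):
    print(f"{volume:>13,} chips -> ${cost_per_chip(20e9, 40.0, volume):,.2f} per chip")
```

The more chips a given fab produces, the more its economics resemble software’s, which is the scale dynamic that favored TSMC.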

That increase in scale is why a modular value chain ultimately outcompeted Intel’s integrated approach, and it’s why TSMC’s position seems so impregnable: sure, a chip designer like MediaTek might announce a partnership with Intel to maybe produce some lower-end chips at some point in the future, but there is a reason it is not a firm commitment and not for the leading edge. TSMC, for at least the next several years, will make the best chips, and because of that will have the most money to invest in what comes next.

Scale, though, is not the end of the story. Again from Internet 3.0 and the Beginning of (Tech) History:

This is why I suspect that Internet 2.0, despite its economic logic predicated on the technology undergirding the Internet, is not the end-state…After decades of developing the Internet and realizing its economic potential, the entire world is waking up to the reality that the Internet is not simply a new medium, but a new maker of reality…

To the extent the Internet is as meaningful a shift [as the printing press] — and I think it is! — is inversely correlated to how far along we are in the transformation that will follow — which is to say we have only gotten started. And, after last week, the world is awake to the stakes; politics — not economics — will decide, and be decided by, the Internet.

Time will tell if my contention that an increasing number of nations will push back against American Internet hegemony by developing their own less efficient but independent technological capabilities is correct; one could absolutely make the case that the U.S.’s head start is so overwhelming that attempts to undo Silicon Valley centralization won’t pan out anywhere other than China, where U.S. Internet companies have been blocked for a generation.

Chips, though, are very much entering the political era.

Politics and the End-State

Taiwan President Tsai Ing-wen shared, as one does, some pictures from lunch on social media:

Taiwan President Tsai Ing-wen's Facebook post featuring TSMC founder Morris Chang

The man with glasses and the red tie in the first picture is Morris Chang, the founder of TSMC; behind him is Mark Liu, TSMC’s chairman. They were the first guests listed in President Tsai’s write-up of the lunch with House Speaker Nancy Pelosi, which begins:

台灣與美國共享的,不只有民主自由人權的價值,在經濟發展和民主供應鏈的合作上,我們也持續共同努力。

Taiwan and the United States not only share the values of democracy, freedom and human rights, but also continue to work together on economic development and democratic supply chains.

That sentence captures why Taiwan looms so large, not only on the occasion of Pelosi’s visit, but to world events for years to come. Yes, the United States supports Taiwan because of democracy, freedom and human rights; the biggest reason why that support may one day entail aircraft carriers, though, is chips and TSMC. I wrote two years ago in Chips and Geopolitics:

The international status of Taiwan is, as they say, complicated. So, for that matter, are U.S.-China relations. These two things can and do overlap to make entirely new, even more complicated complications.

Geography is much more straightforward:

A map of the Pacific

Taiwan, you will note, is just off the coast of China. South Korea, home to Samsung, which also makes the highest end chips, although mostly for its own use, is just as close. The United States, meanwhile, is on the other side of the Pacific Ocean. There are advanced foundries in Oregon, New Mexico, and Arizona, but they are operated by Intel, and Intel makes chips for its own integrated use cases only.

The reason this matters is because chips matter for many use cases outside of PCs and servers — Intel’s focus — which is to say that TSMC matters. Nearly every piece of equipment these days, military or otherwise, has a processor inside. Some of these don’t require particularly high performance, and can be manufactured by fabs built years ago all over the U.S. and across the world; others, though, require the most advanced processes, which means they must be manufactured in Taiwan by TSMC.

This is a big problem if you are a U.S. military planner. Your job is not to figure out if there will ever be a war between the U.S. and China, but to plan for an eventuality you hope never occurs. And in that planning the fact that TSMC’s foundries — and Samsung’s — are within easy reach of Chinese missiles is a major issue.

China, meanwhile, is investing heavily in catching up, although Semiconductor Manufacturing International Corporation (SMIC), its Shanghai-based champion, only just started manufacturing on a 14nm process, years after TSMC, Samsung, and Intel. In the long run, though, the U.S. faced a scenario where China had its own chip supplier, even as it threatened the U.S.’s chip supply chain.

This reality is why I ultimately came down in support of the CHIPS Act, which passed Congress last week. I wrote in a Daily Update:

This is why Intel’s shift to being not simply an integrated device manufacturer but also a foundry is important: yes, it’s the right thing to do for Intel’s business, but it’s also good for the West if Intel can pull it off. That, by extension, is why I’m fine with the CHIPS bill favoring Intel…AMD, Qualcomm, Nvidia, et al, are doing just fine under the current system; they are drivers and beneficiaries of TSMC’s dominance in particular. The system is working! Which, to the point above, is precisely why Intel being helped disproportionately is in fact not a flaw but a feature: the goal should be to counteract the fundamental forces pushing manufacturing to geopolitically risky regions, and Intel is the only real conduit available to do that.

Time will tell if the CHIPS Act achieves its intended goals; the final version did, as I hoped, explicitly limit investment by recipients in China, which is already leading chip makers to rethink their investments. That this is warping the chip market is, in fact, the point: the structure of technology drives inexorably towards the most economically efficient outcomes, but the ultimate end state will increasingly be a matter of politics.

I wrote a follow-up to this Article in this Daily Update.


  1. As an example of how efficiency trumped performance, the first iPhone’s processor was actually underclocked — better battery life was more of a priority than faster performance. 

Big Ten Blame

Penn State has always been a usurper, at least to me.

In 1984 the Supreme Court ruled that the NCAA’s attempt to control individual universities’ college football TV rights was an illegal restraint on trade; while the lawsuit was instigated by the University of Oklahoma, it was conferences that were the biggest beneficiaries, as it simply made more sense to negotiate TV rights as a collective of similarly situated schools. This was a problem for independent Penn State; its traditional rivals like Pitt, Syracuse, and Boston College joined the basketball-focused Big East, leaving the football power to make overtures to the Big Ten.

The 1990 announcement that Penn State was joining the conference was controversial within the Big Ten itself. A fair number of the conference’s athletic directors opposed the move, as did most of the coaches (it is university presidents who make the decision, and even there Penn State received only the minimum 7 votes in favor); I was a 10-year-old fan of a then-decrepit Wisconsin football team that had nothing going for it other than Big Ten pride, and I resented the idea that Penn State was going to come in and potentially dominate the conference.

It all feels quite quaint here in 2022, and not just because Wisconsin has had more football success than Penn State; the Big Ten added Nebraska in 2011, and Maryland and Rutgers in 2014, in both cases setting off seismic shifts in the college landscape. Both expansions made sense on the edges, both figuratively and literally: Nebraska was a traditional football power neighboring Iowa that, more importantly, gave the conference the 12 teams necessary to stage a lucrative conference championship game. Maryland and Rutgers bordered Pennsylvania — Penn State was a well-established member of the Big Ten by this point, in practice if not in my mind — and, more importantly, brought the Washington D.C. and New York City markets to the Big Ten’s groundbreaking cable TV network.

It is the latest expansion announcement, though, that blows apart the entire concept of a regional conference founded on geographic rivalries: UCLA and USC will join the Big Ten in 2024. They are not, needless to say, in a Big Ten border state:

That, though, very much fits the reality of 2022: geography doesn’t matter, but attention does, and after the SEC grabbed Texas and Oklahoma last year, the two Los Angeles universities represented the two biggest programs not yet in the two biggest conferences. As for the timing, it’s all about TV: the Big Ten is in the middle of negotiating new rights packages this summer, while the Pac-12’s rights package ends in 2024. One thing is for sure: everyone in college sports, particularly those who, like 10-year-old me, value tradition, knows exactly who to blame:

This appears to be true as far as the mechanics of the Big Ten’s expansion: Fox reportedly instigated the UCLA and USC talks with the Big Ten. Figuring out why it was Fox, though — and ESPN with the SEC — exculpates both networks from ultimate responsibility.

A Brief History of TV

TV’s origins are, unsurprisingly, in radio: thanks to the magic of over-the-air broadcasting a radio station could deliver audio to anyone in its geographic area with a compatible listening device. Originally all of said audio was generated at the radio station, but given that most people wanted to listen to similar things, it made sense to link stations together and broadcast the same content all at once. In 1928 NBC became the first coast-to-coast radio network, linking together radio stations with phone lines; those stations weren’t owned by NBC, but were rather affiliates: local owners would actually operate the stations and sell local ads, and pay a fee to NBC for content that, because it was funded by stations across the country, was far more compelling than anything that could be produced locally.

TV followed the same path: thanks to the increased cost of producing video relative to audio, the economic logic of centralized content production was even more compelling (as were the proceeds from selling ads nationally). The content was more compelling as well, leading to further innovation in distribution, specifically the advent of cable that I wrote about earlier this year. That new distribution led to further innovation in content: new networks were created specifically for cable, even though cable had originally been created to help people receive over-the-air broadcasts.

In fact, new cable networks were so compelling that local broadcast stations (particularly those unaffiliated with the national networks) were worried about losing carriage, leading the FCC to institute “must-carry” rules that compelled cable operators to carry local broadcast stations for free. In the 1980s, however, a federal appeals court ruled that must-carry rules infringed on cable operators’ First Amendment rights, threatening local broadcast stations that depended on the rules for access to an increasingly large percentage of homes.

This ruling was an impetus for the Cable Television Consumer Protection and Competition Act of 1992, which, among other provisions, gave local broadcast stations a choice: must-carry status or the right to charge retransmission fees (a station that chose the latter gave up the former). This led to a clear bifurcation in broadcast channels: stations with cheap and mostly local programming chose must-carry, while stations with the most desirable programming — which, per the aforementioned point, were affiliated with national networks — chose retransmission fees. Those national networks took notice: a significant part of the increase in cable fees over the last thirty years has been driven by national networks increasing the fees they charge affiliates for programming; those affiliates recoup those fees by increasing their retransmission fees (cable companies, for their part, continue to break out these fees on bills — my “Broadcast TV Surcharge” in Wisconsin is $21/month).

Local stations originally pushed back against this shift, de-affiliating from networks that pushed too hard. The problem, though, came back to content: networks had everything from popular sitcoms and dramas to national news to late-night talk shows. The most important bit of content, though, was sports. Moreover, the importance of sports has only increased as those other content offerings have been unbundled by the Internet: streaming services have sitcoms and dramas, websites have all of the news you could ever want to consume, and social media provides all kinds of comedy and commentary; the one exception to Internet disruption is live games between teams you care about.

Fox’s Contrarian Bet

The importance of sports to ESPN is self-explanatory: the entire point of the network is to show games, particularly as its SportsCenter and talk show franchises have suffered from competition with the Internet. ESPN’s parent company, Disney, has jumped into this competition with both feet, launching Disney+ and taking a controlling stake in Hulu.

That controlling stake came from a deal that Disney announced in 2017: the company acquired the majority of 21st Century Fox, including its film and television studios, most of its cable TV networks (including FX), a controlling stake in National Geographic, Star India, and the aforementioned stake in Hulu. Disney’s rationale was that it needed to beef up its content offerings to compete in streaming, which was clearly the future; traditional TV was, by implication, the past.

That left a newly spun-out Fox Corporation, which included the Fox broadcast network, Fox News, and Fox Sports (including the Fox Sports cable channels and Fox’s share of the Big Ten Network); the nature of these channels signaled a completely different strategy from streaming. I wrote at the time in a Daily Update:

In that last sentence I actually put forth two distinct strategies: selling direct to consumers, and charging distributors significantly more. Both do depend on having differentiated content, which was the point of that Daily Update, but the similarities end there. To explain what I mean, this deal actually offers two great examples: if it goes through, that means Fox is pursuing the second strategy, and Disney the first.

Start with Fox: its news and sports divisions — particularly the former — are highly differentiated. That gives Fox significant pricing power with shrinking-but-still-very-large TV distributors. Moreover, given that both news and sports are heavily biased towards live viewing, they are also a good fit for advertising, which, again, matches up with traditional TV distribution. What Fox would accomplish with this deal, then, is shedding a huge amount of that detritus I mentioned earlier: sure, more was better when there was only one distributor, but now that there is competition for viewers’ attention, filler is a drag on the content that actually gives negotiating leverage. Fox could come out of this deal with the same pricing power it has today but a vastly streamlined corporate structure and cost basis.

It’s a bet that has paid off: while Disney, like other streaming companies, enjoyed a huge run-up during the pandemic, it is Fox that today has the superior returns in its two years as an independent company:

Fox's stock price relative to Disney since the 21st Century Fox acquisition

Still, I think my original analysis was incomplete; Fox isn’t simply wringing more money out of a dying business model with a leaner corporate structure. Rather, it is driving the pay-TV business model to its logical endpoint: nothing but sports and news. That doesn’t mean, though, that Fox is the biggest winner.

Sports Concentration

The foundation of Fox’s offering is the NFL, but it is an expensive one: $2.025 billion per year for Sunday afternoon regular season games, playoffs, and the Super Bowl every four years. The NFL understands its position in the sports landscape very well — it has a monopoly on the professional version of America’s favorite sport — and it prices its rights accordingly, even as it is careful to spread its games across multiple networks:1

The NFL monopoly

College football is America’s second favorite sport; it has also traditionally been a much more profitable one for the networks. CBS, for example, currently pays the SEC $55 million per year to broadcast the conference’s top games, which average over 6 million viewers per telecast; that is about a third of what CBS averages for NFL games, but at a fraction of the cost for rights. The relative cheapness of that deal is explained by the fact that it was negotiated a decade ago, when the college football landscape was considerably more diffuse:2

College diffusion

The SEC’s new deal looks a lot different: ESPN has exclusive rights to SEC football3 for $300 million per year. That increase is driven by the SEC’s dominance of college football, and corresponding national interest; that interest will be that much greater thanks to the addition of Texas and Oklahoma (which will almost certainly lead to an increase in rights fees). In short, sports are the biggest driver of pay-TV, which means it is essential to have the sports the most people want to watch; the SEC figures prominently in that regard.

Still, it is the Big Ten, based in the sports-obsessed Midwest, and filled with massive public universities churning out interested alumni who live all over the country, that is the most attractive of all; even before this expansion the conference was rumored to be seeking a deal for $1.1 billion/year. Add in the Los Angeles market and UCLA and USC fan bases and that number could end up even higher.

Blame Games

Put all of these pieces together, and the question of who exactly is responsible for college football’s conference upheaval gets a bit more complicated:

  • Local TV stations charge ever higher retransmission fees to pay-TV operators because they have compelling content that subscribers demand.
  • Networks charge ever higher affiliate fees to TV stations for that compelling content, extracting most (if not all) of those retransmission fees.
  • The most compelling content is sports, especially as alternative content loses out to the Internet, and the most popular sport (the NFL) is governed by a single entity, allowing it to extract the greatest fees.

All of this extraction is a function of relative bargaining power that is ultimately derived from what fans want to see:

The NFL has the most bargaining power

Given this, the logic of the Big Ten’s expansion into California is obvious: the more of an audience that the Big Ten can command, the more of the money flowing through that value chain it can extract. Sure, it’s not quite to the level of the NFL, but it’s the next closest thing. This is also the downside to Fox’s bet on live: while the company owns the content it produces on channels like Fox News, it has to buy sports rights, and it is the Big Ten that is determined to take its share, even if that means an expansion that otherwise makes no sense at all.

In other words, I think the tweet above has it backwards: Fox and ESPN are not “grandmasters calling the shots behind the scenes”; they are essential but ultimately replaceable parts in the movement of money from consumers to the entities that provide the content those consumers want.

The Big Ten is accruing NFL-like bargaining power

Still, the tweet is instructive: perhaps the most essential role Fox and ESPN play for universities is taking the blame as the latter make more money than ever.

I wrote a follow-up to this Article in this Daily Update.


  1. The NFL is also very careful about cultivating local fans: the league has always insisted that all games are available over-the-air in local markets, whether they are on broadcast TV or not 

  2. The SEC also sold secondary rights to ESPN 

  3. And basketball, but that doesn’t drive rights fees to nearly the same extent 

Spotify, Netflix, and Aggregation

When Spotify filed for its direct listing in 2018, it was popular to compare the streaming music service to Netflix, the streaming video service; after all, both were quickly growing subscription-based services that gave consumers media on demand. This comparison was intended as a bullish one for Spotify, given that Netflix’s stock had increased by 1,113% over the preceding five years:

Netflix's stock from 2013-2018

The problem with the comparison is that Spotify clearly was a different kind of business from Netflix, thanks to its very different relationship to its content providers; whereas Netflix had always acquired content on a wholesale basis — first through licensing deals with content owners, and later by making its own content — Spotify licensed content on a revenue-share basis. This latter point is why I was skeptical of Spotify’s profit-making potential; I wrote at the time in Lessons From Spotify, in a section entitled Spotify’s Missing Profit Potential:

That, though, is precisely the problem: Spotify’s margins are completely at the mercy of the record labels, and even after the rate change, the company is not just unprofitable, its losses are growing, at least in absolute euro terms:

Spotify Gross and Net Profit

Moreover, it seems highly unlikely Spotify’s Cost of Revenue will improve much in the short-term: those record deals are locked in until at least next year, and they include “most-favored nation” provisions, which means that Spotify has to get Universal Music Group, Sony Music Entertainment, Warner Music Group, and Merlin (the representative for many independent labels), which own 85% of the music on Spotify as measured by streams, to all agree to reduce rates collectively. Making matters worse, the U.S. Copyright Royalty Board just increased the amount to be paid out to songwriters; Spotify said the change isn’t material, but it certainly isn’t in the right direction either.

I compared this (unfavorably) to Netflix in a follow-up:

Netflix has licensed content, not agreed-to royalty agreements. That means that Netflix’s costs are fixed, which is exactly the sort of cost structure you want if you are a growing Internet company. Spotify, on the other hand, pays the labels according to a formula that has revenue as the variable, which means that Spotify’s marginal costs rise in line with its top-line revenue.
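To make the cost-structure contrast concrete, here is a minimal sketch (with invented revenue and royalty figures, not either company’s actual numbers) of how gross margin behaves when content costs are fixed versus pegged to revenue:

```python
# Hypothetical illustration of fixed vs. revenue-share content costs.
# All figures are invented for the sake of the example.

def fixed_cost_margin(revenue: float, content_cost: float) -> float:
    """Netflix-style model: content cost does not scale with revenue."""
    return (revenue - content_cost) / revenue

def revenue_share_margin(revenue: float, royalty_rate: float) -> float:
    """Spotify-style model: content cost is a fixed percentage of revenue."""
    return (revenue - revenue * royalty_rate) / revenue

for revenue in (1_000, 2_000, 4_000):  # revenue doubling over time
    print(
        f"revenue={revenue}: "
        f"fixed-cost margin={fixed_cost_margin(revenue, content_cost=800):.0%}, "
        f"revenue-share margin={revenue_share_margin(revenue, royalty_rate=0.70):.0%}"
    )

# Fixed costs: margin improves as revenue grows (20% -> 60% -> 80%).
# Revenue share: margin is pinned at 30% no matter how large revenue gets.
```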

Both companies proceeded to do quite well in the markets over the next three-and-a-half years, with a decided edge to Netflix; however, the last six months undid all of those gains, with Netflix getting hit harder than Spotify:

Netflix and Spotify stock since 2018

Setting aside any analysis of absolute value, I think the relative trend is justified: I still maintain, as I did in 2018, that Spotify and Netflix are fundamentally different businesses because of their relationship to content; now, though, I think that Spotify’s position is preferable to Netflix’s, and has more long-term upside. Moreover, I don’t think this is a new development: I was wrong in 2018 to prefer Netflix to Spotify; worse, I should have already known why.

Categorizing Aggregators

In my original article about Aggregation Theory I wrote:

The value chain for any given consumer market is divided into three parts: suppliers, distributors, and consumers/users. The best way to make outsize profits in any of these markets is to either gain a horizontal monopoly in one of the three parts or to integrate two of the parts such that you have a competitive advantage in delivering a vertical solution. In the pre-Internet era the latter depended on controlling distribution…

The fundamental disruption of the Internet has been to turn this dynamic on its head. First, the Internet has made distribution (of digital goods) free, neutralizing the advantage that pre-Internet distributors leveraged to integrate with suppliers. Secondly, the Internet has made transaction costs zero, making it viable for a distributor to integrate forward with end users/consumers at scale.

Aggregation Theory

This has fundamentally changed the plane of competition: no longer do distributors compete based upon exclusive supplier relationships, with consumers/users an afterthought. Instead, suppliers can be commoditized leaving consumers/users as a first order priority. By extension, this means that the most important factor determining success is the user experience: the best distributors/aggregators/market-makers win by providing the best experience, which earns them the most consumers/users, which attracts the most suppliers, which enhances the user experience in a virtuous cycle.

Fast forward to 2017, where in Defining Aggregators I sought to provide a more specific definition of an Aggregator; specifically, Aggregators had:

  • A direct relationship with users
  • Zero marginal costs for serving users
  • Demand-driven multi-sided networks with decreasing acquisition costs

The rest of the post was devoted to building a taxonomy of Aggregators based on their relationships to suppliers:

  • Level 1 Aggregators paid for supply, but had superior buying power due to their user base; Netflix was here.
  • Level 2 Aggregators incurred marginal transaction costs for adding supply; “real-world” companies like Uber and Airbnb were here.
  • Level 3 Aggregators had zero supply costs; Google and Facebook were here.

Notice the tension between the two articles, specifically this sentence in Aggregation Theory: “no longer do distributors compete based upon exclusive supplier relationships”. Had I kept that sentence in mind, I would have concluded that Level 1 and Level 2 companies were not Aggregators at all: sure, companies in these categories could scale on the demand side, but they were going to hit a wall on the supply side. Indeed, this has been a problem for Uber in particular: the company has never been able to escape the need to compete for supply, which is another way of saying the company has never been able to make any money.

Netflix has a similar problem: the company’s content investments have failed to provide the evergreen lift in customer acquisition I anticipated; despite having more original content than ever, Netflix has hit the wall in terms of subscriber growth, particularly in developed countries where it can charge the highest prices. While some of these challenges were foreseeable — I predicted that Netflix would have a rough stretch where all of its onetime content providers tried their hand at streaming1 — it does seem likely that Netflix is going to struggle to get significantly more leverage on its content costs than it has to date, as that is the only way to not just acquire but also keep its subscribers.

In short, Defining Aggregators was focused on the demand side in its definition and the supply side in its taxonomy; my original Article — and, in retrospect, the better one — did the opposite. The only way you can truly control demand — the tell-tale sign of an Aggregator — is to have fully commoditized and infinitely scalable supply; streaming video fails on the former, and ride-sharing on the latter.

Spotify and Commoditized Supply

With that in mind, go back to the reason I was skeptical of Spotify, and those revenue-sharing deals with record labels. In 2017’s The Great Unbundling I noted that the music industry, much to the surprise of anyone who observed the Napster era, had ended up in pretty good shape:

While piracy drove the music labels into the arms of Apple, which unbundled the album into the song, streaming has rewarded the integration of back catalogs and new music with bundle economics: more and more users are willing to pay $10/month for access to everything, significantly increasing the average revenue per customer. The result is an industry that looks remarkably similar to the pre-Internet era:

A drawing of The New Music Media Industry Model

Notice how little power Spotify and Apple Music have; neither has a sufficient user base to attract suppliers (artists) based on pure economics, in part because they don’t have access to back catalogs. Unlike newspapers, music labels built an integration that transcends distribution.

It’s easy to see why this would be a bad thing for Spotify; on the flip side, you can see why many of the most optimistic assumptions about Spotify’s upside rested on the company somehow eliminating the labels completely, even though the fact that new music immediately becomes back catalog music means that the labels’ relative position in the music value chain is quite stable. This is why I’ve always been skeptical about the possibility of Spotify displacing the labels entirely, and the short-lived exclusive wars confirmed that point of view: it’s better for artists and labels for all music to be available everywhere.

My contention — and the key insight undergirding this Article — is that this is better for Spotify, as well. Go back to the point about commoditization: an input does not need to be free to be commoditized. Sure, the marginal cost of streaming music — which is nothing but bits — is zero; it is a credit to the music labels’ new-music-to-back-catalog flywheel that they are able to charge for access to these bits. At the same time, the bits are available to anyone for roughly the same price: that is why not just Spotify and Apple, but also YouTube, Amazon, Tidal, Deezer, etc. all have roughly the same catalog for roughly the same price. Streaming music isn’t free, but it is an infinitely available non-exclusive commodity.

Look again at the contrast to the other companies I highlighted above: there are a limited number of potential ride-share drivers, which means that Uber has to compete with Lyft with a never-ending set of driver incentives that make profitability difficult if not impossible to achieve; Netflix has some of the content — not all of it — and it has to bid against competitors to get more of it.

Netflix’s wholesale model does, to be clear, make it easier to achieve a direct profit: as I noted above, because Netflix pays a fixed cost for content it can earn a surplus without having to pay anything extra to the content producer, whereas Spotify can only eke out profitability from its subscribers by reducing its operational costs. The real opportunity for an Aggregator, though, is building a business model that is independent of supply and instead predicated on owning demand. Here Spotify is better placed than Netflix — although the latter is finally making moves in that direction.

Spotify’s Investor Day

Earlier this month Spotify had an investor day (which I covered in an Update last week); Charlie Hellman, vice president and head of music product, presented a textbook case as to why Spotify is an Aggregator for music, with the business model to match. Hellman started by emphasizing Spotify’s role in new music discovery:

It’s important to remember that first and foremost Spotify is a music company. All of our music team’s strategies ladder up to two primary goals: making a unique and superior music experience for fans, and creating a more open and valuable ecosystem for artists. These two goals really complement one another, which is clear to see when you look at the playlisting ecosystem we’ve spent the last decade defining and perfecting. Whatever your mood, your style, whatever the occasion, Spotify has something for you, and as Gustav mentioned, Spotify drives around 22 billion discoveries a month. On top of that, 1/3 of all new artist discoveries happen on personalized algorithmic playlists. Listeners love this exposure to new music, as well as the personalized touch. Discovery is our bread and butter, and it’s driving a level of engagement that no streaming service can claim.

This is the number one characteristic of an Aggregator: in a world of scarcity distribution was the most valuable; in a world of abundance it is discovery that matters most. To that end, Hellman emphasized that the world of digital music was one of ever-increasing abundance, thanks in part to Spotify:

The music industry is changing fast. There have never been fewer barriers to entry, and that’s enabling more and more talented artists to be discovered…but with this reduction in barriers comes an increase in the number of artists seeking success. We’re in the midst of an explosion in creativity where tens of thousands of songs are uploaded each day, and that rate of daily uploads has doubled in the last two years. In this rapidly growing landscape, artists need an evolving toolkit that works for the millions who will make up tomorrow’s music industry. One that mirrors their creativity and ambition by offering speed and scale.

Hellman highlighted free tools Spotify offers, including analytics, custom art and videos, and the ability to pitch your song for a playlist. What is far more important from a business perspective, though, is the fact that you can pay for promotion:

In addition to these free tools, we’ve also invested in building the most performant and effective commercial tools for promotion in the streaming era. Because there’s so much being added to Spotify every day, artists need tools that will help them stand out, now more than ever, and we’re uniquely positioned to deliver effective promotion for artists for a few reasons. First, unlike, say, social media marketing, our promotion tools reach people that have already actively made a decision to open Spotify and listen to music. It’s contextual. Plus, our ability to target listeners based on their listening activity, their taste, is second to none. And further, we have the unique ability to actually report back how many people listened to or saved the music as a result. Our ability to deliver the best promotional opportunities for artists presents a tremendous opportunity.

This is an opportunity that is paying off; CFO Paul Vogel explained how Spotify’s heavy investments in podcasting (more on this in a moment) were obscuring major improvements in the financial performance of the company’s music business:

Our music business has been a real source of strength, driving strong revenue growth, and strong margin expansion. This may not be immediately evident in our consolidated results, but make no mistake, we have delivered against the expectations and framework we rolled out at the time of our direct listing. Isolating just the performance of our music operations, you’d see that our music revenues, which consist of premium subscriptions, ad-supported music, our marketplace suite of artist tools, and strategic licensing, grew at a 24% compound annual growth rate, in line with our expectations on an FX-neutral basis, and importantly, our music gross margins have increased over the same time frame, reaching 28.3% in 2021. This is approximately 150 basis points higher than our total 2021 consolidated margin of 26.8%…Looking at our progression in another way, since 2018, the last year before our major podcast investment, our music margins have expanded on average by approximately 75 basis points per year. At our last investor day we told you to expect gross margins in the 30-35% range over the long-term. This was, and still remains, the goal for our music operations. As you can see from these numbers, we are clearly on our way.

Let me unpack how we have expanded our music gross margins. At the beginning of 2018, we announced the development of our marketplace business — all of the tools and services that Charlie described earlier. Our thesis back then was that by providing increased value to artists, creators, and labels, they would see material benefit, and so would Spotify, and that is exactly what we are seeing today. We’ve long maintained that our success is not solely tied to renegotiating new headline rates. It’s about our ability to innovate, right along with our partners, to grow a business that benefits both artists and Spotify, and that’s what we’ve done with Marketplace. In 2018 our Marketplace contribution to gross profit was only $20 million. In 2021 it grew to $160 million, 8x the size in just four years. We expect that number to increase another 30% or more in 2022. We see tremendous upside in Marketplace, and anticipate that its financial contribution will continue to grow at a healthy double-digit rate in the years ahead. Marketplace is the quintessential example of our approach to capital allocation. There was a significant up-front cost to build and launch these offerings, but we saw compelling data which gave us the confidence to double down and invest aggressively against our goals. It may have taken time to build up momentum, but our patience and conviction has paid off, and we are seeing material benefit from our investment.

Notice how this business — in its mechanics, if not its financial numbers — looks more like a Google or a Facebook than a Netflix: Spotify isn’t earning money by making margin on its content spend; rather, it is seeking to enable more content than ever, confident that it controls the best means to surface the content users want. Those means can then be sold to the highest bidder, with all of the margin going to Spotify. Spotify calls this promotion — it certainly looks a lot like the old radio model of pay-to-play — but that’s really just another word for advertising. Moreover, this isn’t Spotify’s only advertising business: the company has long been building an ad-supported music business, and is now heavily investing in doing the same for podcasting (which has always clearly been an aggregation strategy).
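For a rough sense of scale, here is the arithmetic behind the Marketplace figures Vogel cited, using only the numbers quoted above (the 2022 figure is an implied floor, not a reported number):

```python
# Quick arithmetic on the Marketplace numbers quoted above (all figures in $M).
marketplace_2018 = 20       # contribution to gross profit in 2018, per the investor day
marketplace_2021 = 160      # contribution to gross profit in 2021, per the investor day
guided_2022_growth = 0.30   # "another 30% or more in 2022"

multiple = marketplace_2021 / marketplace_2018                     # 8.0x, matching Vogel's "8x"
implied_2022_floor = marketplace_2021 * (1 + guided_2022_growth)   # ~$208M at minimum

print(f"growth multiple: {multiple:.0f}x")
print(f"implied 2022 Marketplace contribution: at least ${implied_2022_floor:.0f}M")
```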

Acting Like an Aggregator

Being an Aggregator is not the only way to make money; indeed, it is amongst the most difficult. It is far more straightforward to make a differentiated product and charge for it; that is also the only possibility for most products and companies. What makes Aggregators unique is that they are best served by doing the opposite of what is optimal for traditional businesses:

  • Instead of having differentiated content, Aggregators want commoditized content, and more of it.
  • Instead of increasing margin on their users, Aggregators want to reduce it, ideally to zero, or at least to the same level as their competitors (as in the case of Spotify).
  • Instead of introducing friction in the market, the better to lock in users, Aggregators want to decrease friction, confident the gravitational pull of their user experience will, all things being equal, draw in more users than their competitors, increasing their attractiveness not just to suppliers but also to advertisers (who, in the case of Spotify’s music business, may be the same entities).

Spotify has, for the most part, acted like an Aggregator: the company has fought exclusives in the music business, kept its subscription prices as low as possible, and in the case of podcasts ensured its Anchor platform supports all podcast players.2 Netflix has not: the company has invested heavily in its own content, steadily increases its prices, and is now embarking on a campaign to make sure its best customers pay more for sharing access.

What is noteworthy are the exceptions: on Spotify’s side the most obvious are its podcast exclusives. The potential payoff in terms of taking podcast share is obvious; being an Aggregator means being the biggest player in a particular space, and by all accounts Spotify’s strategy has delivered exactly that. There are, though, risks in the approach: exclusive content creators are liable to become ever more expensive over time as they seek to seize their share of the value they create. Moreover, exclusives risk triggering a response by competitors making their own exclusive deals. To put it in the terms discussed above, exclusive content is de-commoditized content, and that is bad for Aggregators (that noted, Spotify’s biggest long-term competitor is not Apple but YouTube, a formidable Aggregator in its own right; maybe exclusives aren’t the worst idea).

Netflix, meanwhile, is finally building — or contracting to build — an ad business. While this still seems like a reaction to slowing growth instead of a considered strategy, what is important is the shift from selling exclusive supply to selling exclusive demand: the latter is far more scalable and defensible, although the transition will be very difficult (and may not fully pay off if Netflix isn’t willing to invest on its own).

The takeaway I am most interested in, though, is a selfish one: I had an essential part of Aggregation Theory — the commoditization of supply — right originally, only to forget that insight and make several bad calls along the way. In the case of Netflix and Spotify I was right in observing that they were different businesses; my mistake was mislabeling which had more potential as an Aggregator.

I wrote a follow-up to this Article in this Daily Update.


  1. What I got wrong about my prediction was the timing, thanks to COVID

  2. Spotify has also ensured that independent podcasters like Stratechery can deliver their content on Spotify

Data and Definitions

Last week the German Bundeskartellamt (“Federal Cartel Office”) announced in a press release:

The Bundeskartellamt has initiated a proceeding against the technology company Apple to review under competition law its tracking rules and the App Tracking Transparency Framework. In particular, Apple’s rules have raised the initial suspicion of self-preferencing and/or impediment of other companies, which will be examined in the proceeding.

The press release quoted Andreas Mundt, the President of the Bundeskartellamt, who stated:

We welcome business models which use data carefully and give users choice as to how their data are used. A corporation like Apple which is in a position to unilaterally set rules for its ecosystem, in particular for its app store, should make pro-competitive rules. We have reason to doubt that this is the case when we see that Apple’s rules apply to third parties, but not to Apple itself. This would allow Apple to give preference to its own offers or impede other companies.

The press release continues:

Already based on the applicable legislation, and irrespective of Apple’s App Tracking Transparency Framework, all apps have to ask for their users’ consent to track their data. Apple’s rules now also make tracking conditional on the users’ consent to the use and combination of their data in a dialogue popping up when an app not made by Apple is started for the first time, in addition to the already existing dialogue requesting such consent from users. The Identifier for Advertisers, classified as tracking, which is important to the advertising industry and made available by Apple to identify devices, is also subject to this new rule. These rules apparently do not affect Apple when using and combining user data from its own ecosystem. While users can also restrict Apple from using their data for personalised advertising, the Bundeskartellamt’s preliminary findings indicate that Apple is not subject to the new and additional rules of the App Tracking Transparency Framework.

John Gruber disagrees at Daring Fireball:

I think this is a profound misunderstanding of what Apple is doing, and how Apple is benefiting indirectly from ATT. Apple’s privacy and tracking rules do apply to itself. Apple’s own apps don’t show the track-you-across-other-apps permission alert not because Apple has exempted itself but because Apple’s own apps don’t track you across other apps. Apple’s own apps show privacy report cards in the App Store, too…

If you want to argue that Apple engaged in this entire ATT endeavor to benefit its own Search Ads platform, that’s plausible too. But if Apple actually cared more about maximizing Search Ads revenue than it does user privacy, wouldn’t they have just engaged in actual user tracking? The Bundeskartellamt perspective here completely disregards the idea that surveillance advertising is inherently unethical and Apple has studiously avoided it for that reason, despite the fact that it has proven to be wildly profitable for large platforms.

This strikes me as a situation where Gruber — my co-host for Dithering — is right on the details, even as the Bundeskartellamt is right on the overall thrust of the argument. The distinction comes down to definitions.


It’s striking in retrospect how little time Apple spent publicly discussing its App Tracking Transparency (ATT) initiative — a mere 20 seconds at WWDC 2020, wedged in between updates about camera-in-use indicators and privacy labels in the App Store:

Next, let’s talk about tracking. Safari’s Intelligent Tracking Prevention has been really successful on the web, and this year, we wanted to help you with tracking in apps. We believe tracking should always be transparent, and under your control, so moving forward, App Store policy will require apps to ask before tracking you across apps and websites owned by other companies.

These 20 seconds led, 19 months later, to Meta announcing a $10 billion revenue shortfall, the largest but by no means only significant retrenchment in the online advertising space. Not everyone was hurt, though: Google and Amazon, in particular, have seen their share of digital advertising increase, and, as Gruber admitted, Apple has benefited as well; the Financial Times reported last fall:

Apple’s advertising business has more than tripled its market share in the six months after it introduced privacy changes to iPhones that obstructed rivals, including Facebook, from targeting ads at consumers. The in-house business, called Search Ads, offers sponsored slots in the App Store that appear above search results. Users who search for “Snapchat”, for example, might see TikTok as the first result on their screen. Branch, which measures the effectiveness of mobile marketing, said Apple’s in-house business is now responsible for 58 per cent of all iPhone app downloads that result from clicking on an advert. A year ago, its share was 17 per cent.

These numbers, derived as they are from app analytics companies, are certainly fuzzy, but they are the best we have given that Apple doesn’t break out revenue numbers for its advertising business; they are also from last fall, before ATT really began to bite. They also exclude the revenue Apple earns from Google for being the default search engine for Safari, and while Google’s earnings indicate YouTube has suffered from ATT, search has more than made up for it.

I explained in depth why these big companies have benefitted from ATT in February’s Digital Advertising in 2022; I wrote in the context of Amazon specifically:

Amazon also has data on its users, and it is free to collect as much of it as it likes, and leverage it however it wishes when it comes to selling ads. This is because all of Amazon’s data collection, ad targeting, and conversion happen on the same platform — Amazon.com, or the Amazon app. ATT only restricts third party data sharing, which means it doesn’t affect Amazon at all…

That is not to say that ATT didn’t have an effect on Amazon: I noted above that Snap’s business did better than expected in part because its business wasn’t dominated by direct response advertising to the extent that Facebook’s was, and that more advertising money flowed into other types of advertising. This almost certainly made a difference for Amazon as well: one of the most affected areas of Facebook advertising was e-commerce; if you are an e-commerce seller whose Shopify store, powered by Facebook ads, was suddenly under-performing thanks to ATT, then the natural response is to shift products and advertising spend to Amazon.

This is where definitions matter. The opening paragraph of Apple’s Advertising & Privacy policy page, housed under the “apple.com/legal” directory, states:

Ads that are delivered by Apple’s advertising platform may appear on the App Store, Apple News, and Stocks. Apple’s advertising platform does not track you, meaning that it does not link user or device data collected from our apps with user or device data collected from third parties for targeted advertising or advertising measurement purposes, and does not share user or device data with data brokers.

I note the URL path for a reason: the second sentence of this paragraph has multiple carefully selected words — and those word choices not only impact the first sentence, but may, soon enough, lead to its expansion. Specifically:

“Meaning”

Apple’s advertising platform does not track you, meaning that it does not link user or device data collected from our apps with user or device data collected from third parties for targeted advertising or advertising measurement purposes, and does not share user or device data with data brokers.

“Tracking” is not a neutral term! My strong suspicion — confirmed by anecdata — is that a lot of the most ardent defenders of Apple’s ATT policy are against targeted advertising as a category, which is to say they are against companies collecting data and using that data to target ads. For these folks I would imagine tracking means exactly that: the collection and use of data to target ads. That certainly seems to align with the definition of “track” from macOS’s built-in dictionary: “Follow the course or trail of (someone or something), typically in order to find them or note their location at various points”.

However, this is not Apple’s definition: tracking is only when data Apple collects is linked with data from third parties for targeted advertising or measurement, or when data is shared/sold to data brokers. In other words, data that Apple collects and uses for advertising is, according to Apple, not tracking; the privacy policy helpfully lays out exactly what that data is (thanks lawyers!):

We create segments, which are groups of people who share similar characteristics, and use these groups for delivering targeted ads. Information about you may be used to determine which segments you’re assigned to, and thus, which ads you receive. To protect your privacy, targeted ads are delivered only if more than 5,000 people meet the targeting criteria.

We may use information such as the following to assign you to segments:

  • Account Information: Your name, address, age, gender, and devices registered to your Apple ID account. Information such as your first name in your Apple ID registration page or salutation in your Apple ID account may be used to derive your gender. You can update your account information on the Apple ID website.

  • Downloads, Purchases & Subscriptions: The music, movies, books, TV shows, and apps you download, as well as any in-app purchases and subscriptions. We don’t allow targeting based on downloads of a specific app or purchases within a specific app (including subscriptions) from the App Store, unless the targeting is done by that app’s developer.

  • Apple News and Stocks: The topics and categories of the stories you read and the publications you follow, subscribe to, or turn on notifications from.

  • Advertising: Your interactions with ads delivered by Apple’s advertising platform.

When selecting which ad to display from multiple ads for which you are eligible, we may use some of the above-mentioned information, as well as your App Store searches and browsing activity, to determine which ad is likely to be most relevant to you. App Store browsing activity includes the content and apps you tap and view while browsing the App Store. This information is aggregated across users so that it does not identify you. We may also use local, on-device processing to select which ad to display, using information stored on your device, such as the apps you frequently open.

Just to put a fine point on this: according to Apple’s definition, collecting demographic information, downloads/purchases/subscriptions, and browsing behavior in Apple’s apps, and using that data to deliver targeted ads, is not tracking, because all of the data is Apple’s (and by extension, neither is Google’s collection and use of data from Safari search results, or Amazon’s collection and use of data from its app; however, a developer associating an in-app purchase with a Facebook ad is).
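As a rough illustration of the mechanics described in that policy, here is a hypothetical sketch (not Apple’s actual implementation; the attribute names are invented) of segment assignment with the 5,000-person minimum:

```python
# Hypothetical sketch of segment-based targeting with a minimum-size rule, loosely
# modeled on the policy language quoted above; this is not Apple's actual code.

from collections import defaultdict

MIN_SEGMENT_SIZE = 5_000  # "targeted ads are delivered only if more than 5,000 people meet the targeting criteria"

def build_segments(users: list[dict]) -> dict[str, set[str]]:
    """Group user IDs into segments from first-party attributes (age band, News topics, app categories)."""
    segments: dict[str, set[str]] = defaultdict(set)
    for user in users:
        segments[f"age:{user['age_band']}"].add(user["id"])
        for topic in user["news_topics"]:
            segments[f"news:{topic}"].add(user["id"])
        for category in user["app_categories"]:
            segments[f"apps:{category}"].add(user["id"])
    return dict(segments)

def targetable_segments(segments: dict[str, set[str]]) -> dict[str, set[str]]:
    """Only segments whose size clears the minimum can be used to deliver an ad."""
    return {name: ids for name, ids in segments.items() if len(ids) > MIN_SEGMENT_SIZE}

# Example: 6,000 users who all read Business stories form a targetable segment;
# a 12-person niche segment does not clear the bar and cannot be targeted.
users = [
    {"id": f"u{i}", "age_band": "25-34", "news_topics": ["Business"],
     "app_categories": ["Games" if i < 12 else "Finance"]}
    for i in range(6_000)
]
eligible = targetable_segments(build_segments(users))
print(sorted(eligible))  # includes 'age:25-34', 'news:Business', 'apps:Finance'; excludes 'apps:Games'
```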

“And”

Apple’s advertising platform does not track you, meaning that it does not link user or device data collected from our apps with user or device data collected from third parties for targeted advertising or advertising measurement purposes, and does not share user or device data with data brokers.

One thing should be made clear: there has been a lot of bad behavior in the digital ad industry. A particularly vivid example was reported by the Wall Street Journal last month:

The precise movements of millions of users of the gay-dating app Grindr were collected from a digital advertising network and made available for sale, according to people familiar with the matter. The information was available for sale since at least 2017, and historical data may still be obtainable, the people said. Grindr two years ago cut off the flow of location data to any ad networks, ending the possibility of such data collection today, the company said.

The commercial availability of the personal information, which hasn’t been previously reported, illustrates the thriving market for at-times intimate details about users that can be harvested from mobile devices. A U.S. Catholic official last year was outed as a Grindr user in a high-profile incident that involved analysis of similar data. National-security officials have also indicated concern about the issue: The Grindr data were used as part of a demonstration for various U.S. government agencies about the intelligence risks from commercially available information, according to a person who was involved in the presentation.

Clients of a mobile-advertising company have for years been able to purchase bulk phone-movement data that included many Grindr users, said people familiar with the matter. The data didn’t contain personal information such as names or phone numbers. But the Grindr data were in some cases detailed enough to infer things like romantic encounters between specific users based on their device’s proximity to one another, as well as identify clues to people’s identities such as their workplaces and home addresses based on their patterns, habits and routines, people familiar with the data said.

It’s difficult to defend any aspect of this, and this isn’t even a worst-case scenario: there are plenty of unscrupulous apps and ad networks that include explicit Personally Identifiable Information (PII) in these data sales/transfers as well; as Eric Seufert noted in 2020, the industry has had this reckoning coming for a very long time.

That, though, is why the “and” from Apple is so meaningful; here is the sentence again:

Apple’s advertising platform does not track you, meaning that it does not link user or device data collected from our apps with user or device data collected from third parties for targeted advertising or advertising measurement purposes, and does not share user or device data with data brokers.

This definition conflates two very different things: linking and sharing. The distinction between the two undergirded a regular feature of Meta CEO Mark Zuckerberg’s appearances in Congressional hearings; here is a representative exchange between Senator Edward Markey and Zuckerberg in 2018:

Should Facebook get clear permission from users before selling or sharing sensitive information about your health, your finances, your relationships? Should you have to get their permission?…

Senator…I want to be clear: we don’t sell information. So regardless of whether we get permission to do that, that’s just not a thing we’re going to do.

Meta doesn’t sell data; it collects it, and the third parties that leverage the company’s platforms for advertising very much prefer it that way. PII is like radioactive material: it’s very valuable, and can certainly be leveraged, but it’s also difficult to handle and can be dangerous not just to the users identified but also to the companies holding it. The way Meta works is that its collective advertising base has effectively deputized the company to collect data on their behalf; that data is not exposed directly, but is instead used to deliver targeted advertisements that are by and large bought not by targeting specific criteria, but rather by specifying desired results: app installs, e-commerce conversions, etc. Everything user-related is, to the companies buying the ads, a complete black box.

This is where linking comes in: apps or websites that leverage Facebook advertising (or any other relevant advertising platform, like Snap) include a Facebook SDK or Pixel that tracks installs, sales, etc., and sends that data to Meta where it can be linked to an ad that was shown to that user. Again, this is completely invisible to the developer or merchant; technically they are sending data to Meta, since the conversion data was collected in their app or on their website, but in reality it is Meta collecting that data and sending it to themselves.

The reason why developers and merchants are happy with this arrangement is that advertising is a scale business: you need a lot of data and a lot of customers to make targeted advertising work, and no single developer or website has as much scale as, say, a Google or an Amazon; Meta et al enable all of these smaller developers and merchants to effectively work together without having to know each other, or share data.
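A deliberately simplified sketch of that linking step may help; the event names and fields below are hypothetical (this is not Meta’s SDK or any real API), but they capture the shape of the data flow:

```python
# Deliberately simplified sketch of ad-to-conversion linking as described above.
# The event names, fields, and in-memory "platform" are hypothetical; this is not
# Meta's SDK or any real API, just the shape of the data flow.

impressions: dict[str, dict] = {}  # platform-side record of ads shown, keyed by a device identifier
conversions: list[dict] = []       # linked (campaign, purchase) records; advertisers only see aggregates

def record_impression(device_id: str, campaign_id: str) -> None:
    """Platform-side: remember that this device was shown an ad from this campaign."""
    impressions[device_id] = {"campaign_id": campaign_id}

def report_conversion(device_id: str, value: float) -> None:
    """App- or site-side SDK/pixel: send the purchase back to the platform, where it is
    joined to the earlier impression; the developer never handles the other side of the join."""
    ad = impressions.get(device_id)
    if ad is not None:
        conversions.append({"campaign_id": ad["campaign_id"], "value": value})

# The join key (a device identifier like the IDFA) is exactly what ATT now gates:
# absent consent, there is no stable device_id for report_conversion() to match on.
record_impression(device_id="device-123", campaign_id="summer-sale")
report_conversion(device_id="device-123", value=29.99)
```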

Google, Amazon, and Facebook's ad businesses operate similarly, but only Facebook is affected by ATT

You can, to be clear, object to this arrangement, but it’s worth pointing out that this is very different than selling or sharing data with data brokers; all of the data is in one place and one place only, which is broadly similar to the situation with Google or Amazon (or Apple, as I’ll get to in a moment). The big difference is that Meta doesn’t own all of the customer touch points: whereas a Meta advertiser may own their own Shopify website, an Amazon advertiser has to list their goods on Amazon’s site, with all of the loss of control that entails. Apple’s definition, though, lumps Meta’s approach (which again, is representative of other platforms like Snap) in with the worst actors in the space.

“Our”

Apple’s advertising platform does not track you, meaning that it does not link user or device data collected from our apps with user or device data collected from third parties for targeted advertising or advertising measurement purposes, and does not share user or device data with data brokers.

To the extent you think that the Bundeskartellamt is right, it is this word whose definition is the most problematic of all. One would assume that “our” means Apple-created apps, like News or Stocks: just as Amazon collects data from the Amazon app, of course Apple collects data from its own apps. The actual definition, though, is much more expansive; go back to the Epic trial and the exchange I recounted in App Store Arguments:

The argument that Judge Gonzalez Rogers seemed the most interested in pursuing was one that Epic de-emphasized: Apple’s anti-steering provisions, which prevent an app from telling a customer that they can go elsewhere to make a purchase. Apple’s argument, in this case presented by Cook, goes like this:

A tweet from Adi Robertson

This analogy doesn’t work for all kinds of reasons; Apple’s ban is like Best Buy not allowing products in its stores to list in their instruction manuals a website that happens to sell the same products. In fact, as Nilay Patel noted, Apple does exactly this!

A tweet from Nilay Patel

The point of this Article, though, is not necessarily to refute arguments, but rather to highlight them, and for me this was the most illuminating part of this case. The only way that this analogy makes sense is if Apple believes that it owns every app on the iPhone, or, to be more precise, that the iPhone is the store, and apps in the store can never leave.

Let me be precise in a different way that is relevant to this Article; Apple doesn’t particularly care about or claim ownership of the content of an app on the iPhone, but:

  • Apple insists that every app on the iPhone use its payment system for digital content
  • Apple treats all transactions made through its payment system as Apple data
  • Ergo, all transactions for digital content on the iPhone are Apple data

The end result looks something like this — i.e. strikingly similar to Facebook, but with App Store payments attached:

Apple's ad model looks similar to Facebook's

Here’s the key point: when it comes to digital advertising, particularly for the games that make up the vast majority of the app advertising industry, transaction data is all that matters. All of the data that any platform collects, whether that be Meta, Snap, Google, etc., is insignificant compared to whether or not a specific ad led to a specific purchase, not just in direct response to said ad, but also over the lifetime of the consumer’s usage of said app. That is the data that Apple cut off with ATT (by barring developers from linking it to their ad spend), and it is the same data that Apple has declared to be its own first-party data, and thus not subject to its ban on “tracking.”

This, needless to say, is where legitimate questions about self-preferencing come to the forefront. Developers who want to link conversion data with Facebook are banned from doing so, while they have no choice but to share that data with Apple because Apple controls app installation via the App Store; this strikes me as a clear example of the President of the Bundeskartellamt’s claim that “Apple’s rules apply to third parties, but not to Apple itself”.


I have been very clear that I disagree with those who want to ban all targeted advertising; I believe that targeted advertising is an essential ingredient in a new Internet economy that provides opportunities to small businesses serving niches that are only viable when the world is your market. After all, people who might love your product need some way to know that your product exists, and what is compelling about platforms like Facebook is that they completely leveled the advertising playing field: suddenly small businesses had the same tools and opportunities to advertise as the biggest companies in the world. At the same time, I understand and acknowledge those who disagree with the concept on principle.

What is frustrating about the debate about ATT, though, is that Apple presents itself as a representative of the latter, with its constant declarations that privacy is a human right, and advertisements that lean heavily into the (truly problematic) world of data brokering, even as it builds its own targeted advertising business. Gruber asked me on this morning’s episode of Dithering whether or not I would feel better about ATT if Apple weren’t itself doing targeted advertising, and the answer is yes: I would still be disappointed about the impact on the Internet economy, but at least it wouldn’t be so blatantly anti-competitive.

Apple, to its credit, has made moves that very much align with its privacy rhetoric by cleaning up some of the worst abuses by apps, including significantly fine-tuning location permissions, providing a new weather framework that makes it significantly cheaper to build a weather app (reducing the incentive to monetize by selling location data), and increasing transparency around data collection. Moreover, at this year’s WWDC the company introduced significant enhancements to SKAdNetwork that should make it easier for developers and platforms like Facebook to re-build their advertising capabilities.

At the same time, an increasing number of signals suggest that Apple is set to significantly expand its own advertising business; an obvious product to build would be an ad network that runs in apps (given that these apps run on the iPhone, Apple would in this scenario claim that collecting data about who saw what ad would be first-party data, just like transactions are). Yes, Apple tried and failed to build an ad network previously, but a big reason that effort failed is because Apple didn’t collect the sort of data necessary to make it succeed.

What has changed is not just Apple, but also the data that matters: when iAd launched in 2010, digital advertising ran like people still think it does, leveraging relatively broad demographic categories and contextual information to show a hopefully relevant ad;1 what matters today is linking an ad to a transaction, and Apple has positioned itself to have perfect knowledge of both, even as it denies others the same opportunity.


  1. This is the era when Facebook earned its reputation for being far too cavalier with user data; Facebook was also the company that built the modern advertising approach that depends on linking data instead of sharing it. 

Zero-COVID and Free Speech

From the Financial Times, last Thursday:

Barely a week after the Chinese Communist party declared victory in its struggle to protect Shanghai from coronavirus, half of the financial hub’s districts will be shuttered this weekend to test millions of residents after signs emerged of renewed community transmission of the virus. China’s most populous city, which was only released from a two-month lockdown last week, detected 11 new infections on Thursday, six outside the city’s mass quarantine centres. The measures will affect eight of the financial hub’s 16 districts, including Pudong, one of the worst-hit areas at the start of the lockdown.

Three cases were detected in the Red Rose beauty parlour in the city centre, prompting health authorities to test more than 90,000 people close to the salon. Only a few days previously, the Xuhui local party body wrote a celebratory post on the microblogging platform Weibo hailing the salon’s reopening on June 1 for clients who had gone weeks without a professional haircut. It said the state-run salon’s resumption of business reflected how the city’s “pandemic situation improved”. The post has since been taken down.

The mass testing ended up finding 5 cases; Chaoyang district in eastern Beijing, meanwhile, is undergoing mass testing of its own, and schools are closed.


One of the common responses to China’s draconian efforts to control COVID’s spread (which, notably, do not include forced vaccination, or the use of Western vaccines) is that it doesn’t work: SARS-CoV-2, particularly the Omicron variant, is simply too transmissible. It’s worth pointing out that this response is incorrect: China not only eventually controlled the Wuhan outbreak, and not only kept SARS-CoV-2 out for most of 2021, but also ultimately controlled the Shanghai outbreak as well. The fact there were only 5 community cases over the weekend is proof that China’s approach works!

What I think people saying this mean is something different: either they believe the trade-offs entailed in this effort are not worth it, or they simply can’t imagine a government locking people in their homes for months, hauling citizens off to centralized quarantine, separating parents and children, entering and spraying their homes, and killing their pets. I suspect the latter is more common, at least amongst most Westerners: people are so used to a baseline of individual freedom and autonomy that the very possibility of the reality of COVID in China simply does not compute.

Taiwan and Zero-COVID

Perhaps it is not only my knowledge of China, but also my experience living in Taiwan for nearly two decades, or more pertinently, my experience of living in Taiwan the last two years, that makes me much more willing to believe in the effectiveness of China’s approach.

For most of the last two-and-a-half years Taiwan was COVID-free; for most of 2020 that meant life went on as normal, with no masks, everything open, etc.; the one abnormality was that every person entering Taiwan had to quarantine (at home or in a hotel) for 14 days. Things changed in 2021, when the Alpha variant broke through, leading to a soft lockdown: restaurants and schools were closed, and workplaces were strongly encouraged to work from home; masks were required everywhere, including outside, and quarantine was hotel-only. What is less known is that quarantine went beyond travelers: anyone who was a close contact of an infected person, including family members and co-workers, as well as people who had the misfortune of being in the same restaurant at the same time as a positive case, was quarantined as well (your location in said restaurant was ascertained by reviewing your cellular location data).

It is this last point that, in my estimation, stopped the 2021 spread in its tracks, and kept Taiwan COVID-free until earlier this year (I myself endured an 18-day centralized quarantine due to testing positive at the airport). It is also, for nearly every Westerner I have relayed this fact to, a startling abridgment of civil liberties. The very idea that you can be locked up for simply being in the wrong place at the wrong time is inconceivable; that, though, is much less stringent than China’s approach in Shanghai, including the requirement that you need a PCR test within the last 72 hours to even grocery shop.

Here’s the thing: that reduction in stringency relative to China is precisely why Taiwan’s containment eventually failed; Taiwan, for most of the last month, has had the highest case rate in the world. From the New York Times COVID tracker:1

NYT Covid tracker on June 13 shows that Taiwan has the highest case rate in the world

Taiwan, to its credit, did not lock down in the face of this outbreak; I suspect the horrors of the Shanghai lockdown served as a deterrent, particularly given Taiwan’s ongoing struggle for international recognition and desire to distinguish itself from China. It’s also worth noting that at the critical moment — late March and early April — it wasn’t clear if China’s lockdowns would work; still, even had the outcome been clear, Taiwan — despite its willingness to violate civil liberties to a considerably greater degree than most Western democracies — was never willing to go as far as China. And so, while the Chinese approach worked, it almost certainly would not have worked in Taiwan simply because the latter wasn’t willing to be as brutal as the former.

I am being, as best as I can, impartial about the choices here: the important takeaway is not simply that China’s approach did in fact work to arrest the spread of SARS-CoV-2, but also that it was the only approach that worked; even Taiwan’s approach, which was far more stringent than any Western country would tolerate, eventually failed. Of course there were benefits, particularly in terms of getting time to administer vaccines, but it’s certainly worth wondering if it was all worth it.2

At the opposite end of the spectrum were areas of America that, after enduring a few months of (very) soft lockdown at the beginning of the pandemic, were mostly open from the summer of 2020 on; I have friends in parts of Wisconsin, for example, whose kids have been in school since the fall of 2020. The price of this approach was far more deaths, particularly amongst the elderly who have always been at far higher risk: over 1 million Americans have died of COVID.

This isn’t the complete COVID story, though, and not simply because there can be no honest accounting of the pandemic until it finally sweeps China; the most effective vaccines in the world were developed in the West, and the U.S. produced and distributed the largest number of them. How many lives were saved, and how much economic upheaval — which isn’t about simply dollars and cents, but people’s livelihoods, sense of worth, and even sanity — was avoided or reduced because of vaccines? That must be recorded in the ledger as well, and in this accounting the West comes out looking far stronger.

The Great Firewall

The reason to audit this accounting is that I think there is an analogy to be drawn between COVID and the debates around free speech that have sprung up over the last six years. Before then, there wasn’t much of a debate about free speech: just as the W.H.O. and C.D.C. used to maintain that lockdowns don’t work, it used to be widely accepted that free speech was a good thing. Moreover, it was also accepted that free speech was not simply a legalistic limitation on government power, but was a cultural value. I pointed out earlier this year that this was no longer the case in elite culture; the debate around Elon Musk buying Twitter confirmed exactly that.

To summarize, the “sophisticated” view on free speech is that the First Amendment both restricts the government and also protects companies who make their own moderation decisions; this is of course correct legally, but the idea that this distinction should be both celebrated and pushed to its limit is new. That, by extension, means that the “rube” view on free speech is that said principle ought to apply broadly: not only should the government not be able to limit your speech, but neither should Facebook or Twitter or Google. Again, this was a widely held view not too long ago: much of the debate around net neutrality, for example, centered on the importance of private corporations not being allowed to treat different bits of data differently based on what type of content they represented.

There are, of course, philosophical arguments to be made as to why either view is better or worse than the other; to return to the COVID analogy, one can debate whether or not the sacrifice of civil liberties is worth whatever deaths might be prevented (again, with the caveat that the final accounting is not yet complete). What I think is missing in both debates, though, is the question of what was possible.

Go back to my point above: I strongly suspect that most people in the West are convinced that China’s approach does not work — even though it does! — because they simply cannot imagine enduring or tolerating or even encountering the level of brutality necessary for success; that is certainly true of COVID dead-enders who still bemoan that the West isn’t doing enough to control the spread of COVID. It is, from my perspective, hard to imagine any of these folks accepting non-negotiable centralized quarantine simply for being in the wrong restaurant at the wrong time — and again, this is the Taiwan approach that ultimately failed! They are complaining about something that simply isn’t possible, not because their political enemies are unwilling to do what is necessary, but because they themselves would never tolerate it.

This, I should note, is why I have long been strongly in favor of fully opening up: while there was an argument to be made that it was worth trying to delay outbreaks until vaccines were widely available, by the summer of 2021 (in the U.S.) the only possible outcome of restrictions was to make people miserable at best, and cause economic, socio-political, and developmental damage at worst; spread, absent a China-style approach, was inevitable, so why invite bad outcomes when there are no benefits?3

I have the same questions about free speech. Once again, I must acknowledge that China’s approach to free speech works, at least in terms of its leaders’ immediate goals. In other words, it doesn’t exist, even — especially! — on the Internet. This — like China’s insistence on zero-COVID — was something that Westerners scoffed at as being unrealistic; then-President Bill Clinton said upon the establishment of Permanent Normal Trade Relations with China:

In the new century, liberty will spread by cell phone and cable modem. In the past year, the number of Internet addresses in China has more than quadrupled, from 2 million to 9 million. This year the number is expected to grow to over 20 million. When China joins the W.T.O., by 2005 it will eliminate tariffs on information technology products, making the tools of communication even cheaper, better, and more widely available. We know how much the Internet has changed America, and we are already an open society. Imagine how much it could change China.

Now there’s no question China has been trying to crack down on the Internet. Good luck! That’s sort of like trying to nail jello to the wall. But I would argue to you that their effort to do that just proves how real these changes are and how much they threaten the status quo. It’s not an argument for slowing down the effort to bring China into the world, it’s an argument for accelerating that effort. In the knowledge economy, economic innovation and political empowerment, whether anyone likes it or not, will inevitably go hand in hand.

Clinton, along with nearly all of the Western intelligentsia, underrated China’s willingness to do whatever it took to build a mold around that jello, from building the Great Firewall to employing countless censors to tanking its entire IT sector once it felt it was becoming too politically powerful. The end result is a populace that not only has little idea about today’s reality — i.e. that most people have had COVID, and are fine, and are living normally — but even less idea about the past.

Tank Cake

Last February Time Magazine named Li Jiaqi one of its “Next Top 100 Most Influential People”. Li’s nickname was the “lipstick king”, which refers to the time in 2018 when the live-streaming e-commerce peddler sold 15,000 lipsticks in 5 minutes; last fall Li sold $1.7 billion worth of goods in 12 hours. Ten days ago, on June 3, Li was doing what he does best — selling goods via live-streaming — when his stream suddenly went off the air; Li, within a matter of hours, was suddenly off of the Internet, no longer appearing on Taobao, Alibaba’s e-commerce platform that streamed his show. The BBC explained what happened:

Last Friday night, Li was mid-way through his popular livestream show when it ended abruptly. The 30-year-old, known for his smooth voice and K-pop idol looks, had just shown his audience a vanilla log cake while selling snacks. The cake resembled a tank: it had Oreos for wheels and a wafer pipe resembling a cannon. And Li’s show was on 3 June, the eve of the 33rd anniversary of the Tiananmen Square massacre…

The "Lipstick King" selling a tank cake on June 3rd

Generations of Chinese have grown up without learning of the massacre – and many of those millennials and Gen Z-ers appeared to be among Li’s audience on Friday and in the days after. Li failed to return to his livestreaming show after the transmission was cut. Shortly after, he posted on his Weibo account saying he had merely faced technical issues. But his continued absence – he has missed three shows so far during one of the year’s biggest online shopping festivals – has only fuelled more questions and debate. Some have cottoned on quickly as to why he was censored, while others are having a revelation. “What does the tank mean?” a confused viewer asked. Another said: “What could possibly be the wrong thing to say while selling snacks?”

That’s not all, though: it seems almost certain that Li had no idea he did anything wrong, or why.

Few online believe that Li was trying to make a political statement. Given his celebrity status, he knew how to navigate political sensitivities and to steer clear of minefields, they said. And he had never expressed political beliefs before. Some even argued that he was possibly among those who didn’t know about the Tiananmen Square massacre.

Many of his loyal fans also wondered if the top livestreamer had been set up by competitors to take a political fall, and perhaps the cake was sneaked into the line-up of his show on Friday. A clip circulating on social media, apparently of the moment before the cake is brought out, also shows Li expressing surprise over the announcement of a tank product. A male assistant announces in the background that the team has a tank-shaped good to sell. Li laughs and says: “What? A tank?” His co-presenter then says: “Let’s see if Li Jiaqi and I will still be here at 11pm.” They were taken off air shortly after 9pm.

Many fans suspect purposeful sabotage; perhaps that is a conspiracy theory, but said theory is undergirded by the reality that it is not just possible but even probable that a 30-year-old in China has no idea that selling a tank-shaped cake on June 3rd is grounds for being disappeared. To put it another way, China’s control of information is not unlike its control of COVID: it seems impossible, and the means intolerable, but that is simply because we in the West can’t imagine the limitations on personal freedom necessary to make it viable.

Acceptance and Competition

To further expand on this point: if people in the West would not accept truly strict lockdowns, then they certainly wouldn’t accept centralized quarantine (which, remember, ultimately failed in Taiwan anyway), which means they absolutely wouldn’t accept forced testing and the inability to leave your house for months. Ergo, people in the West would never accept the reality of zero-COVID, which is why it makes sense to go in the opposite direction: open up, and forgo the massive costs of zero-COVID as well. Don’t get stuck in the middle, enduring the worst outcomes of both.

Similarly, if people in the U.S. would not accept any government infringement on speech, then they certainly wouldn’t accept ISP-level censorship like the Great Firewall, which means they absolutely wouldn’t accept forced disappearances for selling the wrong cake. Ergo, people in the U.S. would never accept the reality of true control of speech, which is why it makes sense to go in the opposite direction: embrace free speech not just as a law but as a cultural more, and forgo the massive costs of half-ass speech restrictions as well. Don’t get stuck in the middle, enduring the worst outcomes of both.

COVID, alas, seems to have been a worst-case scenario in terms of both points: we suffered the aforementioned economic, socio-political, and developmental damage associated with strict control, while controlling nothing; meanwhile private platforms went overboard in controlling information, and ended up only deepening the suspicion of skeptics about COVID and its vaccines, leading to many more deaths, but also increased skepticism about vaccines generally.

The worry is that this middling approach, where we get the worst of both worlds, impacts innovation generally; China is increasingly focused on a top-down approach to technological innovation in particular, placing heavy emphasis and tons of money on catching up in areas like semiconductors and AI. The best response is to go in the opposite direction, and let a thousand flowers bloom, trusting that innovation by definition arises in places we least expect it.

To put it another way, if we could accurately eliminate bad ideas, then there would, by definition, be no more good ideas to discover; the way to compete with China is to lean into the fact that there remains so much we don’t yet know.


You likely have, by this point, heard the story of Katalin Karikó; from Stat News in 2020:

Before messenger RNA was a multibillion-dollar idea, it was a scientific backwater. And for the Hungarian-born scientist behind a key mRNA discovery, it was a career dead-end. Katalin Karikó spent the 1990s collecting rejections. Her work, attempting to harness the power of mRNA to fight disease, was too far-fetched for government grants, corporate funding, and even support from her own colleagues…By 1995, after six years on the faculty at the University of Pennsylvania, Karikó got demoted. She had been on the path to full professorship, but with no money coming in to support her work on mRNA, her bosses saw no point in pressing on.

Karikó would eventually figure out how to stop the body from rejecting mRNA, an essential discovery on the way to today’s vaccines. Along the way, though, she was nearly defeated by an academic system that increasingly relies on money from the powers that be, who think they know everything; fortunately said powers couldn’t actually stop her work, even though the consensus was that said work was a bad idea.

Only with time did it reveal itself as a good idea, which is the story of almost everything in life: we live, we learn, we discover new things, not just those of us alive in 2022, but all of humanity for our entire existence. That is how we beat COVID: not by destroying our liberties and lives, but by invention and information. It turns out that free speech isn’t just an analogy to COVID: it’s an essential part of getting past it. And, critically, it’s the only approach that nearly all of us reading this article — particularly those of us in the U.S., no matter our political affiliation — would actually tolerate.

In short, we live in the U.S., not China, and it’s high time all of us — including tech companies — started acting like it, instead of LARPing the most pathetic imitation possible.


  1. This case rate is likely significantly underreported, I would add: given that positive cases are not allowed to leave their house for 7 days — again, tracked by cellphone — there is a very strong incentive to simply not report a positive case; anecdotally speaking the majority of people I know in Taiwan have gotten COVID over the last month or so. 

  2. My aforementioned 18-day quarantine in April certainly seemed like a needless waste of my life — as do ongoing traveler quarantines whose only purpose is to protect travelers from what is again, the highest case rate in the world. 

  3. I do recognize that people wished to wait for a children’s vaccine; given the relative risk for children I disagreed, but I acknowledge the argument. 

Thin Platforms

The Department of Justice’s 1998 complaint against Microsoft accused the company of, amongst other things, tying the Internet Explorer browser to the Windows operating system:

Internet browsers are separate products competing in a separate product market from PC operating systems, and it is efficient to supply the two products separately. Indeed, Microsoft itself has consistently offered, promoted, and distributed its Internet browser as a stand-alone product separate from, and not as a component of, Windows, and intends to continue to do so after the release of Windows 98…

Microsoft’s tying of its Internet browser to its monopoly operating system reduces the ability of customers to choose among competing browser products because it forces OEMs and other purchasers to license or acquire the tied combination whether they want Microsoft’s Internet browser or not. Microsoft’s tying — which it can accomplish because of its monopoly power in Windows — impairs the ability of its browser rivals to compete to have their browsers preinstalled by OEMs on new PCs and thus substantially forecloses those rivals from an important channel of browser distribution.

In retrospect, the complaint feels quaint for three reasons:

First, Microsoft won the browser wars, and it didn’t matter; after peaking at 95% market share in 2004, Internet Explorer was first challenged by Firefox, which peaked at 32% market share in 2010, and then surpassed by Chrome in 2012.

The reasons ended up being both a condemnation and an endorsement of the libertarian defense of Microsoft’s actions, depending on your timeframe: sure, the company leveraged its operating system dominance to gain browser market share, but the company also made a great browser (I personally switched with the release of version 4). And then, with Version 6 and its position seemingly secured, the company just stopped development; that is what opened the door to first Firefox and then Chrome, both of which were downloaded and installed by end users looking for something better. The market worked, eventually.

Of course, the reason the market could work is that Windows was an open platform: sure, Microsoft controlled (and allegedly abused) what could be preinstalled on a new computer, but once said computer was in a user’s hands they could install whatever they wanted to, including alternative browsers. That gets to the second reason why the complaint feels quaint: today having a browser pre-installed is de rigueur for operating systems, and Apple’s iOS goes much further than simply pre-installing Safari: all alternative browsers must use Apple’s built-in rendering engine, which means they can only compete on user interface features, not fundamental functionality.1

The third reason has to do with Microsoft itself.

Thick and Thin

As I noted last week in an Update, one of the overarching themes of CEO Satya Nadella’s Build developer conference keynote was the seemingly eternal tech debate about thin versus thick clients (to dramatically simplify — and run the risk of starting a flame war — thin clients are terminals for a centralized computer, while thick clients are computers in their own right that sync):

The biggest takeaway from this keynote is that for developers, at least the ones that Microsoft is courting, the thin client model has won — although the truth, as is so often the case with tech holy wars, has ended up somewhere in the middle. Here is the key distinction: there is and will continue to be a lot of work that happens locally; all of the assumptions around that work, though, will be as if the work is being done on the server. For example:

  • GitHub Codespaces is an explicitly online environment that you can temporarily use locally.
  • Azure Arc provides the Azure control plane for an on-premises development environment.
  • The Azure Container Apps service and Azure Kubernetes Service enable developers to write locally in the same environment they deploy to the cloud.

Moreover, several other of the announcements were about patching up limitations in cloud development relative to local: Microsoft Dev Box, for example, enables the deployment of cloud-based VMs that mimic a local development environment for things like app development; Microsoft Cloud PC (which was previously announced) does the same thing for client applications.

What makes this shift so striking is that it is being articulated by Microsoft; after all, Windows (along with Intel) was the dominant winner of the thick client era. Yes, Windows Server was an integral part of Microsoft’s enterprise dominance, but the foundation of the company’s strategy — as evidenced by the tactics used in the fight against Netscape — was the fact that Windows was the operating system on the devices people used. That, by extension, was precisely why mobile was so disruptive to the company: suddenly Windows was only on some of the devices people used; iOS and Android were on a whole bunch of them as well.

I’ve spent many articles writing about how Satya Nadella weaned Microsoft off of its Windows-centric strategy; the pertinent point in terms of this Article comes from Teams OS and the Slack Social Network:

The end of Windows as the center of Microsoft’s approach, and the shift to the cloud, though, did not mean the end of Microsoft’s focus on integration, or its attempt to be an operating system; the company simply changed its definition of what an operating system was; Satya Nadella said at a press briefing in 2019:

The other effort for us is what we describe as Microsoft 365. What we are trying to do is bring home that notion that it’s about the user, the user is going to have relationships with other users and other people, they’re going to have a bunch of artifacts, their schedules, their projects, their documents, many other things, their to-do’s, and they are going to use a variety of different devices. That’s what Microsoft 365 is all about.

Sometimes I think the new OS is not going to start from the hardware, because the classic OS definition, that Tanenbaum, one of the guys who wrote the book on Operating Systems that I read when I went to school was: “It does two things, it abstracts hardware, and it creates an app model”. Right now the abstraction of hardware has to start by abstracting all of the hardware in your life, so the notion that this is one device is interesting and important, it doesn’t mean the kernel that boots your device just goes away, it still exists, but the point of real relevance I think in our lives is “hey, what’s that abstraction of all the hardware in my life that I use?” – some of it is shared, some of it is personal. And then, what’s the app model for it? How do I write an experience that transcends all of that hardware? And that’s really what our pursuit of Microsoft 365 is all about.

This is where Teams thrives: if you fully commit to the Microsoft ecosystem, one app combines your contacts, conversations, phone calls, access to files, 3rd-party applications, in a way that “just works”…This is what Slack — and Silicon Valley, generally — failed to understand about Microsoft’s competitive advantage: the company doesn’t win just because it bundles, or because it has a superior ground game. By virtue of doing everything, even if mediocrely, the company is providing a whole that is greater than the sum of its parts, particularly for the non-tech workers that are in fact most of the market. Slack may have infused its chat client with love, but chatting is a means to an end, and Microsoft often seems like the only enterprise company that understands that.

Note that line about “3rd-party applications”: if Teams is the Windows of Microsoft’s new services strategy, then it follows that the platform opportunity for developers in Microsoft’s ecosystem is itself centered on Teams; that’s exactly what Nadella described in the Build keynote:

Let’s talk about the future of work and how we’re making apps more contextual and people-centric, so you can build a new class of collaborative applications. It starts with Microsoft Graph, which underlies Microsoft 365 and makes available to you information about people, their relationships, and all of their artifacts. Today we are seeing developers around the world enriching their apps with Microsoft Graph. In fact, more than half of Microsoft 365 tenants are using custom-built and 3rd-party apps powered by the Graph. With Graph connectors ISVs can extend their applications and have them be discovered as part of the user’s everyday tasks, whether they are writing an email, meeting on Teams, or doing a search. For example, data from an app can appear directly in an organization’s search results, as you can see in the experience Figma is building here. You can compose a mail and @-mention files from these apps in-line, and you can access them in Teams chat too. Another way that you can create interactive experiences is by building live, actionable Loop components using adaptive cards like partner Zoho does. Your users can make decisions and take action like updating the status of a ticket right in the flow of work, and updates are always live, like this one across Outlook, Teams, and Zoho.

When you combine the Microsoft Graph with Microsoft Teams, you combine the data that describes how people work together with the place they work together. It’s incredibly powerful, and developers are extending their apps into Teams and embedding Teams in their apps. In fact, monthly usage of 3rd-party apps and custom-built solutions on Teams has grown 10x over the last two years, and more and more ISVs are generating millions of [dollars in] revenue from customers using apps built on Teams.

“Graph connectors” are the new APIs.
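Building a full Graph connector involves registering an external connection and pushing items into the Microsoft Graph index, which is more than a short example can show; but the developer-facing surface Nadella is describing, a single user-centric API over people, files, and schedules, looks roughly like the sketch below. It assumes an OAuth access token has already been acquired elsewhere (for example via MSAL) with the appropriate delegated scopes; the environment variable name is a hypothetical placeholder.

```typescript
// Minimal sketch: reading a user's profile, recent files, and upcoming events
// from Microsoft Graph. Assumes an access token with delegated scopes such as
// User.Read, Files.Read, and Calendars.Read was obtained elsewhere.
import process from "node:process";

const GRAPH = "https://graph.microsoft.com/v1.0";
const ACCESS_TOKEN = process.env.GRAPH_TOKEN ?? ""; // hypothetical env var

async function graphGet<T>(path: string): Promise<T> {
  const res = await fetch(`${GRAPH}${path}`, {
    headers: { Authorization: `Bearer ${ACCESS_TOKEN}` },
  });
  if (!res.ok) throw new Error(`Graph request failed: ${res.status}`);
  return (await res.json()) as T;
}

async function main(): Promise<void> {
  // The "artifacts" Nadella describes: the user, their files, their schedule.
  const me = await graphGet<{ displayName: string }>("/me");
  const recentFiles = await graphGet<unknown>("/me/drive/recent");
  const upcoming = await graphGet<unknown>("/me/events?$top=5");
  console.log(me.displayName, recentFiles, upcoming);
}

main().catch(console.error);
```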

Windows versus Teams

If the Windows platform looked like this…

The Windows "thick" platform

…then the new Teams platform looks like this:

The Teams "thin" platform

There are a few important observations to make about these differences.

First, in the PC era the monopoly that mattered was being the only operating system on a single device. This, to be clear, is a technical necessity — while a PC could dual-boot into different operating systems, only one could run at a time2 — but it was the foundation of Windows’ market monopoly. After all, whichever operating system was running on the most devices most of the time was the operating system that developers would target; the more developers on a particular operating system, the more popular that operating system would be amongst end users, resulting in a virtuous cycle, aka a two-sided network, aka lock-in, aka a monopoly.

Once mobile came along, though, not only did the number of devices proliferate, but so did the need for new user interfaces, power requirements, hardware re-imagining, etc.; this made it inevitable that Microsoft would miss mobile, because the company was approaching the problem from the completely wrong perspective.3 At the same time, this proliferation of devices meant that the point of integration — which enterprises still craved — moved up the stack. I wrote in 2015’s Redmond and Reality:4

That is because there is in fact a need for an integrated solution on mobile. Look at Box, for example: the company obviously has a cloud component, but they also have multiple apps for every relevant — and non-relevant! — platform resulting in much better functionality than what Microsoft previously had to offer. Multiply that advantage across a whole host of services and it starts to make sense for the CIO to modularize her backend services in order to achieve integration when it comes to how those services are accessed:

A drawing of Pre-Cloud and post-Cloud Services

This is exactly what Microsoft would go on to build with Teams: the beautiful thing about chat is that like any social product it is only as useful as the number of people who are using it, which is to say it only works if it is a monopoly — everyone in the company needs to be on board, and they need to not be using anything else. That, by extension, sets up Teams to play the Windows role, but instead of monopolizing an individual PC, it monopolizes an entire company.

Second, developers had much more power and flexibility in the old model, because they had direct access to the underlying PC. This had both advantages — anyone could make an app that could do anything, and users could install it directly — and disadvantages — anyone could make an app that could do anything, and users could install it directly. In other words, the same openness of the PC that presented an opportunity for Firefox and Chrome to dethrone Internet Explorer — and for Netscape to exist in the first place — also presented an opportunity for viruses, malware, and ransomware.

This latter point is the justification that Apple returns to repeatedly for its App Store model, even though a significant portion of the increased security of mobile devices is due to fundamentally different architectural choices made in designing the underlying operating system. Then again, these arguments go hand-in-hand: it’s those architectural choices in iOS (and Android) design that make App Store control possible; the broader point is that mobile set the expectation that developer freedom — and by extension, opportunity — would be limited by the operating system owner.

A thin platform like Teams takes this even further, because now developers don’t even have access to the devices, at least in a way that matters to an enterprise (i.e. how useful is an app on your phone that doesn’t connect to the company’s directory, file storage, network, etc.). That means the question isn’t about what system APIs are ruled to be off-limits, but what “connectors” (to use Microsoft’s term) the platform owner deigns to build. In other words, not only did Microsoft build their new operating system as a thin platform, they ended up with far more control than they ever could have achieved with their old thick platform.

Stripe OS

Build wasn’t the only developer conference last week: Stripe also held Stripe Sessions, and one of the tentpole sections of the keynote was called “Finance OS”. Here’s Stripe co-founder and President John Collison:

We’ve talked about payments, and how they’re highly strategic, and rapidly fragmenting, and we’ve talked about the business model innovations of adaptive enterprises and fintech everywhere. These trends are great news for the Internet economy, but a challenge for finance and business operations teams. The rate limiter for so many new opportunities isn’t the idea for a great product; it’s the mundane foundations. “Can we build for this? Can we get international operations off of the ground? Can we expand when we’re still not closing our books on time?” It’s never just about having the idea for a great product, it’s about being able to operate it, and that’s why we’re building a modern operating system for finance, and like any good OS, we’re focused on nailing the basics.

Those basics included features like invoicing, billing, taxes, revenue recognition, and data pipelines, all of which sit on top of the various ways to gather, store, and distribute money that Stripe has abstracted away:

The Stripe OS

This image, given its similarity to the one above, makes clear what was coming next:

So we just heard about core revenue management capabilities, like invoicing, subscription billing, and handling tax. Even if you’re not using Stripe, these are the things you should want running like clockwork.

But what about everything else? It’s like any operating system: core functionality needs to work perfectly out of the box, but the breadth of functionality of the platform is also really important, having an app to solve every use case. For things like customer messaging, you might want to use something like Intercom; for contracts, DocuSign; or, you might just want to build your own tool. But often these workflows are highly integrated, so for years our users have been asking us for the tools of their choice to interoperate with Stripe…

We’re thrilled to launch today Stripe Apps and the Stripe App Marketplace, where you can find or build best-of-breed tools that work naturally with Stripe.

There are the missing pieces!

The Stripe thin platform

“Working naturally with Stripe” doesn’t simply mean access to Stripe’s APIs; it means fitting into the Stripe dashboard — Stripe is even including pre-made UI components so that 3rd-party apps look like they were designed by the fintech company:

Stripe offers pre-made UI components for integrating into the Stripe dashboard

This is another thin platform: developers don’t have access to the core financial data of a company, nor does IT want them to; instead the opportunity is to sit on top of an abstraction layer that covers all of a company’s money-moving pieces, and to fit in as best as you can.
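For a sense of what “nailing the basics” looks like from the developer side, here is a minimal sketch of the invoicing primitives that Stripe Apps now sit on top of, using Stripe’s Node library; the test-mode key and customer ID are hypothetical placeholders, and error handling and idempotency keys are omitted.

```typescript
import Stripe from "stripe";

// Hypothetical test-mode key and customer ID; both are placeholders.
const stripe = new Stripe("sk_test_...");

async function billCustomer(customerId: string): Promise<void> {
  // Add a line item that will be pulled onto the customer's next invoice...
  await stripe.invoiceItems.create({
    customer: customerId,
    amount: 4900, // $49.00, in cents
    currency: "usd",
    description: "Monthly service fee",
  });

  // ...create the invoice itself, to be emailed rather than auto-charged...
  const invoice = await stripe.invoices.create({
    customer: customerId,
    collection_method: "send_invoice",
    days_until_due: 30,
  });

  // ...and finalize it, which makes it payable and triggers delivery.
  await stripe.invoices.finalizeInvoice(invoice.id);
}

billCustomer("cus_123").catch(console.error);
```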


Of course I am covering the Build and Stripe Sessions keynotes together because both happened the same week; at the same time, it was a fortuitous coincidence, because Stripe’s announcement brings important context to Microsoft’s approach. After all, I used the magic word “monopoly”; the truth, though, is that not only was an operating system monopoly inevitable, it also made perfect sense from a user perspective that important functionality — like browsing — became integrated with the core OS.

Collison made the case as to why similar considerations should be front-and-center for thin platforms — there are things “you should want running like clockwork.” Microsoft would make a similar argument about Teams and its incorporation of things like file storage and communications, and, I would argue, Teams’ success in the market relative to Slack is evidence that the argument is a compelling one to customers. That Microsoft has so often seemed like the only enterprise company actually building for an enterprise’s ends, instead of solipsistically obsessing over being best-of-breed for one specific means, seems worth celebrating and emulating, not condemning and complaining.

At the same time, it is also worth mourning the slow eclipse of the thick client model. Yes, things like malware were a pain and a drain on productivity, and the SaaS model has led to a plethora of new products that are accessible to companies without needing an IT department, but the very downsides of the thick model (everything that could go wrong, and the necessity of IT) also created the conditions for massive upside: the opportunity to make new apps — and by extension, new companies — without needing any permission, “connectors”, or pre-made UI components. Alas, the tech industry is past the end of the beginning; welcome to middle age, where the only thickness is your waistline.

I wrote a follow-up to this Article in this Daily Update.


  1. Apple argues, not without merit, that this is for security reasons; critics argue, with considerable merit, that this restricts innovation on iOS and meaningful competition with the App Store. 

  2. Absent virtualization, although that wasn’t really feasible on user-level PCs at the time Windows was establishing its dominance. 

  3. This interview with Tony Fadell includes an excellent discussion on this point in the context of Intel, which applies just as much to Microsoft. 

  4. With, I am ashamed to admit, probably the worst drawing in the history of Stratechery; as I recall I was late to a Chinese New Year’s Eve dinner! 

Warner Bros. Discovery

Last week the Wall Street Journal ran a profile of longtime Discovery CEO (now Warner Bros. Discovery CEO) David Zaslav entitled There’s a New Media Mogul Tearing Up Hollywood, adding “Zas Is Not Particularly Patient”. After opening with an anecdote about Zaslav complaining about Warner Bros. backing a Clint Eastwood movie, even though they thought it would fail, the profile states:

Mr. Zaslav, who last month took over the company resulting from Discovery’s merger with AT&T Inc.’s WarnerMedia, has given every indication he wants to be a talent-friendly mogul, schmoozing with industry personalities at the Beverly Hills Hotel. But the 62-year-old cable-industry veteran, a protégé of the late Jack Welch, longtime CEO of General Electric Co., has shown he isn’t afraid to ruffle the industry’s elite.

He and his team have been scouring the company’s books, making it clear spending needs to be reined in. They have abandoned projects they consider costly and unnecessary. That included pulling the plug on CNN+, barely a month after previous management launched the streaming service, and canceling a DC Comics superhero movie in development. He has given an unwelcome jolt to executives in the WarnerMedia empire who were happy when AT&T decided to part with it in the merger, hoping there would be less financial scrutiny—not more…

Mr. Zaslav has few options other than drastic moves. The deal brought the new company—now home of Warner Bros. and cable channels including HBO, CNN, TNT, Food Network and HGTV—$55 billion in debt, and he has promised to cut at least $3 billion in costs. He has given executives a few weeks to provide restructuring and business plans. “We are not trying to win the direct-to-consumer spending war,” Mr. Zaslav said on an April earnings call. On the call, Chief Financial Officer Gunnar Wiedenfels called out the nearly $30 billion the company spends making and marketing content, saying: “We intend to drive the highest level of financial discipline here.”

The profile continued in the same vein, and came across as fairly negative; that is why I appreciated it, because I am otherwise extremely bullish about the potential for Zaslav’s new company.

HBO and Discovery Synergies

I wrote a year ago when the deal between AT&T and Discovery was announced that Warner Bros. and Discovery had excellent synergies:

Start with the cable bundle: yes, cord-cutting continues, but there are still a lot of households with cable, and this new company will have significantly enhanced bargaining power with distributors. WarnerMedia’s combination of live sports, news, premium television, and scripted shows was already quite strong; Discovery brings a highly differentiated set of channels from HGTV to Discovery to Food Network that not only attract distinct demographics, but also are particularly effective at driving advertising.

Another set of synergies comes from the two companies’ burgeoning direct-to-consumer offerings. Once again the breadth of content is a good thing: HBO Max and Discovery+ have something for everyone in the household. The types of content are complementary as well; back in 2018 I explained in the context of Friends:

While most of the Netflix attention is paid to original series, the truth is that there are two types of shows on the streaming network:

  • Original series drive new subscribers
  • “Filler”, that is, content that is there when subscribers simply want something to watch, keeps people subscribed

Discovery content is excellent filler [while HBO excels at original series].

Zaslav and CFO Gunnar Wiedenfels made all of these points on Warner Bros. Discovery’s inaugural earnings call in late April. Start with the second point above, about streaming synergy. Zaslav, in response to a question about advertising (more on this in a moment), brought up the fact that Discovery+ has very low churn:

We’re in the market already with an ad-light product. We’re the ones that were out there very early saying ad-light looks really compelling, because it’s a great consumer proposition. Our users, the churn was very low; we were doing between two and four minutes of advertising and generating $5, $6 in incremental revenue, and as it scaled, we started to make more. And so, we said very early on, we’re going to switch to offer consumers what they want, a lower-priced opportunity with a small number of advertising.

HBO Max isn’t doing so well in this regard, at least according to Zaslav a few minutes later:

We have some work to do on the platform itself that will be significant. But we also think that one of the big opportunities here is going to be churn reduction. There is meaningful churn on HBO Max, much higher than the churn that we have seen. And so, the ability for us to come together is part of one of the thesis here that managing churn, and we’ve seen this because we’ve been added in Europe for eight years, as we begin to manage churn in a meaningful way, that provides a real meaningful growth.

The benefit of coming together is exactly what I noted a couple of years ago: hits may capture customers, but filler content — especially a wide range of it — keeps them:

What you need is a diversity of content for everybody in the home, and they may come in for Euphoria [from HBO], but our research shows that people watch Euphoria, their favorite second show to watch is 90 Day Fiance [from TLC]. Having a diversity of content is a reason why people are spending hours with Discovery+…when you put all of this diversity of content together, there is content for kids, there is content for teens, it’s basically everybody in the family, why would you go anywhere else. We have all the movies, we have all of the library content that you want…

If you look at HBO right now, what it really needs is precisely what we have. When they are finished with watching Winning Time, they can go and watch Friends or watch Big Bang or watch their favorite movie or go over and watch Oprah or watch some TLC shows just for fun. So, we believe and we see this in Europe where we tried to offer, we thought that the answer was just to offer niche high quality that you get high-quality shock and all content together with a lot of nutrition, in our case in Europe, together with sport and you offer something that everybody in the family uses, and the churn goes way down, it’s much harder to churn out of a product when your kids use it or your significant other uses it or your mom and dad are watching, but also if you find yourself watching it more often. So, I think it’s precisely why we did this deal. And I think everything tells us that it’s going to make us stronger and more compelling because of the breadth of the quality menu of IP that we have.

In short, HBO Max plus Discovery+ is a bundle, with all of the attendant advantages that entails (and which certainly did not apply to a standalone CNN+ service). Of course this also strengthens Warner Bros. Discovery’s hand in terms of the linear TV bundle as well; Zaslav said in his prepared remarks:

One of the company’s unique assets is the linear network group, and in 2021 taken together, we enjoyed the number one share in total television total day in all key demos and people 2+. And we have the greatest brands: HG, Food, HBO, Discovery, CNN, NBA, March Madness, NHL, Magnolia, The Oprah Winfrey Network. Our balanced verticals and content genres across scripted, lifestyle, sports and news provide us with significant opportunities to not only cross-promote for the benefit of the portfolio, but also to offer compelling reach and targeting campaigns for our advertising partners.

Don’t forget negotiating leverage; Wiedenfels noted:

US distribution revenues were up 11% year-over-year, largely driven by the growth of Discovery+ subscribers throughout 2021, while linear affiliate revenues were also up year-over-year as rate increases continued to outpace subscriber declines. Our fully distributed subscribers were down 4% as were total portfolio subscribers when correcting for the impact of the sale of our Great American Country network in early June last year.

Yes, cords are still being cut, especially last year, but the story of cable for the last several years has been the jettisoning of cost-driven subscribers in favor of charging full price for people who actually want linear TV, which, in the long run, means that the linear TV bundle is primarily the sports and news bundle. Warner Bros. is well-placed for this new world, thanks to its combination of sports rights on TNT and TBS, along with CNN. That, though, isn’t a guarantee of profitability; Zaslav noted in an answer explaining why news was more profitable than sports:

When it comes to sports, we’re very careful about sports. And the TNT and Warner team was clever about getting long term rights which we’re going to get a lot of benefit from, but sports are rented and news is scalable.

The unscripted content that Discovery specializes in is even more scalable, and far cheaper; now, instead of negotiating with cable providers like Comcast for a collection of channels that customers like, but don’t necessarily need (particularly since Discovery+ is an option), Warner Bros. Discovery will be negotiating for a bundle that includes sports and news and filler. Those sports rights may eat up a lot of TNT and TBS’s carriage fees; the real money will be made on the extra pennies added to the rest of the channels in the portfolio.

And then, of course, that money can be spent on streaming: sure, Netflix has the advantage of having a larger customer base, but no one — other than Netflix executives until a month ago — said that you could only use subscription dollars and nothing else to acquire streaming content.

Advertising’s Continued Strength

Then there is advertising. First — and in contrast to Netflix’s agonizing on the matter — I enjoyed how matter-of-fact Zaslav was about having ad-supported streaming tiers:

In streaming, we have a massive opportunity to reach the widest possible addressable market by offering a range of tiers, all with the most compelling and complete portfolio of content. A premium and attractively priced ad-free direct-to-consumer product, a lower-priced ad-light tier, something we have had tremendous success with and is our highest ARPU product, and in some very price-sensitive markets outside the United States, we can even offer an advertiser-only product.

Secondly, Warner Bros. Discovery can provide a one-stop shop for advertisers across streaming and TV; Zaslav said in his opening remarks:

The combined strengths of both organizations’ client relationships, advanced advertising, programmatic, sponsorships and direct-to-consumer, ad-light streaming services, all positioned the company with a unique hand. I’ve personally spent quite a bit of time with key advertisers and agencies, and I’m so impressed with the combined capability of our platforms and our ability to uniquely serve the needs of our clients, including integrating sports alongside our broad entertainment offerings.

The bit about integrating sports refers to the fact that Warner Bros. actually had multiple advertising teams (one for sports and one for everything else — but none for streaming); I’m not entirely clear how the company ended up in this position, but given the fact the company’s businesses were built in an era where ad agencies sat in the middle between advertisers on one side and inventory on the other, it makes a certain sort of sense. Today, though, the goal is to be as large and as integrated as possible, the better to share data and provide effective targeting at scale; that offering is particularly compelling given that streaming is the best place to reach all of those people that cut the cord.

Still, even with the cord-cutting, TV advertising continues to be a good business; Zaslav reflected:

We recognize that 4% of subscribers are down and viewership on the platform is down…Long-term, there’s no question that the business is challenging, but CPMs are increasing, advertisers still are looking for inventory, because it’s the most effective inventory in long-form video. And look, remember, broadcast for a period of 20 years was declining and CPMs were increasing. I was at NBC in the mid-’90s when Welch was saying this can’t continue. We can’t have smaller and smaller audiences and make more and more money. And I think he was right or maybe he will be right eventually, but it’s almost 30 years later and the advertisers are still paying more than the hurdle rate of decline.

So, we will be leaning in with efficiencies and effectiveness to our traditional business, which generates an awful lot of free cash flow…We now have the same or in many cases, the largest reach on television in the US. And the ability to use our own inventory to promote to and from all of our products and the efficiency of doing that and the cost savings of doing it, I think is a big plus for us.
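To make the “hurdle rate of decline” concrete: linear ad revenue is roughly audience times CPM, so revenue keeps growing as long as the percentage increase in CPMs exceeds the percentage decline in audience. A toy illustration, with invented numbers:

```typescript
// Toy illustration of the "hurdle rate of decline": revenue ~ audience x CPM,
// so revenue grows whenever CPM growth outpaces audience decline.
// All numbers are invented for illustration.
function revenueGrowth(audienceDecline: number, cpmGrowth: number): number {
  return (1 + cpmGrowth) * (1 - audienceDecline) - 1;
}

console.log(revenueGrowth(0.06, 0.10)); // audience -6%, CPMs +10% => roughly +3.4%
console.log(revenueGrowth(0.10, 0.10)); // audience -10%, CPMs +10% => roughly -1.0%
```

Zaslav’s point is that, for nearly three decades, advertisers’ willingness to pay has kept clearing that hurdle.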

There are a couple of things going on in Zaslav’s comments, and both go back to an Article I wrote in 2016 called TV Advertising’s Surprising Strength — and Inevitable Fall. The key insight in that piece was that huge swathes of the economy, from large CPG companies to big box retailers to auto makers, were all built around TV advertising, which meant they would prop up the medium far longer than people thought:

Brands uniquely suited to TV are probably by definition less suited to digital advertising, which at least to date has worked much better for direct response marketing. No one is going to click a link in their feed to buy a car or laundry detergent, and a brick-and-mortar retailer doesn’t want to encourage shopping to someone already online. So after a bit of experimentation, they’re back with TV.

Still, I think Facebook and Snapchat in particular will figure brand advertising out: both have incredibly immersive advertising formats, and both are investing in ways to bring direct response-style tracking to brand advertising, including tracking visits to brick-and-mortar retailers.

Six years on, “Surprising Strength” remains correct, while “Inevitable Fall” is the prediction that is looking shakier: to the extent that brand advertising is going digital, much of that shift is happening via streaming video ads, and it will only accelerate as Netflix, HBO Max, and Disney all launch ad products. The most privacy-invasive practices of Facebook et al, meanwhile, have been curtailed (and rightly so — products that cross over into real-world tracking are where I have always drawn a hard line).

Just as important, though, is the impact of changes like ATT on the small and medium-sized businesses that were threatening the biggest advertisers like a hundred duck-sized horses: by destroying their ability to effectively coordinate their advertising spend, ATT and similar privacy restrictions have breathed new life into the old ecosystem, which ultimately plays to the benefit of the largest advertising sellers, including Warner Bros. Discovery.

Reasons for Optimism

Like I said, I’m pretty optimistic about Warner Bros. Discovery, which is why I appreciated the Wall Street Journal article I opened with: it’s fair to wonder if the exact sort of clarity of thinking and explicit commitment to financial results that Zaslav demonstrated on that earnings call will translate into managing talent and navigating Hollywood, particularly given the huge debt load the new company is carrying.

That noted, the other reason to be bullish is that Warner Bros. Discovery’s strategy is, in contrast to Netflix, back to the future; Zaslav said at the beginning of the call:

These last few months in our industry have been an important reminder that while technology will continue to empower consumers of video entertainment, the recipe for long-term success is still made up of a few key ingredients. Number one, world-class IP content that is loved all over the globe; two, distribution of that content on every platform and device where consumers want to engage, whether it’s theatrical or linear or streaming; three, a balanced monetization model that optimizes the value of what we create and drives diversified revenue streams; and four, finally, durable and sustainable free cash flow generation.

This isn’t anything new: the Hollywood model has always been about creating compelling content once and then monetizing it through every possible means; in Zaslav’s view a self-owned streaming platform is just an addition to the old model, not a wholesale replacement.

Moreover, things like movies in theaters still have their advantages: they build IP and build awareness for the other content models, above and beyond the money they make. Oh, and to take this update full circle, they also make talent like Eastwood happy. To that end, I suspect in the long run that flexibility and pragmatism in content distribution, combined with real discipline about cash flow, will prove to be more compelling than the same combination in reverse.

Cable’s Last Laugh

If there is one industry people in tech are eternally certain is doomed, it is cable. However, the reality is that cable is both stronger than ever and poised for growth; the reasons why are instructive to not just tech industry observers, but to tech companies themselves.

The Creation of Cable

Robert J. Tarlton was 29 years old, married with a son, when he volunteered to fight in World War II; thanks to the fact that he owned a shop in Lansford, Pennsylvania that sold radios, he ended up repairing them all across the European Theater, learning about not just reception but also transmission. After the war Tarlton re-opened his shop, and Motorola, one of his primary suppliers, came out with a new television.

Tarlton was intrigued, but he had a geography problem; the nearest television station was in Philadelphia, 71 miles away; in the middle lie the Pocono Mountains, and mountains aren’t good for reception:

Lansford is separated from Philadelphia by the Pocono Mountains

It turned out, though, that some people living in Summit Hill, the next community over, could get the Philadelphia broadcast signal; that’s where Tarlton sold his first television sets. Of course Tarlton couldn’t demonstrate this new-fangled contraption; his shop was in Lansford, not Summit Hill. However, it was close to Summit Hill: what if Tarlton could place an antenna further up the mountain in Summit Hill and run a cable to his shop? Tarlton explained in an interview in 1986:

Lansford is an elongated town. It’s about a mile and a half long and there are about eight parallel streets bisected with cross streets about every 500 feet. There are no curves; everything is all laid out in a nice symmetrical pattern. Our business place was about three streets from the edge of Summit Hill and Lansford sits on kind of a slope. The edge of Lansford inclines from about a thousand feet above sea level to about fifteen hundred feet above sea level in Summit Hill. So to get television into our store, my father and I put an antenna partly up the mountain. No, we didn’t go all the way up, but we put up an antenna, kind of a crude arrangement, and then from tree to tree we strung a twin lead that was used in those days as a transmission line. We ran this twin lead, crossed a few streets, and into our store. And we had television.

The basic twin lead was barely functional, but that didn’t stop everyone from demanding a television with a haphazard wire to their house; Tarlton realized that new coaxial cable amplifiers designed for a single property could be chained together, re-amplifying the signal so it could reach multiple properties. After getting all of the other electronics retailers on board — Tarlton knew that a clear signal would sell more TV sets, but that everyone needed to use the same system — the first commercial cable system was born, and it sold itself. Tarlton reflected:

You didn’t have to advertise. You had to keep your door locked because the people were clamoring for service. They wanted cable service. You certainly didn’t have to advertise.

People couldn’t get enough TV; Tarlton explained:

Cable is dependent upon advances in technology because people who originally saw one channel wanted to see the second channel, wanted to see the third, and after you had five, they wanted more. So it was a case of more begets more. At one time three channels seemed to be quite sufficient but when we added one more channel, it created a new interest in the cable system. People then had variety. They had alternatives. At one time later five channels seemed enough. As a matter of fact, a man who is often quoted, former FCC Commissioner Ken Cox, said that five channels was enough, and that’s quite a story in itself. The engineers were able to continually refine equipment to add more channels…All these technical advances — continuing advances, automatic gain control, automatic temperature compensation, etc. — have made cable what it is today.

One of the most important technological developments was satellite: now cable systems could get signals both more reliably and from far further away; this actually flipped geography on its head. Tarlton said:

At that time we went to the highest possible point to look to the transmitter. Today with satellites, we go to the lowest possible point because we don’t want the interference from other signals. So it is ironic that we have changed so much from what we used to do.

Within these snippets is everything that makes the cable business so compelling:

  • Cable is in high demand because it provides the means to get what customers most highly value.
  • Cable works best both technologically and financially when it has a geographic monopoly.
  • Cable creates demand for new supply; technological advances enable more supply, which creates more demand.

It’s that last bit about satellites being better on lower ground that stands out to me, though: as long as you control the wires into people’s houses you can and should be pragmatic about everything else.

Cable’s Evolution

Tarlton would go on to work for a company called Jerrold Electronics, which pivoted its entire business to create equipment for cities that wanted to emulate Lansford’s system; Tarlton would lead the installation of cable systems across the United States, which for the first two decades of cable mostly retransmitted broadcast television.

The aforementioned satellite, though, led to the creation of national TV stations, first HBO, and then WTCG, an independent television station in Atlanta, Georgia, owned by Ted Turner. Turner realized he could buy programming at local rates, but sell advertising at national rates via cable operators eager to feed their customers’ hunger for more stations. Turner soon launched a cable-only channel devoted to nothing but news; he called it the Cable News Network — CNN for short (WTCG would later be renamed TBS).

Jerrold Electronics, meanwhile, spun off one of the cable systems it built in Tupelo, Mississippi, to an entrepreneur named Ralph Roberts; Roberts proceeded to systematically buy up community cable systems across the country, moving the company’s headquarters to Philadelphia and renaming it Comcast Corporation (Roberts would eventually hand the business off to his son, Brian). Consolidation in the provision of cable service proceeded in conjunction with consolidation in the production of content, an inevitable outcome of the virtuous cycle I noted above:

  • Cable companies acquired customers who wanted access to content
  • Studios created content that customers demanded

The more customers that a cable company served, the stronger their negotiating position with content providers; the more studios and types of content that a content provider controlled, the stronger their negotiating position with cable providers. The end result was a few dominant cable providers (Comcast, Charter, Cox, Altice, Mediacom) and a few dominant content companies (Disney, Viacom, NBC Universal, Time Warner, Fox), tussling back-and-forth over a very profitable pie.

Then came Netflix, and tech industry crowing about cord cutting.

Netflix and other streaming services were obviously bad for television: they did the same job but in a completely different way, leveraging the Internet to provide content on-demand, unconstrained by the linear nature of television that was a relic of cable’s origin with broadcast TV. Here, though, cable’s ownership of the wires was an effective hedge: the same wires that delivered linear TV delivered packet-based Internet content.

Moreover, this didn’t simply mean that cable’s TV losses were made up for by Internet service: Internet service was much higher margin because companies like Comcast didn’t need to negotiate with a limited number of content providers; everything on the Internet was free. This has meant that the fortunes of cable companies have boomed over the last decade, even as cord-cutting has cut the cable TV business by about a third.
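The margin point is straightforward to illustrate with made-up numbers: a video subscriber’s revenue is largely passed through to programmers, while a broadband subscriber’s revenue is not. The figures below are hypothetical, not Comcast’s actual economics:

```python
# Hypothetical illustration of why broadband is higher margin than cable TV.
# Cable TV revenue is heavily offset by programming fees paid to content owners;
# broadband has no equivalent per-subscriber content cost. Numbers are invented.

tv_revenue_per_sub = 90.0          # hypothetical monthly video ARPU
programming_cost_per_sub = 60.0    # hypothetical monthly fees paid to programmers
broadband_revenue_per_sub = 65.0   # hypothetical monthly broadband ARPU
broadband_variable_cost = 5.0      # hypothetical network/support cost per sub

tv_margin = (tv_revenue_per_sub - programming_cost_per_sub) / tv_revenue_per_sub
bb_margin = (broadband_revenue_per_sub - broadband_variable_cost) / broadband_revenue_per_sub

print(f"Cable TV gross margin:  {tv_margin:.0%}")   # ~33%
print(f"Broadband gross margin: {bb_margin:.0%}")   # ~92%
```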

Cable companies today, though, are yet another category that is down from its pandemic highs, thanks to fears that the broadband growth story is mostly over: fiber offers better performance, 5G opens the door to wireless in the home, and anyone who doesn’t have broadband now is probably never going to get it. I think, though, that this underrates the strategic positioning of cable companies, and ignores the industry’s demonstrated ability to adapt to new strategic environments.

The Wireless Opportunity

From the Wall Street Journal in 2011:

Verizon Wireless will pay $3.6 billion to buy wireless spectrum licenses from a group of cable-television companies, bringing an end to their years-long flirtation with setting up their own cellphone service. The sellers — Comcast, Time Warner Cable Inc. and Bright House Networks — acquired the spectrum in a government auction in 2006 and now will turn it over to the country’s biggest wireless carrier at more than a 50% markup. While cable companies have dabbled with wireless, the spectrum has largely sat around unused, prompting years of speculation about the industry’s intentions…

Under the deal, Verizon Wireless will be able to sell its service in the cable companies’ stores. The carrier, in turn, will be able to sell the cable companies’ broadband, video and voice services in its stores. Verizon’s FiOS service only reaches 14% of U.S. households, according to Bernstein Research. In four years’ time, the cable companies will have the option to buy service on Verizon’s network on a wholesale basis and then resell it under their own brand.

The joint marketing arrangement “amounts to a partnership between formerly mortal enemies,” wrote Bernstein analyst Craig Moffett in a research note.

Moffett would later partner with Michael Nathanson to form their own independent research firm (called MoffettNathanson, natch); Moffett released an updated report last month that explained how that Verizon deal ended up playing out:

When Comcast and Time Warner Cable sold their AWS-1 spectrum to Verizon back in 2011, they believed at the time that they were walking away from ever becoming facilities-based wireless players. They therefore viewed it as imperative that the sale come with an MVNO agreement with Verizon to compensate for that forfeiture. They got precisely what they wanted, an MVNO contract with Verizon that was described at the time as “perpetual and irrevocable”…

It took another six years after that transaction before Comcast finally launched Xfinity Mobile in mid-2017. Charter [which merged with Time Warner Cable, acquiring the latter’s MVNO rights] followed suit a year later…in four short years, the Cable operators have become the fastest growing wireless providers in the country, accounting for nearly 30% of wireless industry net additions. Cable’s 7.7 million mobile lines represent ~2.5% of the U.S. mobile phone market (including prepaid and postpaid phones).

As Moffett notes, this growth is particularly impressive given that most cable companies couldn’t feasibly offer family plans under the terms of the original deal, which was negotiated before such a concept even existed; the deal has since been renegotiated, though, almost certainly to the cable companies’ advantage: if Verizon is going to lose customers to an MVNO, it would surely prefer that said MVNO be on its network, which gives the cable companies negotiating leverage.

What, though, makes the cable companies such effective MVNOs? One of the most interesting parts of Moffett’s note was proprietary data about the amount of data used by cellular subscribers; it turns out that cable MVNO customers are far more likely to consume data over WiFi, perhaps because of the cable companies’ out-of-home WiFi hotspots. This could become even more favorable in the future as cable companies build out Citizens Broadband Radio Service (CBRS) networks, particularly in dense areas where they have wires from which to hang CBRS transmitters. Moffett writes:

In a perfect world, Cable will offload traffic onto their own facilities where doing so is cheap (high density, high use locations) and leave to Verizon the burden of carrying traffic where doing so is/would be expensive. Because the MVNO agreement is “perpetual and irrevocable,” and is based on average prices (i.e., the same price everywhere, whether easy or hard to reach), Cable is presented with a perfect ROI arbitrage; they can take the high ROI parts of the network for themselves, and leave the low ROI parts of the network to their MVNO host… all without sacrificing anything with respect to their national coverage footprint.
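To make the arbitrage concrete, consider a rough sketch with invented numbers: the MVNO pays its host one average price per gigabyte, but can serve its densest (and cheapest) traffic on its own WiFi and CBRS facilities, leaving only the most expensive-to-build-for traffic on the host network. None of the figures below come from Moffett’s report; they are purely illustrative:

```python
# Hypothetical sketch of the MVNO "ROI arbitrage" described above.
# The assumption: a flat wholesale rate per GB from the host, versus a much
# lower marginal cost per GB on cable-owned WiFi/CBRS in dense areas.

total_traffic_gb = 100.0    # monthly traffic for a subscriber cohort (hypothetical)
offload_share = 0.6         # share carried on cable-owned facilities (hypothetical)
own_cost_per_gb = 0.50      # marginal cost on owned WiFi/CBRS (hypothetical)
mvno_rate_per_gb = 3.00     # flat average wholesale rate paid to the host (hypothetical)

offloaded_gb = total_traffic_gb * offload_share
host_gb = total_traffic_gb - offloaded_gb

blended_cost = offloaded_gb * own_cost_per_gb + host_gb * mvno_rate_per_gb
all_host_cost = total_traffic_gb * mvno_rate_per_gb

print(f"Cost with offload: ${blended_cost:,.2f}")
print(f"Cost all on host:  ${all_host_cost:,.2f}")
print(f"Savings:           ${all_host_cost - blended_cost:,.2f} "
      f"({1 - blended_cost / all_host_cost:.0%})")

# The cable operator keeps the high-ROI traffic for itself and pays the same
# average rate for the low-ROI remainder, while still offering national coverage.
```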

One can understand why Verizon gave the cable companies these rights in 2011; the phone company desperately needed spectrum, and besides, everyone knew that MVNOs could never be economically competitive with their wholesale providers, who were the ones that actually made the investments in the network.

That’s the thing, though: cable companies had their own massive build-out, one that was both much older in its origins and, because it connected the house, actually carried more data. This was to the benefit of Verizon and other cellular providers, of course: a Verizon iPhone uses WiFi at home just as much as a Comcast iPhone does. Here, though, it was cable that had Internet economics on its side: there is no competitive harm in giving equal access to an abundant resource. It is cellular access that is scarce, which means that the cable companies’ MVNO deal, in conjunction with their out-of-home Internet access options, gives them a meaningful advantage.

Customer Acquisition

Moffett spends less time on customer acquisition; anecdotally speaking, it’s clear that Charter’s Spectrum, which provides the cable service I consume (via the excellent Channels application), is pushing wireless service hard: phones are front-and-center in its stores, and Spectrum wireless commercials fill local inventory. Moreover, this is a well-trodden playbook: cable companies came to dominate the fixed-line phone service business simply because it was easier to get your TV, Internet, and phone all in the same place (and of those, the most important was TV, and now Internet); it’s always easier to upsell an existing subscriber than it is to acquire a new one.

Of course cable companies long handled customer acquisition for content creators — that was the cable bundle in a nutshell. What is interesting is how this customer acquisition capability is attractive to the companies undoing that bundle: Netflix, for example, put its service on Comcast’s set-top boxes in 2016, and made a deal for Comcast to sell its service in 2018. I wrote in a Daily Update at the time:

Netflix, meanwhile, is laddering up again: the company doesn’t actually need a billing relationship with end users; it just needs ever more end users (along with the data about what they watch) to spread its fixed content costs more widely. The company said in its shareholder letter:

These relationships allow our partners to attract more customers and to upsell existing subscribers to higher ARPU packages, while we benefit from more reach, awareness and often, less friction in the signup and payment process. We believe that the lower churn in these bundles offsets the lower Netflix ASP.

What is particularly interesting is that this arrangement moves the industry closer to the endgame I predicted in The Great Unbundling.

“Endgame” was a bit strong: that Article was, as the title says, about unbundling; one of my arguments was that the traditional cable TV bundle would become primarily anchored on live sports and news, while most scripted content went to streaming. That is very much the case today (TBS, for example, is abandoning scripted content, while becoming ever more reliant on sports). What was a mistake was insinuating that this was the “end”; after all, as Jim Barksdale famously observed, the next step after unbundling is bundling.

To that end, Netflix + Xfinity TV service was a bundle of sorts, but the real takeaway was that Comcast was fine with being simply an Internet provider (which ended up helping with TV margins, since cable companies mostly gave up on fighting to keep cord cutters). Shortly after the Netflix deal Comcast launched Xfinity Flex, a free 4K streaming box for Internet-only subscribers that included a storefront for buying streaming services (which would be billed by Comcast). You can even subscribe to digital MVPDs like YouTube TV!

The free Xfinity Flex streaming box

The first takeaway of an offering like Xfinity Flex ties into the wireless point: cable companies already have a billing relationship with the customer — because they provide the most essential utility for accessing what is most important to said customer — which makes them particularly effective at customer acquisition. That is why Netflix, YouTube, etc. are all willing to pay a revenue share for the help.

The Great Rebundling?

The second takeaway, though, is that the cable companies are better suited than almost anyone else to rebundle for real. Imagine a “streaming bundle” that includes Netflix, HBO Max, Disney+, Paramount+, Peacock, etc., available for a price that is less than the sum of its parts. Sounds too good to be true, right? After all, this kind of sounds like what Apple was envisioning for Apple TV (the app) before Netflix spoiled the fun; I wrote in Apple Should Buy Netflix in 2016:

Apple’s desire to be “the one place to access all of your television” implies the demotion of Netflix to just another content provider, right alongside its rival HBO and the far more desperate networks who lack any sort of customer relationship at all. It is directly counter to the strategy that has gotten Netflix this far — owning the customer relationship by delivering a superior customer experience — and while Apple may wish to pursue the same strategy, the company has no leverage to do so. Not only is the Apple TV just another black box that connects to your TV (that is also the most expensive), it also, conveniently for Netflix, has a (relatively) open app platform: Netflix can deliver their content on their terms on Apple’s hardware, and there isn’t much Apple can do about it.

Six years on and Netflix is in a much different place, not only struggling for new customers but also dealing with elevated churn. Owning the customer may be less important than simply having more customers, particularly if those customers are much less likely to churn. After all, that’s one of the advantages of a bundle: instead of your streaming service needing to produce compelling content every single month, you can work as a team to keep customers on board with the bundle.

The key point about a bundle, as longtime YouTube executive and Coda CEO Shishir Mehrotra has written, is that it minimizes SuperFan overlap while maximizing CasualFan overlap; in other words, effective bundles have more disparate content that you are vaguely interested in, instead of a relatively small amount of focused content that you care about intensely. This makes the bundle concept even more compelling to new entrants in the streaming wars, who may not have libraries as large as Netflix’s, and certainly don’t have as many subscribers over which to spread their content costs.
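Mehrotra’s framing lends itself to a toy example: imagine two streaming services and two subscriber segments, each a SuperFan of one service and a CasualFan of the other. Sold separately, each service captures only its SuperFans; a bundle priced below the sum of the parts captures both segments and generates more total revenue, while giving each subscriber more reasons not to churn. All prices and valuations below are invented for illustration:

```python
# Toy sketch of SuperFan/CasualFan bundle economics (all numbers hypothetical).
# Each segment values one service highly (SuperFan) and the other mildly (CasualFan).

standalone_price = 15        # hypothetical monthly price per service
bundle_price = 20            # hypothetical bundle price, less than 2 x $15
subs_per_segment = 1_000     # hypothetical subscribers per segment

# willingness to pay per month: {segment: (service A, service B)}
segments = {
    "Service A SuperFans": (18, 6),
    "Service B SuperFans": (6, 18),
}

def revenue_unbundled() -> int:
    total = 0
    for wtp_a, wtp_b in segments.values():
        # each segment only buys the service it values above the standalone price
        services_bought = (wtp_a >= standalone_price) + (wtp_b >= standalone_price)
        total += services_bought * standalone_price * subs_per_segment
    return total

def revenue_bundled() -> int:
    total = 0
    for wtp_a, wtp_b in segments.values():
        # the bundle sells as long as combined willingness to pay covers its price
        if wtp_a + wtp_b >= bundle_price:
            total += bundle_price * subs_per_segment
    return total

print(f"Unbundled revenue: ${revenue_unbundled():,}")  # $30,000
print(f"Bundled revenue:   ${revenue_bundled():,}")    # $40,000
```

The bundle also gets each service in front of the other segment’s CasualFans, which is the lower-churn effect Netflix itself cites in the shareholder letter quoted above.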

Moreover, the fact that the streaming services have largely done their damage to traditional TV, leaving the cable TV bundle as the sports and news bundle, means it is actually viable to create a lower-priced bundle than what was previously available (if you don’t want sports). After all, you’re not cannibalizing TV, but rather bringing together what has long since been broken off (unbundling then bundling!).


This isn’t something that is going to happen overnight: despite the fact that bundles are good for everyone, it is hard to get independents into a bundle as long as they are growing; Netflix’s recent struggles are encouraging in this regard, particularly if other streaming services start to face similar headwinds. Moreover, cable companies are not the only entities that will seek to pull something like this off: Apple, Amazon, Google, and Roku already make money from revenue shares on streaming subscriptions they sell; all of them sell devices that can be used as interfaces for selling a bundle. And, of course, there are more Internet providers than just the cable companies: there is fiber, wireless, and even Starlink.

The breadth of the cable company bundle, though, is unmatched: not only might it include streaming services, but also linear TV; more than that, this is the company selling you Internet access, and increasingly wireless phone service. That gives even more latitude for discounts, and perks like no data caps on streamed content, not just at home but also on your phone.

All of these advantages go back to Robert J. Tarlton and Lansford, Pennsylvania. A recurring point on Stratechery this past year has been the durability and long-term potential inherent in technology rooted in physical space. I wrote in Digital Advertising in 2022:

Real power in technology comes from rooting the digital in something physical: for Amazon that is its fulfillment centers and logistics on the e-commerce side, and its data centers on the cloud side. For Microsoft it is its data centers and its global sales organization and multi-year relationships with basically every enterprise on earth. For Apple it is the iPhone, and for Google it is Android and its mutually beneficial relationship with Apple (this is less secure than Android, but that is why Google is paying an estimated $15 billion annually — and growing — to keep its position). Facebook benefited tremendously from being just an app, but the freedom of movement that entailed meant taking a dependency on iOS and Android, and Apple has exploited that dependency in part, if not yet in full.

For cable companies, power comes from a wire; it would certainly be ironic if the cord-cutting trumpeted by tech resulted in cable having even more leverage over customers and their wallets than in the pre-Internet era.