FTC Sues Amazon

From the Wall Street Journal:

The Federal Trade Commission and 17 states on Tuesday sued Amazon, alleging the online retailer illegally wields monopoly power that keeps prices artificially high, locks sellers into its platform and harms its rivals. The FTC’s lawsuit, filed in Seattle federal court, marks a milestone in the Biden administration’s aggressive approach to enforcing antitrust laws and has been anticipated for months. The agency’s chair, Lina Khan, is a longtime critic of Amazon who wrote in the Yale Law Journal in 2017 that earlier generations of competition cops and courts abandoned the law’s concerns over conglomerates such as Amazon. Khan has had trouble convincing courts of her antitrust views, however. Having earlier lost cases against both Microsoft and Meta Platforms, she and her agency now face a crucial test in taking on Amazon.

The federal agency and the states alleged that Amazon violated antitrust laws by using anti-discounting measures that punished merchants for offering lower prices elsewhere. The government also said sellers on Amazon were compelled to use its logistics service if they want their goods to appear in Amazon Prime, the subscription program whose perks include faster shipping times. Such “tying,” the complaint says, illegally “restricts sellers’ choices” and “reduces product selection available to Amazon’s rivals.”

The FTC also said sellers feel they must use Amazon’s services such as advertising to be successful on the platform. Between being paid for its logistics program, advertising and other services, “Amazon now takes one of every $2 that a seller makes,” Khan said at a briefing with the media Tuesday.

This is the key paragraph of the FTC’s (heavily redacted) complaint:

This case is about the illegal course of exclusionary conduct Amazon deploys to block competition, stunt rivals’ growth, and cement its dominance. The elements of this strategy are mutually reinforcing. Amazon uses a set of anti-discounting tactics to prevent rivals from growing by offering lower prices, and it uses coercive tactics involving its order fulfillment service to prevent rivals from gaining the scale they need to meaningfully compete. Amazon deploys this interconnected strategy to block off every major avenue of competition — including price, product selection, quality, and innovation — in the relevant markets for online superstores and online marketplace services.

I will, for the sake of space, focus on these two complaints; I will note, though, that the extreme suspicion with which things like a subscriptions-based loyalty program (Prime), bundling (Prime), store-branded goods (Amazon Basics et al.), and advertising are presented hardly does the FTC’s case any good. Characterizing practices that have been common tactics in retail for literally decades as some sort of nefarious plot makes me question this paragraph from the press release:

The complaint alleges that Amazon violates the law not because it is big, but because it engages in a course of exclusionary conduct that prevents current competitors from growing and new competitors from emerging. By stifling competition on price, product selection, quality, and by preventing its current or future rivals from attracting a critical mass of shoppers and sellers, Amazon ensures that no current or future rival can threaten its dominance. Amazon’s far-reaching schemes impact hundreds of billions of dollars in retail sales every year, touch hundreds of thousands of products sold by businesses big and small and affect over a hundred million shoppers.

That first sentence in particular made me think of this meme:

A meme about the FTC claiming it isn't suing Amazon for being big

Set that aside for now, though; I actually think at least one of the complaints is compelling, if not convincing.

FBA and Prime

This complaint is not the compelling one; from the complaint:

Amazon deploys yet another tactic as part of its monopolistic course of conduct. Amazon conditions sellers’ ability to be “Prime eligible” on their use of Amazon’s order fulfillment service. As with Amazon’s anti-discounting tactics, this coercive conduct forecloses Amazon’s rivals from drawing a critical mass of sellers or shoppers – thereby depriving them of the scale needed to compete effectively online.

Amazon makes Prime eligibility critical for sellers to fully reach Amazon’s enormous base of shoppers. In 2021, more than ██% of all units sold on Amazon in the United States were Prime eligible. Prime eligibility is critical for sellers in part because of the enormous reach of Amazon’s Prime subscription program. According to public reports, Mr. Bezos told Amazon executives that Prime was created in 2005 to “draw a moat around [Amazon’s] best customers.” Prime now blankets more than ██% of all U.S. households, with its reach extending as far as ██% in some zip codes.

Amazon requires sellers who want their products to be Prime eligible to use Amazon’s fulfillment service, Fulfillment by Amazon (“FBA”), even though many sellers would rather use an alternative fulfillment method to store and package customer orders.

I find this charge ridiculous on its face. The core offering of Prime — the feature that it launched with 18 years ago — was a shipping guarantee. From the February 2005 press release:

Today the Company also introduced “Amazon Prime,” Amazon.com’s first ever membership program. For a flat membership fee of $79 per year, members get unlimited, express two-day shipping for free, with no minimum purchase requirement. Members also get one-day, overnight shipping for only $3.99 per item — order as late as 6:30PM ET.

“Amazon Prime is ‘all-you-can-eat’ express shipping,” said Jeff Bezos, founder and CEO of Amazon.com. “Though expensive for the Company in the short-term, it’s a significant benefit and more convenient for customers. With Amazon Prime, there’s no minimum purchase to think about, and no consolidating orders — two-day shipping becomes an everyday experience rather than an occasional indulgence.”

It seems eminently reasonable to me that Amazon predicate inclusion in a program defined by a shipping guarantee on letting Amazon deliver your products. Prime was a massive risk at the time, dwarfed only by the many billions of dollars that Amazon has spent since then building out its logistics network. I see no basis on which a government regulator ought to demand that Amazon give out access to the Prime label and bear the reputation risk for 3rd-party delivery services that did not take those risks or make those investments. It’s absurd.

The FTC’s argument seems to be mostly based on the existence of an Amazon program called “Seller-Fulfilled Prime” that launched in 2015, closed to new enrollment in 2019, and was suspended in 2020; Amazon announced it was coming back in 2023 (perhaps because of this case). Seller-Fulfilled Prime let sellers participate in the Prime program, as long as they delivered the goods themselves (i.e. didn’t use a 3rd-party fulfillment service) and passed Amazon’s stringent requirements. The FTC, based on internal emails (which are redacted), claims that Amazon killed the program because it reduced the company’s hold on merchants. A few points on this:

  • First, Prime is Amazon’s brand and program; just because Amazon opened it up once doesn’t mean it ought to be compelled to keep it open.
  • Second, this charge definitely feels downstream from a fishing expedition; I imagine those redacted emails are pretty spicy, because it’s hard to see any justification for this charge otherwise.
  • Third, look carefully at those dates: in 2019 Amazon announced that Prime would shift to a one-day guarantee, and in 2020 Amazon was in the middle of saving the country during lockdowns. Given the reputational risk attached to Prime those seem like relevant reasons to suspend the program.

Ultimately, though, these arguments pale in comparison to the sheer audacity of the FTC’s insistence it ought to be able to simply take what Amazon has built and distribute it to whoever wants it.

Pricing Punishment

This charge is more compelling; from the complaint:

One set of tactics stifles the ability of rivals to attract shoppers by offering lower prices. Amazon deploys a sophisticated surveillance network of web crawlers that constantly monitor the internet, searching for discounts that might threaten Amazon’s empire. When Amazon detects elsewhere online a product that is cheaper than a seller’s offer for the same product on Amazon, Amazon punishes that seller. It does so to prevent rivals from gaining business by offering shoppers or sellers lower prices…

The sanctions Amazon levies on sellers vary. For example, Amazon knocks these sellers out of the all-important “Buy Box,” the display from which a shopper can “Add to Cart” or “Buy Now” an Amazon-selected offer for a product. Nearly ██% of Amazon sales are made through the Buy Box and, as Amazon internally recognizes, eliminating a seller from the Buy Box causes that seller’s sales to “tank.” Another form of punishment is to bury discounting sellers so far down in Amazon’s search results that they become effectively invisible…

Moreover, Amazon’s one-two punch of seller punishments and high seller fees often forces sellers to use their inflated Amazon prices as a price floor everywhere else. As a result, Amazon’s conduct causes online shoppers to face artificially higher prices even when shopping somewhere other than Amazon. Amazon’s punitive regime distorts basic market signals: one of the ways sellers respond to Amazon’s fee hikes is by increasing their own prices off Amazon. An executive from another online retailer sums up this perverse dynamic: Amazon’s anti-discounting conduct █████████████████████████████████. Amazon’s illegal tactics mean that when Amazon raises its fees, others — competitors, sellers, and shoppers — suffer the harms.

Amazon’s tactics suppress rival online superstores’ ability to compete for shoppers by offering lower prices, thereby depriving American households of more affordable options. Amazon’s conduct also suppresses rival online marketplace service providers’ ability to compete for sellers by offering lower fees because sellers cannot pass along those savings to shoppers in the form of lower product prices.

This all sounds bad and, at first glance, anti-competitive. Consider, though, what the FTC is implicitly asking:

  • First, Amazon is replacing the offending merchants in the Buy Box with other merchants who offer lower prices (including, potentially, Amazon itself). It’s difficult to understand how this is bad for consumers.
  • Second, insisting that Amazon promote merchants who offer higher prices on Amazon and lower prices elsewhere is, once again, insisting that Amazon offer the fruits of its investments, both in terms of customer acquisition and in delivery speed, to merchants who are actively seeking to develop an Amazon competitor.
  • Third, most-favored nation clauses, which this is in practice if not in specifics (in fact, according to the FTC, these practices replaced MFN clauses), have consistently been found to be legal.

Most importantly, though, this alleged illegality rests on Amazon being a monopoly, which means, as happens with all antitrust cases, we have a question of market definition. In this case the FTC has defined the relevant markets as “the online superstore market” and “the market for online marketplace services.” These definitions, conveniently enough, exclude all brick-and-mortar retailers (the word “omnichannel” doesn’t appear in the complaint), and all independent retailers, such as those hosted by Shopify. The FTC says that this narrow definition makes sense because of the convenience and selection that is exclusive to “online superstores”, thanks to the ability to ship things together; never mind that other websites are only a click away, and that the entire reason you can ship things together is because most items on Amazon are Fulfilled by Amazon (see the previous complaint).

This definition is obviously going to be critical to this case: Benedict Evans ran the numbers and, if you consider all of retail, then Amazon has only a single-digit share of the market; if you consider all of e-commerce, Amazon has about 35% share. What is clear is that just about everything on Amazon is available elsewhere: the power the company has with regard to pricing is a function of the demand it delivers to merchants — demand that is not compelled of customers, but willingly given precisely because customers find Amazon’s service to be valuable.

Amazon’s Market-Making

That’s not to say that there aren’t merchants wholly beholden to Amazon; in 2019 an Amazon reseller named Molson Hart wrote an essay on Medium complaining about Amazon’s fulfillment prices, and included this chart:

We sell plush and construction toys on Amazon. Well, technically, we sell toys on our website, on eBay, on Walmart.com, to brick-and-mortar stores, and we sell on Amazon. But, really, we only sell on Amazon. In 2018, we had about $4,000,000 in sales but Amazon.com accounted for over 98% of that.

How Amazon dominates the sale of one merchant

Harvard Business School would call this “vendor/customer concentration”. In the e-commerce world, we call it being Amazon’s bitch.

While Amazon received $1.95 million from us last year, they are not afraid of losing our business for a couple of reasons. First, there are thousands of companies out there eager to take our place. Second, Amazon had $277 billion in gross merchandise revenue in 2018. Our $3.9 million in sales on Amazon accounted for .0014% of that. Finally, we have nowhere else to go and Amazon knows it.

Hart shared his entrepreneurship story on a podcast, and I think it provides important context:

In 2014-2015, you could sell literally anything you wanted on Amazon and it was profitable. You didn’t really need to do any data analysis. If you were buying in China and you didn’t do an absolutely abominable job sending it to Amazon, you were making money. When you’re selling into retail it’s different. On Amazon you could just sell commodities in 2014-2015 — literally a towel, you didn’t have to have a brand or anything — but to sell into retail, which is a developed market, I can’t call into Walmart and be like, “Hey, I’m this twenty-six year old kid and I’m going to sell you towels.” They’re like “No, we’re going to buy from branded manufacturers of towels and factories for towels and people who have an established business, some credibility in this industry.” So if we wanted to sell into retail we had to bring innovative products to the table, otherwise there was no incentive for them to take the risk on a young company with a young founder, etc.

So I figured out at one point, “Maybe instead of coming up with all of these innovative products, because innovation was really hard, maybe we could find stuff that is already popular in China or Japan or Korea and just bring it to America and re-brand it.” So then what I did is I just went onto Taobao — China’s Amazon — and went through tens of thousands of products. I looked at the top sellers in each category, toys, games, bikes, sporting goods, all that stuff, and anytime I saw a product that was selling really well in China that I had never seen before I put it into a bucket and said, “OK, we’re going to launch this in America.” So that’s what we did and by-and-large was a pretty effective strategy. That’s actually how Brain Flakes was born.

Brain Flakes is Hart’s biggest product, and the biggest driver of that Amazon-dominated sales chart up above. The question I have with regard to that chart, though, and Hart’s griping about Amazon’s fees, is this: hasn’t Amazon earned the right to charge Hart whatever it deems appropriate? Hart himself admits that his entire business was predicated on Amazon’s marketplace model, a model that enabled individual entrepreneurs with smarts and hustle to build big businesses without a reputation or a brand. To put it another way, Hart’s business is dominated by Amazon because Amazon made his entire business possible (and if these complaints sound familiar, they echo complaints about Facebook from companies built on Facebook, Yelp’s complaints that they have to acquire customers instead of relying on SEO, or publishers the world over blaming their commodity status on the same companies that made the market for them in the first place).

I am by no means here to pick on Hart or any of the millions of other 3rd-party merchants on Amazon: I salute their entrepreneurial grit. I fail, though, to see what exactly is anticompetitive in this story. What I see, much like the Prime program above, is massive investment by Amazon to create an entire category that dramatically increased the amount of commerce, and it’s unclear to me why they can’t conduct normal business activity to ensure they have competitive prices in that market.

That noted, the reason I find this part of the complaint compelling is that I do have unease about the use of enforced price matching and other non-organic means of limiting competition; that, along with acquisitions and digital advertising, was one of the three areas of concern I highlighted in a 2019 Article that explained what antitrust crusaders fail to understand about Aggregators who gain market power not through controlling supply but rather by harnessing demand. The issue, though, is that these concerns ought to be addressed through new laws; trying to apply antitrust regulations that were created for the analog world in a digital context simply doesn’t make sense, and very likely will, like so many other recent FTC actions, fail in court.

Charter-Disney Winners and Losers

From the Wall Street Journal:

Disney and Charter Communications have reached an agreement that will restore popular channels, including ESPN and ABC, to the cable operator’s nearly 15 million subscribers, ending a blackout that lasted for more than a week. The agreement comes just hours before ESPN’s coverage of the first “Monday Night Football” game of the season — a highly anticipated matchup between the New York Jets and Buffalo Bills. It also marks a seminal moment in the oft-fraught relationship between pay-TV providers and entertainment companies, which have been at loggerheads in recent years as the continued rise of streaming upended their respective businesses.

Disney and Charter released a joint press release that included the details:

Among the key deal points:

  • In the coming months, the Disney+ Basic ad-supported offering will be provided to customers who purchase the Spectrum TV Select package, as part of a wholesale arrangement;
  • The ESPN flagship direct-to-consumer service will be made available to Spectrum TV Select subscribers upon launch; and
  • Charter will maintain flexibility to offer a range of video packages at varying price points based upon different customers’ viewing preferences.

Charter also will use its significant distribution capabilities to offer Disney’s direct-to-consumer services to all of its customers – in particular its large broadband-only customer base – for purchase at retail rates. These include Disney+, Hulu and ESPN+, as well as The Disney Bundle.

Effective immediately, Spectrum TV will provide its customers widespread access to a more curated lineup of 19 networks from The Walt Disney Company. Spectrum will continue to carry the ABC Owned Television Stations, Disney Channel, FX and the Nat Geo Channel, in addition to the full ESPN network suite. Networks that will no longer be included in Spectrum TV video packages are Baby TV, Disney Junior, Disney XD, Freeform, FXM, FXX, Nat Geo Wild, and Nat Geo Mundo.

To preserve all these valuable business models, the parties also have renewed their commitment to lead the industry in mitigating the effects of unauthorized password sharing.

The biggest question here is the third bullet point: Charter would like to offer bundles that don’t include ESPN, but it’s notable that the press release says “maintain flexibility”, as opposed to “gain flexibility”; that lines up with something ESPN chairman Jimmy Pitaro told the Hollywood Reporter:

“The first [priority] was protecting the traditional business model, one that’s been very, very good to us and continues to be good to us,” Pitaro adds. “And we were able to do that, we secured commitments that were very strong in terms of rates and minimum penetration.”

I’m going to assume that this means that Spectrum will continue to be contractually compelled to have ESPN in 70%~80% of its TV packages (that’s why it’s hard to even find information about the company’s TV Basic and TV Essentials packages on its marketing pages); that leaves the question of who won, and the answer depends on your perspective: are you asking about this specific stand-off, the overall future of TV, or the entire arc of video?

UPDATE: The minimum penetration is 85%.

Current Standoff — Winner: Charter

Charter is the unequivocal winner of this standoff. Indeed, the agreement details above completely validate my argument last week that ESPN no longer has the same leverage it enjoyed for decades. Remember, Charter was willing to meet Disney’s demand for higher ESPN affiliate fees; what Charter wanted was all of Disney’s non-sports content too. That used to exist on Disney’s cable channels, but Disney — along with the rest of Hollywood — had broken the bundle by putting all of its best content on its streaming services. Now the ad-supported Disney+ will be a part of the cable bundle as well, along with the future ESPN streaming service.

Disney did, of course, get its rate increase and minimum penetration guarantees, and will charge for Disney+; that charge, though, is balanced out by the elimination of the channels that Disney cannibalized from the bundle.

More important is the fact that Disney has been forced to give up its attempt at double-dipping: no longer can the company get paid by Charter for channels and charge subscribers directly for what is generally the same general entertainment content. That was what Charter wanted, and Disney, lacking leverage and facing the reality of massive sports rights fees that presumed the presence of Charter’s millions of TV subscribers, gave in.

Future of TV — Winner: Disney

Here is perhaps the biggest surprise in this deal: I actually think it is Disney that is the bigger winner when it comes to the future of TV. Note this paragraph in the press release:

Charter will also use its significant distribution capabilities to offer Disney’s direct-to-consumer services to all its customers – in particular its large broadband-only customer base – for purchase at retail rates. These include Disney+, Hulu and ESPN+, as well as The Disney Bundle.

I wrote extensively about the go-to-market capabilities of cable companies and why they were well-positioned to bundle streaming services last year in Cable’s Last Laugh; I will refrain from quoting half of the piece, and briefly summarize:

  • First, Disney, along with every other streaming service, needs help improving their go-to-market efficiency; in this there is no better asset than the cable companies’ existing go-to-market machines.
  • Second, Disney, along with every other streaming service, needs help lowering churn. When you are a standalone streaming service the only way to stop churn is by continually producing new must-see content, which is extremely expensive. It is much easier if you are part of a bundle and sharing the burden of generating new content with other companies.
  • Third, Disney, along with every other streaming service, has come to realize that their greatest growth opportunity is in advertising. A profitable advertising business, though, depends on scale; the fact that Disney has just quadrupled its ad-supported Disney+ base is a big deal!

It’s obvious, of course, that a stronger bundle is better for Disney’s existing cable channels, particularly ESPN; what should also be clear is that a stronger bundle is better for Disney’s streaming services as well, and now Disney is committed to building exactly that alongside Charter and, inevitably over the next several years, every other pay-TV provider.

This is why Disney is the long-term winner: the obstacle to the company doing the right thing for the long-term health of their business was not Charter, it was Disney’s own management, and Charter did the company the tremendous favor of forcing it to give up an unsustainable double-dipping strategy and take a step into a future of re-bundling.

Charter, meanwhile, knows better than anyone the value of bundles: the more services it can tie into a single billing statement the stickier their offering is for end users. Yes, the company may have been willing to give up video, but it is stronger for having it.

The Arc of Video — Winner: Consumers

All that noted, both Charter and Disney emerge from the last decade weaker than they were before. Disney, along with the rest of Hollywood, killed the golden goose that was 90% of households subscribed to cable. No matter how successful Disney+ or an over-the-top ESPN streaming service becomes it will never be as profitable as effectively charging a tax on every household in America.

Charter is worse off as well: yes, the company had leverage over Disney in this negotiation, but that was a function of no longer caring whether or not it carried ESPN, not because the alternative was better. Indeed, Charter’s strategy of directing unhappy customers to Fubo was a necessary negotiating ploy that carried long-term risks: once customers are accustomed to getting their sports and news from an app it quickly becomes apparent that that app can be accessed over any broadband provider, including fiber and fixed wireless. As I noted above, Charter knows the value of bundles better than anyone, and this new bundle is much weaker than the old one.

The big winners, though, are consumers, on multiple levels:

  • First, consumers will soon have the option to get nearly all of their entertainment on an a la carte basis, particularly once the ESPN streaming service launches, even as they have access to a bundle that includes nearly all of their entertainment for a lower price than if they subscribed individually.
  • Second, consumers will be able to access general entertainment on-demand, and a far greater range of sports thanks to the effectively infinite number of channels enabled by streaming.
  • Third, consumers will be able to get their general entertainment ad-free if they are willing to pay (this is a win for the entertainment companies as well, who will gain the opportunity to segment their customer base based on their willingness to pay for an ad-free experience).

The biggest win of all, though, comes at Charter’s expense specifically: as noted above, the loosening of the TV part of the bundle will make it easier to change broadband providers. That means that Charter will have to compete based on the quality of its broadband, which has fallen behind fiber over the last several years. Charter has announced plans to rectify that, pledging to spend $5.5 billion over the next three years to move its entire network to DOCSIS 4.0; other cable carriers are plotting similar upgrades. Meanwhile, Charter has been very aggressive in pushing its mobile service, significantly undercutting the big phone carriers in price, particularly as part of a bundle.

This is great news for consumers, and redolent of what happened with the iPhone. When the consumer point of contact changed from a carrier-controlled interface to an Apple-controlled one, the only alternative for phone carriers was to compete on their network quality; that was bad for profitability but great for consumers, both in terms of price and quality. I would expect a similar effect as the consumer point of contact for TV continues to change from a cable box to apps: infrastructure providers like Charter will have to compete by building infrastructure, and that’s a good thing.

Other Winners and Losers

Tech is another big winner of this fight, which shouldn’t be a surprise: big tech is so dominant in part because it provides so much consumer surplus in its markets; a market where consumers are winning is probably one where tech is as well. In this case video is becoming an app game, and while Charter and the other pay-TV providers have useful go-to-market channels, tech is the king of distribution and customer acquisition.

To that end, what cable can do for streamers is already being done by Amazon, Apple, Roku, etc.; the latest entrant is YouTube, which is using NFL Sunday Ticket to launch Primetime Channels, a streaming marketplace designed to sell services like Disney+ (for an ongoing commission, of course). YouTube, though, can pair that offering with YouTube TV, which means it has everything that Spectrum has; in this regard the fact that YouTube has already significantly increased the value of Sunday Ticket through better technology should make competitors nervous.

The second big winner is Fox. Fox sold off its 21st Century division to Disney to focus on news and sports; Fox News charges the second-highest affiliate fees amongst cable channels, and Fox has invested heavily in sports rights that run across the Fox broadcast network (which gets large retransmission fees from cable companies), FS1, and networks like the Big Ten Network. The cable bundle sticking together is existential for Fox, and it looks like that is going to happen — indeed, Fox’s live offerings are now going to be re-bundled with 21st Century content streamed by Disney.

Fox’s fate, meanwhile, gets to why sports leagues are big winners as well. Had the bundle fallen apart, the NBA, which is in the midst of negotiating a new rights deal, would have been in big trouble; now that it has a future, ESPN can bid more confidently. At the same time, now that everything is becoming an app, including traditional TV, the motivation for tech companies to bid in order to secure their marketplaces as the ultimate winners is higher as well.


One final comment about the significance of this deal.

There is a certain flavor of detached cynicism that is often the default response to news; examples abounded yesterday after this deal was announced:

A cynical response to the Charter-Disney deal that is wrong

Most of the time this response works well: the status quo is a powerful thing, and if your goal is being right more often than not, then it is always safer to be skeptical that things are different this time.

In this case, though, I think Sherman has it wrong: cable TV as we know it ended several years ago with The Great Unbundling. The significance of the just-announced deal between Disney and Charter is that The Great Re-bundling has begun.

The Rise and Fall of ESPN’s Leverage

This Article is available as a video essay on YouTube


On December 12, 1975, RCA Corporation launched its Satcom I communications satellite; the primary purpose was to provide long-distance telephone service between Alaska and the continental U.S. RCA had hopes, though, that there might be new uses for its capacity; to that end the company had listed for sale a 24-hour transponder that covered the entire United States, only to discontinue the offering after failing to find a single buyer.

Three years later Bill Rasmussen, the communications manager for the Hartford Whalers, was let go from his job; he had the idea of doing the same coverage he did for the team, but independently, along with other Connecticut sports, leveraging the then-expanding cable access TV facilities in Connecticut. These facilities existed to capture broadcast signals from New York and Boston using large antennas and deliver them to people’s houses; the cables, though, had capacity to carry more channels at basically zero cost, including Rasmussen’s proposed Connecticut sports network.

It was in the course of canvassing Connecticut cable providers that Rasmussen was introduced to the concept of satellite communications, and to Al Parinello, a manager at RCA. At first Rasmussen pitched his Connecticut sports network idea, and Parinello was confused: satellites covered the entire country, so why was Rasmussen only talking about a single state? Parinello told James Andrew Miller in Those Guys Have All The Fun:

I can still remember the conversation. Bill said, “Let me get this straight. You mean to tell me, for no extra money — for no extra money! — we could take this signal and beam it anywhere in the country?” And I said, “That’s right.” And then he asked again, “Anywhere in the country?” And I said, “Anywhere.” I remember we went back and forth like this a couple times. Bill and Scott were looking at each other, and they might have been getting sexually excited, I’m not sure. But I can tell that they were very, very excited.

It was in the course of that conversation that Parinello mentioned the unbought 24-hour transponder, which would let Rasmussen send a signal around the entire United States for less than it would cost him to buy access from those Connecticut cable companies; he bought it the next day, and only then set out to create what would become ESPN.

In other words, the very idea for ESPN sprung from:

  1. The fact that RCA had invested massive capital costs in the Satcom I satellite and thus:
  2. Was selling access to that satellite at a relatively low price, given that said access had zero marginal costs, which meant:
  3. Rasmussen could leverage that access to reach every home in America, or at least every cable operator, for an even lower price than it cost to reach only the state of Connecticut.

Massive fixed costs resulting in zero distribution costs and massive scalability on a platform that is inherently indifferent to the data it is distributing might sound familiar: it’s the same economic forces undergirding the Internet, and it speaks to those forces’ power that while they may have made ESPN in the first place, they threaten to destroy it in the long run.

ESPN and the Advent of Affiliate Fees

The first ESPN broadcast was a year later, on September 7, 1979; in the intervening time Rasmussen had made a deal with the NCAA, which oversaw a host of untelevised sports, to televise the early rounds of the men’s basketball tournament along with several other less popular sports. The other important deal was with Anheuser-Busch, which signed an advertising contract for $1.3 million. The idea was to convince cable distributors around the country to pick up the free ESPN signal, and to make up the cost with advertising; 1.4 million homes had access to that first broadcast.

In another foreshadowing of the Internet, ESPN soon realized that providing ongoing content monetized with nothing but advertising was good for growth but bad for actually making money; a year later the company reached 6 million homes and had a new deal with Anheuser-Busch that didn’t come close to covering its costs. Rasmussen was also out, as new management sought to rework its deal with the NCAA and, most importantly, the cable operators.

Then, in 1982, CBS Cable failed, leading Wall Street to question the whole business model; this didn’t affect ESPN, which was still mostly owned by Getty Oil, with ABC as a new investor and partner, but it did affect the cable companies, who saw their stocks plummet. The last thing they needed was for ESPN to go out of business too; Roger Werner, ESPN’s then CEO, told Miller:

We went to the market with this sort of survival pitch essentially as follows: If you come in voluntarily and do a new deal with us, we’ll start your rate at four cents in 1983 or ’84 and then we’ll go to six cents the next year, then eight cents. Either rip up the old contract and have some protection for whatever the term of your new affiliation agreement is going to be, or pay the prevailing rate when your old deal expires. There was the specter that if we were still around—and we intended to be around—we’d be a much more expensive service…

Essentially we were saying, guys, if you’re not interested in paying a fee and you’re really not interested in stepping up to the plate in the near term, tell us now and we’ll pull the plug. Nobody really wanted to deal with the idea that they were going to be paying for a product that had been free, but actually my recollection of this is that it was very stress-filled, it was very contentious.

It worked. Suddenly ESPN had two business models: advertising and a per-subscriber fee paid by cable operators for every subscriber, whether or not they watched ESPN. Andy Brilliant, the then-general counsel, told Miller:

At the end of the day, they blinked and agreed to pay us a dime per household. We breathed a massive sigh of relief. It was the first time we actually received validation that our service was worth something to the cable operators. I think that really put us on the map for good.

It also changed how ESPN thought about programming. Then-President of ESPN Bill Grimes told Miller:

This was, like, ’83; at that time we had boxing one night and skiing, tennis, and a whole bunch of other stuff on the schedule. We were talking one day about the fact that there was a lot of college basketball becoming available. I said, “You know, we could get basketball six nights a week. Our weekly ratings in prime would really go up.” But Roger [Werner] said, “That’s true, we could probably get a better rating. But they’re only numbers. We’re now in the business of subscriber fees. So what we want is as diverse programming as possible. Even if a program like skiing or auto racing gets a lower rating, there are people who will never watch a basketball game. So we should now think a little bit differently.” This was totally contrary to what I had grown up with in the business — rating, rating, rating. Get the highest ratings we can get. But Roger was right. We didn’t want all our ratings from one thing, because it’s only those hundred people who watch the skiing event that’ll yell like hell if the cable operators ever do decide to drop ESPN. His belief that sacrificing a little bit of ratings to have greater variety was going to create more rabid fans of ESPN was absolutely right.

Werner was right: ESPN could raise its affiliate fees, and cable operators that tried to drop the channel in protest were overrun with complaints, quickly adding it back. By 1986 ESPN was charging around 27 cents per subscriber, and then it signed a deal with the NFL, adding a 9 cent surcharge to its fees; cable operators could choose to not show the games (and avoid the surcharge), but within weeks nearly every cable operator realized their customers would not tolerate not having access to the NFL. George Bodenheimer, who would later become President of ESPN, told Miller this anecdote about the surcharge:

We set a deadline and we told everybody there was a benefit to committing to us then, but those who didn’t sign by midnight of the deadline date would pay a higher price. I remember pleading with one particular cable operator who was my account who said he wasn’t going to agree to sign on. His name was Leonard Tow.

Tow was in the process of building what is now known as Frontier Communications; Grimes picked up the story:

Leonard comes in, and you know what the first thing he says about the deal was? “We can’t afford to do this.” I said, “People not seeing the games aren’t going to like it.” Leonard said, “I know football’s popular, but we’re already paying you guys a subscriber fee. We’ll just put on some other local programming the night of the game.” I reminded him that if he changed his mind after tomorrow, he would have to pay a 20 percent incremental fee, a premium, but he just kept saying nope. On the way out I said, “Leonard, look, we’re really successful now and we’re going to be more successful in the future. It would be awful not having you a part of this, but I really believe you’re going to wind up changing your mind. Just wait until people find out you won’t have the games.” He disagreed and we said good-bye. One week later, he called and signed on. And, oh yeah, he paid the extra 20 percent.

ESPN paid $153 million over three years for those NFL rights; the first broadcast reached 45 million homes, earning the network an incremental $4.05 million/month, just about enough to cover the NFL rights. What was more important is that the NFL attracted new subscribers who paid ESPN’s full fees, which amounted to over $12 million a month. Moreover, ESPN also got rights from the NFL for unlimited access to highlights: that fueled studio shows like NFL PrimeTime and SportsCenter that cost very little to produce, yet both attracted large audiences (for advertising), and made the NFL and other sports even more popular. The flywheel was fully engaged.
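
Those figures are straightforward to sanity-check; here is a minimal back-of-the-envelope sketch, mine rather than anything from the original reporting, using only the approximate numbers quoted above:

```python
# Back-of-the-envelope check of the ESPN/NFL deal economics described above.
# All inputs are the approximations quoted in the text, not exact figures.

homes = 45_000_000         # homes reached by the first NFL broadcast
nfl_surcharge = 0.09       # 9-cent per-subscriber monthly surcharge
base_fee = 0.27            # ~27 cents per subscriber per month as of 1986
rights_cost = 153_000_000  # three-year NFL rights deal
months = 36                # three years

surcharge_monthly = homes * nfl_surcharge
print(f"Surcharge: ${surcharge_monthly / 1e6:.2f}M/month, "
      f"${surcharge_monthly * months / 1e6:.0f}M over three years, "
      f"vs. ${rights_cost / 1e6:.0f}M in rights fees")
# -> $4.05M/month and ~$146M over the deal: "just about enough" to cover $153M

full_fees_monthly = homes * base_fee
print(f"Full affiliate fees on that base: ${full_fees_monthly / 1e6:.2f}M/month")
# -> ~$12.15M/month, consistent with the "over $12 million a month" figure
```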

Charter vs. Disney

Over the last decade the story of ESPN specifically, Disney more broadly, and cable as a whole has been the slow but steady disintegration of that flywheel, culminating in the current standoff between Charter and Disney. From the Wall Street Journal:

Charter Communications subscribers are caught in the middle of a philosophical fight between the cable giant and Disney, parent company of ESPN, ABC and several other networks. Disney-owned networks on Thursday went dark for customers of Charter’s Spectrum cable systems, which has nearly 15 million video subscribers across the country including the New York and Los Angeles markets. As a result, sports fans who are Charter subscribers are losing access to college football and the U.S. Open. And the National Football League season is about to begin: ESPN’s “Monday Night Football” starts Sept. 11. Other channels no longer available to Charter include ABC-owned TV stations and cable networks FX, Disney Channel, Freeform and National Geographic.

Channels going dark in the midst of an affiliate fee dispute is nothing new: indeed, that was how ESPN managed to extract per-subscriber fees in the first place. And, for 40 years, ESPN usually won, including a standoff with YouTube TV in late 2021; I wrote at the time in an Update:

It appears that Disney decisively won its stand-off with Google; YouTube TV dropped Disney channels for about two days, only to come to an agreement that Disney characterized as “fair terms that are consistent with the market”; this strongly suggests that Google saw sufficient cancellations in that two-day window that it caved on its demands to get a lower rate. This further reaffirms just how powerful the ESPN bundle is (and Disney’s bundle generally).

When Disney went dark on Charter last week, I initially assumed a similar outcome; then came the Charter investor call the next morning, and this slide:

Charter's investor slide about video

The most important sentence is in the light blue box on the far right: “The video product is no longer a key driver of financial performance.” This is the culmination of a 25-year shift in business model for the cable companies: those initial investments in wires in the ground to provide small communities access to big city TV broadcasts turned out to be very well suited to providing broadband Internet access. Remember the lesson of RCA and ESPN’s founding: the digital transmission of information is inherently indifferent to the data being distributed. In the case of cable the initial use case was digital TV signals, but the exact same cable could also carry packets running the TCP/IP protocol.1

Of course for a long time it was very profitable to carry both, along with voice: cable companies offered “triple play” bundles that included TV, Internet, and telephony. Over time the telephony part dropped off, as people used mobile phones exclusively; cable carriers have since moved into the mobile carrier space as well, fueled by profits from TV and broadband Internet. What made the Internet part the most valuable, though, is that the cable companies didn’t need to pay for content: everything was just a packet.

That, though, was also the problem: some of those packets reformed themselves as Netflix video streams, which ate into time spent watching TV. Worse, Netflix’s stock was rising and rising as it acquired ever more customers, much to the chagrin of Hollywood, which felt entitled to those multiples given they were the ones producing the most compelling content. That resulted in the fateful decision to start their own streaming services, impoverishing the TV bundle; Charter’s investor presentation included “The Impoverishment Cycle” created by MoffettNathanson:

"The Impoverishment Cycle" from MoffettNathanson, via Charter

What Disney and all of the rest forgot was the lesson first imparted by Werner at the dawn of affiliate fees: retaining customers means offering content for everyone; in the case of the cable bundle, that meant having compelling programming above-and-beyond sports.

The second lesson Disney forgot was why that NFL deal made sense for ESPN at the time, even though the surcharge ESPN charged cable providers was only projected to barely cover the deal: high end sports deals drove customer demand, but the real money was made on (1) everyone who didn’t care about football and (2) cheap content like SportsCenter. The latter, though, has also been impoverished by the Internet; I noted last year when the Big Ten signed a TV deal that excluded ESPN:

The Big Ten’s exclusion of ESPN really highlights the degree to which social media has supplanted ESPN’s previous tentpole shows like SportsCenter; ESPN used to get discounts on rights deals because to be excluded from SportsCenter meant publicity death. That’s no longer the case.

The former, meanwhile, is a reminder that while ESPN has generally made money from rights deals, particularly for smaller sports that filled the schedule and inspired niche fans to badger the cable companies, the biggest properties — particularly the NFL — have always been cognizant of their worth and willing to extract their full value. Disney, in turn, can only maintain ESPN profitability by passing on those rights fees to cable distributors, who must in turn pass them on to their customers.

The third lesson Disney has forgotten is the most counter-intuitive takeaway of this battle: the worst thing that has happened to the company’s negotiating position is that ESPN is already available on the Internet.

The Phases of Cable TV

The cable TV industry has gone through four distinct phases in terms of competition:

Phase 1: Non-Consumption

The first phase was the time in ESPN’s history I detailed above: burgeoning cable TV services were running cables to every home in America and trying to convince customers to sign-up. In this case their competition was non-consumption: a lot of people didn’t have cable, and the cable companies wanted them to sign up for service. ESPN was a particularly unique asset in this regard thanks to its provision of sports content that wasn’t available elsewhere — indeed, until the NFL deal, most of the content had never been available at all. This certainly led to some bruising fights between ESPN and the cable companies over affiliate fees, but it’s always easier to come to an agreement when the pie is growing.

Phase 2: Satellite

The second phase was the 90s emergence of DirecTV and Dish Network, which offered the same channels as cable TV but via a small satellite dish you could mount on your roof or porch. This was a more involved installation process, but ultimately cheaper thanks to the fact that DirecTV and Dish didn’t need to put an actual cable in the ground. This was also good news for ESPN because now there was an alternative to traditional cable TV: if a cable provider didn’t want to accept higher affiliate fees then ESPN could withhold service, trusting its viewers would punish the cable provider by moving to satellite (which meant they would probably be gone forever).

Phase 3: IPTV

By the 2000s the satellite threat to cable was fading because satellite was TV only: if a customer had both Internet and TV via their cable provider then it was much harder to switch. Remember, though, that it’s all data in the end; thus the 2000s saw the rise of IPTV offerings from traditional telecom providers like AT&T and Verizon. They too saw salvation for their own fading telephony business in providing broadband Internet, but providing a competitive offering to cable meant offering TV as well. And, thanks to the Internet, they could simply provide said TV using the TCP/IP protocol.

The decade that followed was probably the time of maximum ESPN leverage: it was easier for customers to switch from the cable bundle to the telecom bundle than it was to install an extra satellite dish; it’s no surprise that this was the decade when ESPN became increasingly aggressive both in acquiring sports rights and in raising affiliate fees; it was also the peak of ESPN’s relative share of Disney profits.

Phase 4: vMVPDs

The virtual multichannel video programming distributor (vMVPD) era kicked off in 2015 with the launch of Sling TV. This took the IPTV trend in Phase 3 to its logical endpoint: instead of needing a box to display IPTV signals, you could simply use an app. vMVPDs have had a big impact on the landscape in two ways: first, they significantly diminished the cord-cutting trend for years as they captured both cord-cutters and non-consumers, and second, they decimated regional sports networks that had long increased affiliate fees even more aggressively than ESPN. I wrote in What the NBA Can Learn From Formula 1:

There just aren’t that many SuperFans of a single team, yet regional networks cost more than anything outside of ESPN — more in some markets. This worked in a world where everyone got cable by default, but remember that cable is losing far more customers than pay-TV as a whole, thanks to the rise of the aforementioned virtual pay-TV providers. Virtual pay-TV providers don’t have a customer base to defend, or infrastructure costs to leverage: they distribute via the Internet that people already pay for. To that end, they don’t have to carry everything, and regional sports networks were the most obvious thing to drop: this lets virtual pay-TV providers have a lower price than cable by virtue of excluding content that most people don’t want.

Still, this didn’t seem to affect ESPN, as exemplified by the fact they appear to have won their negotiation with YouTube TV in 2021. In fact, though, this dispute with Charter is showing why ESPN may be a loser as well. Go back to the issue of cable customer churn in response to ESPN’s lack of availability; here’s how it manifested in each phase:

  • In Phase 1, a churned customer meant less leverage on expensive buildouts, and pressure from Wall Street.
  • In Phase 2, a churned customer went to the effort of getting satellite and probably never came back.
  • In Phase 3, a churned customer would not just change their TV provider, but also their broadband provider, and remember that broadband was becoming the cable companies’ biggest business.

In Phase 4, though, a churned TV customer is still a broadband customer, because the Internet is a precondition for watching the vMVPD! Sure, a customer could be so incensed that they also change their Internet provider, but that is completely unnecessary and, given the inconvenience involved, highly unlikely.

That means that ESPN, for the first time in its history, has no leverage over the cable companies. Indeed, MoffettNathanson reported that Charter is actively helping customers move to vMVPDs:

For Charter, the uncomfortable truth is that it just doesn’t matter all that much. Yes, they probably do still make some money on video. But not much, and they recognize that linear video is going to be a rapidly declining line of service under even the most optimistic scenarios, so the issue is arguably nothing more than when, not if, video goes away. Charter has already established a referral capability for customers to switch them to YouTube TV or FuboTV (predictably, they haven’t mentioned referring customers to Sling TV or DirecTV Now, and they presumably wouldn’t steer anyone to Hulu Live if the trigger was a dispute with Disney).

Notably, the first NFL Monday Night Football game (ESPN) features two Spectrum-market teams: the New York Jets and the Buffalo Bills. To handle a potential rush of customers anxious about missing the game, Charter is preparing a one-touch QR code that would not only create a new YouTube TV or Fubo subscription, but would also downgrade from a Spectrum video bundle with a single click…Disney may learn the hard way that it’s tough to win a negotiation with a counterparty that has nothing to lose.

This truth may be uncomfortable for Charter; it ought to be sobering for Disney, particularly since the company, along with the rest of Hollywood, was the one responsible for destroying the value of TV to companies like Charter who were built on it.

The Case for Re-Bundling

Once-and-current Disney CEO Bob Iger has been talking a lot recently about ESPN’s inevitable shift to going over-the-top, including stating that he has a particular date in mind; this showdown with Charter and the revelation of ESPN’s dramatic diminishment in its negotiating position is a reminder that declining businesses often don’t have the luxury of dictating their future.

So what does that future look like?

First, it’s very possible — perhaps even likely — that Charter and Disney come to an agreement. As the MoffettNathanson note observes, Charter probably still is making some money on video, and it is also both a customer acquisition tool and a churn mitigation factor for their broadband business, and a part of the modern triple play bundle (with mobile). Disney, meanwhile, along with all of the other Hollywood studios, still needs the substantial amount of cash that they receive from cable TV providers (this is particularly pressing for Disney given that they still have to pay for sports rights). Yes, they also earn money from vMVPDs like YouTube TV, but not every customer will seamlessly transition.

To that end, the company that ought to give here is Disney: according to that Wall Street Journal article Charter is willing to accept the reported $1.50 increase in affiliate fees Disney is demanding if they receive the right to bundle the ad-supported version of Disney+. Charter argues that it is only right that Disney re-add its most valuable entertainment content to the pay-TV bundle, and frankly, I think they have a point.

More important for Disney, though, is that cable TV providers like Charter remain potent go-to-market entities — decades of servicing customers in their homes have meant a massive build-up in everything from stores to local sales to customer support — and that could be very helpful as Disney seeks to acquire more marginal customers. Beyond that, though, I think it is in the long-term interest of the streaming services to be part of a bundle. I wrote last year in Cable’s Last Laugh:

The cable companies are better suited than almost anyone else to rebundle for real. Imagine a “streaming bundle” that includes Netflix, HBO Max, Disney+, Paramount+, Peacock, etc., available for a price that is less than the sum of its parts…Owning the customer may be less important than simply having more customers, particularly if those customers are much less likely to churn. After all, that’s one of the advantages of a bundle: instead of your streaming service needing to produce compelling content every single month, you can work as a team to keep customers on board with the bundle.

What Charter is proposing is a bit different — they want to bundle traditional TV with streaming services — and I get why Disney is resistant: there are a lot of people paying for both traditional TV and Disney+ (and Hulu and ESPN+); giving Charter bundling rights would cannibalize some amount of revenue. Moreover, it would also mean the end of whatever grand plans Disney might have about offering its own bundle, or cutting out the cable companies’ margin once and for all. At some point, though, Disney and everyone else in Hollywood has to wake up to reality; I wrote in Hollywood on Strike:

The broader issue is that the video industry finally seems to be facing what happened to the print and music industry before them: the Internet comes bearing gifts like infinite capacity and free distribution, but those gifts are a poisoned chalice for industries predicated on scarcity. When anyone could publish text, most text-based businesses went from massive profitability to terminal decline; when anyone could distribute music the music industry could only be saved by tech companies like Spotify helping them sell convenience in place of plastic discs.

For the video industry the first step to survival must be to retreat to what they are good at — producing content that isn’t available anywhere else — and getting away from what they are not, i.e. running undifferentiated streaming services with massive direct costs and even larger opportunity ones. Talent, meanwhile, has to realize that they and the studios are not divided by this new paradigm, but jointly threatened: the Internet is bad news for content producers with outsized costs, and long-term sustainability will be that much harder to achieve if the focus is on increasing them.

Re-bundling is better for everyone; it’s Disney’s fault that the entities best-placed to pull that off no longer need it.

Second, for all of the talk about ESPN, it’s worth noting that its content is still valuable — that’s the entire reason this dispute is a big deal. Will anyone care if Charter stops carrying channels from anyone else in Hollywood? And yet, all of those studios are just as dependent on cable TV cashflow, even as many of them have “cheated” to a much greater extent than Disney: Peacock, for example, carries most of NBC’s sports programming, including football, and even put some of the most attractive Olympics programming exclusively on the streaming service. Why on earth should Charter or any other cable provider pay for NBCUniversal channels? Or, more pertinently, if ESPN isn’t available, why would any of the dwindling number of subscribers stay?

The biggest long-term question, though, has to be around sports itself. Sports leagues could extract ever higher rights fees from ESPN because ESPN could extract ever higher affiliate fees from cable TV providers; if the latter is broken then the former is as well. Yes, vMVPDs like YouTube TV will still exist — and be big winners — and Disney still plans an ESPN streaming service. All of those options, though, entail dramatically increased customer choice; leagues like the NBA have shrugged off declining ratings with the certainty that they would, via cable TV subscribers, get paid regardless, but now the choice isn’t just whether to click the remote, but whether to simply click cancel and watch something else. Better to re-bundle sooner rather than later!



  1. Yes, I know I just said “protocol” twice 

Nvidia On the Mountaintop

It was only 11 months ago that I wrote an Article entitled Nvidia In the Valley; the occasion was yet another plummet in their stock price:

Nvidia's current stock price drop

To say that the company has turned things around is, needless to say, an understatement:

Nvidia's latest stock rise

That big jump in May came with Nvidia’s previous earnings report, when the company shocked investors with an incredibly ambitious forecast; this last week Nvidia vastly exceeded those expectations and forecast even bigger growth going forward. From the Wall Street Journal:

Chip maker Nvidia said revenue in its recently completed quarter more than doubled from a year ago, setting a new company record, and projected that surging interest in artificial intelligence is propelling its business faster than expected. Nvidia is at the heart of the boom in artificial intelligence that made it a $1 trillion company this year, and it is forecasting growth that outpaces even the most bullish analyst projections.

Nvidia’s stock, already the top performer in the S&P 500 this year, rose 7.5% following the results, which would be about $87 billion in market value. The company said revenue more than doubled in its fiscal second quarter to about $13.5 billion, far ahead of Wall Street forecasts in a FactSet survey. Even more strikingly, it said revenue in its current quarter would be around $16 billion, besting expectations by about $3.5 billion. Net profit for the company’s second quarter was $6.19 billion, also surpassing forecasts.

The results show a wave of investment in artificial intelligence that began late last year with the arrival of OpenAI’s ChatGPT language-generation tool is gaining steam as companies and governments seek to harness its power in business and everyday life. Many companies see AI as indispensable to their future growth and are making large investments in computing infrastructure to support it.
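A quick sanity check on the quoted figures, using only the numbers in the excerpt (this is back-of-the-envelope arithmetic, not Nvidia’s reported market capitalization):

```python
# Back-of-the-envelope: if a 7.5% rise added ~$87B of market value, the
# implied pre-earnings market cap is the gain divided by the percentage move.
gain_in_billions = 87
pct_rise = 0.075

implied_market_cap = gain_in_billions / pct_rise
print(f"~${implied_market_cap:,.0f}B")  # ~$1,160B, i.e. ~$1.16 trillion
```

That ~$1.16 trillion starting point is consistent with the article’s framing of Nvidia as a company that crossed $1 trillion earlier this year.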

Now the big question on everyone’s mind is if Nvidia is the new Cisco:

Is Nvidia Cisco?

I don’t think so, at least in the near term: there are some fundamental differences between Nvidia and Cisco that are worth teasing out. The bigger question is the long term, and here the comparison might be more apt.

Nvidia and Cisco

The first difference between Nvidia and Cisco is in the above charts: Nvidia already went through a crash, thanks to the double whammy of Ethereum moving to proof-of-stake and the COVID cliff in PC sales; both left Nvidia with huge amounts of inventory it had to write off over the second half of last year. The bright spot for Nvidia was the steady growth of data center revenue, thanks to the increase in machine learning workloads; I included this chart in that Article last fall:

Nvidia's gaming revenue drop

What has happened over the last two quarters is that data center revenue is devouring the rest of the company; here is an updated version of that same chart:

Nvidia's sky-rocketing AI revenue

Here is Nvidia’s revenue mix:

Nvidia's revenue mix

This dramatic shift in Nvidia’s business provides some interesting contrasts to Cisco’s dot-com run-up. First, here was Cisco’s revenue, gross profit, net profit, and stock price in the ten years starting from its 1993 IPO:

Cisco's revenue, profit, and stock price in the 90s

Here is Nvidia’s last ten years:

Nvidia's revenue, profit, and stock price

The first thing to note is the extent to which Nvidia’s crash last year looks similar to Cisco’s dot-com crash: in both cases steady but steep revenue increases initially outpaced the stock price, which eventually overshot just a few quarters before big inventory write-downs led to big decreases in profitability (score one for crypto optimists hopeful that the current doldrums are simply their own dot-com hangover).

Cisco, though, never had a second act like this data center explosion. What is notable is the extent to which Nvidia’s revenue increase is matching the slope of the stock price increase (obviously this is inexact given the different axes); it seems likely that the stock will overshoot revenue growth soon enough, but it hasn’t really happened yet. It’s also worth noting how much more disciplined Nvidia appears to be in terms of below-the-line costs: its net profit is moving in concert with its revenue, unlike Cisco in the 90s; I suspect this is a function of Nvidia being a much larger and more mature company.

Another difference is the nature of Nvidia’s customers: over 50% of the company’s Q2 revenue came from the large cloud service providers, followed by large consumer Internet companies (e.g. Meta). The cloud category does, of course, include the startups that once might have purchased Cisco routers and Sun servers directly, and that now rent capacity (if they can get it); cloud providers, though, monetize their hardware immediately, which is good for Nvidia.

Still, there is an important difference from other cloud workloads: previously a new company or line of business ramped its cloud utilization only with usage, which ought to correlate with customer acquisition, if not revenue. Model training, though, is an up-front cost, not dissimilar to the cost of buying those Sun servers and Cisco routers in the dot-com era; that is cloud revenue that has a much higher likelihood of disappearing if the company in question doesn’t find a market.

This point is relevant to Nvidia given that training is the part of AI where the company is the most dominant, thanks to both its software ecosystem and the ability to operate a huge fleet of Nvidia chips as a single GPU; inference is where Nvidia will first see challenges, and that is also the area of AI that is correlated with usage, and thus more durable from a cloud provider perspective.

Those points about a software ecosystem and hardware scalability are also the biggest reason why Nvidia is different from Cisco. Nvidia has a moat in both, along with a substantial manufacturing advantage thanks to its upfront payments to TSMC over the last several years to secure its own 4nm line (and having the good fortune of asking for more scale at a time when TSMC’s other sources of high performance computing revenue are in a slump). There is certainly a massive incentive for both the cloud providers and large Internet companies to bridge Nvidia’s moats — see AWS’s investments in its own chips, for example, or Meta’s development of and support for PyTorch — but right now Nvidia has a big lead, and the frenzy inspired by ChatGPT is only deepening its installed base, with all of the positive ecosystem effects that entails.

GPU Demand

The biggest challenge facing Nvidia is the one that is ultimately out of their control: what does the final market look like?

Go back to the dot-com era, and the era that preceded it. The advent of computing, first in the form of mainframes and then the PC, digitized information, making it endlessly duplicable. Then came the Internet, which made the marginal cost of distributing that content go to zero (with the caveat that most people had very low bandwidth). This was an obvious business opportunity that plenty of startups jumped all over, even as telecom companies took on the bandwidth problem; Cisco was the beneficiary of both.

The missing element, though, was demand: consistent consumer demand for Internet applications only started to arrive with the advent of broadband connections in the 2000s (thanks in part to a buildout that bankrupted said telecom companies), and then exploded with smartphones a decade later, which made the Internet accessible anytime, anywhere. It was demand that made the router business as big as dot-com investors thought it might be, although by then Cisco had a host of competitors, including large cloud providers who built (and open-sourced) their own.

There are lots of potential starting points to choose for AI: machine learning has obviously been a thing for a while, or you might point to the 2017 invention of the transformer; the release of GPT-3 in 2020 was perhaps akin to the release of the Mosaic web browser, which would make ChatGPT the Netscape IPO. One way to categorize this emergence is to characterize training as being akin to digitization in the previous era, and creation — i.e. inference — as akin to distribution. Once again there are obvious business opportunities that arise from combining the two, and once again startups are jumping all over them, along with the big incumbents.

However you want to make the analogy, what is important to note is that the missing element is the same: demand. ChatGPT took the world by storm, and the use of AI for writing code is both proliferating widely and extremely high leverage. Every SaaS company in tech, meanwhile, is hard at work on an AI strategy, for the benefit of their sales team if nothing else. That is no small thing, and the exploration and implementation of those strategies will use up a lot of Nvidia GPUs over the next few years. The ultimate question, though, is how much of this AI stuff is actually used, and that is ultimately out of Nvidia’s control.

My best guess is that the next several years will be occupied building out the most obvious use cases, particularly in the enterprise; the analogy here is to the 2000s build-out of the web. The question, though, is what will be the analogy to mobile (and the cloud), which exploded demand and led to one of the most profitable decades tech has ever seen? The answer may be an already discarded fad: the metaverse.

A GPU Overhang and the Metaverse

In April 2022, when DALL-E 2 came out, I wrote DALL-E, the Metaverse, and Zero Marginal Content, and highlighted three trends:

  • First, the gaming industry was increasingly about a few AAA games, small indie titles, and the huge sea of mobile; the limiting factor in further development was the astronomical cost of developing high quality assets.
  • Second, social media succeeded by virtue of making content creation free, because users created the content of their own volition.
  • Third, TikTok pointed to a future where every individual not only had their own feed, but also where the provenance of that content didn’t matter.

AI is how those three trends might intersect:

What is fascinating about DALL-E is that it points to a future where these three trends can be combined. DALL-E, at the end of the day, is ultimately a product of human-generated content, just like its GPT-3 cousin. The latter, of course, is about text, while DALL-E is about images. Notice, though, that progression from text to images; it follows that machine learning-generated video is next. This will likely take several years, of course; video is a much more difficult problem, and responsive 3D environments more difficult yet, but this is a path the industry has trod before:

  • Game developers pushed the limits on text, then images, then video, then 3D
  • Social media drives content creation costs to zero first on text, then images, then video
  • Machine learning models can now create text and images for zero marginal cost

In the very long run this points to a metaverse vision that is much less deterministic than your typical video game, yet much richer than what is generated on social media. Imagine environments that are not drawn by artists but rather created by AI: this not only increases the possibilities, but crucially, decreases the costs.

I wrote in the conclusion:

Machine learning generated content is just the next step beyond TikTok: instead of pulling content from anywhere on the network, GPT and DALL-E and other similar models generate new content from content, at zero marginal cost. This is how the economics of the metaverse will ultimately make sense: virtual worlds need virtual content created at virtually zero cost, fully customizable to the individual.

Zero marginal cost is, I should note, aspirational at this point: inference is expensive, both in terms of power and also in terms of the need to pay off all of that money that is showing up on Nvidia’s earnings. It’s possible to imagine a scenario a few years down the line, though, where Nvidia has deployed countless ever more powerful GPUs, and inspired massive competition such that the world’s supply of GPU power far exceeds demand, driving the marginal costs down to the cost of energy (which hopefully will have become cheaper as well); suddenly the idea of making virtual environments on demand won’t seem so far-fetched, opening up entirely new end-user experiences that explode demand in the way that mobile once did.
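To make “the cost of energy” concrete, here is a minimal sketch of that overhang scenario; the power draw, generation time, and electricity price are all illustrative assumptions, not measured figures:

```python
# Hypothetical marginal cost of one generated image once GPU capacity is
# abundant and electricity is the only real remaining cost.
# All three inputs are illustrative assumptions.
GPU_POWER_KW = 0.7       # assume ~700W draw for a high-end data center GPU
SECONDS_PER_IMAGE = 2.0  # assume ~2 seconds of GPU time per image
PRICE_PER_KWH = 0.10     # assume $0.10 per kilowatt-hour

energy_kwh = GPU_POWER_KW * SECONDS_PER_IMAGE / 3600
cost_dollars = energy_kwh * PRICE_PER_KWH
print(f"~${cost_dollars:.6f} per image")  # ~$0.000039, thousandths of a cent
```

Even if those assumptions are off by an order of magnitude, the structural point holds: in an overhang scenario the marginal cost of generated content collapses toward a very small energy bill, which is what makes on-demand virtual environments thinkable.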

The GPU Age

The challenge for Nvidia is that this future isn’t particularly investable; indeed, the idea assumes a capacity overhang at some point, which is not great for the stock price! That, though, is how technology advances, and even if a cliff eventually comes, there is a lot of money to be made in the meantime.

That noted, the biggest short-term question I have is around Nvidia CEO Jensen Huang’s insistence that the current wave of demand is in fact the dawn of what he calls accelerated computing; from the Nvidia earnings call:

I’m reluctant to guess about the future and so I’ll answer the question from the first principle of computer science perspective. It is recognized for some time now that…using general purpose computing at scale is no longer the best way to go forward. It’s too energy costly, it’s too expensive, and the performance of the applications are too slow. And finally, the world has a new way of doing it. It’s called accelerated computing and what kicked it into turbocharge is generative AI. But accelerated computing could be used for all kinds of different applications that’s already in the data center. And by using it, you offload the CPUs. You save a ton of money in order of magnitude, in cost and order of magnitude and energy and the throughput is higher and that’s what the industry is really responding to.

Going forward, the best way to invest in the data center is to divert the capital investment from general purpose computing and focus it on generative AI and accelerated computing. Generative AI provides a new way of generating productivity, a new way of generating new services to offer to your customers, and accelerated computing helps you save money and save power. And the number of applications is, well, tons. Lots of developers, lots of applications, lots of libraries. It’s ready to be deployed.

And so I think the data centers around the world recognize this, that this is the best way to deploy resources, deploy capital going forward for data centers. This is true for the world’s clouds and you’re seeing a whole crop of new GPU-specialized cloud service providers. One of the famous ones is CoreWeave and they’re doing incredibly well. But you’re seeing the regional GPU specialist service providers all over the world now. And it’s because they all recognize the same thing, that the best way to invest their capital going forward is to put it into accelerated computing and generative AI.

My interpretation of Huang’s outlook is that all of these GPUs will be used for a lot of the same activities that are currently run on CPUs; that is certainly a bullish view for Nvidia, because it means the capacity overhang that may come from pursuing generative AI will be back-filled by current cloud computing workloads. And, to be fair, Huang has a point about the power and space limitations of current architectures.

That noted, I’m skeptical: humans — and companies — are lazy, and not only are CPU-based applications easier to develop, they are also mostly already built. I have a hard time seeing which companies are going to go through the time and effort to port things that already run on CPUs to GPUs; at the end of the day, the applications that run in a cloud are determined by the customers who provide the demand for cloud resources, not by cloud providers looking to optimize FLOP/rack.

If GPUs are going to be as big a market as Nvidia’s investors hope, it will be because applications that are only possible with GPUs generate the demand to make it so. I’m confident that time will come; what neither I, nor Huang, nor anyone else can be sure of is when that time will arrive.

I wrote a follow-up to this Article in this Daily Update.

Disney’s Taylor Swift Era

Bill Simmons had a quick aside about Taylor Swift on the most recent episode of his eponymous podcast.

I have never seen anything like the phenomenon around this concert tour, and I have been alive for all the concert tours since the mid-70s…from a cultural standpoint, from a multi-generation standpoint: fathers and daughters, daughters and moms, multiple generations. You have people like my daughter who is 18, who has not even known life without a Taylor Swift song, and then you have people in their 30s who grew up with her, and then you have the moms who are used to listening with the daughters, and then the show itself: I had friends who went to the first show, and I think she played 45 songs — it was 3+ hours. This is like Michael Jordan shit, whatever is happening with her…

She’s sold out 6 straight shows here in Los Angeles; I’ve been here for 21 years I can’t remember anything as important as these Taylor Swift tickets, just being in the building for that. People coming from all parts of California to go, it’s really something. This is the summer of Taylor.

There is much to say about the summer of Taylor that is pertinent to technology and the Internet: start with the desire for communal in-person experiences driven not only by the pandemic, but also the general fracturing of culture inherent in a media landscape where personalized content delivered directly to your personal device is the norm. Then there is the way in which social media acts as a FOMO1 generator: being able to access every moment of every show on social media doesn’t decrease the value of attending the show, but rather increases the desire to obtain a scarce number of tickets to see the spectacle in person. And, it must be said, there is the excellence at play: not only does Swift have an incredible catalog of popular songs, the show itself spares no expense — or exertion on Swift’s part — to give the fans exactly what they were hoping for.

What made the final LA show (which I had the good fortune of attending with my daughter) unique, though, was the announcement of 1989 (Taylor’s Version). It wasn’t exactly a secret that the announcement was coming on 8/9, and obviously Swift’s ongoing project to re-release her old albums is well-documented at this point. What is surprising is just how much people care: Speak Now (Taylor’s Version), which was announced earlier in the tour, hit number one on the Billboard charts, giving Swift more number one albums than any woman in history; it seems inevitable that 1989 (Taylor’s Version) will be lucky number 13 for her career.

Taylor’s Versions

What is striking about the popularity of these re-releases is that it is the latest manifestation of Swift’s insistence that the opportunities for musicians are greater than ever. I had never really listened to Swift’s music when she wrote an editorial in the Wall Street Journal in 2014 entitled For Taylor Swift, the Future of Music is a Love Story:

Where will the music industry be in 20 years, 30 years, 50 years?

Before I tell you my thoughts on the matter, you should know that you’re reading the opinion of an enthusiastic optimist: one of the few living souls in the music industry who still believes that the music industry is not dying…it’s just coming alive.

There are many (many) people who predict the downfall of music sales and the irrelevancy of the album as an economic entity. I am not one of them. In my opinion, the value of an album is, and will continue to be, based on the amount of heart and soul an artist has bled into a body of work, and the financial value that artists (and their labels) place on their music when it goes out into the marketplace. Piracy, file sharing and streaming have shrunk the numbers of paid album sales drastically, and every artist has handled this blow differently.

In recent years, you’ve probably read the articles about major recording artists who have decided to practically give their music away, for this promotion or that exclusive deal. My hope for the future, not just in the music industry, but in every young girl I meet…is that they all realize their worth and ask for it.

Music is art, and art is important and rare. Important, rare things are valuable. Valuable things should be paid for. It’s my opinion that music should not be free, and my prediction is that individual artists and their labels will someday decide what an album’s price point is. I hope they don’t underestimate themselves or undervalue their art.

I had two reactions to this editorial. The first echoed Nilay Patel’s take that Taylor Swift doesn’t understand supply and demand:

This might make sense if you’re Taylor Swift and your enormous army of fans will pre-order anything you tell them to, but the most important lesson of the Internet music revolution is that the vast majority of consumers actually reward convenience. That’s why the iPod was a huge hit even though digitally-compressed music sounded terrible at the time, and it’s why teenagers today get most of their music on YouTube, even though YouTube sounds worse still. It’s also why the album is dead: you can’t sell a handful of singles and some okayish filler songs to people for $10 or $15 or $25 anymore, because convenient Internet music distribution has utterly destroyed the need to bundle everything together. You can just Google the singles and listen to them for free…

The single hardest economic problem posed by the internet is the end of scarcity. Even just 10 years ago, most people experienced a one-to-one relationship between creative works and the physical objects they were delivered on: your music came on CDs, your movies came on DVDs, and your news came on printed magazines and newspapers. Since there’s a scarce number of these objects in the world, it’s easy to buy and sell them, because their prices will follow the laws of supply and demand: limited-edition vinyl records will be more expensive than regular CDs, because there are simply fewer of them. If you wanted a CD full of songs in 1995, you went to a store and paid for them, because there was essentially no other way to get those songs. Even if you wanted to steal the music, you had to pay some price: you needed to have a friend with the right CD, and you needed time and blank CDs to make a copy.

But on the internet, there’s no scarcity: there’s an endless amount of everything available to everyone. The laws of supply and demand don’t work terribly well when there’s infinite supply. Swift is right that “important, rare things are valuable,” but she’s failed to understand that the idea of rarity simply doesn’t exist in the digital marketplace.

All fair points! Swift, though, persisted: later that year she pulled her music from Spotify, which meant fans had to actually buy the original 1989 when it came out in October (it was my first Swift purchase); what struck me about this move, in conjunction with the editorial, was that this was, from a certain perspective, a gift she was giving her fans. From an Update in November 2014:

Swift has long since proved herself a master at building a connection with fans, engaging on social media, customizing her concerts, and spilling her secrets through her songs. And, for 1989 she took things to a new level. What I think Swift has realized, though, is that reaching out to your fans is not enough: it has to be reciprocal: what selling an album for actual cash money does is give people a way to commit. They are quite literally giving Swift something valuable in exchange for her work…

The problem with Spotify is that at a very fundamental level it treats music as a commodity. You can’t choose where your $10/month goes based on the emotional impact of a song; the hot new hit by the artist you’ll never hear from again is treated exactly the same as the artist that was with you when you needed him or her the most. It cheapens the connection, not by withholding money per se, but by denying the commitment inherent in an explicit purchase…

And so, Swift has put up something of a door: access to her costs fans at least $12.99. Here’s the thing about doors though: while they keep people out, they also keep them in. Simply by virtue of having paid money directly to Swift for Swift’s music that album is already more meaningful to its 1 million buyers than the exact same music would have been were it listened to via Spotify’s all-you-can-eat subscription (or free with ads!), no matter how much money Ek and team pay out.

A few months later, though, 1989 was on Spotify; every album since then has been as well, and 1989 (Taylor’s Version) will be too. Something funny will happen, though, when 1989 (Taylor’s Version) is released: streams of 1989 will plummet, while 1989 (Taylor’s Version) shoots to the top of the charts; the realities of music are such that not even Swift can hold out on streaming,2 but she has still given her fans the capability of reciprocating their relationship by making the conscious decision to only listen to (Taylor’s Version)s.

Still, the value of a streaming choice doesn’t go that far; Patel’s economic argument was right, after all. The real money for Swift comes from the concerts, with the Eras Tour set to be the first to gross $1 billion; physical scarcity is still the best way for a creator to capture value.

Disney’s Earnings

Last week Disney reported earnings, including a 23% decline in profit in its traditional TV business; that was more than made up for by an 11% increase in profit in its parks, experiences, and products segment, which accounted for 68% of Disney’s profit. Disney’s theme parks and cruises have always been an essential part of the Disney model; from a 2017 Update:

The answer reminded me of this famous chart Walt Disney created to show how the Disney business worked:

Walt Disney's Disney Map

At the center, of course, are the Disney Studios, and rightly so. Not only does differentiated content drive movie theater revenue, it creates the universes and characters that earn TV licensing revenue, music recording revenue, and merchandise sales.

What has always made Disney unique, though, is Disneyland: there the differentiated content comes to life, and, given the lack of an arrow, I suspect not even Walt Disney himself appreciated the extent to which theme parks and the connection with the customer they engendered drive the rest of the business. “Disney” is just as much of a brand as is Mickey Mouse or Buzz Lightyear, with stores, a cable channel, and a reason to watch a movie even if you know nothing about it.

It was the theme park angle that made me excited about Disney+; I wrote in 2019:

This is the only appropriate context in which to think about Disney+. While obviously Disney+ will compete with Netflix for consumer attention, the goals of the two services are very different: for Netflix, streaming is its entire business, the sole driver of revenue and profit. Disney, meanwhile, obviously plans for Disney+ to be profitable — the company projects that the service will achieve profitability in 2024, and that includes transfer payments to Disney’s studios — but the larger project is Disney itself.

By controlling distribution of its content and going direct-to-consumer, Disney can deepen its already strong connections with customers in a way that benefits all parts of the business: movies can beget original content on Disney+ which begets new attractions at theme parks which begets merchandising opportunities which begets new movies, all building on each other like a cinematic universe in real life. Indeed, it is a testament to just how lucrative the traditional TV model is that it took so long for Disney to shift to this approach: it is a far better fit for their business in the long run than simply spreading content around to the highest bidder.

I think, in retrospect, that this was an example of my falling in love with elegance and spending insufficient time in spreadsheets: Walt Disney’s chart may have been a satisfying business model, but the reality of Disney’s TV business is that it was scalable in a way that the Disney chart never could be. The beauty of the cable bundle is that nearly every household in America paid for it every single month, regardless of whether or not Disney had a hit TV show or a must-watch sporting event; thanks to its suite of channels, anchored by ESPN, Disney received a big chunk of that money, and it grew like clockwork. In that world Walt Disney’s model was a nice side business to the real money-maker — that’s a pretty good reason for Disney to have held on to that model as long as they did.

In fact, you can very much make the case that Disney and all of its peers ought to have held on longer: yes, streaming — i.e. Netflix — leveraged the Internet for distribution of video, but that didn’t mean that Disney and Time Warner and Paramount and all of the rest had to. Those Netflix multiples, though, which far exceeded anyone else’s in Hollywood, were too tempting, and soon enough everyone was putting their best content on streaming services, leaving the cable bundle to wither.

Disney’s Taylor Swift Era

The end result is the inversion you see in Disney’s recent results. Disney is, from this point forward, not much different than Taylor Swift: sure, there is money to be made (hopefully) in areas like streaming, but the real durable value and outsized profits will come from real life experiences. This is, to be sure, a good business, but it has its limits: it is remarkable that Swift performed six shows in seven nights in Los Angeles, but it was still only six shows; concerts don’t scale like CD sales used to. Disney, similarly, only has so many theme parks, which only accommodate so many people, and operating those theme parks takes significant ongoing resources.

It’s interesting, then, to observe how differently Swift and Disney are perceived at this moment in time: I opened with Simmons analogizing Swift to Jordan, and I think it’s a fair comparison; the reality of the fractured world wrought by the Internet is that any star who can emerge from the noise becomes bigger than anything we have seen before, from hunger for a unifying experience if nothing else, and admission to that experience becomes valuable through unprecedented demand combined with physically limited supply.

That limitation, though, implies a lack of scale, which means that Swift is as big as she will ever be; that’s ok, because it’s bigger than anyone ever has been. Disney, meanwhile, may have its own physical experiences, made valuable by their scarcity, but those will never be as valuable as owning distribution was. The best thing Iger can do now is move the company on from the heights it once reached; maybe someday Disney and its investors will forget that those outsized profits ever existed.

I wrote a follow-up to this Article in this Daily Update.


  1. Fear Of Missing Out 

  2. It is worth noting that Swift’s fans still bought over 1 million physical copies of her most recent album

Hollywood on Strike

From the New York Times:

The Screen Actors Guild today announced that it will strike at 12:01 A.M. Friday against the multimillion-dollar television entertainment production business. Union and employer sources held out very little hope that the work stoppage might be averted. Pessimism was heightened by the fact that representatives of both sides could see no grounds, on the basis of unofficial discussions held in recent days, for the resumption of formal contract negotiations, which were broken off July 13.

John L. Dales, national executive secretary of the guild, said the principal point at issue was the refusal of the producers “to agree to make any residual payment whatsoever to actors for the second run of a video film.” Under terms of the original contract negotiated three years ago, the performers receive additional pay on a percentage basis of salary minimums, starting with the third showing of a film and continuing through the sixth. That contract expired Wednesday.

Oh, I’m sorry — that’s an article from 63 years ago.

The 1960 edition of the New York Times about the Hollywood strike

Here’s the one in the New York Times from last week:

The Hollywood actors’ union approved a strike on Thursday for the first time in 43 years, bringing the $134 billion American movie and television business to a halt over anger about pay and fears of a tech-dominated future. The leaders of SAG-AFTRA, the union representing 160,000 television and movie actors, announced the strike after negotiations with studios over a new contract collapsed, with streaming services and artificial intelligence at the center of the standoff. On Friday, the actors will join screenwriters, who walked off the job in May, on picket lines in New York, Los Angeles and the dozens of other American cities where scripted shows and movies are made.

Actors and screenwriters had not been on strike at the same time since 1960, when Marilyn Monroe was still starring in films and Ronald Reagan was the head of the actors’ union. Dual strikes pit more than 170,000 workers against old-line studios like Disney, Universal, Sony and Paramount, as well as tech juggernauts like Netflix, Amazon and Apple. Many of the actors’ demands mirror those of the writers, who belong to the Writers Guild of America. Both unions say they are trying to ensure living wages for workaday members, in particular those making movies or television shows for streaming services.

The reason to start with 1960 is that that was the last time actors and writers were on strike at the same time; the primary driver of that unrest was the rise of television. As for the last actors strike, in 1980? That was about the rise of home video. This leads to the first takeaway: the most important driver of unrest between studios and talent has always been technological paradigm shifts, and this time is no different.

In this case it is the rise of streaming that strikes me as more consequential than AI, but to dispense with the latter first: it seems to me that writers are relatively more threatened by AI; it’s much more plausible today to imagine using an LLM to generate a B-movie script or filler television than it is to imagine AI replicating actors (particularly since actors licensing their likenesses may in fact turn out to be very lucrative).

What is worth noting about AI is that those concerns are in-line with traditional Hollywood talent concerns when it comes to new technology: both unions have in strikes past been focused on preserving union jobs in the face of technological replacements. That is what led to the rise of residuals, which were at the core of the 1960 strike: if studios were showing movies on TV, then that meant they were occupying scarce time with content that actors weren’t getting paid for, which is to say that the actors in the movie that was being shown were competing with themselves; thus the union demand that they be paid for it.

This by extension is why I think the AI questions in this debate will probably be easier to solve: there is already a paradigm in place in Hollywood to make sure that the talent gets a cut of every airing of a piece of entertainment, and again, while you can envision an LLM writing a script, I wouldn’t be surprised if Hollywood executives primarily see the issue as something to give on while getting concessions on the more consequential issue. That, as I noted above, is streaming, and the reason why this negotiation is probably going to be very difficult is that it is exceptionally hard to divide up a pie that is shriveling before one’s eyes.

Scarcity and Residuals

There was a very interesting answer in this Slate interview with Wayne Federman, who wrote a piece in The Atlantic a decade ago about the 1960 strike:

For this strike, before we even negotiated, we already had strike authorization from the membership. In 1960, they didn’t. The 1960 strike was really about one issue, and this strike is about multiple issues. This is about how residuals, specifically for streaming entertainment, are being calculated. Those numbers are … not really released. It’s not like a Nielsen rating. Sometimes you’ll hear something like, Oh, 1.2 million minutes of Squid Game — what does that mean? Does that mean that many people watched one minute of it, or does that mean people watched it a number of times, or … ? I don’t know why it’s all proprietary for these streamers, but that’s just where we’re at. We want a little more transparency in that, [to consider] that if we’re on a hit show, is that paid differently than a [nonhit] show? And then there’s this A.I. situation.

You said that the studios were sort of giving up these residuals through clenched teeth. Do you think that their position on that has changed?

That’s the amazing outcome of what Ronald Reagan — and other negotiators at the time — was able to do: In a way, they were changing the paradigm of how Hollywood money is divided up. They were striking for an idea: that we deserve this for A, B, C, and D reasons. You get residuals now. Not everyone; editors don’t get residuals, but directors do. I get residuals for streaming services, but they’re just not the same. They’re not as good as cable, and they’re not as good as network. When you look at the check, you’re like, OK, this doesn’t seem like a lot. But, again, you don’t know how many people are watching it.

And also, I think when we first started looking at streaming services, we were like, We want these services to thrive so that there’ll be more work for actors. So I think that’s why we were not militant about residuals for these new platforms. No one is saying, Oh, we paid you to be on this Netflix show, and we never have to pay you a residual. The problem is that it’s not as hearty as it used to be for these other mediums. But the idea of residuals … is not going away, unless [the companies] decide to try to break the unions and just use nonunion actors and not pay residuals.

One of the ways Netflix broke into Hollywood was by eschewing back-end deals for top talent and paying more upfront; this removed the potential for huge upside if a show was a massive hit, but it guaranteed that talent got paid, even if a show wasn’t a success. Over time Netflix and other streamers have started to pay back-end bonuses in addition to residuals, but as Federman notes, the lack of transparency into how exactly those residuals are calculated is a big sticking point.

The most interesting paragraph to me, though, is the last one: at the risk of taking Federman’s word for it, the sentiment that unions saw streaming services as a net positive rings true, and aligns with the previous item. The entire concept of residuals arose from the idea that talent shouldn’t have to compete with itself when it came to re-running a movie or show; the key thing to note, though, is that this concern made sense in a world where there was scarce distribution. To go back to the 1960s, there were only three networks: that meant there were only 504 hours in a week (three networks times 168 hours each) to air content on television; airing a two-hour movie reduced the available space for talent to 502 hours.

Streaming, though, is purely additive. The Internet makes distribution effectively free, which means there are an infinite number of hours available for talent to monetize. This does, it’s worth noting, render talent’s original argument for residuals moot; if anything Netflix had it right when it temporarily shifted the model to simply paying up front. In fact, Federman unwittingly makes this point when he describes the mindset of studio heads in 1960:

Let’s say you get hired to act in a film. Basically, the person hiring you is taking the risk. They’re paying you your salary, and in return, they own that product. So, what SAG was saying was, You can play that film anywhere in the world, you can play it in Italy, you can have it dubbed — but when you put it on television, that’s a new revenue stream. Also, the argument was that that is taking work away from other actors. Because if you have this movie on, that time slot is no longer available for working actors.

On the other side, the head of 20th Century Fox [Spyros Skouras], his argument was very simple: Why should I pay you twice for the same job? I’ve already paid you for this job. I own this at this point. And that was basically the position of all of these studio owners. At the beginning of the strike, they were like, We’re not even going to talk about residuals. It’s a nonstarter. And Reagan said, We’re “trying to negotiate for the right to negotiate.” That’s how far apart they were. It was so foreign to these guys that they would have to share their revenues with actors after they’d already paid the actors. Ultimately, one studio, Universal Pictures—believe it or not, the head of Universal, a guy named Lew Wasserman, used to be Ronald Reagan’s agent—was the first domino that dropped. I think Lew Wasserman thought it was inevitable anyway: If it wasn’t going to happen in 1960, it might happen in ’65. And then one after another [gave in], until, I think, the 20th Century guy was the last guy, who was like, All right, I’ll give it, I’ll pay you again for something I’ve already paid you for, through clenched teeth.

Wasserman was right: studios were going to have to share the scarce resource, which was time on TV, with talent. Again, though, scarcity in terms of distribution is now gone; the only scarce resource on the Internet is consumer time and attention, and commanding that is far more difficult and risky. Look no further than the deteriorating financial condition of most of Hollywood: not only are the studios competing with Netflix and Amazon and Apple, but also with things like YouTube and social media. Indeed, you could very easily make the case that a far more legible labor action would be for the studios to lock out the talent in an attempt to remove residuals completely, given how much more risk any content producer is taking on today.

This angle is, obviously, a non-starter, but it does point at why these negotiations are likely to be so fraught: actors and writers are angling to get a larger share of revenue that they arguably no longer deserve.

The Cost of Streaming

There is an even larger problem, though, which is that studios have — in my estimation — yet to come to grips with the true cost of streaming. To go back to the old model, studios were in the business of making movies or TV shows and then selling them to distributors. Ideally they would sell the same piece of content multiple times, better leveraging the cost of making the content in the first place. Indeed, this was a sticking point in the 1960 strike; from that 1960 New York Times article:

The guild asked for 100 percent payment of minimum salaries for the second showing in a new contract. The producers insisted that the first two runs be covered by the original salary. The producers contend that it is virtually impossible to get sufficient money out of the first showing of a movie produced solely for television to pay off the initial production investment. It is reported that many bank loans are predicated on earnings from the second run.

Content costs a lot to produce up front, but the marginal cost of showing it again is effectively zero; that means the more times you can show a piece of content the more you can spend up front. The number of times you could show it, though, was, as noted above, governed by available distribution; if distribution was scarce then there was an opportunity cost of showing old content, because you couldn’t show something new (which again, was why talent wanted a share of multiple airings).

What is critical to note is that this leverage was best realized by selling to as many distributors as possible. The classic example is the traditional movie window: first you sell a movie to first-run theaters, then to budget theaters, then to hotels and airlines, then to pay-per-view, then to videocassettes/DVDs, then to cable, and finally to broadcast TV. That’s seven distinct opportunities to sell a piece of content. Going straight to streaming, though, collapses seven windows to one, reducing the ability to make money off of a particular piece of content.
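To illustrate the leverage at stake, here is a minimal sketch; the seven windows come from the text above, but the revenue figures are invented purely for illustration:

```python
# Hypothetical revenue for one film across the traditional windows, in
# arbitrary units; the window names come from the text, the numbers do not.
windows = {
    "first-run theaters": 100,
    "budget theaters": 15,
    "hotels and airlines": 10,
    "pay-per-view": 20,
    "videocassettes/DVDs": 40,
    "cable": 30,
    "broadcast TV": 15,
}

multi_window_total = sum(windows.values())
streaming_only = 120  # assume a single (even generous) streaming license

print(multi_window_total)  # 230: the up-front budget seven windows can justify
print(streaming_only)      # what a straight-to-streaming release must live on
```

Whatever the actual numbers, the structural point is the same: every window that gets collapsed is leverage on the up-front production cost that is given up.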

Studios are enduring this cost, though, in the service of building up their own streaming services, but that has its own costs: running a streaming service entails being in the direct-to-consumer business, which is a costly one. Not only do you have to build up and maintain the technical infrastructure of the service, and incur costs in customer support, but you also have to worry about things like churn that simply aren’t a consideration when you’re selling content. All of this is very expensive!

The real pain, though, is opportunity cost: while studios are missing out on multi-window revenue, paying for their streaming services, and trying to simultaneously acquire customers and stop them from churning, they are also forgoing revenue from established services like Netflix that would not only happily pay them for their content, but could actually justify a much higher price given their significantly larger user base across which that cost could be leveraged.

All of these costs, it should be noted, occur in the aggregate, which is a real problem in these negotiations: talent is concerned about compensation on a per-show basis, but studios are bleeding money at the entity level in their foolhardy pursuit of customer-facing streaming services. Most of the discussion about this mismatch is focused on how to properly compensate the talent; note this item from Puck:

The union and AMPTP have by and large agreed on the residuals improvements the DGA obtained in its recent deal, but the union also wants 2 percent of subscriber revenue to be shared with the cast of a successful show, with success measured by Parrot Analytics, an analysis firm that looks at viewership, social media engagement, and other factors, to determine “demand.” That proxy metric was proposed because the companies refuse to share their internal measurements, of course. But the studios declined to engage on that issue, and the management-side source asked how the producer of a show could be expected to share revenue earned not by the producer but by the platform (i.e., subscribers pay platforms; subscribers don’t pay producers).

I get the talent’s perspective, but I’m pretty sure the talent doesn’t want to pay for the cost of customer service or customer acquisition or churn mitigation! Then again, neither should the studios: it doesn’t make any sense to me why the studios decided they wanted to bear these costs, and that’s not the talent’s problem.

The Shrinking Pie

There remains, though, the shrinking pie I noted in the introduction: the removal of distribution costs that enabled the rise of streaming was not a benefit limited to Hollywood, nor was the shift to attention as the only scarce resource. Every person on earth has only 168 hours in a week, most of which are consumed by sleeping and working. Those few remaining hours can now be filled by YouTube, or gaming, or podcasts, or reading this Article; every single minute spent doing something other than consuming Hollywood content is a minute lost forever.

This is consideration enough without a labor battle: thanks to COVID a lot of people fell out of the habit of going to the movie theater, and it appears around 25% of the audience permanently found something better to do with their time; that same reality applies to TV. Just as newspapers once thought the Internet was a boon because it increased their addressable market, only to find out that it also drastically increased competition for readers’ attention, Hollywood has to face the reality that the ability to make far more shows extends not only to studios but also to literally anyone. That reality is going to come to the fore if this strike drags on: if people don’t have new movies or shows to watch they will find far more options to fill their time than existed in 1960; the risk to Hollywood is that some of those alternatives become a permanent feature of people’s media diets, in line with what seems to have happened during COVID.

The broader issue is that the video industry finally seems to be facing what happened to the print and music industry before them: the Internet comes bearing gifts like infinite capacity and free distribution, but those gifts are a poisoned chalice for industries predicated on scarcity. When anyone could publish text, most text-based businesses went from massive profitability to terminal decline; when anyone could distribute music the music industry could only be saved by tech companies like Spotify helping them sell convenience in place of plastic discs.

For the video industry the first step to survival must be to retreat to what they are good at — producing content that isn’t available anywhere else — and getting away from what they are not, i.e. running undifferentiated streaming services with massive direct costs and even larger opportunity ones. Talent, meanwhile, has to realize that they and the studios are not divided by this new paradigm, but jointly threatened: the Internet is bad news for content producers with outsized costs, and long-term sustainability will be that much harder to achieve if the focus is on increasing them.

I wrote a follow-up to this Article in this Daily Update.

Threads and the Social/Communications Map

If you’re only going to tweet once every 11 years, then you better make it count; the best way to do just that is to pull off a well-executed meme:

Zuckerberg's Spider-Man meme tweet

This offering from Meta CEO Mark Zuckerberg works on multiple levels. The surface interpretation is obvious given the timing of the tweet, which was posted just hours after Meta launched Threads, a text-based social network built on top of the Instagram graph: Threads is a Twitter clone.

The scene from which the meme is derived, though, gets at what I think is really going on: the Spider-Man on the right is Charles Cameo, an imposter who uses disguises to steal art treasures. To extend the analogy, Threads looks like Twitter at first glance, but is in fact something much different — and what it is stealing is certainly what Elon Musk and Twitter have always wanted:

Zuck's comment on seizing Twitter's opportunity

The important takeaway is that all of the levels of the meme are connected: Threads looks like Twitter, but its essential differences are almost certainly table stakes for being something larger than Twitter ever was. The question is whether that treasure is itself a mirage.

The Social/Communications Map of 2013

Back in 2013 I created The Social/Communications Map:

A drawing of the Social/Communications Map

That map came out of an Article called The Multitudes of Social that argued that social media was not a single category destined to be won by a single app, and that Facebook could never “own social”:

The very idea of owning social is a fool’s errand. To be social is to be human, and to be human is, as Whitman wrote, to contain multitudes. Multitudes of apps, in my case…

Facebook needs to appreciate that their dominance of social on the PC was an artifact of the PC’s lack of mobility and limited application in day-to-day life. Smartphones are with us literally everywhere, and there is so much more within us than any one social network can capture.

The point about there being a multitude of ways to communicate online has held up well; I think, though, the axis about permanence versus ephemerality was less important than it seemed when the big battle was between Facebook and Snapchat. A better axis leans into the “Social/Communications” aspect of the title: the most important new social networks of the last few years have been notable for not really being social networks at all.

I’m referring to the TikTok-ization of user-generated content: the reason why TikTok was such a blindspot for Facebook is that, unlike Snapchat, it doesn’t depend on network effects, but rather abundance. One of the first times I wrote about TikTok was in the context of Quibi, the failed mobile video app from Hollywood impresario Jeffrey Katzenberg:

The single most important fact about both movies and television is that they were defined by scarcity: there were only so many movies that would ever be made to fill only so many theater slots, and in the case of TV, there were only 24 hours in a day. That meant that there was significant value in being someone who could figure out what was going to be a hit before it was ever created, and then investing to make it so. That sort of selection and production is what Katzenberg and the rest of Hollywood have been doing for decades, and it’s understandable that Katzenberg thought he could apply the same formula to mobile.

Mobile, though, is defined by the Internet, which is to say it is defined by abundance…So it is on TikTok, or any other app with user-generated content. The goal is not to pick out the hits, but rather to attract as much content as possible, and then algorithmically boost whatever turns out to be good…The truth is that Katzenberg got a lot right: YouTube did have a vulnerability in terms of video content on mobile, in part because it was a product built for the desktop; TikTok, like Quibi, is unequivocally a mobile application. Unlike Quibi, though, it is also an entertainment entity predicated on Internet assumptions about abundance, not Hollywood assumptions about scarcity.

It’s ultimately a math question: are you more likely to find compelling content from the few hundred people in your social network, or from the millions of people posting on the service? The answer is obviously the latter, but that answer is only achievable if you have the means of discovering that compelling content, and, to be fair to both Facebook and Twitter, the sort of computational power necessary to pull off a TikTok-style network didn’t exist when those companies got started.
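That math question can be made concrete with a toy simulation; this is a sketch under invented assumptions (uniform random “quality” scores and perfect discovery), not a model of any actual feed:

```python
import random

# Toy model: every post gets a random quality score in [0, 1); compare the
# best post a ~300-person network produces with the best post a
# million-strong pool produces, assuming discovery is perfect.
random.seed(42)

def best_quality(num_posts: int) -> float:
    """Return the highest quality score among num_posts independent posts."""
    return max(random.random() for _ in range(num_posts))

print(f"Best from your network:  {best_quality(300):.4f}")       # ~0.997
print(f"Best from the whole app: {best_quality(1_000_000):.7f}") # ~0.9999990
```

The expected maximum of n uniform draws is n/(n+1), so the bigger pool essentially always wins; the catch is the “means of discovering” clause, which is exactly what a recommendation algorithm supplies, and why the computational point matters.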

The Social/Communications Map of 2023

Set that point about time of origin aside just for a moment; here is what I think a better representation of the Social/Communications Map looks like in 2023:

The new structure for the Social/Communications Map

The first change is that the symmetric/asymmetric axis has been replaced by the nature of the sorting algorithm: chronological order versus algorithmic selection. However, this isn’t that big of a change; consider messaging, which is by definition about symmetric social networking. Messaging only really makes sense if it is organized by time — imagine trying to carry on a conversation if every message you saw were algorithmically selected, instead of simply displayed in order. Algorithmic sorting, though, makes much more sense when you are consuming content that is broadcast to the world, and thus has no assumptions about or expectations for in-order contextual replies.
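The distinction between the two regimes is simple enough to write down; here is a minimal sketch of the two orderings (the Post fields and the engagement score are hypothetical stand-ins, not any service's actual schema):

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Post:
    author: str
    text: str
    posted_at: datetime
    predicted_engagement: float  # hypothetical model score in [0, 1]

def chronological_feed(posts: list[Post]) -> list[Post]:
    # Messaging-style ordering: strictly by time, so a conversation reads in order.
    return sorted(posts, key=lambda p: p.posted_at)

def algorithmic_feed(posts: list[Post]) -> list[Post]:
    # Broadcast-style ordering: whatever a model predicts you will engage with,
    # regardless of when it was posted or whether you follow the author.
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)
```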

The second change is the TikTok-ization I noted above: my new vertical axis is user-generated content, by which I mean content across the network, versus network-generated content, by which I mean content from the people you choose to follow. If you maintain the same public/private distinction I had in the original, you get a landscape that looks something like this (note that Facebook is better thought of as a private social network, given that the default nature of posts is that they are only seen by those in your network).

The starting position of social media companies in the 2023 Social/Communications Map

This is where the bit above about historical time comes in: another way to look at this map is as a representation of how content on the Internet has evolved; the early web, and early forms of user-generated content like forums and blogs, were and are still located in the upper left. This quadrant is fairly decentralized, and is Aggregated by Google and search.

The lower left quadrant came next: one site held all of the content from your network, and presented it chronologically. Some sites, like Twitter and Instagram, stayed here for years; Facebook, though, quickly jumped ahead to the lower right quadrant, and organized your feed algorithmically. This quadrant became the other major pillar of Internet advertising (along with search): figuring out what content to show you from your network wasn’t too dissimilar of a problem from figuring out what ads to show you, and the nature of a dynamically-generated feed that was unique to every individual was something that was only possible with digital media.

The final stage is, as noted, represented by TikTok: once again your network doesn’t matter, because the content comes from anywhere. This world, though, unlike the open web, is governed by the algorithm, not time or search.

Twitter, Threads, and the Upper-Right

I was honestly surprised to find out that both Twitter and Instagram were in the lower left quadrant until 2016; that is when both services started offering an algorithmic timeline. Of course the surprise for the two services ran in the opposite direction: for Twitter it’s amazing that the company managed to change anything at all, and for Instagram it’s a surprise the service stayed the same for so long. Since then Instagram has heavily invested in its direct messaging product even as it has slowly abandoned the public parts of the lower left: everything is an algorithm and, with Reels, completely disconnected from your network.

How services have expanded on the map over time

Perhaps the starkest change that Musk has made to Twitter, meanwhile, has been a headlong rush into the upper right: the “For You” tab is far more aggressive about promoting tweets from people you don’t follow, and it’s increasingly impossible to escape; the app always defaults to “For You”, and there are no more 3rd-party app alternatives. Eugene Wei argues this has blown up the timeline and ruined the Twitter experience:

What established the boundaries of Twitter? Two things primarily. The topology of its graph, and the timeline algorithm. The two are so entwined you could consider them to be a single item. The algorithm determines how the nodes of that graph interact. In a literal sense, Twitter has always just been whose tweets show up in your timeline and in what order.

In the modern world, machine learning algorithms that mediate who interacts with whom and how in social media feeds are, in essence, social institutions. When you change those algorithms you might as well be reconfiguring a city around a user while they sleep. And so, if you were to take control of such a community, with years of information accumulated inside its black box of an algorithm, the one thing you might recommend is not punching a hole in the side of that black box and inserting a grenade. So of course that seems to have been what the new management team did. By pushing everyone towards paid subscriptions and kneecapping distribution for accounts who don’t pay, by switching to a TikTok style algorithm, new Twitter has redrawn the once stable “borders” of Twitter’s communities.

This new pay-to-play scheme may not have altered the lattice of the Twitter graph, but it has changed how the graph is interpreted. There’s little difference. My For You feed shows me less from people I follow, so my effective Twitter graph is diverging further and further from my literal graph. Each of us sits at the center of our Twitter graph like a spider in its web built out of follows and likes, with some empty space made of blocks and mutes. We can sense when the algorithm changes. Something changed. The web feels deadened.

I’ve never cared much about the presence or not of a blue check by a user’s name, but I do notice when tweets from people I follow make up a smaller and smaller percentage of my feed. It’s as if neighbors of years moved out from my block overnight, replaced by strangers who all came knocking on my front door carrying not a casserole but a tweetstorm about how to tune my ChatGPT and MidJourney prompts.

Instagram’s Evolution has shown that this shift is possible, but the shift has been systematic and gradual — and even then subject to occasionally intense pushback. Musk’s Twitter, though, has been haphazard and blistering in its pace. What ought to concern the company about Threads is the possibility that all of the upheaval — which effectively sacrifices the niche Twitter had carved out amongst the text nerds that dominate industries like media — will not actually result in the user growth Musk is hoping for, because Threads got there first.

Indeed, this map is the key to understanding why it is that Threads looks like Twitter, but is in fact a very different product: Threads is solidly planted in the upper right. When you log onto the app for the first time, your feed is populated by the algorithm; there is some context given by whom you follow on Instagram, but Meta seems aware that accounts you might want to look at may be different than accounts you want to hear from, and is thus filling the feeds with what it thinks you might find interesting. That is how it can provide an at-least-somewhat-compelling first-run experience to 100 million people in five days.

Twitter, on the other hand, faces the burden of millions having tried the service in past iterations and quickly deciding it wasn’t for them; even if the algorithm were effective, it may already be too late to gain new users, even as the changes sacrifice what the service’s existing users preferred.

The Threads Experiment

This leads to the biggest open question about Threads’ long-term prospects, and, by extension, Twitter’s: did those millions of abandoned Twitter users give up because text-based social networking just wasn’t that interesting to them, or because Twitter made it too hard to get started? I’ve made the case that it’s the former, which means that Threads is a grand experiment as to the validity of that thesis. If those 100 million users stay engaged (and if that number continues to grow), then the people chalking up Twitter’s inability to grow or monetize effectively to the company’s inability to execute are correct.

At the same time, as Wei notes, Musk’s tenure has highlighted the problems with doing too much: what if Twitter succeeded to the extent it did not despite management’s seeming ineffectiveness, but because of it?

I’ve written before in Status as a Service or The Network’s the Thing about how Twitter hit upon some narrow product-market fit despite itself. It has never seemed to understand why it worked for some people or what it wanted to be, and how those two were related, if at all. But in a twist of fate that is often more of a factor in finding product-market fit than most like to admit, Twitter’s indecisiveness protected it from itself. Social alchemy at some scale can be a mysterious thing. When you’re uncertain which knot is securing your body to the face of a mountain, it’s best not to start undoing any of them willy-nilly. Especially if, as I think was the case for Twitter, the knots were tied by someone else (in this case, the users of Twitter themselves).

Many of those knots are tied to that lower left quadrant: a predominantly time-based feed makes sense if a service is predominantly about “What is happening?”, to use Twitter’s long-time prompt; a graph based on who you choose to follow doesn’t just show what you want to see, it also controls what you don’t (Wei notes that this is a particularly hard problem for algorithmically generated feeds). Both qualities seem particularly pertinent for a medium (text) that is information dense and favored by people interested in harvesting information, a very different goal than looking to pass the time with an entertaining video or ten.

It follows, then, that Twitter’s best defense against Threads may be to retreat to that lower left corner: focus on what is happening now, from people you chose to follow. The problem, though, is that while this might win the battle against Threads, it means that Musk will have lost the war when it comes to ever making a return on his $44 billion. In truth, though, that war is already lost: Musk’s lurch for the upper right was probably the best path to reigniting user growth, but if that is the corner that matters then Threads will win.

Threads’ Chronological Timeline

The other question is whether Threads will come for Twitter’s place on the map; Head of Instagram Adam Mosseri says that a chronological timeline is coming:

Adam Mosseri promising a chronological timeline

Placing this option in the context of Facebook and Instagram actually suggests that this feature won’t matter very much; both services make the chronological option hard to find and revert to the default algorithmic feed, and for good reason: users may say they want a chronological feed, but their revealed preference is the opposite. Instagram founders Kevin Systrom and Mike Krieger, who initially opposed algorithmic ranking in Instagram, told me in a Stratechery Interview:

Kevin Systrom: I remember thinking when the team was like, “We’re thinking of using machine learning to sort the Explore page,” I’m not even sure what they call it now, but basically the Explore page and I remember saying, “It just feels like that’s a bunch of hocus-pocus that won’t work. Or maybe it’ll work but you won’t really understand what it’s doing and you won’t fully understand the implications of it, so we should probably just keep it very simple.” I was so wrong and I only remember it because I was so wrong, but you asked about feed, Mike would probably give you his anecdote about feed. But on the Explore page I was very anti and then I think I became pro only once I saw what it could do. Not in terms of just usage metrics, but just the quality of what people were served compared to some of our heuristics before…

Mike Krieger: I’ll share a funny anecdote about the Explore experiment. Facebook has all these internal A/B testing tooling and we hooked into it and we ran our first machine learning on the Explore experiment and we filed a bug report and I’m like, “Hey, your tool isn’t working, that’s not reporting results here.” And they said, “No, the results are just so strong that they’re literally off the charts. The little bars that show it literally is over 200%, you just should ship this yesterday.” The data looked really good.

That noted, observe Mosseri’s stated goal for the app, as articulated to Alex Heath of The Verge:

I think success will be creating a vibrant community, particularly of creators, because I do think this sort of public space is really, even more than most other types of social networks, a place where a small number of people produce most of the content that most everyone consumes. So I think it’s really about creators more than it is about average folks who I think are much more there just to be entertained. I think [we want] a vibrant community of creators that’s really culturally relevant. It would be great if it gets really, really big, but I’m actually more interested in if it becomes culturally relevant than if it gets hundreds of millions of users. But we’ll see how it goes over the next couple of months or probably a couple of years.

“Culturally relevant” is the one game that Twitter has won, far more than Facebook, and arguably more than Instagram: Twitter drives national and international media coverage, from TV to newspapers, to an extent that drastically exceeds its monetization potential. Meta, meanwhile, has been content to provide social networking for the silent majority, making tons of money along the way. The best way to do that with text — if it is even possible — would be to stay in that upper right corner; cultural relevancy, though, is still in the bottom left, even if there aren’t nearly as many users, or as much money.

And, it must be noted, Twitter is vulnerable in its home territory; I’ve long argued that the importance of convenience in terms of app success is underrated (see Threads starting with your Instagram sign-in and network), but it’s hard to think of anything that might motivate users to make a change more than resolving cognitive dissonance. There is a sizable segment of that culturally relevant audience Mosseri wants to capture who are opposed to Musk, and yet can’t give up Twitter; I suspect that much of the outpouring of glee over Threads’ early success is from this cohort that wants nothing more than non-Musk Twitter.

Ultimately, though, I think they may be disappointed: Meta is about algorithms and scale, and I would bet that Threads will leave real time reactions, news, and pitched battles to Twitter; Musk’s most important decision may be accepting that that is enough, because it’s all he’s going to get.

Amazon, Friction, and the FTC

It was Friday morning, and I needed sunglasses — specifically the nerdy ones that fit on top of a pair of prescription glasses. I wasn’t sure where to buy them — my dad (and who else would know better) suggested Walmart — but Amazon had a few; the only problem was that I was leaving early Saturday morning on a fishing trip, and surely that wouldn’t be sufficient time for e-commerce!

In fact, it was more than enough: Amazon had delivery options of 12-4pm, 4-8pm, or 4-8am the next morning; four hours later I had extra sunglasses in hand (and Walmart, for the record, didn’t have any).

This wasn’t the first time I’d leveraged Amazon’s same-day delivery: I was shocked to even see that it was an option when I arrived back in the U.S. and needed an ethernet cable at 4am; it showed up at 9:30am. It is fairly new, though; from the Wall Street Journal earlier this year:

Amazon.com Inc. is expanding ultrafast delivery options, a sign that it remains committed to pushing its logistics system for speed as it scales back plans in other areas. The tech giant is continuing to devote resources to facilities and services structured to deliver packages to customers in less than a day. The expansions are happening at a crucial point for Amazon, which faces competition for fast-delivery options while Chief Executive Officer Andy Jassy puts a renewed focus on profits.

A central part of Amazon’s ultrafast delivery strategy is its network of warehouses that the company calls same-day sites. The facilities are a fraction of the size of Amazon’s large fulfillment warehouses and are designed to prepare products for immediate delivery. In contrast, the larger Amazon warehouses typically rely on delivery stations closer to customers for the final stage of shipping.

Amazon has opened about 45 of the smaller sites since 2019 and could expand to at least 150 centers in the next several years, according to MWPVL International Inc., which tracks Amazon warehouse operations. The sites have primarily opened near large cities and deliver the most popular 100,000 items in Amazon’s catalog, MWPVL said. New locations recently opened in Los Angeles, San Francisco and Phoenix, according to Amazon, which declined to provide information on how many of the same-day sites it has.

The reason to bring this program up now is to provide some personal context about the FTC’s latest lawsuit, this time against Amazon. Again from the Wall Street Journal:

The Federal Trade Commission sued Amazon.com on Wednesday, alleging the retail giant worked for years to enroll consumers without consent into Amazon Prime and made it difficult to cancel their subscriptions to the program. The FTC’s complaint, filed in federal court in Seattle, alleged that Amazon has duped millions of consumers into enrolling in Amazon Prime, a $139 annual subscription service with more than 200 million members worldwide that has helped Amazon become an integral part of many American households’ shopping habits.

“Amazon tricked and trapped people into recurring subscriptions without their consent, not only frustrating users but also costing them significant money,” FTC Chair Lina Khan said. The complaint, which is partially redacted, is the culmination of an investigation that began in March 2021. The FTC, a federal agency tasked with enforcing antitrust laws and consumer protection laws, seeks monetary civil penalties without providing a dollar amount.

I started with my own anecdote to explain why I am not personally familiar with the FTC’s complaints about the ease of signing up for Prime and the difficulty of cancelling: I haven’t had even a thought of going through either process for years. Indeed, even though I only live in the U.S. for a part of the year, Prime is still worth it (and you get international shipping considerations as well).

This, to my mind, is the chief reason why this complaint rubs me the wrong way: even if there is validity to the FTC’s complaints (more on this in a moment), the overall thrust of the Prime value proposition seems overwhelmingly positive for consumers; surely there are plenty of other products and subscriptions that aren’t just bad for consumers on the edges but also in their overall value proposition and reason for existing.

Dark Patterns

The FTC makes two primary allegations in its complaint; the first is about the use of “dark patterns” to sign up for Prime:

For years, Defendant Amazon.com, Inc. (“Amazon”) has knowingly duped millions of consumers into unknowingly enrolling in its Amazon Prime service (“Nonconsensual Enrollees” or “Nonconsensual Enrollment”). Specifically, Amazon used manipulative, coercive, or deceptive user-interface designs known as “dark patterns” to trick consumers into enrolling in automatically-renewing Prime subscriptions…

This Hacker News thread about the lawsuit helpfully contained several examples of Amazon’s dark patterns in terms of subscribing to Prime:

Amazon Prime dark pattern

Amazon Prime dark pattern

Amazon Prime dark pattern

Are these UI decisions designed to make subscribing to Prime very easy? Yes, and that is a generous way to put it, to say the least! At the same time, you can be less than generous in your critique, as well. The last image, for example, complains that Amazon is lying because the customer already qualifies for free shipping, while ignoring that the free shipping on offer from Prime arrives three days earlier! That seems like a meaningful distinction.

That noted, something I found interesting in that thread — and reader beware, the only thing less reliable than a writer relating their personal experience is a writer relating experiences they read on the Internet — are the people who argued that the reason they didn’t want Prime is that in their experience packages showed up in one or two days anyways.

This makes intuitive sense (again, with the caveat that I am relying on anonymous commentators on the Internet): it seems perfectly plausible that it makes more sense for Amazon to optimize its logistics around the delivery promises it makes for Prime customers, instead of carving out a less efficient delivery mechanism for non-Prime customers that would actually increase overall coordination costs.

This also complicates the view of Amazon’s dark patterns: perhaps the most intellectually honest position is that if Amazon believes it can most efficiently deliver packages by giving the same level of service to everyone, then it ought to simply charge everyone; in other words, just as Costco requires a membership to even get in the store, Amazon ought to require a Prime membership to buy anything at all.

Given this, it seems likely to me that the people who have not signed up for Prime are free riders in the “free-rider problem” sense; from Wikipedia:

In the social sciences, the free-rider problem is a type of market failure that occurs when those who benefit from resources, public goods and common pool resources do not pay for them or under-pay. Examples of such goods are public roads or public libraries or services or other goods of a communal nature. Free riders are a problem for common pool resources because they may overuse it by not paying for the good (either directly through fees or tolls or indirectly through taxes). Consequently, the common pool resource may be under-produced, overused, or degraded. Additionally, it has been shown that despite evidence that people tend to be cooperative by nature (a prosocial behaviour), the presence of free-riders causes cooperation to deteriorate, perpetuating the free-rider problem.

In this view, Amazon “free-riders” get Prime benefits without paying for Prime; they earn this benefit by successfully navigating Amazon’s dark patterns, which, to be sure, is its own cost. I would also note that Amazon does benefit from free-riders: at the end of the day the most important driver of the company’s profitability is how much leverage it can gain on its massive costs; I would bet that from Amazon’s perspective a “free-rider” who buys things on Amazon is a net positive…as long as there aren’t too many of them.

What this means is that the extent to which the FTC is effective is the extent to which Amazon almost certainly makes delivery worse for non-Prime members (i.e. differentiates based on service level instead of dark-pattern navigation capability) and/or simply makes Amazon.com Prime-only, restricting availability for exactly the people the FTC insists ought not have to pay for faster delivery. It’s not clear to me how much of a win this is.

The Iliad Flow

The second complaint was about the cancellation process:

For years, Amazon also knowingly complicated the cancellation process for Prime subscribers who sought to end their membership. Under significant pressure from the Commission — and aware that its practices are legally indefensible — Amazon substantially revamped its Prime cancellation process for at least some subscribers shortly before the filing of this Complaint. However, prior to that time, the primary purpose of the Prime cancellation process was not to enable subscribers to cancel, but rather to thwart them. Fittingly, Amazon named that process “Iliad,” which refers to Homer’s epic about the long, arduous Trojan War. Amazon designed the Iliad cancellation process (“Iliad Flow”) to be labyrinthine, and Amazon and its leadership—including Lindsay, Grandinetti, and Ghani—slowed or rejected user experience changes that would have made Iliad simpler for consumers because those changes adversely affected Amazon’s bottom line.

As with Nonconsensual Enrollment, the Iliad Flow’s complexity resulted from Amazon’s use of dark patterns—manipulative design elements that trick users into making decisions they would not otherwise have made.

At the risk of once again over-indexing on forum behavior, it was striking that no one seemed to have saved up screenshots of the cancellation process, perhaps because few Prime members seem to want to go through with it. Moreover, the process described in the FTC complaint doesn’t seem that egregious?

Under substantial pressure from the Commission, Amazon changed its Iliad cancellation process in or about April 2023, shortly before the filing of this Complaint. Prior to that point, there were only two ways to cancel a Prime subscription through Amazon: a) through the online labyrinthine cancellation flow known as the “Iliad Flow” on desktop and mobile devices; or b) by contacting customer service.

This is an important caveat for those of us trying to validate the FTC’s complaints; if anyone has an independent depiction of the previous flow, I’d love to see it. That said, here is how the FTC described it (the screenshots are from the FTC’s complaint, which is very low resolution):

The Iliad Flow required consumers intending to cancel to navigate a four-page, six-click, fifteen-option cancellation process. In contrast, customers could enroll in Prime with one or two clicks…To cancel via the Iliad Flow, a consumer had to first locate it, which Amazon made difficult. Consumers could access the Iliad Flow from Amazon.com by navigating to the Prime Central page, which consumers could reach by selecting the “Account & Lists” dropdown menu, reviewing the third column of dropdown links Amazon presented, and selecting the eleventh option in the third column (“Prime Membership”). This took the consumer to the Prime Central Page.

Once the consumer reached Prime Central, the consumer had to click on the “Manage Membership” button to access the dropdown menu. That revealed three options. The first two were “Share your benefits” (to add household members to Prime) and “Remind me before renewing” (Amazon then sent the consumer an email reminder before the next charge). The last option was “End Membership.” The “End Membership” button did not end membership. Rather, it took the consumer to the Iliad Flow. It was impossible to reach the Iliad Flow from Amazon.com in fewer than two clicks…

The Iliad flow, from the FTC complaint

Once consumers reached the Iliad Flow, they had to proceed through its entirety — spanning three pages, each of which presented consumers several options, beyond the Prime Central page — to cancel Prime. On the first page of the Iliad Flow, Amazon forced consumers to “[t]ake a look back at [their] journey with Prime” and presented them with a summary showing the Prime services they used. Amazon also displayed marketing material on Prime services, such as Prime Delivery, Prime Video, and Amazon Music Prime. Amazon placed a link for each service and encouraged consumers to access them immediately, i.e., “Start shopping today’s deals!”, “You can start watching videos by clicking here!”, and “Start listening now!” Clicking on any of these options took the consumer out of the Iliad Flow.

The Iliad flow, from the FTC complaint

Also, on page one of the Iliad Flow, Amazon presented consumers with three buttons at the bottom. “Remind Me Later,” the button on the left, sent the consumer a reminder three days before their Prime membership renews (an option Amazon had already presented the consumer once before, in the “Manage Membership” pull-down menu through which the consumer entered the Iliad Flow). The “Remind Me Later” button took the consumer out of the Iliad Flow without cancelling Prime. “Keep My Benefits,” on the right, also took the consumer out of the Iliad Flow without cancelling Prime. Finally, “Continue to Cancel,” in the middle, also did not cancel Prime but instead proceeded to the second page of the Iliad Flow. Therefore, consumers could not cancel their Prime subscription on the first page of the Iliad Flow.

The Iliad flow, from the FTC complaint

On the second page of the Iliad Flow, Amazon presented consumers with alternative or discounted pricing, such as the option to switch from monthly to annual payments (and vice-versa), student discounts, and discounts for individuals with EBT cards or who receive government assistance. Amazon emphasized the option to switch from monthly to annual payments by stating the amount a consumer would save at the top of this page in bold. Clicking the orange button (“Switch to annual payments”) or the links beneath took the consumer out of the Iliad Flow without cancelling.

The Iliad flow, from the FTC complaint

Right above these alternatives, Amazon stated “Items tied to your Prime membership will be affected if you cancel your membership,” positioned next to a warning icon. Amazon also warned consumers that “[b]y cancelling, you will no longer be eligible for your unclaimed Prime exclusive offers,” and hyperlinked to the Prime exclusive offers. Clicking this link took the consumer out of the Iliad Flow without cancelling.

The Iliad flow, from the FTC complaint

Finally, at the bottom of Iliad Flow page two, Amazon presented consumers with buttons offering the same three options as the first page: “Remind Me Later,” “Continue to Cancel,” and “Keep My Membership” (labelled “Keep My Benefits” on the first page). Once again, consumers could not cancel their Prime subscription on the second page of the Iliad Flow. Choosing either “Remind Me Later” or “Keep My Membership” took the consumer out of the Iliad Flow without cancelling. Consumers had to click “Continue to Cancel” to access the third page of the Iliad Flow.

On the third page of the Iliad Flow, Amazon showed consumers five different options, only one of which, “End Now”—presented last, at the bottom of the page— immediately cancelled a consumer’s Prime membership. Pressing any of the first four buttons took the consumer out of the Iliad Flow without immediately cancelling.

I’m going to stop quoting at this point, as the complaint spends two pages on the final cancellation page; I assumed that the “End Now” button would be some tiny text link, but no, it’s perfectly prominent and, given it’s the last choice, arguably the most obvious one:

The Iliad flow, from the FTC complaint
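Condensing the complaint's description, the whole flow reduces to a small state table. This sketch is my own summary; the excerpts quoted above only name "End Now" on page three, so the other four page-three labels are placeholders:

```python
# The Iliad Flow per the FTC's description: each page's buttons, and whether
# they exit the flow, advance it, or actually cancel. Page-three labels other
# than "End Now" are hypothetical placeholders (the excerpt doesn't name them).
ILIAD_FLOW = {
    "page_1": {"Remind Me Later": "exit", "Keep My Benefits": "exit",
               "Continue to Cancel": "page_2"},
    "page_2": {"Remind Me Later": "exit", "Keep My Membership": "exit",
               "Continue to Cancel": "page_3"},
    "page_3": {"Option 1": "exit", "Option 2": "exit", "Option 3": "exit",
               "Option 4": "exit", "End Now": "cancelled"},
}

def walk(choices: list[str]) -> str:
    """Follow a sequence of button presses from page one; return the outcome."""
    state = "page_1"
    for choice in choices:
        state = ILIAD_FLOW[state][choice]
        if state in ("exit", "cancelled"):
            return state
    return state  # still somewhere inside the flow

# The only path that ends in cancellation: Continue, Continue, End Now.
assert walk(["Continue to Cancel", "Continue to Cancel", "End Now"]) == "cancelled"
```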

Set aside all of the discussion above about the overall value of Prime and the problem of free riders: this specific part of the complaint is absolutely ridiculous. Amazon’s flow — at least as depicted by the FTC in their own complaint — is completely reasonable, and that’s even before you start discussing the contrast with entities that let you sign up on the web but only let you cancel over the phone. Amazon’s entry into the cancellation process is clear, the flow is clear, and it’s not a crime that they seek to educate would-be cancellers as to why they might not want to cancel.

This last point is important because it gets at why this complaint is fundamentally rooted in hostility to business. The argument that dark patterns are bad rests on the premise that customers are not sufficiently educated or capable enough to navigate a deliberately confusing interface that drives them in a specific direction (like subscribing to Prime in the first place). I’m wary of the costs of government regulators getting involved in product design on a philosophical level, but I am sympathetic to the moral point.

However, if you accept the premise of the previous paragraph, then it is inconsistent to complain about a company trying to educate consumers about the value they are deriving from a product in the course of canceling that product. To put it another way, the FTC’s complaint about dark patterns when it comes to signing up for Prime is rooted in the assumption that consumers lack knowledge and are easily tricked; the FTC’s complaint about Amazon presenting reasons to not cancel is rooted in the assumption that consumers are already fully informed and ought to be able to accomplish their goal in as few clicks as possible. The better explanation is that the FTC is simply anti-business.

Friction and Aggregation Theory

There is a broader point to make about the question of not just dark patterns, but also a number of other objectionable practices on the Internet, particularly tracking and targeting. One of the earliest Articles I wrote on Stratechery was called Friction:

If there is a single phrase that describes the effect of the Internet, it is the elimination of friction. With the loss of friction, there is necessarily the loss of everything built on friction, including value, privacy, and livelihoods. And that’s only three examples! The Internet is pulling out the foundations of nearly every institution and social more that our society is built upon.

Count me with those who believe the Internet is on par with the industrial revolution, the full impact of which stretched over centuries. And it wasn’t all good. Like today, the industrial revolution included a period of time that saw many lose their jobs and a massive surge in inequality. It also lifted millions of others out of sustenance farming. Then again, it also propagated slavery, particularly in North America. The industrial revolution led to new monetary systems, and it created robber barons. Modern democracies sprouted from the industrial revolution, and so did fascism and communism. The quality of life of millions and millions was unimaginably improved, and millions and millions died in two unimaginably terrible wars.

Change is guaranteed, but the type of change is not; never is that more true than today. See, friction makes everything harder, both the good we can do, but also the unimaginably terrible. In our zeal to reduce friction and our eagerness to celebrate the good, we ought not lose sight of the potential bad. We are creating the future, and “better” does not win by default.

Set aside the dramatic centuries-spanning exposition; the fundamental point is that the removal of friction leads to a different set of trade-offs. In the case of targeting and tracking, the payoff is a massive increase in consumer welfare by virtue of access to all of the world’s information (Google), and all of the world’s people (Meta); in the case of things like dark patterns and personal appeals, the payoff is ordering sunglasses for your upcoming fishing trip at 10am and having them in hand at 4pm, or, more broadly, to have access to anything you need no matter where you live.

To note these trade-offs is not to say whether any particular trade-off is worth it; it’s to note that the trade-offs exist. And, to that end, the frustration so many feel about the FTC’s recent actions, particularly this specific lawsuit, is the extent to which the Commission seems determined to act as if trade-offs don’t exist.

This is, of course, downstream from Chairperson Lina Khan’s famous law review article, Amazon’s Antitrust Paradox. Khan, and much of the movement she represents, is intrinsically opposed to “big”, and frankly, I’m sympathetic to the point. The problem with this movement’s critique, though, is that because it believes “big is bad”, it assumes that companies become big by acting badly.

The reality of Aggregation Theory, though, is the opposite: on the Internet, thanks to zero marginal costs in terms of serving new customers, and zero transaction costs in terms of scalability, the biggest companies are those that serve customers most effectively, and leverage demand into power over supply; this is the opposite of the analog world, where control of scarcity — i.e. control of supply — was the way to be dominant. The reason this matters is that all of our antitrust laws were created for the latter world; trying to apply the wrong framework to a new reality will only serve to increase costs or reduce access to people on whose behalf the regulator is ostensibly fighting.

I wrote a follow-up to this Article in this Daily Update.

Apple Vision

It really is one of the best product names in Apple history: Vision is a description of a product, it is an aspiration for a use case, and it is a critique of the sort of society we are building, behind Apple’s leadership more than anyone else.

I am speaking, of course, about Apple’s new mixed reality headset that was announced at yesterday’s WWDC, with a planned ship date of early 2024, and a price of $3,499. I had the good fortune of using an Apple Vision in the context of a controlled demo — which is an important grain of salt, to be sure — and I found the experience extraordinary.

The high expectations came from the fact that not only was this product being built by Apple, the undisputed best hardware maker in the world, but also because I am, unlike many, relatively optimistic about VR. What surprised me is that Apple exceeded my expectations on both counts: the hardware and experience were better than I thought possible, and the potential for Vision is larger than I anticipated. The societal impacts, though, are much more complicated.

The Vision Product

VR + AR

I have, for as long as I have written about the space, highlighted the differences between VR (virtual reality) and AR (augmented reality). From a 2016 Update:

I think it’s useful to make a distinction between virtual and augmented reality. Just look at the names: “virtual” reality is about an immersive experience completely disconnected from one’s current reality, while “augmented” reality is about, well, augmenting the reality in which one is already present. This is more than a semantic distinction about different types of headsets: you can divide nearly all of consumer technology along this axis. Movies and videogames are about different realities; productivity software and devices like smartphones are about augmenting the present. Small wonder, then, that all of the big virtual reality announcements are expected to be video game and movie related.

Augmentation is more interesting: for the most part it seems that augmentation products are best suited as spokes around a hub; a car’s infotainment system, for example, is very much a device that is focused on the current reality of the car’s occupants, and as evinced by Ford’s announcement, the future here is to accommodate the smartphone. It’s the same story with watches and wearables generally, at least for now.

I highlight that timing reference because it’s worth remembering that smartphones were originally conceived of as a spoke around the PC hub; it turned out, though, that by virtue of their mobility — by being useful in more places, and thus capable of augmenting more experiences — smartphones displaced the PC as the hub. Thus, when thinking about the question of what might displace the smartphone, I suspect what we today think of a “spoke” will be a good place to start. And, I’d add, it’s why platform companies like Microsoft and Google have focused on augmented, not virtual, reality, and why the mysterious Magic Leap has raised well over a billion dollars to-date; always in your vision is even more compelling than always in your pocket (as is always on your wrist).

I’ll come back to that last paragraph later on; I don’t think it’s quite right, in part because Apple Vision shows that the first part of the excerpt wasn’t right either. Apple Vision is technically a VR device that experientially is an AR device, and it’s one of those solutions that, once you have experienced it, is so obviously the correct implementation that it’s hard to believe there was ever any other possible approach to the general concept of computerized glasses.

This reality — pun intended — hits you the moment you finish setting up the device, which includes not only fitting the headset to your head and adding a prescription set of lenses, if necessary, but also setting up eye tracking (which I will get to in a moment). Once you have jumped through those hoops you are suddenly back where you started: looking at the room you are in with shockingly full fidelity.

What is happening is that Apple Vision is utilizing some number of its 12 cameras to capture the outside world and displaying the result on the postage-stamp-sized screens in front of your eyes in a way that makes you feel like you are wearing safety goggles: you’re looking through something that isn’t exactly total clarity, but is of sufficiently high resolution and speed that there is no reason to think it’s not real.

The speed is essential: Apple claims that the threshold for your brain to notice any sort of delay between what you see and what your body expects you to see (which is what causes known VR issues like motion sickness) is 12 milliseconds, and that the Vision visual pipeline displays what it sees to your eyes in 12 milliseconds or less. This is particularly remarkable given that the time for the image sensor to capture and process what it is seeing is along the lines of 7-8 milliseconds, which is to say that the Vision is taking that captured image, processing it, and displaying it in front of your eyes in around 4 milliseconds.

This is, truly, something that only Apple could do, because this speed is a function of two things: first, the Apple-designed R1 processor (Apple also designed part of the image sensor), and second, the integration with Apple’s software. Here is Mike Rockwell, who led the creation of the headset, explaining “visionOS”:

None of this advanced technology could come to life without a powerful operating system called “visionOS”. It’s built on the foundation of the decades of engineering innovation in macOS, iOS, and iPad OS. To that foundation we added a host of new capabilities to support the low latency requirements of spatial computing, such as a new real-time execution engine that guarantees performance-critical workloads, a dynamically foveated rendering pipeline that delivers maximum image quality to exactly where your eyes are looking for every single frame, a first-of-its-kind multi-app 3D engine that allows different apps to run simultaneously in the same simulation, and importantly, the existing application frameworks we’ve extended to natively support spatial experiences. visionOS is the first operating system designed from the ground up for spatial computing.

The key part here is the “real-time execution engine”; “real time” isn’t just a descriptor of the experience of using Vision Pro: it’s a term-of-art for a different kind of computing. Here’s how Wikipedia defines a real-time operating system:

A real-time operating system (RTOS) is an operating system (OS) for real-time computing applications that processes data and events that have critically defined time constraints. An RTOS is distinct from a time-sharing operating system, such as Unix, which manages the sharing of system resources with a scheduler, data buffers, or fixed task prioritization in a multitasking or multiprogramming environment. Processing time requirements need to be fully understood and bound rather than just kept as a minimum. All processing must occur within the defined constraints. Real-time operating systems are event-driven and preemptive, meaning the OS can monitor the relevant priority of competing tasks, and make changes to the task priority. Event-driven systems switch between tasks based on their priorities, while time-sharing systems switch the task based on clock interrupts.

Real-time operating systems are used in embedded systems for applications with critical functionality, like a car, for example: it’s ok to have an infotainment system that sometimes hangs or even crashes, in exchange for more flexibility and capability, but the software that actually operates the vehicle has to be reliable and unfailingly fast. This is, in broad strokes, one way to think about how visionOS works: while the user experience is a time-sharing operating system that is indeed a variation of iOS, and runs on the M2 chip, there is a subsystem that primarily operates the R1 chip that is real-time; this means that even if visionOS hangs or crashes, the outside world is still rendered under that magic 12 milliseconds.
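A loose way to picture that split, as a conceptual sketch of my own rather than Apple's actual architecture or APIs: a frame loop in which passthrough rendering has a hard deadline while application content is merely best-effort.

```python
FRAME_BUDGET_MS = 12.0  # the photon-to-photon threshold Apple cites

def render_frame(camera_capture_ms: float, app_layer) -> str:
    """Conceptual sketch: passthrough is deadline-bound (the 'real-time' path),
    while the app layer gets whatever budget remains and may miss a frame."""
    remaining_ms = FRAME_BUDGET_MS - camera_capture_ms  # ~4ms if capture takes ~8ms
    passthrough = "outside world rendered"  # always happens, within the deadline
    try:
        app_content = app_layer(remaining_ms)
    except TimeoutError:
        app_content = "last good app frame reused"  # UI can lag; reality cannot
    return passthrough + " + " + app_content

def toy_app_layer(budget_ms: float) -> str:
    # Stand-in for the time-sharing (iOS-like) side; pretend it needs 2ms minimum.
    if budget_ms < 2.0:
        raise TimeoutError
    return "app windows composited"

print(render_frame(camera_capture_ms=8.0, app_layer=toy_app_layer))
```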

This is, needless to say, the most meaningful manifestation yet of Apple’s ability to integrate hardware and software: while previously that integration manifested itself in a better user experience in the case of a smartphone, or a seemingly impossible combination of power and efficiency in the case of Apple Silicon laptops, in this case that integration makes possible the melding of VR and AR into a single Vision.

Mirrorless and Mixed Reality

In the early years of digital cameras there was a bifurcation between consumer cameras that were fully digital, and high-end cameras that had a digital sensor behind a traditional reflex mirror that pushed actual light to an optical viewfinder. Then, in 2008, Panasonic released the G1, the first-ever mirrorless camera with an interchangeable lens system. The G1 had a viewfinder, but the viewfinder was in fact a screen.

This system was, at the beginning, dismissed by most high-end camera users: sure, a mirrorless system allowed for a simpler and smaller design, but there was no way a screen could ever compare to actually looking through the lens of the camera like you could with a reflex mirror. Fast forward to today, though, and nearly every camera on the market, including professional ones, is mirrorless: not only did those tiny screens get a lot better, brighter, and faster, but they also brought many advantages of their own, including the ability to see exactly what a photo would look like before you took it.

Mirrorless cameras were exactly what popped into my mind when the Vision Pro launched into that default screen I noted above, where I could effortlessly see my surroundings. The field of view was a bit limited on the edges, but when I actually brought up the application launcher, or was using an app or watching a video, the field of vision relative to an AR experience like a HoloLens was positively astronomical. In other words, by making the experience all digital, the Vision Pro delivers an actually useful AR experience that makes the still massive technical challenges facing true AR seem irrelevant.

The payoff is the ability to then layer in digital experiences into your real-life environment: this can include productivity applications, photos and movies, conference calls, and whatever else developers might come up with, all of which can be used without losing your sense of place in the real world. To just take one small example, while using the Vision Pro, my phone kept buzzing with notifications; I simply took the phone out of my pocket, opened control center, and turned on do-not-disturb. What was remarkable only in retrospect is that I did all of that while technically being closed off to the world in virtual reality, but my experience was of simply glancing at the phone in my hand without even thinking about it.

Making everything digital pays off in other ways, as well; the demo included a dinosaur experience, where the dinosaur seems to enter the room.

The whole reason this works is that, while the room feels real, it is in fact rendered digitally.

It remains to be seen how well this experience works in reverse: the Vision Pro includes “EyeSight”, which is Apple’s name for the front-facing display that shows your eyes to those around you. EyeSight wasn’t a part of the demo, so I can’t yet say whether it is as creepy as it seems it might be; the goal, though, is the same: maintain a sense of place in the real world not by solving seemingly-impossible physics problems, but by simply making everything digital.

The User Interface

That the user’s eyes can be displayed on the outside of the Vision Pro is arguably a by-product of the technology that undergirds the Vision Pro’s user interface: what you are looking at is tracked by the Vision Pro, and when you want to take action on whatever you are looking at you simply touch your fingers together. Notably, your fingers don’t need to be extended into space: the entire time I used the Vision Pro my hands were simply resting in my lap, their movement tracked by the Vision Pro’s cameras.

It’s astounding how well this works, and how natural it feels. What is particularly surprising is how high-resolution this UI is; look at this crop of a still from Apple’s presentation:

The Photos app in visionOS

The bar at the bottom of Photos is how you “grab” Photos to move it anywhere (literally); the small circle next to the bar is to close the app. On the left are various menu items unique to Photos. What is notable about these is how small they are: this isn’t a user interface like iOS or iPadOS that has to accommodate big blunt fingers; rather, visionOS’s eye tracking is so accurate that it can easily delineate the exact user interface element you are looking at, which, again, you trigger by simply touching your fingers together. It’s extraordinary, and works extraordinarily well.
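Mechanically, the model is gaze for targeting and pinch for activation. A minimal sketch of the idea (the element names, coordinates, and hit radii are mine, purely for illustration):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class UIElement:
    name: str
    x: float
    y: float
    radius: float  # targets can be small because the eye tracking is precise

def element_under_gaze(gaze_x: float, gaze_y: float,
                       elements: list[UIElement]) -> Optional[UIElement]:
    """Return the element the user is currently looking at, if any."""
    for el in elements:
        if (gaze_x - el.x) ** 2 + (gaze_y - el.y) ** 2 <= el.radius ** 2:
            return el
    return None

def on_pinch(gaze_x: float, gaze_y: float, elements: list[UIElement]) -> None:
    # The pinch gesture carries no position of its own; the target is simply
    # whatever the eyes are pointed at when the fingers touch.
    target = element_under_gaze(gaze_x, gaze_y, elements)
    if target is not None:
        print(f"activated {target.name}")

on_pinch(0.51, 0.52, [UIElement("close-app", 0.50, 0.50, 0.03)])
```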

Of course you can also use a keyboard and trackpad, connected via Bluetooth, and you can also project a Mac into the Vision Pro; the full version of the above screenshot has a Mac running Final Cut Pro to the left of Photos:

macOS in visionOS

I didn’t get the chance to try the Mac projection, but truthfully, while I went into this keynote the most excited about this capability, the native interface worked so well that I suspect I am going to prefer using native apps, even if those apps are also available for the Mac.

The Vision Aspiration

The Vision Pro as Novelty Device

An incredible product is one thing; the question on everyone’s mind, though, is what exactly is this useful for? Who has room for another device in their life, particularly one that costs $3,499?

This question is, more often than not, more important to the success of a product than the quality of the product itself. Apple’s own history of new products is an excellent example:

  • The PC (including the Mac) brought computing to the masses for the first time; there was a massive amount of greenfield in people’s lives, and the product category was a massive success.
  • The iPhone expanded computing from the desktop to every other part of a person’s life. It turns out that was an even larger opportunity than the desktop, and the product category was an even larger success.
  • The iPad, in contrast to the Mac and iPhone, sort of sat in the middle, a fact that Steve Jobs noted when he introduced the product in 2010:

All of us use laptops and smartphones now. Everybody uses a laptop and/or a smartphone. And the question has arisen lately, is there room for a third category of device in the middle? Something that’s between a laptop and a smartphone. And of course we’ve pondered this question for years as well. The bar is pretty high. In order to create a new category of devices those devices are going to have to be far better at doing some key tasks. They’re going to have to be far better at doing some really important things, better than laptop, better than the smartphone.

Jobs went on to list a number of things he thought the iPad might be better at, including web browsing, email, viewing photos, watching videos, listening to music, playing games, and reading eBooks.

Steve Jobs introducing the iPad

In truth, the only one of those categories that has truly taken off is watching video, particularly streaming services. That’s a pretty significant use case, to be sure, and the iPad is a successful product (and one whose potential use cases have been dramatically expanded by the Apple Pencil) that makes nearly as much revenue as the Mac, even though it dominates the tablet market to a much greater extent than the Mac does the PC market. At the same time, it’s not close to the iPhone, which makes sense: the iPad is a nice addition to one’s device collection, whereas an iPhone is essential.

The critics are right that this will be Apple Vision’s challenge at the beginning: a lot of early buyers will probably be interested in the novelty value, or will be Apple super fans, and it’s reasonable to wonder if the Vision Pro might become the world’s most expensive paperweight. To use an updated version of Jobs’ slide:

The Vision Pro as novelty item

Small wonder that Apple has reportedly pared its sales estimates to less than a million devices.

The Vision Pro and Productivity

As I noted above, I have been relatively optimistic about VR, in part because I believe the most compelling use case is for work. First, if a device actually makes someone more productive, it is far easier to justify the cost. Second, while it is a barrier to actually put on a headset — to go back to my VR/AR framing above, a headset is a destination device — work is a destination. I wrote in another Update in the context of Meta’s Horizon Workrooms:

The point of invoking the changes wrought by COVID, though, was to note that work is a destination, and it’s a destination that occupies a huge amount of our time. Of course when I wrote that skeptical article in 2018 a work destination was, for the vast majority of people, a physical space; suddenly, though, for millions of white collar workers in particular, it’s a virtual space. And, if work is already a virtual space, then suddenly virtual reality seems far more compelling. In other words, virtual reality may be much more important than previously thought because the vector by which it will become pervasive is not the consumer space (and gaming), but rather the enterprise space, particularly meetings.

Apple did discuss meetings in the Vision Pro, including a framework for personas — their word for avatars — that is used for FaceTime and will be incorporated into upcoming Zoom, Teams, and Webex apps. What is much more compelling to me, though, is simply using a Vision Pro instead of a Mac (or in conjunction with one, by projecting the screen).

At the risk of over-indexing on my own experience, I am a huge fan of multiple monitors: I have four at my desk, and it is frustrating to be on the road right now typing this on a laptop screen. I would absolutely pay for a device to have a huge workspace with me anywhere I go, and while I will reserve judgment until I actually use a Vision Pro, I could see it being better at my desk as well.

I have tried this with the Quest, but the screen is too low-resolution to work comfortably, the user interface is a bit clunky, and the immersion is too complete: it’s hard to even drink coffee with it on. Oh, and the battery life isn’t nearly good enough. Vision Pro, though, solves all of these problems: the resolution is excellent, I already raved about the user interface, and critically, you can still see around you and interact with objects and people. Moreover, this is where the external battery solution is an advantage, given that you can easily plug the battery pack into a charger and use the headset all day (and, assuming Apple’s real-time rendering holds up, you won’t get motion sickness).1

Again, I’m already biased on this point, given both my prediction and personal workflow, but if the Vision Pro is a success, I think that an important part of its market will be, at first, use alongside a Mac and, as the native app ecosystem develops, use in place of one.

The Vision Pro as productivity device

To put it even more strongly, the Vision Pro is, I suspect, the future of the Mac.

Vision and the iPad

The larger Vision Pro opportunity is to move in on the iPad and to become the ultimate consumption device:

The Vision Pro as consumption device

The keynote highlighted the movie watching experience of the Vision Pro, and it is excellent and immersive. Of course it isn’t, in the end, that much different than having an excellent TV in a dark room.

What was much more compelling were a series of immersive video experiences that Apple did not show in the keynote. The most striking to me were, unsurprisingly, sports. There was one clip of an NBA basketball game that was incredibly realistic: the game clip was shot from the baseline, and as someone who has had the good fortune to sit courtside, it felt exactly the same, and, it must be said, much more immersive than similar experiences on the Quest.

It turns out that one reason for the immersion is that Apple actually created its own cameras to capture the game using its new Apple Immersive Video Format. The company was fairly mum about how it planned to make those cameras and its format more widely available, but I am completely serious when I say that I would pay the NBA thousands of dollars to get a season pass to watch games captured in this way. Yes, that’s a crazy statement to make, but courtside seats cost that much or more, and that 10-second clip was shockingly close to the real thing.

What is fascinating is that such a season pass should, in my estimation, look very different from a traditional TV broadcast, what with its multiple camera angles, announcers, scoreboard slug, etc. I wouldn’t want any of that: if I want to see the score, I can simply look up at the scoreboard as if I’m in the stadium; the sounds are provided by the crowd and PA announcer. To put it another way, the Apple Immersive Video Format, to a far greater extent than I thought possible, truly makes you feel like you are in a different place.

Again, though, this was a 10-second clip (there was another one of a baseball game, shot from the home team’s dugout, that was equally compelling). There is a major chicken-and-egg issue in terms of producing content that actually delivers this experience, which is probably why the keynote mostly focused on 2D video. That, by extension, makes it harder to justify buying a Vision Pro for consumption purposes. The experience is so compelling, though, that I suspect this problem will be solved eventually, at which point the addressable market isn’t just the Mac, but also the iPad.

What is left in place in this vision is the iPhone: I think that the smartphone is the pinnacle of computing, which is to say that the Vision Pro makes sense everywhere the iPhone doesn’t.

The Vision Critique

I recognize how absurdly positive and optimistic this Article is about the Vision Pro, but it really does feel like the future. That future, though, is going to take time: I suspect there will be a slow burn, particularly when it comes to replacing product categories like the Mac or especially the iPad.

Moreover, I didn’t even get into one of the features Apple is touting most highly, which is the ability of the Vision Pro to take “pictures” — memories, really — of moments in time and render them in a way that feels incredibly intimate and vivid.

One of the issues is the fact that recording those memories does, for now, entail wearing the Vision Pro in the first place, which is going to be really awkward! Consider this video of a girl’s birthday party:

It’s going to seem pretty weird when dad is wearing a headset as his daughter blows out birthday candles; perhaps this problem will be fixed by a separate line of standalone cameras that capture photos in the Apple Immersive Video Format, which is another way to say that this is a bit of a chicken-and-egg problem.

What was far more striking, though, was how the consumption of this video was presented in the keynote:

Note the empty house: what happened to the kids? Indeed, Apple actually went back to this clip while summarizing the keynote, and the line “for reliving memories” struck me as incredibly sad:

I’ll be honest: what this looked like to me was a divorced dad, alone at home with his Vision Pro, perhaps because his wife was irritated at the extent to which he got lost in his own virtual experience. That certainly puts a different spin on Apple’s proud declaration that the Vision Pro is “The Most Advanced Personal Electronics Device Ever”.

The most personal electronics device ever

Indeed, this, even more than the iPhone, is the true personal computer. Yes, there are affordances like mixed reality and EyeSight to interact with those around you, but at the end of the day the Vision Pro is a solitary experience.

That, though, is the trend: long-time readers know that I have long bemoaned that it was the desktop computer that was christened the “personal” computer, given that the iPhone is much more personal; now even the iPhone has been eclipsed. The arc of technology, in large part led by Apple, is towards ever more personal experiences, and I’m not sure it’s an accident that that trend is happening at the same time as a society-wide trend away from family formation and towards increasing loneliness.

This, I would note, is where the most interesting comparisons to Meta’s Quest efforts lie. The unfortunate reality for Meta is that they seem completely out-classed on the hardware front. Yes, Apple is working with a 7x advantage in price, which certainly contributes to things like superior resolution, but the deep integration between Apple’s own silicon and its custom-made operating system is going to be very difficult to replicate for a company that has (correctly) committed to an Android-based OS and a Qualcomm-designed chip.

What is more striking, though, is the extent to which Apple is leaning into a personal computing experience, whereas Meta, as you would expect, is focused on social. I do think that presence is a real thing, and incredibly compelling, but achieving presence depends on your network also having VR devices, which makes Meta’s goals that much more difficult to achieve. Apple, meanwhile, isn’t even bothering with presence: even its FaceTime integration was with an avatar in a window, leaning into the fact that you are apart, whereas Meta wants you to feel like you are together.

In other words, there is actually a reason to hope that Meta might win: it seems like we could all do with more connectedness, and less isolation with incredible immersive experiences to dull the pain of loneliness. One wonders, though, if Meta is in fact fighting Apple not just on hardware, but on the overall trend of society; to put it another way, bullishness about the Vision Pro may in fact be a function of being bearish about our capability to meaningfully connect.

I wrote a follow-up to this Article in this Daily Update.


  1. You can also use a Quest that is plugged in, but it’s not really designed to have a cord sticking out of it at a right angle.

Windows and the AI Platform Shift

Microsoft’s Build developer conference has a bit of an odd history, which I recounted in a 2016 Update: the conference was born in 2011 as a showcase for a completely new approach to Windows, but by its second iteration it had already become a symbol of corporate infighting and dysfunction. The next three iterations were mostly forgettable in their focus on Windows and Windows Phone. The turning point came in 2017; I wrote in another Update:

Last week was Microsoft’s annual Build developer conference, and as usual, there were two keynotes over two days. What was interesting, and, I think, telling, was the order: for the first six years of the conference the first day’s keynote was dedicated to Windows and other consumer-facing products; day two was for Azure and Office 365. This year, though, the order was the opposite: Wednesday’s keynote was not only about Azure and Office 365, the first 30 minutes in particular were a genuinely compelling statement of vision by CEO Satya Nadella that, much like the schedule, put Windows firmly in the backseat.

This was a step in The End of Windows, which I wrote about a year later: Nadella’s greatest achievement as CEO was transforming Microsoft’s culture away from its Windows-centricity, which, it should be noted, existed for a very good reason. From the conclusion:

It’s important to note that Windows persisted as the linchpin of Microsoft’s strategy for over three decades for a very good reason: it made everything the company did possible. Windows had the ecosystem and the lock-in, and provided the foundation for Office and Windows Server, both of which were built with the assumption of Windows at the center.

Office 365 and Azure are comparatively weaker strategically: Office 365 has document lock-in, but the exact same forces that weakened Windows in the first place weaken the idea of documents as well. It’s not clear why new companies in particular would even care. Azure, meanwhile, is chasing AWS, with a huge amount of business coming from Linux VMs that could run anywhere.

Unsurprisingly, both are still benefiting from Windows: Office 365 really does, as Nadella noted in his retreat, work better on Windows, and vice versa; it is seamless for organizations that have been using Office for years to move to Office 365. Azure’s biggest advantage, meanwhile, is that it allows for hybrid deployments, where workloads are split between legacy on-premise Windows servers and Azure’s public cloud; that legacy was built on Windows.

This, then, is Nadella’s next challenge: to understand that Windows is not and will not drive future growth is one thing; identifying future drivers of said growth is another. Even in its decline Windows remains the best thing Microsoft has going — it had such a powerful hold on Microsoft’s culture precisely because it was so successful.

That 2017 Build talked a lot about the “Intelligent Edge”; it was in 2018 that the vision of Microsoft Teams as Microsoft’s cloud OS started to appear. In 2019 Nadella’s keynote (and Stratechery Interview) was about being a platform company, with manifestations through the Power Platform, Microsoft 365, and Gaming (i.e. not Windows), a theme that continued over the last few years. The keynotes were all pretty good — Nadella has always been very effective at laying down an overarching vision that ties all of the announcements together — but there was always that missing piece: why would new customers or new companies ever get started with Microsoft in the first place?

Build 2023

Nadella had a different spring in his step at yesterday’s Build keynote; after greeting developers (yay for in-person keynotes!), this was his opening line:

You know these developer conferences are special times, special places to be, especially when platform shifts are in the air.

That was followed by a brief overview of the history of computing that placed AI as the continuation of a singular trend, and yet a step-change:

Just to put this in perspective, last summer I was reading Mitchell Waldrop’s Dream Machine while I was playing with DV3, as GPT-4 was called then, and it just brought into perspective what this is all about. I think that concept of “Dream Machine” perhaps best communicates what we have really been doing over the last 70 years. All the way starting with what Vannevar Bush wrote in his most seminal paper, “As We May Think”, where he had all of these concepts like associated memory, or Licklider, who was the first one to conceptualize the human-computer symbiosis. The Mother of All Demos that came in ’68, to the Xerox Alto, and then, of course, the PDC that I attended, which was the PC Server one in ’91. ’93 is when we had the Mosaic moment, and then there was the iPhone and the Cloud, and all of these would be one continuous journey.

The other thing I’ve always loved is Jobs’ description of computers as “bicycles for the mind”; it’s sort of a beautiful metaphor that I think captures the essence of what computing is. But then last November we got an upgrade: we went from the bicycle to the steam engine with the launch of ChatGPT. It was like the Mosaic moment for this generation of the AI platform. Now we look forward as developers to what we can do going forward. So it’s an exciting time.

It’s obvious why Microsoft would want this to be a moment. AI, specifically Microsoft’s partnership with OpenAI (which now extends to plug-in compatibility between Bing and ChatGPT, and Bing results incorporated into ChatGPT), is exactly what Microsoft has been searching for since The End of Windows: a reason to move to the Microsoft ecosystem.

AI and Microsoft Customer Acquisition

I’ve already discussed why AI is so compelling for Microsoft in terms of their productivity apps, and why startups should feel threatened:

Silicon Valley needs to rediscover its Microsoft fear, and Business Chat gets at why. Make no mistake, the Copilots are impressive, although it is reasonable to expect that Google Workspace’s implementation will be at least comparable. The problem with the Workspace + vertical SaaS app stack, though, is that none of it is designed to work together. I’ve been arguing for years that this is an underrated reason why Teams beat Slack; from 2020:

This is where Teams thrives: if you fully commit to the Microsoft ecosystem, one app combines your contacts, conversations, phone calls, access to files, 3rd-party applications, in a way that “just works”…This is what Slack — and Silicon Valley, generally — failed to understand about Microsoft’s competitive advantage: the company doesn’t win just because it bundles, or because it has a superior ground game. By virtue of doing everything, even if mediocrely, the company is providing a whole that is greater than the sum of its parts, particularly for the non-tech workers that are in fact most of the market. Slack may have infused its chat client with love, but chatting is a means to an end, and Microsoft often seems like the only enterprise company that understands that.

Business Chat takes this integration advantage and combines it with a far more compelling UI: you can simply ask for information about any project or customer or whatever else you can think of, and Business Chat can find whatever is relevant and give you an answer (with citations) — as long as the content in question is in the so-called “Microsoft Graph”. That right there is the threat: it’s easy to see how this demo will impress CIOs eager to save money both in terms of productivity and also software; now Microsoft can emphasize that the results will be that much better the more Microsoft tools you use, from CRM to note-taking to communications (and to the extent that they open up Business Chat, it will be the responsibility of any vertical SaaS company to fit into the box Microsoft provides them).

In short, Microsoft has always had the vision for integration of business software; only over the last few years has it actually had an implementation that made sense in the cloud. Now, though, Microsoft has an actual reason-to-switch that is very tangible and that no one, other than Google, can potentially compete with — and even if Google actually ships something, the last decade of neglect in terms of building an alternative to the Microsoft Graph concept means that any competitor to Business Chat will be significantly behind.

All of this was on display during Nadella’s keynote, including a new data lake offering in Microsoft Fabric, with a Copilot of course, and an entire Copilot stack for developers to build on.

What was more surprising — not so much in its existence, but rather my reaction to it — was the previously exiled Windows; here was the demo video Nadella played for Windows Copilot:

Now this is obviously a demo video, so Copilot is almost certainly being shown in its best light (and it was odd that the live demo a few minutes later basically recreated the video with the exact same examples). The part I was surprised about, though, was actually Nadella’s introduction to the video:

Next, we’re bringing the Copilot to the biggest canvas of all: Windows. You are going to hear a lot from Panos tomorrow about it, but I think that this is going to make every user a power user of Windows.

This was a big disappointment! I didn’t want to wait until tomorrow (later today, as you read this); I wanted the Windows talk right now.

This may seem like a small thing, but remember, it was only six years ago that I applauded Nadella for successfully demoting Windows to the Day 2 keynote; now I want the Windows talk front-and-center. The difference, though, is that this excitement is not based on preserving Windows centrality, but rather the possibilities in terms of manifesting this new paradigm in as many places as possible. To use Nadella’s terms, Windows is now a canvas for AI, not the director of the show.

Apple and the AI Shift

Go back to the “Pursuit of the Dream Machine” slide in the video above, particularly the bottom row:

Satya Nadella's slide about the evolution of the "Dream Machine"

The PC and Server era was the Windows era, when Microsoft was at its peak; the World Wide Web era started the long decline of Windows API lock-in; that dissolution of lock-in reached its nadir with the iPhone and Cloud era, when Microsoft had to go out of its way to fit in with someone else’s platform, and end Windows’ centrality to the company.

You can tell this same story in reverse, from Apple’s perspective: the PC and Server era was the Mac-in-the-corner era; yes, it was nice to use, particularly for design, but there were fewer programs and challenging compatibility issues. The Internet made it easier to own a Mac, particularly with the rise of web apps; the iPhone, meanwhile, was a completely new paradigm that, crucially, was driven by consumers, not enterprises. That was when Apple truly became dominant, and exerted total control over the associated ecosystem — including over Microsoft.

What, though, if AI is the platform shift Nadella thinks it is? It’s already compelling enough that I can’t wait for a keynote about Windows, for the first time in over a decade. At the same time, I have much lower expectations for Apple’s developer conference next month, at least as far as AI is concerned.1 Of course, given Apple’s secrecy, it’s possible a “Copilot”-type product is in the works, but that seems unlikely given that most of the smoke is centered around the company’s long-rumored headset announcement.

I am, to be clear, quite excited about Apple’s headset; yes, the rumor is that it will cost $3,000, with a bill of materials that runs to around $1,500, but I think that is a smart move: costs always come down over time, while delivering a compelling experience for a brand new product category should take priority. Still, even $1,500 in hardware could very well be let down by software, particularly Siri. The Information reported last month:

Inside Apple, Siri remains widely derided for its lack of functionality and improvements since Giannandrea took over, say multiple former Siri employees. For example, the team building Apple’s mixed-reality headset, including its leader Mike Rockwell, has expressed disappointment in the demonstrations the Siri team created to showcase how the voice assistant could control the headset, according to two people familiar with the matter. At one point, Rockwell’s team considered building alternative methods for controlling the device using voice commands, the people said (the headset team ultimately ditched that idea).

Apple’s dominance of the smartphone era, the overall experience of which is defined by software quality, hardware excellence, and a superior ecosystem, hasn’t been undermined by Siri’s disappointing performance. And, to the extent a headset era is beginning, it’s reasonable to expect that Apple’s usual advantages, particularly in terms of performance and industrial design, will be major factors. Moreover, even if Apple doesn’t announce major LLM-based features at this year’s developer conference, the smartphone — and by extension, the iPhone — isn’t going anywhere anytime soon.

Still, the very fact that Windows is suddenly interesting again, while a new Apple product faces a major software question, is evidence for Nadella’s argument that AI is a platform shift, and for the first time in a long time it is Microsoft that actually has a clear path to not just leveraging its base but actually expanding it.

Apple, meanwhile, still dominates the platforms where AI will be used for the foreseeable future — ChatGPT released its app on the iPhone first, after all — but then again, Windows was still the dominant platform for the first decade-and-a-half of the Internet. Ultimately, though, the Internet eroded Windows’ dominance and set the stage for the smartphone; surely Apple knows it ought not risk a similar erosion of differentiation at the hands of AI, particularly as it courageously builds products beyond the iPhone.


  1. I am optimistic that there will be an announcement about embracing open source models.