Disney’s Taylor Swift Era

Bill Simmons had a quick aside about Taylor Swift on the most recent episode of his eponymous podcast.

I have never seen anything like the phenomenon around this concert tour, and I have been alive for all the concert tours since the mid-70s…from a cultural standpoint, from a multi-generation standpoint: fathers and daughters, daughters and moms, multiple generations. You have people like my daughter who is 18, who has not even known life without a Taylor Swift song, and then you have people in their 30s who grew up with her, and then you have the moms who are used to listening with the daughters, and then the show itself: I had friends who went to the first show, and I think she played 45 songs — it was 3+ hours. This is like Michael Jordan shit, whatever is happening with her…

She’s sold out 6 straight shows here in Los Angeles; I’ve been here for 21 years and I can’t remember anything as important as these Taylor Swift tickets, just being in the building for that. People coming from all parts of California to go, it’s really something. This is the summer of Taylor.

There is much to say about the summer of Taylor that is pertinent to technology and the Internet: start with the desire for communal in-person experiences driven not only by the pandemic, but also the general fracturing of culture inherent in a media landscape where personalized content delivered directly to your personal device is the norm. Then there is the way in which social media acts as a FOMO1 generator: being able to access every moment of every show on social media doesn’t decrease the value of attending the show, but rather increases the desire to obtain a scarce number of tickets to see the spectacle in person. And, it must be said, there is the excellence at play: not only does Swift have an incredible catalog of popular songs, the show itself spares no expense — or exertion on Swift’s part — to give the fans exactly what they were hoping for.

What made the final LA show (which I had the good fortune of attending with my daughter) unique, though, was the announcement of 1989 (Taylor’s Version). It wasn’t exactly a secret that the announcement was coming on 8/9, and obviously Swift’s ongoing project to re-release her old albums is well-documented at this point. What is surprising is just how much people care: Speak Now (Taylor’s Version), which was announced earlier in the tour, hit number one on the Billboard charts, giving Swift more number one albums than any woman in history; it seems inevitable that 1989 (Taylor’s Version) will be lucky number 13 for her career.

Taylor’s Versions

What is striking about the popularity of these re-releases is that it is the latest manifestation of Swift’s insistence that the opportunities for musicians are greater than ever. I had never really listened to Swift’s music when she wrote an editorial in the Wall Street Journal in 2014 entitled For Taylor Swift, the Future of Music is a Love Story:

Where will the music industry be in 20 years, 30 years, 50 years?

Before I tell you my thoughts on the matter, you should know that you’re reading the opinion of an enthusiastic optimist: one of the few living souls in the music industry who still believes that the music industry is not dying…it’s just coming alive.

There are many (many) people who predict the downfall of music sales and the irrelevancy of the album as an economic entity. I am not one of them. In my opinion, the value of an album is, and will continue to be, based on the amount of heart and soul an artist has bled into a body of work, and the financial value that artists (and their labels) place on their music when it goes out into the marketplace. Piracy, file sharing and streaming have shrunk the numbers of paid album sales drastically, and every artist has handled this blow differently.

In recent years, you’ve probably read the articles about major recording artists who have decided to practically give their music away, for this promotion or that exclusive deal. My hope for the future, not just in the music industry, but in every young girl I meet…is that they all realize their worth and ask for it.

Music is art, and art is important and rare. Important, rare things are valuable. Valuable things should be paid for. It’s my opinion that music should not be free, and my prediction is that individual artists and their labels will someday decide what an album’s price point is. I hope they don’t underestimate themselves or undervalue their art.

I had two reactions to this editorial. The first echoed Nilay Patel’s take that Taylor Swift doesn’t understand supply and demand:

This might make sense if you’re Taylor Swift and your enormous army of fans will pre-order anything you tell them to, but the most important lesson of the Internet music revolution is that the vast majority of consumers actually reward convenience. That’s why the iPod was a huge hit even though digitally-compressed music sounded terrible at the time, and it’s why teenagers today get most of their music on YouTube, even though YouTube sounds worse still. It’s also why the album is dead: you can’t sell a handful of singles and some okayish filler songs to people for $10 or $15 or $25 anymore, because convenient Internet music distribution has utterly destroyed the need to bundle everything together. You can just Google the singles and listen to them for free…

The single hardest economic problem posed by the internet is the end of scarcity. Even just 10 years ago, most people experienced a one-to-one relationship between creative works and the physical objects they were delivered on: your music came on CDs, your movies came on DVDs, and your news came on printed magazines and newspapers. Since there’s a scarce number of these objects in the world, it’s easy to buy and sell them, because their prices will follow the laws of supply and demand: limited-edition vinyl records will be more expensive than regular CDs, because there are simply fewer of them. If you wanted a CD full of songs in 1995, you went to a store and paid for them, because there was essentially no other way to get those songs. Even if you wanted to steal the music, you had to pay some price: you needed to have a friend with the right CD, and you needed time and blank CDs to make a copy.

But on the internet, there’s no scarcity: there’s an endless amount of everything available to everyone. The laws of supply and demand don’t work terribly well when there’s infinite supply. Swift is right that “important, rare things are valuable,” but she’s failed to understand that the idea of rarity simply doesn’t exist in the digital marketplace.

All fair points! Swift, though, persisted: later that year she pulled her music from Spotify, which meant fans had to actually buy the original 1989 when it came out in October (it was my first Swift purchase); what struck me about this move, in conjunction with the editorial, was that this was, from a certain perspective, a gift she was giving her fans. From an Update in November 2014:

Swift has long since proved herself a master at building a connection with fans, engaging on social media, customizing her concerts, and spilling her secrets through her songs. And, for 1989 she took things to a new level. What I think Swift has realized, though, is that reaching out to your fans is not enough: it has to be reciprocal: what selling an album for actual cash money does is give people a way to commit. They are quite literally giving Swift something valuable in exchange for her work…

The problem with Spotify is that at a very fundamental level it treats music as a commodity. You can’t choose where your $10/month goes based on the emotional impact of a song; the hot new hit by the artist you’ll never hear from again is treated exactly the same as the artist that was with you when you needed him or her the most. It cheapens the connection, not by withholding money per se, but by denying the commitment inherent in an explicit purchase…

And so, Swift has put up something of a door: access to her costs fans at least $12.99. Here’s the thing about doors though: while they keep people out, they also keep them in. Simply by virtue of having paid money directly to Swift for Swift’s music that album is already more meaningful to its 1 million buyers than the exact same music would have been were it listened to via Spotify’s all-you-can-eat subscription (or free with ads!), no matter how much money Ek and team pay out.

A few months later, though, 1989 was on Spotify; every album since then has been as well, and 1989 (Taylor’s Version) will be too. Something funny will happen, though, when 1989 (Taylor’s Version) is released: streams of 1989 will plummet, while 1989 (Taylor’s Version) will shoot to the top of the charts; the realities of music are such that not even Swift can hold out on streaming,2 but she has still given her fans a way to reciprocate the relationship by making the conscious decision to only listen to (Taylor’s Version)s.

Still, the value of a streaming choice only goes so far; Patel’s economic argument was right, after all. The real money for Swift comes from the concerts, with the Eras Tour set to be the first tour to gross $1 billion; physical scarcity is still the best way for a creator to capture value.

Disney’s Earnings

Last week Disney reported earnings, including a 23% decline in profit in its traditional TV business; that was more than made up for by an 11% increase in profit in its parks, experiences, and products segment, which accounted for 68% of Disney’s profit. Disney’s theme parks and cruises have always been an essential part of the Disney model; from a 2017 Update:

The answer reminded me of this famous chart Walt Disney created to show how the Disney business worked:

Walt Disney's Disney Map

At the center, of course, are the Disney Studios, and rightly so. Not only does differentiated content drive movie theater revenue, it creates the universes and characters that earn TV licensing revenue, music recording revenue, and merchandise sales.

What has always made Disney unique, though, is Disneyland: there the differentiated content comes to life, and, given the lack of an arrow, I suspect not even Walt Disney himself appreciated the extent to which theme parks and the connection with the customer they engendered drive the rest of the business. “Disney” is just as much of a brand as is Mickey Mouse or Buzz Lightyear, with stores, a cable channel, and a reason to watch a movie even if you know nothing about it.

It was the theme park angle that made me excited about Disney+; I wrote in 2019:

This is the only appropriate context in which to think about Disney+. While obviously Disney+ will compete with Netflix for consumer attention, the goals of the two services are very different: for Netflix, streaming is its entire business, the sole driver of revenue and profit. Disney, meanwhile, obviously plans for Disney+ to be profitable — the company projects that the service will achieve profitability in 2024, and that includes transfer payments to Disney’s studios — but the larger project is Disney itself.

By controlling distribution of its content and going direct-to-consumer, Disney can deepen its already strong connections with customers in a way that benefits all parts of the business: movies can beget original content on Disney+ which begets new attractions at theme parks which begets merchandising opportunities which begets new movies, all building on each other like a cinematic universe in real life. Indeed, it is a testament to just how lucrative the traditional TV model is that it took so long for Disney to shift to this approach: it is a far better fit for their business in the long run than simply spreading content around to the highest bidder.

I think, in retrospect, that this was an example of my falling in love with elegance and spending insufficient time in spreadsheets: Walt Disney’s chart may have been a satisfying business model, but the reality of Disney’s TV business is that it was scalable in a way that chart never could be. The beauty of the cable bundle is that nearly every household in America paid for it every single month, regardless of whether or not Disney had a hit TV show or a must-watch sporting event; thanks to its suite of channels, anchored by ESPN, Disney received a big chunk of that money, and it grew like clockwork. In that world Walt Disney’s model was a nice side business to the real money-maker — that’s a pretty good reason for Disney to have held on to that model as long as they did.

In fact, you can very much make the case that Disney and all of its peers ought to have held on longer: yes, streaming — i.e. Netflix — leveraged the Internet for distribution of video, but that didn’t mean that Disney and Time Warner and Paramount and all of the rest had to. Those Netflix multiples, though, which far exceeded anyone else’s in Hollywood, were too tempting, and soon enough everyone was putting their best content on streaming services, leaving the cable bundle to wither.

Disney’s Taylor Swift Era

The end result is the inversion you see in Disney’s recent results. Disney is, from this point forward, not much different than Taylor Swift: sure, there is money to be made (hopefully) in areas like streaming, but the real durable value and outsized profits will come from real-life experiences. This is, to be sure, a good business, but it has its limits: it is remarkable that Swift performed six shows in seven nights in Los Angeles, but it was still only six shows; concerts don’t scale like CD sales used to. Disney, similarly, only has so many theme parks, which can only accommodate so many people, and operating those theme parks takes significant ongoing resources.

It’s interesting, then, to observe how differently Swift and Disney are perceived at this moment in time: I opened with Simmons analogizing Swift to Jordan, and I think it’s a fair comparison; the reality of the fractured world wrought by the Internet is that any star who can emerge from the noise becomes bigger than anything we have seen before, from hunger for a unifying experience if nothing else, and admission to that experience becomes valuable through unprecedented demand combined with physically limited supply.

That limitation, though, implies a lack of scale, which means that Swift is as big as she will ever be; that’s ok, because it’s bigger than anyone has ever been. Disney, meanwhile, may have its own physical experiences, made valuable by their scarcity, but they will never be as valuable as owning distribution once was. The best thing Iger can do now is move the company on from the heights it once reached; maybe someday Disney and its investors will forget that those outsized profits ever existed.


  1. Fear Of Missing Out 

  2. It is worth noting that Swift’s fans still bought over 1 million physical copies of her most recent album

Hollywood on Strike

From the New York Times:

The Screen Actors Guild today announced that it will strike at 12:01 A.M. Friday against the multi million-dollar television entertainment production business. Union and employer sources held out very little hope that the work stoppage might be averted. Pessimism was heightened by the fact that representatives of both sides could see no grounds, on the basis of unofficial discussions held in recent days, for the resumption of formal contract negotiations, which were broken off July 13.

John L. Dales, national executive secretary of the guild, said the principal point at issue was the refusal of the producers “to agree to make any residual payment whatsoever to actors for the second run of a video film.” Under terms of the original contract negotiated three years ago, the performers receive additional pay on a percentage basis of salary minimums, starting with the third showing of a film and continuing through the sixth. That contract expired Wednesday.

Oh, I’m sorry — that’s an article from 63 years ago.

The 1960 edition of the New York Times about the Hollywood strike

Here’s the one in the New York Times from last week:

The Hollywood actors’ union approved a strike on Thursday for the first time in 43 years, bringing the $134 billion American movie and television business to a halt over anger about pay and fears of a tech-dominated future. The leaders of SAG-AFTRA, the union representing 160,000 television and movie actors, announced the strike after negotiations with studios over a new contract collapsed, with streaming services and artificial intelligence at the center of the standoff. On Friday, the actors will join screenwriters, who walked off the job in May, on picket lines in New York, Los Angeles and the dozens of other American cities where scripted shows and movies are made.

Actors and screenwriters had not been on strike at the same time since 1960, when Marilyn Monroe was still starring in films and Ronald Reagan was the head of the actors’ union. Dual strikes pit more than 170,000 workers against old-line studios like Disney, Universal, Sony and Paramount, as well as tech juggernauts like Netflix, Amazon and Apple. Many of the actors’ demands mirror those of the writers, who belong to the Writers Guild of America. Both unions say they are trying to ensure living wages for workaday members, in particular those making movies or television shows for streaming services.

The reason to start with 1960 is that that was the last time actors and writers were on strike at the same time; the primary driver of that unrest was the rise of television. As for the last actors strike, in 1980? That was about the rise of home video. This leads to the first takeaway: the most important driver of unrest between studios and talent has always been technological paradigm shifts, and this time is no different.

In this case it is the rise of streaming that strikes me as more consequential than AI, but to dispatch the latter first, it seems to me that writers are relatively more threatened by AI; it’s much more plausible today to imagine using an LLM to generate a B-movie script or filler television than it is to imagine AI replicating actors (particularly since actors licensing their likeness may in fact turn out to be very lucrative).

What is worth noting about AI is that those concerns are in-line with traditional Hollywood talent concerns when it comes to new technology: both unions have in strikes past been focused on preserving union jobs in the face of technological replacements. That is what led to the rise of residuals, which were at the core of the 1960 strike: if studios were showing movies on TV, then that meant they were occupying scarce time with content that actors weren’t getting paid for, which is to say that the actors in the movie that was being shown were competing with themselves; thus the union demand that they be paid for it.

This by extension is why I think the AI questions in this debate will probably be easier to solve: there is already a paradigm in place in Hollywood to make sure that the talent gets a cut of every airing of a piece of entertainment, and again, while you can envision an LLM writing a script, I wouldn’t be surprised if Hollywood executives primarily see the issue as something to give on while getting concessions on the more consequential issue. That, as I noted above, is streaming, and the reason why this negotiation is probably going to be very difficult is that it is exceptionally hard to divide up a pie that is shriveling before one’s eyes.

Scarcity and Residuals

There was a very interesting answer in this Slate interview with Wayne Federman, who wrote a piece in The Atlantic a decade ago about the 1960 strike:

For this strike, before we even negotiated, we already had strike authorization from the membership. In 1960, they didn’t. The 1960 strike was really about one issue, and this strike is about multiple issues. This is about how residuals, specifically for streaming entertainment, are being calculated. Those numbers are … not really released. It’s not like a Nielsen rating. Sometimes you’ll hear something like, Oh, 1.2 million minutes of Squid Game — what does that mean? Does that mean that many people watched one minute of it, or does that mean people watched it a number of times, or … ? I don’t know why it’s all proprietary for these streamers, but that’s just where we’re at. We want a little more transparency in that, [to consider] that if we’re on a hit show, is that paid differently than a [nonhit] show? And then there’s this A.I. situation.

You said that the studios were sort of giving up these residuals through clenched teeth. Do you think that their position on that has changed?

That’s the amazing outcome of what Ronald Reagan — and other negotiators at the time — was able to do: In a way, they were changing the paradigm of how Hollywood money is divided up. They were striking for an idea: that we deserve this for A, B, C, and D reasons. You get residuals now. Not everyone; editors don’t get residuals, but directors do. I get residuals for streaming services, but they’re just not the same. They’re not as good as cable, and they’re not as good as network. When you look at the check, you’re like, OK, this doesn’t seem like a lot. But, again, you don’t know how many people are watching it.

And also, I think when we first started looking at streaming services, we were like, We want these services to thrive so that there’ll be more work for actors. So I think that’s why we were not militant about residuals for these new platforms. No one is saying, Oh, we paid you to be on this Netflix show, and we never have to pay you a residual. The problem is that it’s not as hearty as it used to be for these other mediums. But the idea of residuals … is not going away, unless [the companies] decide to try to break the unions and just use nonunion actors and not pay residuals.

One of the ways Netflix broke into Hollywood was by eschewing back-end deals for top talent and paying more upfront; this removed the potential for huge upside if a show was a massive hit, but it guaranteed that talent got paid, even if a show wasn’t a success. Over time Netflix and other streamers have started to pay back-end bonuses in addition to residuals, but as Federman notes, the lack of transparency into how exactly those residuals are calculated is a big sticking point.

The most interesting paragraph to me, though, is the last one: at the risk of taking Federman’s word for it, the sentiment that unions saw streaming services as a net positive rings true, and aligns with the previous item. The entire concept of residuals arose from the idea that talent shouldn’t have to compete with itself when it came to re-running a movie or show; the key thing to note, though, is that this concern made sense in a world where there was scarce distribution. To go back to the 1960s, there were only three networks: that meant there were only 504 hours in a week to air content on television; airing a two-hour movie reduced the available space for talent to 502 hours.
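
To make the arithmetic explicit, here is a minimal Python sketch of that scarcity calculation; the three-network and two-hour figures come from the paragraph above, and the rest is just multiplication:

```python
# Back-of-the-envelope sketch of the scarcity argument: with only three broadcast networks,
# weekly airtime is a fixed, scarce resource. Figures beyond "3 networks" and "2-hour movie"
# are simple arithmetic, not data.
networks = 3
hours_per_week = 24 * 7                    # 168 hours in a week
total_airtime = networks * hours_per_week  # 504 hours of television per week, total
after_movie_rerun = total_airtime - 2      # a two-hour movie re-run leaves 502 hours
print(total_airtime, after_movie_rerun)    # 504 502
```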

Streaming, though, is purely additive. The Internet makes distribution effectively free, which means there are an infinite number of hours available for talent to monetize. This does, it’s worth noting, render talent’s original argument for residuals moot; if anything Netflix had it right when it temporarily shifted the model to simply paying up front. In fact, Federman unwittingly makes this point when he describes the mindset of studio heads in 1960:

Let’s say you get hired to act in a film. Basically, the person hiring you is taking the risk. They’re paying you your salary, and in return, they own that product. So, what SAG was saying was, You can play that film anywhere in the world, you can play it in Italy, you can have it dubbed — but when you put it on television, that’s a new revenue stream. Also, the argument was that that is taking work away from other actors. Because if you have this movie on, that time slot is no longer available for working actors.

On the other side, the head of 20th Century Fox [Spyros Skouras], his argument was very simple: Why should I pay you twice for the same job? I’ve already paid you for this job. I own this at this point. And that was basically the position of all of these studio owners. At the beginning of the strike, they were like, We’re not even going to talk about residuals. It’s a nonstarter. And Reagan said, We’re “trying to negotiate for the right to negotiate.” That’s how far apart they were. It was so foreign to these guys that they would have to share their revenues with actors after they’d already paid the actors. Ultimately, one studio, Universal Pictures—believe it or not, the head of Universal, a guy named Lew Wasserman, used to be Ronald Reagan’s agent—was the first domino that dropped. I think Lew Wasserman thought it was inevitable anyway: If it wasn’t going to happen in 1960, it might happen in ’65. And then one after another [gave in], until, I think, the 20th Century guy was the last guy, who was like, All right, I’ll give it, I’ll pay you again for something I’ve already paid you for, through clenched teeth.

Wasserman was right: studios were going to have to share the scarce resource, which was time on TV, with talent. Again, though, scarcity in terms of distribution is now gone; the only scarce resource on the Internet is consumer time and attention, and commanding that is far more difficult and risky. Look no further than the deteriorating financial condition of most of Hollywood: not only are the studios competing with Netflix and Amazon and Apple, but also with things like YouTube and social media. Indeed, you could very easily make the case that a far more legible labor action would be for the studios to lock out the talent in an attempt to remove residuals completely, given how much more risk any content producer is taking on today.

This angle is, obviously, a non-starter, but it does point at why these negotiations are likely to be so fraught: actors and writers are angling to get a larger share of revenue that they arguably no longer deserve.

The Cost of Streaming

There is an even larger problem, though, which is that studios have — in my estimation — yet to come to grips with the true cost of streaming. To go back to the old model, studios were in the business of making movies or TV shows and then selling them to distributors. Ideally they would sell the same piece of content multiple times, better leveraging the cost of making the content in the first place. Indeed, this was a sticking point in the 1960 strike; from that 1960 New York Times article:

The guild asked for 100 percent payment of minimum salaries for the second showing in a new contract. The producers insisted that the first two runs be covered by the original salary. The producers contend that it is virtually impossible to get sufficient money out of the first showing of a movie produced solely for television to pay off the initial production investment. It is reported that many bank loans are predicated on earnings from the second run.

Content costs a lot to produce up front, but the marginal cost of showing it again is effectively zero; that means the more times you can show a piece of content the more you can spend up front. The number of times you could show it, though, was, as noted above, governed by available distribution; if distribution was scarce then there was an opportunity cost of showing old content, because you couldn’t show something new (which, again, was why talent wanted a share of multiple airings).

What is critical to note is that this leverage was best realized by selling to as many distributors as possible. The classic example is the traditional movie window: first you sell a movie to first-run theaters, then to budget theaters, then to hotels and airlines, then to pay-per-view, then to videocassettes/DVDs, then to cable, and finally to broadcast TV. That’s seven distinct opportunities to sell a piece of content. Going straight to streaming, though, collapses seven windows to one, reducing the ability to make money off of a particular piece of content.
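
To see how that leverage works, here is a minimal Python sketch; the seven windows are the ones listed above, but every dollar figure is an illustrative assumption rather than real data:

```python
# Hypothetical economics of windowing vs. going straight to streaming. The seven windows come
# from the text; all revenue and cost numbers are made-up illustrative assumptions, chosen only
# to show how leverage on a fixed up-front cost works.
production_cost = 150  # fixed, up-front; the marginal cost of showing the film again is ~0

windows = {
    "first-run theaters": 100,
    "budget theaters": 15,
    "hotels and airlines": 10,
    "pay-per-view": 20,
    "videocassettes/DVDs": 40,
    "cable": 30,
    "broadcast TV": 25,
}
multi_window_profit = sum(windows.values()) - production_cost  # 240 - 150 = 90

# Going straight to streaming collapses seven selling opportunities into one payout
streaming_only_revenue = 160
streaming_only_profit = streaming_only_revenue - production_cost  # 160 - 150 = 10

print(multi_window_profit, streaming_only_profit)  # same up-front cost, far less leverage
```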

Studios are enduring this cost, though, in order to build up their own streaming services, but that approach has its own costs: running a streaming service means being in the direct-to-consumer business, which is a costly one; not only do you have to build up and maintain the technical infrastructure of the service, and incur costs in customer support, but you also have to worry about things like churn that simply aren’t a consideration when you’re selling content. All of this is very expensive!

The real pain, though, is opportunity cost: while studios are missing out on multi-window revenue, paying for their streaming service, and trying to simultaneously acquire customers and stop them from churning, they are also forgoing revenue from established services like Netflix that would not only happily pay them for their content, but could actually justify a much higher price given their significantly larger user base across which that cost could be leveraged.

All of these costs, it should be noted, occur in the aggregate, which is a real problem in these negotiations: talent is concerned about their compensation on a per-show basis, but studios are bleeding money at the entity level in their foolhardy pursuit of customer-facing streaming services. Most of the discussion about this mismatch is focused on how to properly compensate the talent; note this item from Puck:

The union and AMPTP have by and large agreed on the residuals improvements the DGA obtained in its recent deal, but the union also wants 2 percent of subscriber revenue to be shared with the cast of a successful show, with success measured by Parrot Analytics, an analysis firm that looks at viewership, social media engagement, and other factors, to determine “demand.” That proxy metric was proposed because the companies refuse to share their internal measurements, of course. But the studios declined to engage on that issue, and the management-side source asked how the producer of a show could be expected to share revenue earned not by the producer but by the platform (i.e., subscribers pay platforms; subscribers don’t pay producers).

I get the talent’s perspective, but I’m pretty sure the talent doesn’t want to pay for the cost of customer service or customer acquisition or churn mitigation! Then again, neither should the studios: it doesn’t make any sense to me why the studios decided they wanted to bear these costs, and that’s not the talent’s problem.

The Shrinking Pie

There remains, though, the shrinking pie I noted in the introduction: the removal of distribution costs that enabled the rise of streaming was not a benefit that was limited to Hollywood, nor was the shift to attention as the only scarce resource. Every person on earth has only 168 hours in a week, much of which is presumably spent sleeping and working. Those few remaining hours can now be filled by YouTube, or gaming, or podcasts, or reading this Article; every single minute spent doing something other than consuming Hollywood content is a minute lost forever.

This would be concern enough without a labor battle: thanks to COVID a lot of people fell out of the habit of going to the movie theater, and it appears around 25% of the audience permanently found something better to do with their time; that same reality applies to TV. Just as newspapers once thought the Internet was a boon because it increased their addressable market, only to find out that it also drastically increased competition for readers’ attention, Hollywood has to face the reality that the ability to make far more shows extends not only to studios but also to literally anyone. That reality is going to come to the fore if this strike drags on: if people don’t have new movies or shows to watch they will find far more options to fill their time than existed in 1960; the risk to Hollywood is that some of those alternatives become a permanent feature of people’s media diets, in line with what seems to have happened during COVID.

The broader issue is that the video industry finally seems to be facing what happened to the print and music industries before it: the Internet comes bearing gifts like infinite capacity and free distribution, but those gifts are a poisoned chalice for industries predicated on scarcity. When anyone could publish text, most text-based businesses went from massive profitability to terminal decline; when anyone could distribute music, the music industry could only be saved by tech companies like Spotify helping it sell convenience in place of plastic discs.

For the video industry the first step to survival must be to retreat to what they are good at — producing content that isn’t available anywhere else — and to get away from what they are not, i.e. running undifferentiated streaming services with massive direct costs and even larger opportunity ones. Talent, meanwhile, has to realize that they and the studios are not divided by this new paradigm, but jointly threatened: the Internet is bad news for content producers with outsized costs, and long-term sustainability will be that much harder to achieve if the focus is on increasing them.

I wrote a follow-up to this Article in this Daily Update.

Threads and the Social/Communications Map

If you’re only going to tweet once every 11 years, then you better make it count; the best way to do just that is to pull off a well-executed meme:

This offering from Meta CEO Mark Zuckerberg works on multiple levels. The surface interpretation is obvious given the timing of the tweet, which was posted just hours after Meta launched Threads, a text-based social network built on top of the Instagram graph: Threads is a Twitter clone.

The scene from which the meme is derived, though, gets at what I think is really going on: the Spider-Man on the right is Charles Cameo, an imposter who uses disguises to steal art treasures. To extend the analogy, Threads looks like Twitter at first glance, but is in fact something much different — and what it is stealing is certainly what Elon Musk and Twitter have always wanted:

Zuck's comment on seizing Twitter's opportunity

The important takeaway is that all of the levels of the meme are connected: Threads looks like Twitter, but its essential differences are almost certainly table stakes for becoming something larger than Twitter ever was. The question is whether that treasure is itself a mirage.

The Social/Communications Map of 2013

Back in 2013 I created The Social/Communications Map:

A drawing of the Social/Communications Map

That map came out of an Article called The Multitudes of Social that argued that social media was not a single category destined to be won by a single app, and that Facebook could never “own social”:

The very idea of owning social is a fool’s errand. To be social is to be human, and to be human is, as Whitman wrote, to contain multitudes. Multitudes of apps, in my case…

Facebook needs to appreciate that their dominance of social on the PC was an artifact of the PC’s lack of mobility and limited application in day-to-day life. Smartphones are with us literally everywhere, and there is so much more within us than any one social network can capture.

The point about there being a multitude of ways to communicate online has held up well; I think, though, the axis about permanence versus ephemerality was less important than it seemed when the big battle was between Facebook and Snapchat. A better axis leans into the “Social/Communications” aspect of the title: the most important new social networks of the last few years have been notable for not really being social networks at all.

I’m referring to the TikTok-ization of user-generated content: the reason why TikTok was such a blind spot for Facebook is that, unlike Snapchat, it doesn’t depend on network effects, but rather on abundance. One of the first times I wrote about TikTok was in the context of Quibi, the failed mobile video app from Hollywood impresario Jeffrey Katzenberg:

The single most important fact about both movies and television is that they were defined by scarcity: there were only so many movies that would ever be made to fill only so many theater slots, and in the case of TV, there were only 24 hours in a day. That meant that there was significant value in being someone who could figure out what was going to be a hit before it was ever created, and then investing to make it so. That sort of selection and production is what Katzenberg and the rest of Hollywood have been doing for decades, and it’s understandable that Katzenberg thought he could apply the same formula to mobile.

Mobile, though, is defined by the Internet, which is to say it is defined by abundance…So it is on TikTok, or any other app with user-generated content. The goal is not to pick out the hits, but rather to attract as much content as possible, and then algorithmically boost whatever turns out to be good…The truth is that Katzenberg got a lot right: YouTube did have a vulnerability in terms of video content on mobile, in part because it was a product built for the desktop; TikTok, like Quibi, is unequivocally a mobile application. Unlike Quibi, though, it is also an entertainment entity predicated on Internet assumptions about abundance, not Hollywood assumptions about scarcity.

It’s ultimately a math question: are you more likely to find compelling content from the few hundred people in your social network, or from the millions of people posting on the service? The answer is obviously the latter, but that answer is only achievable if you have the means of discovering that compelling content, and, to be fair to both Facebook and Twitter, the sort of computational power necessary to pull off a TikTok-style network didn’t exist when those companies got started.
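
That math question can be made concrete with a toy expected-value calculation; all of the numbers below are illustrative assumptions, not measurements of any actual service:

```python
# Toy expected-value version of the "math question" above: with the same assumed hit rate,
# the whole-service pool dwarfs a personal follow graph, provided the service can actually
# surface the hits. Every number here is an illustrative assumption.
hit_rate = 0.001            # assumed share of posts a given viewer would find truly compelling

network_posts = 300         # posts per day from a few hundred people you follow
service_posts = 5_000_000   # posts per day across the entire service

expected_from_network = network_posts * hit_rate   # ~0.3 compelling posts per day
expected_from_service = service_posts * hit_rate   # ~5,000 compelling posts per day

print(expected_from_network, expected_from_service)
```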

The Social/Communications Map of 2023

Set that point about time of origin aside just for a moment; here is what I think a better representation of the Social/Communications Map looks like in 2023:

The new structure for the Social/Communications Map

The first change is that the symmetric/asymmetric axis has been replaced by the nature of the sorting algorithm: chronological order versus algorithmic selection. However, this isn’t that big of a change; consider messaging, which is by definition about symmetric social networking. Messaging only really makes sense if it is organized by time — imagine trying to carry on a conversation if every message you saw were algorithmically selected, instead of simply displayed in order. Algorithmic sorting, though, makes much more sense when you are consuming content that is broadcast to the world, and thus has no assumptions about or expectations for in-order contextual replies.
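
To make that new horizontal axis concrete, here is a minimal Python sketch of the two sorting approaches; the post fields and scores are illustrative assumptions, not any service’s actual implementation:

```python
from dataclasses import dataclass

# Minimal illustration of chronological vs. algorithmic sorting; "engagement_score" stands in
# for whatever ranking model a service might use, and all values are made up for illustration.
@dataclass
class Post:
    author: str
    timestamp: float         # seconds since epoch
    engagement_score: float  # model-predicted interest; higher means more relevant

posts = [
    Post("friend_a", 1_700_000_300, 0.20),
    Post("friend_b", 1_700_000_100, 0.90),
    Post("stranger", 1_700_000_200, 0.75),
]

# Chronological: what messaging (and old Twitter/Instagram) assumes; newest first, so
# conversations and "what is happening" stay coherent.
chronological = sorted(posts, key=lambda p: p.timestamp, reverse=True)

# Algorithmic: what broadcast consumption favors; predicted interest first, regardless of
# recency or whether you follow the author.
algorithmic = sorted(posts, key=lambda p: p.engagement_score, reverse=True)

print([p.author for p in chronological])  # ['friend_a', 'stranger', 'friend_b']
print([p.author for p in algorithmic])    # ['friend_b', 'stranger', 'friend_a']
```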

The second change is the TikTok-ization I noted above: my new vertical axis is user-generated content, by which I mean content across the network, versus network-generated content, by which I mean content from the people you choose to follow. If you maintain the same public/private distinction I had in the original, you get a landscape that looks something like this (note that Facebook is better thought of as a private social network, given that the default nature of posts is that they are only seen by those in your network).

The starting position of social media companies in the 2023 Social/Communications Map

This is where the bit above about historical time comes in: another way to look at this map is as a representation of how content on the Internet has evolved; the early web, and early forms of user-generated content like forums and blogs, were and are still located in the upper left. This quadrant is fairly decentralized, and is Aggregated by Google and search.

The lower left quadrant came next: one site held all of the content from your network, and presented it chronologically. Some sites, like Twitter and Instagram, stayed here for years; Facebook, though, quickly jumped ahead to the lower right quadrant, and organized your feed algorithmically. This quadrant became the other major pillar of Internet advertising (along with search): figuring out what content to show you from your network wasn’t too dissimilar of a problem from figuring out what ads to show you, and the nature of a dynamically-generated feed that was unique to every individual was something that was only possible with digital media.

The final stage is, as noted, represented by TikTok: once again your network doesn’t matter, because the content comes from anywhere. This world, though, unlike the open web, is governed by the algorithm, not time or search.

Twitter, Threads, and the Upper-Right

I was honestly surprised to find out that both Twitter and Instagram were in the lower left quadrant until 2016; that is when both services started offering an algorithmic timeline. Of course the surprise for the two services ran in opposite directions: for Twitter it’s amazing that the company managed to change anything at all, and for Instagram it’s a surprise the service stayed the same for so long. Since then Instagram has heavily invested in its direct messaging product even as it has slowly abandoned the public parts of the lower left: everything is an algorithm and, with Reels, completely disconnected from your network.

How services have expanded on the map over time

Perhaps the starkest change that Musk has made to Twitter, meanwhile, has been a headlong rush into the upper right: the “For You” tab is far more aggressive about promoting tweets from people you don’t follow, and it’s increasingly impossible to escape; the app always defaults to “For You”, and there are no more 3rd-party app alternatives. Eugene Wei argues this has blown up the timeline and ruined the Twitter experience:

What established the boundaries of Twitter? Two things primarily. The topology of its graph, and the timeline algorithm. The two are so entwined you could consider them to be a single item. The algorithm determines how the nodes of that graph interact. In a literal sense, Twitter has always just been whose tweets show up in your timeline and in what order.

In the modern world, machine learning algorithms that mediate who interacts with whom and how in social media feeds are, in essence, social institutions. When you change those algorithms you might as well be reconfiguring a city around a user while they sleep. And so, if you were to take control of such a community, with years of information accumulated inside its black box of an algorithm, the one thing you might recommend is not punching a hole in the side of that black box and inserting a grenade. So of course that seems to have been what the new management team did. By pushing everyone towards paid subscriptions and kneecapping distribution for accounts who don’t pay, by switching to a TikTok style algorithm, new Twitter has redrawn the once stable “borders” of Twitter’s communities.

This new pay-to-play scheme may not have altered the lattice of the Twitter graph, but it has changed how the graph is interpreted. There’s little difference. My For You feed shows me less from people I follow, so my effective Twitter graph is diverging further and further from my literal graph. Each of us sits at the center of our Twitter graph like a spider in its web built out of follows and likes, with some empty space made of blocks and mutes. We can sense when the algorithm changes. Something changed. The web feels deadened.

I’ve never cared much about the presence or not of a blue check by a user’s name, but I do notice when tweets from people I follow make up a smaller and smaller percentage of my feed. It’s as if neighbors of years moved out from my block overnight, replaced by strangers who all came knocking on my front door carrying not a casserole but a tweetstorm about how to tune my ChatGPT and MidJourney prompts.

Instagram’s evolution has shown that this shift is possible, but that shift has been systematic and gradual — and even then subject to occasionally intense pushback. Musk’s Twitter, though, has been haphazard and blistering in its pace. What ought to concern the company about Threads, though, is the possibility that all of the upheaval — which effectively sacrifices the niche Twitter had carved out amongst text nerds who dominate industries like media — will not actually result in the user growth Musk is hoping for, because Threads got there first.

Indeed, this map is the key to understanding why it is that Threads looks like Twitter, but is in fact a very different product: Threads is solidly planted in the upper right. When you log onto the app for the first time, your feed is populated by the algorithm; there is some context given by whom you follow on Instagram, but Meta seems aware that accounts you might want to look at may be different than accounts you want to hear from, and is thus filling the feeds with what it thinks you might find interesting. That is how it can provide an at-least-somewhat-compelling first-run experience to 100 million people in five days.

Twitter, on the other hand, faces the burden of millions having tried the service in past iterations and quickly deciding it wasn’t for them; even if the algorithm were effective, it may already be too late to gain new users, even as the company sacrifices what the service’s existing users preferred.

The Threads Experiment

This leads to the biggest open question about Threads’ long-term prospects, and, by extension, Twitter’s: did those millions of lapsed Twitter users give up because text-based social networking just wasn’t that interesting to them, or because Twitter made it too hard to get started? I’ve made the case that it’s the former, which means that Threads is a grand experiment as to the validity of that thesis. If those 100 million users stay engaged (and if that number continues to grow), then the people chalking up Twitter’s inability to grow or monetize effectively to the company’s inability to execute are correct.

At the same time, as Wei notes, Musk’s tenure has highlighted the problems with doing too much: what if Twitter succeeded to the extent it did not despite management’s seeming ineffectiveness, but because of it?

I’ve written before in Status as a Service or The Network’s the Thing about how Twitter hit upon some narrow product-market fit despite itself. It has never seemed to understand why it worked for some people or what it wanted to be, and how those two were related, if at all. But in a twist of fate that is often more of a factor in finding product-market fit than most like to admit, Twitter’s indecisiveness protected it from itself. Social alchemy at some scale can be a mysterious thing. When you’re uncertain which knot is securing your body to the face of a mountain, it’s best not to start undoing any of them willy-nilly. Especially if, as I think was the case for Twitter, the knots were tied by someone else (in this case, the users of Twitter themselves).

Many of those knots are tied to that lower left quadrant: a predominantly time-based feed makes sense if a service is predominantly about “What is happening?”, to use Twitter’s long-time prompt; a graph based on who you choose to follow doesn’t just show what you want to see, it also controls what you don’t (Wei notes that this is a particularly hard problem for algorithmically generated feeds). Both qualities seem particularly pertinent for a medium (text) that is information dense and favored by people interested in harvesting information, a very different goal than looking to pass the time with an entertaining video or ten.

It follows, then, that Twitter’s best defense against Threads may be to retreat to that lower left corner: focus on what is happening now, from people you choose to follow. The problem, though, is that while this might win the battle against Threads, it means that Musk will have lost the war when it comes to ever making a return on his $44 billion. In truth, though, that war is already lost: Musk’s lurch for the upper right was probably the best path to reigniting user growth, but if that is the corner that matters then Threads will win.

Threads’ Chronological Timeline

The other question is whether Threads will come for Twitter’s place on the map; Head of Instagram Adam Mosseri says that a chronological timeline is coming:

Adam Mosseri promising a chronological timeline

Placing this option in the context of Facebook and Instagram actually suggests that this feature won’t matter very much; both services make it hard to find, and revert to the default algorithmic feed, and for good reason: users may say they want a chronological feed, but their revealed preference is the opposite. Instagram founders Kevin Systrom and Mike Krieger, who initially opposed algorithmic ranking in Instagram, told me in a Stratechery Interview:

Kevin Systrom: I remember thinking when the team was like, “We’re thinking of using machine learning to sort the Explore page,” I’m not even sure what they call it now, but basically the Explore page and I remember saying, “It just feels like that’s a bunch of hocus-pocus that won’t work. Or maybe it’ll work but you won’t really understand what it’s doing and you won’t fully understand the implications of it, so we should probably just keep it very simple.” I was so wrong and I only remember it because I was so wrong, but you asked about feed, Mike would probably give you his anecdote about feed. But on the Explore page I was very anti and then I think I became pro only once I saw what it could do. Not in terms of just usage metrics, but just the quality of what people were served compared to some of our heuristics before…

Mike Krieger: I’ll share a funny anecdote about the Explore experiment. Facebook has all these internal A/B testing tooling and we hooked into it and we ran our first machine learning on the Explore experiment and we filed a bug report and I’m like, “Hey, your tool isn’t working, that’s not reporting results here.” And they said, “No, the results are just so strong that they’re literally off the charts. The little bars that show it literally is over 200%, you just should ship this yesterday.” The data looked really good.

That noted, observe Mosseri’s stated goal for the app, as articulated to Alex Heath of The Verge:

I think success will be creating a vibrant community, particularly of creators, because I do think this sort of public space is really, even more than most other types of social networks, a place where a small number of people produce most of the content that most everyone consumes. So I think it’s really about creators more than it is about average folks who I think are much more there just to be entertained. I think [we want] a vibrant community of creators that’s really culturally relevant. It would be great if it gets really, really big, but I’m actually more interested in if it becomes culturally relevant than if it gets hundreds of millions of users. But we’ll see how it goes over the next couple of months or probably a couple of years.

“Culturally relevant” is the one game that Twitter has won, far more than Facebook, and arguably more than Instagram: Twitter drives national and international media coverage, from TV to newspapers, to an extent that drastically exceeds its monetization potential. Meta, meanwhile, has been content to provide social networking for the silent majority, making tons of money along the way. The best way to do that with text — if it is even possible — would be to stay in that upper right corner; cultural relevancy, though, is still in the bottom left, even if there aren’t nearly as many users, or as much money.

And, it must be noted, Twitter is vulnerable in its home territory; I’ve long argued that the importance of convenience in terms of app success is underrated (see Threads starting with your Instagram sign-in and network), but it’s hard to think of anything that might motivate users to make a change more than resolving cognitive dissonance. There is a sizable segment of that culturally relevant audience Mosseri wants to capture who are opposed to Musk, and yet can’t give up Twitter; I suspect that much of the outpouring of glee over Threads’ early success is from this cohort that wants nothing more than non-Musk Twitter.

Ultimately, though, I think they may be disappointed: Meta is about algorithms and scale, and I would bet that Threads will leave real-time reactions, news, and pitched battles to Twitter; Musk’s most important decision may be accepting that that is enough, because it’s all he’s going to get.

Amazon, Friction, and the FTC

It was Friday morning, and I needed sunglasses — specifically the nerdy ones that fit on top of a pair of prescription glasses. I wasn’t sure where to buy them — my dad (and who else would know better) suggested Walmart — but Amazon had a few; the only problem was that I was leaving early Saturday morning on a fishing trip, and surely that wouldn’t be sufficient time for e-commerce!

In fact, it was more than enough: Amazon had delivery options of 12-4pm, 4-8pm, or 4-8am the next morning; four hours later I had extra sunglasses in hand (and Walmart, for the record, didn’t have any).

This wasn’t the first time I’d leveraged Amazon’s same-day delivery: I was shocked to even see that it was an option when I arrived back in the U.S. and needed an ethernet cable at 4am; it showed up at 9:30am. It is fairly new, though; from the Wall Street Journal earlier this year:

Amazon.com Inc. is expanding ultrafast delivery options, a sign that it remains committed to pushing its logistics system for speed as it scales back plans in other areas. The tech giant is continuing to devote resources to facilities and services structured to deliver packages to customers in less than a day. The expansions are happening at a crucial point for Amazon, which faces competition for fast-delivery options while Chief Executive Officer Andy Jassy puts a renewed focus on profits.

A central part of Amazon’s ultrafast delivery strategy is its network of warehouses that the company calls same-day sites. The facilities are a fraction of the size of Amazon’s large fulfillment warehouses and are designed to prepare products for immediate delivery. In contrast, the larger Amazon warehouses typically rely on delivery stations closer to customers for the final stage of shipping.

Amazon has opened about 45 of the smaller sites since 2019 and could expand to at least 150 centers in the next several years, according to MWPVL International Inc., which tracks Amazon warehouse operations. The sites have primarily opened near large cities and deliver the most popular 100,000 items in Amazon’s catalog, MWPVL said. New locations recently opened in Los Angeles, San Francisco and Phoenix, according to Amazon, which declined to provide information on how many of the same-day sites it has.

The reason to bring this program up now is to provide some personal context about the FTC’s latest lawsuit, this time against Amazon. Again from the Wall Street Journal:

The Federal Trade Commission sued Amazon.com on Wednesday, alleging the retail giant worked for years to enroll consumers without consent into Amazon Prime and made it difficult to cancel their subscriptions to the program. The FTC’s complaint, filed in federal court in Seattle, alleged that Amazon has duped millions of consumers into enrolling in Amazon Prime, a $139 annual subscription service with more than 200 million members worldwide that has helped Amazon become an integral part of many American households’ shopping habits.

“Amazon tricked and trapped people into recurring subscriptions without their consent, not only frustrating users but also costing them significant money,” FTC Chair Lina Khan said. The complaint, which is partially redacted, is the culmination of an investigation that began in March 2021. The FTC, a federal agency tasked with enforcing antitrust laws and consumer protection laws, seeks monetary civil penalties without providing a dollar amount.

I started with my own anecdote to explain why I am not personally familiar with the FTC’s complaints about the ease of signing up for Prime and the difficulty of cancelling: I haven’t had even a thought of going through either process for years. Indeed, even though I only live in the U.S. for part of the year, Prime is still worth it (and you get international shipping considerations as well).

This, to my mind, is the chief reason why this complaint rubs me the wrong way: even if there is validity to the FTC’s complaints (more on this in a moment), the overall thrust of the Prime value proposition seems overwhelmingly positive for consumers; surely there are plenty of other products and subscriptions that aren’t just bad for consumers on the edges but also in their overall value proposition and reason for existing.

Dark Patterns

The FTC makes two primary allegations in its complaint; the first is about the use of “dark patterns” to sign up for Prime:

For years, Defendant Amazon.com, Inc. (“Amazon”) has knowingly duped millions of consumers into unknowingly enrolling in its Amazon Prime service (“Nonconsensual Enrollees” or “Nonconsensual Enrollment”). Specifically, Amazon used manipulative, coercive, or deceptive user-interface designs known as “dark patterns” to trick consumers into enrolling in automatically-renewing Prime subscriptions…

This Hacker News thread about the lawsuit helpfully contained several examples of Amazon’s dark patterns in terms of subscribing to Prime:

Amazon Prime dark pattern

Amazon Prime dark pattern

Amazon Prime dark pattern

Are these UI decisions designed to make subscribing to Prime very easy? Yes, and that is a generous way to put it, to say the least! At the same time, you can be less than generous in your critique as well. The last image, for example, complains that Amazon is lying because the customer already qualifies for free shipping, while ignoring that the free shipping on offer from Prime arrives three days earlier! That seems like a meaningful distinction.

That noted, something I found interesting in that thread — and reader beware, the only thing less reliable than a writer relating their personal experience is a writer relating experiences they read on the Internet — was the argument from people who said the reason they didn’t want Prime was that, in their experience, packages showed up in one or two days anyway.

This makes intuitive sense (again, with the caveat that I am relying on anonymous commentators on the Internet): it seems perfectly plausible that it makes more sense for Amazon to optimize its logistics around the delivery promises it makes for Prime customers, instead of carving out a less efficient delivery mechanism for non-Prime customers that would actually increase overall coordination costs.

This also complicates the view of Amazon’s dark patterns: perhaps the most intellectually honest position is that if Amazon believes it can most efficiently deliver packages by giving the same level of service to everyone, then it ought to simply charge everyone; in other words, just as Costco requires a membership to even get in the store, Amazon ought to require a Prime membership to buy anything at all.

Given this, it seems likely to me that the people who have not signed up to Prime are free riders in the “free-rider problem” sense; from Wikipedia:

In the social sciences, the free-rider problem is a type of market failure that occurs when those who benefit from resources, public goods and common pool resources do not pay for them or under-pay. Examples of such goods are public roads or public libraries or services or other goods of a communal nature. Free riders are a problem for common pool resources because they may overuse it by not paying for the good (either directly through fees or tolls or indirectly through taxes). Consequently, the common pool resource may be under-produced, overused, or degraded. Additionally, it has been shown that despite evidence that people tend to be cooperative by nature (a prosocial behaviour), the presence of free-riders causes cooperation to deteriorate, perpetuating the free-rider problem.

In this view, Amazon “free-riders” get Prime benefits without paying for Prime; they earn this benefit by successfully navigating Amazon’s dark patterns, which, to be sure, is its own cost. I would also note that Amazon does benefit from free-riders: at the end of the day the most important driver of the company’s profitability is how much leverage it can gain on its massive costs; I would bet that from Amazon’s perspective a “free-rider” who buys things on Amazon is a net positive…as long as there aren’t too many of them.

What this means is that, to the extent the FTC is effective, Amazon will almost certainly make delivery worse for non-Prime members (i.e. differentiate based on service level instead of dark-pattern navigation capability) and/or simply make Amazon.com Prime-only, restricting availability to the very people the FTC insists ought not pay for faster delivery. It’s not clear to me how much of a win this is.

The Iliad Flow

The second complaint was about the cancellation process:

For years, Amazon also knowingly complicated the cancellation process for Prime subscribers who sought to end their membership. Under significant pressure from the Commission — and aware that its practices are legally indefensible — Amazon substantially revamped its Prime cancellation process for at least some subscribers shortly before the filing of this Complaint. However, prior to that time, the primary purpose of the Prime cancellation process was not to enable subscribers to cancel, but rather to thwart them. Fittingly, Amazon named that process “Iliad,” which refers to Homer’s epic about the long, arduous Trojan War. Amazon designed the Iliad cancellation process (“Iliad Flow”) to be labyrinthine, and Amazon and its leadership—including Lindsay, Grandinetti, and Ghani—slowed or rejected user experience changes that would have made Iliad simpler for consumers because those changes adversely affected Amazon’s bottom line.

As with Nonconsensual Enrollment, the Iliad Flow’s complexity resulted from Amazon’s use of dark patterns—manipulative design elements that trick users into making decisions they would not otherwise have made.

At the risk of once again over-indexing on forum behavior, it was striking that no one seemed to have saved up screenshots of the cancellation process, perhaps because few Prime members seem to want to go through with it. Moreover, what the FTC complaint describes doesn’t seem that egregious?

Under substantial pressure from the Commission, Amazon changed its Iliad cancellation process in or about April 2023, shortly before the filing of this Complaint. Prior to that point, there were only two ways to cancel a Prime subscription through Amazon: a) through the online labyrinthine cancellation flow known as the “Iliad Flow” on desktop and mobile devices; or b) by contacting customer service.

This is an important caveat for those of us trying to validate the FTC’s complaints; if anyone has an independent depiction of the previous flow, I’d love to see it. That said, here is how the FTC described it (the screenshots are from the FTC’s complaint, which is very low resolution):

The Iliad Flow required consumers intending to cancel to navigate a four-page, six-click, fifteen-option cancellation process. In contrast, customers could enroll in Prime with one or two clicks…To cancel via the Iliad Flow, a consumer had to first locate it, which Amazon made difficult. Consumers could access the Iliad Flow from Amazon.com by navigating to the Prime Central page, which consumers could reach by selecting the “Account & Lists” dropdown menu, reviewing the third column of dropdown links Amazon presented, and selecting the eleventh option in the third column (“Prime Membership”). This took the consumer to the Prime Central Page.

Once the consumer reached Prime Central, the consumer had to click on the “Manage Membership” button to access the dropdown menu. That revealed three options. The first two were “Share your benefits” (to add household members to Prime) and “Remind me before renewing” (Amazon then sent the consumer an email reminder before the next charge). The last option was “End Membership.” The “End Membership” button did not end membership. Rather, it took the consumer to the Iliad Flow. It was impossible to reach the Iliad Flow from Amazon.com in fewer than two clicks…

The Iliad flow, from the FTC complaint

Once consumers reached the Iliad Flow, they had to proceed through its entirety — spanning three pages, each of which presented consumers several options, beyond the Prime Central page — to cancel Prime. On the first page of the Iliad Flow, Amazon forced consumers to “[t]ake a look back at [their] journey with Prime” and presented them with a summary showing the Prime services they used. Amazon also displayed marketing material on Prime services, such as Prime Delivery, Prime Video, and Amazon Music Prime. Amazon placed a link for each service and encouraged consumers to access them immediately, i.e., “Start shopping today’s deals!”, “You can start watching videos by clicking here!”, and “Start listening now!” Clicking on any of these options took the consumer out of the Iliad Flow.

The Iliad flow, from the FTC complaint

Also, on page one of the Iliad Flow, Amazon presented consumers with three buttons at the bottom. “Remind Me Later,” the button on the left, sent the consumer a reminder three days before their Prime membership renews (an option Amazon had already presented the consumer once before, in the “Manage Membership” pull-down menu through which the consumer entered the Iliad Flow). The “Remind Me Later” button took the consumer out of the Iliad Flow without cancelling Prime. “Keep My Benefits,” on the right, also took the consumer out of the Iliad Flow without cancelling Prime. Finally, “Continue to Cancel,” in the middle, also did not cancel Prime but instead proceeded to the second page of the Iliad Flow. Therefore, consumers could not cancel their Prime subscription on the first page of the Iliad Flow.

The Iliad flow, from the FTC complaint

On the second page of the Iliad Flow, Amazon presented consumers with alternative or discounted pricing, such as the option to switch from monthly to annual payments (and vice-versa), student discounts, and discounts for individuals with EBT cards or who receive government assistance. Amazon emphasized the option to switch from monthly to annual payments by stating the amount a consumer would save at the top of this page in bold. Clicking the orange button (“Switch to annual payments”) or the links beneath took the consumer out of the Iliad Flow without cancelling.

The Iliad flow, from the FTC complaint

Right above these alternatives, Amazon stated “Items tied to your Prime membership will be affected if you cancel your membership,” positioned next to a warning icon. Amazon also warned consumers that “[b]y cancelling, you will no longer be eligible for your unclaimed Prime exclusive offers,” and hyperlinked to the Prime exclusive offers. Clicking this link took the consumer out of the Iliad Flow without cancelling.

The Iliad flow, from the FTC complaint

Finally, at the bottom of Iliad Flow page two, Amazon presented consumers with buttons offering the same three options as the first page: “Remind Me Later,” “Continue to Cancel,” and “Keep My Membership” (labelled “Keep My Benefits” on the first page). Once again, consumers could not cancel their Prime subscription on the second page of the Iliad Flow. Choosing either “Remind Me Later” or “Keep My Membership” took the consumer out of the Iliad Flow without cancelling. Consumers had to click “Continue to Cancel” to access the third page of the Iliad Flow.

On the third page of the Iliad Flow, Amazon showed consumers five different options, only one of which, “End Now”—presented last, at the bottom of the page— immediately cancelled a consumer’s Prime membership. Pressing any of the first four buttons took the consumer out of the Iliad Flow without immediately cancelling.

I’m going to stop quoting at this point, as the complaint spends two pages on the final cancellation page; I assumed that the “End now” button would be some tiny text link, but no, it’s perfectly prominent and, given it’s the last choice, arguably the most obvious one:

The Iliad flow, from the FTC complaint

Set aside all of the discussion above about the overall value of Prime and the problem of free-riders: this specific part of the complaint is absolutely ridiculous. Amazon’s flow — at least as depicted by the FTC in their own complaint — is completely reasonable, and that’s even before you start discussing the contrast with entities that let you sign up on the web but only cancel by phone call. Amazon’s entry into the cancellation process is clear, the flow is clear, and it’s not a crime that they seek to educate would-be cancellers as to why they might not want to cancel.

This last point is important because it gets at why this complaint is fundamentally rooted in hostility to business. The reason to argue that dark patterns are bad is that customers are not sufficiently educated or capable enough to navigate a deliberately confusing interface that is driving them in a specific direction (like subscribing to Prime in the first place). I’m wary of the costs of government regulators getting involved in product design on a philosophical level, but I am sympathetic to the moral point.

However, if you accept the premise of the previous paragraph, then it is inconsistent to complain about a company trying to educate consumers about the value they are deriving for a product in the course of canceling that product. To put it another way, the FTC’s complaint about dark patterns when it comes to signing up for Prime is rooted in the assumption that consumers lack knowledge and are easily tricked; the FTC’s complaint about Amazon presenting reasons to not cancel is rooted in the assumption that consumers are already fully-informed and ought to be able to accomplish their goal in as few clicks as possible. The better explanation is that the FTC is simply anti-business.

Friction and Aggregation Theory

There is a broader point to make about the question of not just dark patterns, but also a number of other objectionable practices on the Internet, particularly tracking and targeting. One of the earliest Articles I wrote on Stratechery was called Friction:

If there is a single phrase that describes the effect of the Internet, it is the elimination of friction. With the loss of friction, there is necessarily the loss of everything built on friction, including value, privacy, and livelihoods. And that’s only three examples! The Internet is pulling out the foundations of nearly every institution and social more that our society is built upon.

Count me with those who believe the Internet is on par with the industrial revolution, the full impact of which stretched over centuries. And it wasn’t all good. Like today, the industrial revolution included a period of time that saw many lose their jobs and a massive surge in inequality. It also lifted millions of others out of subsistence farming. Then again, it also propagated slavery, particularly in North America. The industrial revolution led to new monetary systems, and it created robber barons. Modern democracies sprouted from the industrial revolution, and so did fascism and communism. The quality of life of millions and millions was unimaginably improved, and millions and millions died in two unimaginably terrible wars.

Change is guaranteed, but the type of change is not; never is that more true than today. See, friction makes everything harder, both the good we can do, but also the unimaginably terrible. In our zeal to reduce friction and our eagerness to celebrate the good, we ought not lose sight of the potential bad. We are creating the future, and “better” does not win by default.

Set aside the dramatic centuries-spanning exposition; the fundamental point is that the removal of friction leads to a different set of trade-offs. In the case of targeting and tracking, the payoff is a massive increase in consumer welfare by virtue of access to all of the world’s information (Google), and all of the world’s people (Meta); in the case of things like dark patterns and personal appeals, the payoff is ordering sunglasses for your upcoming fishing trip at 10am and having them in hand at 4pm, or, more broadly, to have access to anything you need no matter where you live.

To note these trade-offs is not to say that the trade-off is worth it or not; it’s to note that the trade-offs exist. And, to that end, the frustration so many feel about the FTC’s recent actions, particularly this specific lawsuit, is the extent to which the Commission seems determined to act as if trade-offs don’t exist.

This is, of course, downstream from Chairperson Lina Khan’s famous law review article, Amazon’s Antitrust Paradox. Khan, and much of the movement she represents, is intrinsically opposed to “big”, and frankly, I’m sympathetic to the point. The problem with this movement’s critique, though, is that because it believes “big is bad”, it assumes that companies become big by acting badly.

The reality of Aggregation Theory, though, is the opposite: on the Internet, thanks to zero marginal costs in terms of serving new customers, and zero transaction costs in terms of scalability, the biggest companies are those that serve customers most effectively, and leverage demand into power over supply; this is the opposite of the analog world, where control of scarcity — i.e. control of supply — was the way to be dominant. The reason this matters is that all of our antitrust laws were created for the latter world; trying to apply the wrong framework to a new reality will only serve to increase costs or reduce access for the very people on whose behalf the regulator is ostensibly fighting.

I wrote a follow-up to this Article in this Daily Update.

Apple Vision

It really is one of the best product names in Apple history: Vision is a description of a product, it is an aspiration for a use case, and it is a critique of the sort of society we are building, behind Apple’s leadership more than anyone else.

I am speaking, of course, about Apple’s new mixed reality headset that was announced at yesterday’s WWDC, with a planned ship date of early 2024, and a price of $3,499. I had the good fortune of using an Apple Vision in the context of a controlled demo — which is an important grain of salt, to be sure — and I found the experience extraordinary.

My expectations were high not only because this product was being built by Apple, the undisputed best hardware maker in the world, but also because I am, unlike many, relatively optimistic about VR. What surprised me is that Apple exceeded my expectations on both counts: the hardware and experience were better than I thought possible, and the potential for Vision is larger than I anticipated. The societal impacts, though, are much more complicated.

The Vision Product

VR + AR

I have, for as long as I have written about the space, highlighted the differences between VR (virtual reality) and AR (augmented reality). From a 2016 Update:

I think it’s useful to make a distinction between virtual and augmented reality. Just look at the names: “virtual” reality is about an immersive experience completely disconnected from one’s current reality, while “augmented” reality is about, well, augmenting the reality in which one is already present. This is more than a semantic distinction about different types of headsets: you can divide nearly all of consumer technology along this axis. Movies and videogames are about different realities; productivity software and devices like smartphones are about augmenting the present. Small wonder, then, that all of the big virtual reality announcements are expected to be video game and movie related.

Augmentation is more interesting: for the most part it seems that augmentation products are best suited as spokes around a hub; a car’s infotainment system, for example, is very much a device that is focused on the current reality of the car’s occupants, and as evinced by Ford’s announcement, the future here is to accommodate the smartphone. It’s the same story with watches and wearables generally, at least for now.

I highlight that timing reference because it’s worth remembering that smartphones were originally conceived of as a spoke around the PC hub; it turned out, though, that by virtue of their mobility — by being useful in more places, and thus capable of augmenting more experiences — smartphones displaced the PC as the hub. Thus, when thinking about the question of what might displace the smartphone, I suspect what we today think of a “spoke” will be a good place to start. And, I’d add, it’s why platform companies like Microsoft and Google have focused on augmented, not virtual, reality, and why the mysterious Magic Leap has raised well over a billion dollars to-date; always in your vision is even more compelling than always in your pocket (as is always on your wrist).

I’ll come back to that last paragraph later on; I don’t think it’s quite right, in part because Apple Vision shows that the first part of the excerpt wasn’t right either. Apple Vision is technically a VR device that experientially is an AR device, and it’s one of those solutions that, once you have experienced it, is so obviously the correct implementation that it’s hard to believe there was ever any other possible approach to the general concept of computerized glasses.

This reality — pun intended — hits you the moment you finish setting up the device, which includes not only fitting the headset to your head and adding a prescription set of lenses, if necessary, but also setting up eye tracking (which I will get to in a moment). Once you have jumped through those hoops you are suddenly back where you started: looking at the room you are in with shockingly full fidelity.

What is happening is that Apple Vision is utilizing some number of its 12 cameras to capture the outside world and display it on the postage-stamp-sized screens in front of your eyes in a way that makes you feel like you are wearing safety goggles: you’re looking through something that isn’t exactly total clarity, but is of sufficiently high resolution and speed that there is no reason to think it’s not real.

The speed is essential: Apple claims that the threshold for your brain to notice any sort of delay between what you see and what your body expects to see (which is what causes known VR issues like motion sickness) is 12 milliseconds, and that the Vision visual pipeline displays what it sees to your eyes in 12 milliseconds or less. This is particularly remarkable given that the time for the image sensor to capture and process what it is seeing is on the order of 7 to 8 milliseconds, which is to say that the Vision is taking that captured image, processing it, and displaying it in front of your eyes in around 4 milliseconds.
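To make the arithmetic concrete, here is a trivial sketch of that latency budget; the 12 millisecond and 7-to-8 millisecond figures are the ones quoted above, while the code itself is purely illustrative.

```python
# Illustrative only: the photon-to-photon budget and capture times are the
# figures cited above; everything else is an assumption for demonstration.
BUDGET_MS = 12.0  # threshold before the brain notices the delay

def remaining_budget_ms(capture_ms: float) -> float:
    """Time left for processing and display once the sensor has captured a frame."""
    return BUDGET_MS - capture_ms

for capture_ms in (7.0, 8.0):
    left = remaining_budget_ms(capture_ms)
    print(f"capture {capture_ms:.0f}ms -> {left:.0f}ms left to process and display")
# capture 7ms -> 5ms left; capture 8ms -> 4ms left, i.e. the ~4 milliseconds cited above.
```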

This is, truly, something that only Apple could do, because this speed is a function of two things: first, the Apple-designed R1 processor (Apple also designed part of the image sensor), and second, the integration with Apple’s software. Here is Mike Rockwell, who led the creation of the headset, explaining “visionOS”:

None of this advanced technology could come to life without a powerful operating system called “visionOS”. It’s built on the foundation of the decades of engineering innovation in macOS, iOS, and iPad OS. To that foundation we added a host of new capabilities to support the low latency requirements of spatial computing, such as a new real-time execution engine that guarantees performance-critical workloads, a dynamically foveated rendering pipeline that delivers maximum image quality to exactly where your eyes are looking for every single frame, a first-of-its-kind multi-app 3D engine that allows different apps to run simultaneously in the same simulation, and importantly, the existing application frameworks we’ve extended to natively support spatial experiences. visionOS is the first operating system designed from the ground up for spatial computing.

The key part here is the “real-time execution engine”; “real time” isn’t just a descriptor of the experience of using Vision Pro: it’s a term-of-art for a different kind of computing. Here’s how Wikipedia defines a real-time operating system:

A real-time operating system (RTOS) is an operating system (OS) for real-time computing applications that processes data and events that have critically defined time constraints. An RTOS is distinct from a time-sharing operating system, such as Unix, which manages the sharing of system resources with a scheduler, data buffers, or fixed task prioritization in a multitasking or multiprogramming environment. Processing time requirements need to be fully understood and bound rather than just kept as a minimum. All processing must occur within the defined constraints. Real-time operating systems are event-driven and preemptive, meaning the OS can monitor the relevant priority of competing tasks, and make changes to the task priority. Event-driven systems switch between tasks based on their priorities, while time-sharing systems switch the task based on clock interrupts.

Real-time operating systems are used in embedded systems for applications with critical functionality, like a car, for example: it’s ok to have an infotainment system that sometimes hangs or even crashes, in exchange for more flexibility and capability, but the software that actually operates the vehicle has to be reliable and unfailingly fast. This is, in broad strokes, one way to think about how visionOS works: while the user experience is a time-sharing operating system that is indeed a variation of iOS, and runs on the M2 chip, there is a subsystem that primarily operates the R1 chip that is real-time; this means that even if visionOS hangs or crashes, the outside world is still rendered in under that magic 12 milliseconds.
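That division of labor can be illustrated with a toy sketch: one loop stands in for the real-time passthrough side, presenting a frame on a fixed cadence no matter what, while a separate “app layer” runs independently and may stall. Everything here is a conceptual illustration under my own assumptions, not a depiction of how visionOS is actually implemented.

```python
# Toy illustration of a real-time passthrough loop alongside a time-sharing
# app layer. The 12ms figure comes from the text above; all structure and
# names are invented for demonstration purposes.
import random
import threading
import time

FRAME_BUDGET_S = 0.012   # the 12ms budget cited above
latest_app_frame = None  # the app layer's most recent output, shared with the render loop

def app_layer() -> None:
    """Simulated time-sharing side: sometimes fast, sometimes far over budget."""
    global latest_app_frame
    for i in range(20):
        time.sleep(random.choice([0.005, 0.05]))  # occasionally stalls well past 12ms
        latest_app_frame = f"app frame {i}"

def passthrough_loop(frames: int) -> None:
    """Simulated real-time side: never waits on the app layer."""
    for frame in range(frames):
        start = time.monotonic()
        world = "camera frame"  # always available, always cheap
        composed = world if latest_app_frame is None else f"{world} + {latest_app_frame}"
        elapsed = time.monotonic() - start
        print(f"frame {frame}: {composed} ({elapsed * 1000:.2f}ms)")
        time.sleep(max(0.0, FRAME_BUDGET_S - elapsed))  # hold the frame cadence

threading.Thread(target=app_layer, daemon=True).start()
passthrough_loop(frames=10)
```

The point of the sketch is simply that the passthrough loop composites whatever the app layer last produced and never blocks on it, which is the property the R1 subsystem is described as guaranteeing.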

This is, needless to say, the most meaningful manifestation yet of Apple’s ability to integrate hardware and software: while previously that integration manifested itself in a better user experience in the case of a smartphone, or a seemingly impossible combination of power and efficiency in the case of Apple Silicon laptops, in this case that integration makes possible the melding of VR and AR into a single Vision.

Mirrorless and Mixed Reality

In the early years of digital cameras there was a bifurcation between consumer cameras that were fully digital and high-end cameras that had a digital sensor behind a traditional reflex mirror that pushed actual light to an optical viewfinder. Then, in 2008, Panasonic released the G1, the first-ever mirrorless camera with an interchangeable lens system. The G1 had a viewfinder, but the viewfinder was in fact a screen.

This system was, at the beginning, dismissed by most high-end camera users: sure, a mirrorless system allowed for a simpler and smaller design, but there was no way a screen could ever compare to actually looking through the lens of the camera like you could with a reflex mirror. Fast forward to today, though, and nearly every camera on the market, including professional ones, is mirrorless: not only did those tiny screens get a lot better, brighter, and faster, but they also brought many advantages of their own, including the ability to see exactly what a photo would look like before you took it.

Mirrorless cameras were exactly what popped into my mind when the Vision Pro launched into that default screen I noted above, where I could effortlessly see my surroundings. The field of view was a bit limited on the edges, but when I actually brought up the application launcher, or was using an app or watching a video, the field of view relative to an AR experience like the HoloLens was positively astronomical. In other words, by making the experience all digital, the Vision Pro delivers an actually useful AR experience that makes the still massive technical challenges facing true AR seem irrelevant.

The payoff is the ability to then layer in digital experiences into your real-life environment: this can include productivity applications, photos and movies, conference calls, and whatever else developers might come up with, all of which can be used without losing your sense of place in the real world. To just take one small example, while using the Vision Pro, my phone kept buzzing with notifications; I simply took the phone out of my pocket, opened control center, and turned on do-not-disturb. What was remarkable only in retrospect is that I did all of that while technically being closed off to the world in virtual reality, but my experience was of simply glancing at the phone in my hand without even thinking about it.

Making everything digital pays off in other ways, as well; the demo included this dinosaur experience, where the dinosaur seems to enter the room:

The whole reason this works is because while the room feels real, it is in fact rendered digitally.

It remains to be seen how well this experience works in reverse: the Vision Pro includes “EyeSight”, which is Apple’s name for the front-facing display that shows your eyes to those around you. EyeSight wasn’t a part of the demo, so I can’t say whether it is as creepy as it seems it might be; the goal, though, is the same: maintain a sense of place in the real world not by solving seemingly-impossible physics problems, but by simply making everything digital.

The User Interface

That the user’s eyes can be displayed on the outside of the Vision Pro is arguably a by-product of the technology that undergirds the Vision Pro’s user interface: what you are looking at is tracked by the Vision Pro, and when you want to take action on whatever you are looking at you simply touch your fingers together. Notably, your fingers don’t need to be extended into space: the entire time I used the Vision Pro my hands were simply resting in my lap, their movement tracked by the Vision Pro’s cameras.

It’s astounding how well this works, and how natural it feels. What is particularly surprising is how high-resolution this UI is; look at this crop of a still from Apple’s presentation:

The Photos app in visionOS

The bar at the bottom of Photos is how you “grab” Photos to move it anywhere (literally); the small circle next to the bar is to close the app. On the left are various menu items unique to Photos. What is notable about these is how small they are: this isn’t a user interface like iOS or iPadOS that has to accommodate big blunt fingers; rather, visionOS’s eye tracking is so accurate that it can easily delineate the exact user interface element you are looking at, which again, you trigger by simply touching your fingers together. It’s extraordinary, and works extraordinarily well.
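To sketch that “look, then pinch” model in the abstract: eye tracking resolves the gaze to a UI element, and a pinch triggers whatever the user is currently looking at. The types, element names, and coordinate system below are all invented for illustration; this is not the visionOS API.

```python
# Hypothetical illustration of gaze-targeted selection; nothing here reflects
# Apple's actual implementation or APIs.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Element:
    name: str
    x: float  # top-left corner and size, in arbitrary UI units
    y: float
    w: float
    h: float
    action: Callable[[], None]

    def contains(self, px: float, py: float) -> bool:
        return self.x <= px <= self.x + self.w and self.y <= py <= self.y + self.h

def element_under_gaze(elements: list[Element], gaze: tuple[float, float]) -> Optional[Element]:
    """Return the element the user is looking at, if any (smallest hit wins)."""
    px, py = gaze
    hits = [e for e in elements if e.contains(px, py)]
    return min(hits, key=lambda e: e.w * e.h) if hits else None

def on_pinch(elements: list[Element], gaze: tuple[float, float]) -> None:
    """Fire the action of whatever element the eyes are on when fingers touch."""
    target = element_under_gaze(elements, gaze)
    if target:
        target.action()

ui = [
    Element("close", 10, 90, 2, 2, lambda: print("close app")),
    Element("grab bar", 20, 90, 40, 2, lambda: print("move window")),
]
on_pinch(ui, gaze=(11, 91))  # prints "close app"
```

The reason tiny targets like the close circle are viable is precisely that the gaze point, not a finger, does the targeting; the pinch only confirms.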

Of course you can also use a keyboard and trackpad, connected via Bluetooth, and you can also project a Mac into the Vision Pro; the full version of the above screenshot has a Mac running Final Cut Pro to the left of Photos:

macOS in visionOS

I didn’t get the chance to try the Mac projection, but truthfully, while I went into this keynote most excited about this capability, the native interface worked so well that I suspect I am going to prefer using native apps, even if those apps are also available for the Mac.

The Vision Aspiration

The Vision Pro as Novelty Device

An incredible product is one thing; the question on everyone’s mind, though, is what exactly is this useful for? Who has room for another device in their life, particularly one that costs $3,499?

This question is, more often than not, more important to the success of a product than the quality of the product itself. Apple’s own history of new products is an excellent example:

  • The PC (including the Mac) brought computing to the masses for the first time; there was a massive amount of greenfield in people’s lives, and the product category was a massive success.
  • The iPhone expanded computing from the desktop to every other part of a person’s life. It turns out that was an even larger opportunity than the desktop, and the product category was an even larger success.
  • The iPad, in contrast to the Mac and iPhone, sort of sat in the middle, a fact that Steve Jobs noted when he introduced the product in 2010:

All of us use laptops and smartphones now. Everybody uses a laptop and/or a smartphone. And the question has arisen lately, is there room for a third category of device in the middle? Something that’s between a laptop and a smartphone. And of course we’ve pondered this question for years as well. The bar is pretty high. In order to create a new category of devices those devices are going to have to be far better at doing some key tasks. They’re going to have to be far better at doing some really important things, better than laptop, better than the smartphone.

Jobs went on to list a number of things he thought the iPad might be better at, including web browsing, email, viewing photos, watching videos, listening to music, playing games, and reading eBooks.

Steve Jobs introducing the iPad

In truth, the only one of those categories that has truly taken off is watching video, particularly streaming services. That’s a pretty significant use case, to be sure, and the iPad is a successful product (and one whose potential use cases have been dramatically expanded by the Apple Pencil) that makes nearly as much revenue as the Mac, even though it dominates the tablet market to a much greater extent than the Mac does the PC market. At the same time, it’s not close to the iPhone, which makes sense: the iPad is a nice addition to one’s device collection, whereas an iPhone is essential.

The critics are right that this will be Apple Vision’s challenge at the beginning: a lot of early buyers will probably be interested in the novelty value, or will be Apple super fans, and it’s reasonable to wonder if the Vision Pro might become the world’s most expensive paperweight. To use an updated version of Jobs’ slide:

The Vision Pro as novelty item

Small wonder that Apple has reportedly pared its sales estimates to less than a million devices.

The Vision Pro and Productivity

As I noted above, I have been relatively optimistic about VR, in part because I believe the most compelling use case is for work. First, if a device actually makes someone more productive, it is far easier to justify the cost. Second, while it is a barrier to actually put on a headset — to go back to my VR/AR framing above, a headset is a destination device — work is a destination. I wrote in another Update in the context of Meta’s Horizon Workrooms:

The point of invoking the changes wrought by COVID, though, was to note that work is a destination, and it’s a destination that occupies a huge amount of our time. Of course when I wrote that skeptical article in 2018 a work destination was, for the vast majority of people, a physical space; suddenly, though, for millions of white collar workers in particular, it’s a virtual space. And, if work is already a virtual space, then suddenly virtual reality seems far more compelling. In other words, virtual reality may be much more important than previously thought because the vector by which it will become pervasive is not the consumer space (and gaming), but rather the enterprise space, particularly meetings.

Apple did discuss meetings in the Vision Pro, including a framework for personas — their word for avatars — that is used for FaceTime and will be incorporated into upcoming Zoom, Teams, and Webex apps. What is much more compelling to me, though, is simply using a Vision Pro instead of a Mac (or in conjunction with one, by projecting the screen).

At the risk of over-indexing on my own experience, I am a huge fan of multiple monitors: I have four at my desk, and it is frustrating to be on the road right now typing this on a laptop screen. I would absolutely pay for a device to have a huge workspace with me anywhere I go, and while I will reserve judgment until I actually use a Vision Pro, I could see it being better at my desk as well.

I have tried this with the Quest, but the screen resolution is too low to work comfortably, the user interface is a bit clunky, and the immersion is too complete: it’s hard to even drink coffee with it on. Oh, and the battery life isn’t nearly good enough. Vision Pro, though, solves all of these problems: the resolution is excellent, I already raved about the user interface, and critically, you can still see around you and interact with objects and people. Moreover, this is where the external battery solution is an advantage, given that you can easily plug the battery pack into a charger and use the headset all day (and, assuming Apple’s real-time rendering holds up, you won’t get motion sickness).1

Again, I’m already biased on this point, given both my prediction and personal workflow, but if the Vision Pro is a success, I think that an important part of its market will be to at first be used alongside a Mac and, as the native app ecosystem develops, to be used in place of one.

The Vision Pro as productivity device

To put it even more strongly, the Vision Pro is, I suspect, the future of the Mac.

Vision and the iPad

The larger Vision Pro opportunity is to move in on the iPad and to become the ultimate consumption device:

The Vision Pro as consumption device

The keynote highlighted the movie watching experience of the Vision Pro, and it is excellent and immersive. Of course it isn’t, in the end, that much different than having an excellent TV in a dark room.

What was much more compelling was a series of immersive video experiences that Apple did not show in the keynote. The most striking to me were, unsurprisingly, sports. There was one clip of an NBA basketball game that was incredibly realistic: the game clip was shot from the baseline, and as someone who has had the good fortune to sit courtside, it felt exactly the same, and, it must be said, much more immersive than similar experiences on the Quest.

It turns out that one reason for the immersion is that Apple actually created its own cameras to capture the game using its new Apple Immersive Video Format. The company was fairly mum about how it planned to make those cameras and its format more widely available, but I am completely serious when I say that I would pay the NBA thousands of dollars to get a season pass to watch games captured in this way. Yes, that’s a crazy statement to make, but courtside seats cost that much or more, and that 10-second clip was shockingly close to the real thing.

What is fascinating is that such a season pass should, in my estimation, look very different from a traditional TV broadcast, what with its multiple camera angles, announcers, scoreboard slug, etc. I wouldn’t want any of that: if I want to see the score, I can simply look up at the scoreboard as if I’m in the stadium; the sounds are provided by the crowd and PA announcer. To put it another way, the Apple Immersive Video Format, to a far greater extent than I thought possible, truly makes you feel like you are in a different place.

Again, though, this was a 10-second clip (there was another one for a baseball game, shot from the home team’s dugout, that was equally compelling). There is a major chicken-and-egg issue in terms of producing content that actually delivers this experience, which is probably why the keynote mostly focused on 2D video. That, by extension, means it is harder to justify buying a Vision Pro for consumption purposes. The experience is so compelling though, that I suspect this problem will be solved eventually, at which point the addressable market isn’t just the Mac, but also the iPad.

What is left in place in this vision is the iPhone: I think that smartphones are the pinnacle in terms of computing, which is to say that the Vision Pro makes sense everywhere the iPhone doesn’t.

The Vision Critique

I recognize how absurdly positive and optimistic this Article is about the Vision Pro, but it really does feel like the future. That future, though, is going to take time: I suspect there will be a slow burn, particularly when it comes to replacing product categories like the Mac or especially the iPad.

Moreover, I didn’t even get into one of the features Apple is touting most highly, which is the ability of the Vision Pro to take “pictures” — memories, really — of moments in time and render them in a way that feels incredibly intimate and vivid.

One of the issues is the fact that recording those memories does, for now, entail wearing the Vision Pro in the first place, which is going to be really awkward! Consider this video of a girl’s birthday party:

It’s going to seem pretty weird when dad is wearing a headset as his daughter blows out birthday candles; perhaps this problem will be fixed by a separate line of standalone cameras that capture photos in the Apple Immersive Video Format, which is another way to say that this is a bit of a chicken-and-egg problem.

What was far more striking, though, was how the consumption of this video was presented in the keynote:

Note the empty house: what happened to the kids? Indeed, Apple actually went back to this clip while summarizing the keynote, and the line “for reliving memories” struck me as incredibly sad:

I’ll be honest: what this looked like to me was a divorced dad, alone at home with his Vision Pro, perhaps because his wife was irritated at the extent to which he got lost in his own virtual experience. That certainly puts a different spin on Apple’s proud declaration that the Vision Pro is “The Most Advanced Personal Electronics Device Ever”.

The most personal electronics device ever

Indeed, this, even more than the iPhone, is the true personal computer. Yes, there are affordances like mixed reality and EyeSight to interact with those around you, but at the end of the day the Vision Pro is a solitary experience.

That, though, is the trend: long-time readers know that I have long bemoaned that it was the desktop computer that was christened the “personal” computer, given that the iPhone is much more personal, but now even the iPhone has been eclipsed. The arc of technology, in large part led by Apple, is for ever more personal experiences, and I’m not sure it’s an accident that that trend is happening at the same time as a society-wide trend away from family formation and towards an increase in loneliness.

This, I would note, is where the most interesting comparisons to Meta’s Quest efforts lie. The unfortunate reality for Meta is that they seem completely out-classed on the hardware front. Yes, Apple is working with a 7x advantage in price, which certainly contributes to things like superior resolution, but that bit about the deep integration between Apple’s own silicon and its custom-made operating system is going to be very difficult to replicate for a company that has (correctly) committed to an Android-based OS and a Qualcomm-designed chip.

What is more striking, though, is the extent to which Apple is leaning into a personal computing experience, whereas Meta, as you would expect, is focused on social. I do think that presence is a real thing, and incredibly compelling, but achieving presence depends on your network also having VR devices, which makes Meta’s goals that much more difficult to achieve. Apple, meanwhile, isn’t even bothering with presence: even its Facetime integration was with an avatar in a window, leaning into the fact you are apart, whereas Meta wants you to feel like you are together.

In other words, there is actually a reason to hope that Meta might win: it seems like we could all do with more connectedness, and less isolation with incredible immersive experiences to dull the pain of loneliness. One wonders, though, if Meta is in fact fighting Apple not just on hardware, but on the overall trend of society; to put it another way, bullishness about the Vision Pro may in fact be a function of being bearish about our capability to meaningfully connect.


  1. You can also use a Quest that is plugged in, but it’s not really designed to have a cord sticking out of it at a right angle 

Windows and the AI Platform Shift

Microsoft’s Build developer conference has a bit of an odd history, which I recounted in a 2016 Update: the conference was born in 2011 as a showcase for a completely new approach to Windows, but by its second iteration it had already become a symbol of corporate infighting and dysfunction. The next three iterations were mostly forgettable in their focus on Windows and Windows Phone. The turning point came in 2017; I wrote in another Update:

Last week was Microsoft’s annual Build developer conference, and as usual, there were two keynotes over two days. What was interesting, and, I think, telling, was the order: for the first six years of the conference the first day’s keynote was dedicated to Windows and other consumer-facing products; day two was for Azure and Office 365. This year, though, the order was the opposite: Wednesday’s keynote was not only about Azure and Office 365, the first 30 minutes in particular were a genuinely compelling statement of vision by CEO Satya Nadella that, much like the schedule, put Windows firmly in the backseat.

This was a step in The End of Windows, which I wrote about a year later: Nadella’s greatest achievement as CEO was transforming Microsoft’s culture away from its Windows-centricity, which, it should be noted, existed for a very good reason. From the conclusion:

It’s important to note that Windows persisted as the linchpin of Microsoft’s strategy for over three decades for a very good reason: it made everything the company did possible. Windows had the ecosystem and the lock-in, and provided the foundation for Office and Windows Server, both of which were built with the assumption of Windows at the center.

Office 365 and Azure are comparatively weaker strategically: Office 365 has document lock-in, but the exact same forces that weakened Windows in the first place weaken the idea of documents as well. It’s not clear why new companies in particular would even care. Azure, meanwhile, is chasing AWS, with a huge amount of business coming from Linux VMs that could run anywhere.

Unsurprisingly, both are still benefiting from Windows: Office 365 really does, as Nadella noted in his retreat, work better on Windows, and vice versa; it is seamless for organizations that have been using Office for years to move to Office 365. Azure’s biggest advantage, meanwhile, is that it allows for hybrid deployments, where workloads are split between legacy on-premise Windows servers and Azure’s public cloud; that legacy was built on Windows.

This, then, is Nadella’s next challenge: to understand that Windows is not and will not drive future growth is one thing; identifying future drivers of said growth is another. Even in its division Windows remains the best thing Microsoft has going — it had such a powerful hold on Microsoft’s culture precisely because it was so successful.

That 2017 Build talked a lot about the “Intelligent Edge”; it was in 2018 that the vision of Microsoft Teams as Microsoft’s cloud OS started to appear. In 2019 Nadella’s keynote (and Stratechery Interview) were about being a platform company, with manifestations through the Power Platform, Microsoft 365, and Gaming (i.e. not Windows), a theme that continued over the last few years. The keynotes were all pretty good — Nadella has always been very effective at laying down an overarching vision that ties all of the announcements together — but there was always that missing piece: why would new customers or new companies ever get started with Microsoft in the first place?

Build 2023

Nadella had a different spring in his step at yesterday’s Build keynote; after greeting developers (yay for in-person keynotes!), this was his opening line:

You know these developer conferences are special times, special places to be, especially when platform shifts are in the air.

That was followed by a brief overview of the history of computing that placed AI as the continuation of a singular trend, and yet a step-change:

Just to put this in perspective, last summer I was reading Mitchell Waldrop’s Dream Machine while I was playing with DV3, as GPT-4 was called then, and it just brought in perspective what this is all about. I think that concept of “Dream Machine” perhaps best communicates what we have really been doing over the last 70 years. All the way starting with what Vannevar Bush wrote in his most seminal paper, “As We May Think”, where he had all of these concepts like associated memory, or Licklider, who was the first one to conceptualize the human-computer symbiosis. The Mother of All Demos that came in 68, to the Xerox Alto, and then, of course, the PDC that I attended which was the PC Server one in 91. 93 is when we had the Mosaic moment and then there was iPhone and the Cloud, and all of these would be one continuous journey.

The other thing I’ve always loved is Jobs’ description of computers as “bicycles for the mind”; it’s sort of a beautiful metaphor that I think captures the essence of what computing is. But then last November we got an upgrade: we went from the bicycle to the steam engine with the launch of ChatGPT. It was like the Mosaic moment for this generation of the AI platform. Now we look forward as developers to what we can do going forward. So it’s an exciting time.

It’s obvious why Microsoft would want this to be a moment. AI, specifically Microsoft’s partnership with OpenAI (which now extends to plug-in compatibility between Bing and ChatGPT, and Bing results incorporated into ChatGPT), is exactly what Microsoft has been searching for since The End of Windows: a reason to move to the Microsoft ecosystem.

AI and Microsoft Customer Acquisition

I’ve already discussed why AI is so compelling for Microsoft in terms of their productivity apps, and why startups should feel threatened:

Silicon Valley needs to rediscover its Microsoft fear, and Business Chat gets at why. Make no mistake, the Copilots are impressive, although it is reasonable to expect that Google Workspace’s implementation will be at least comparable. The problem with the Workspace + vertical SaaS app stack, though, is that none of it is designed to work together. I’ve been arguing for years this is an underrated reason why Teams beat Slack; from 2020:

This is where Teams thrives: if you fully commit to the Microsoft ecosystem, one app combines your contacts, conversations, phone calls, access to files, 3rd-party applications, in a way that “just works”…This is what Slack — and Silicon Valley, generally — failed to understand about Microsoft’s competitive advantage: the company doesn’t win just because it bundles, or because it has a superior ground game. By virtue of doing everything, even if mediocrely, the company is providing a whole that is greater than the sum of its parts, particularly for the non-tech workers that are in fact most of the market. Slack may have infused its chat client with love, but chatting is a means to an end, and Microsoft often seems like the only enterprise company that understands that.

Business Chat takes this integration advantage and combines it with a far more compelling UI: you can simply ask for information about any project or customer or whatever else you can think of, and Business Chat can find whatever is relevant and give you an answer (with citations) — as long as the content in question is in the so-called “Microsoft Graph”. That right there is the threat: it’s easy to see how this demo will impress CIO’s eager to save money both in terms of productivity and also software; now Microsoft can emphasize that the results will be that much better the more Microsoft tools you use, from CRM to note-taking to communications (and to the extent that they open up Business Chat, it will be the responsibility of any vertical SaaS company to fit into the box Microsoft provides them).

In short, Microsoft has always had the vision for integration of business software; only over the last few years has it actually had an implementation that made sense in the cloud. Now, though, Microsoft has an actual reason-to-switch that is very tangible and that no one, other than Google, can potentially compete with — and even if Google actually ships something, the last decade of neglect in terms of building an alternative to the Microsoft Graph concept means that any competitor to Business Chat will be significantly behind.

All of this was on display during Nadella’s keynote, including a new data lake offering in Microsoft Fabric, with a Copilot of course, and an entire Copilot stack for developers to build on.
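To make the “answers with citations” pattern from the Business Chat excerpt concrete, here is a minimal, hypothetical sketch of retrieve-then-answer over a pile of work content: every function, type, and field name is invented for illustration, and none of this reflects the actual Microsoft Graph or Copilot APIs.

```python
# Hypothetical sketch only: the graph items, scoring, and model call are all
# stand-ins invented for illustration, not Microsoft's actual implementation.
from dataclasses import dataclass

@dataclass
class GraphItem:
    source: str  # e.g. an email, a meeting transcript, a CRM record
    text: str

def retrieve(graph: list[GraphItem], question: str, k: int = 3) -> list[GraphItem]:
    """Toy relevance ranking: count words shared with the question."""
    q_words = set(question.lower().split())
    return sorted(graph, key=lambda item: -len(q_words & set(item.text.lower().split())))[:k]

def call_llm(prompt: str) -> str:
    """Stand-in for whatever model actually generates the answer."""
    return f"(answer grounded in {prompt.count('[')} retrieved items)"

def answer_with_citations(graph: list[GraphItem], question: str) -> str:
    context = retrieve(graph, question)
    prompt = question + "\n\n" + "\n".join(f"[{i+1}] {c.text}" for i, c in enumerate(context))
    answer = call_llm(prompt)
    sources = ", ".join(f"[{i+1}] {c.source}" for i, c in enumerate(context))
    return f"{answer}\n\nSources: {sources}"

graph = [
    GraphItem("email", "Q3 launch slipped two weeks due to supplier delays"),
    GraphItem("meeting notes", "Customer asked about pricing for the enterprise tier"),
    GraphItem("document", "Launch checklist for the Q3 release"),
]
print(answer_with_citations(graph, "What is the status of the Q3 launch?"))
```

The sketch makes the strategic point plain: the moat is the integrated content being retrieved, not the model doing the answering, which is why the Microsoft Graph matters so much here.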

What was more surprising — not so much in its existence, but rather my reaction to it — was the previously exiled Windows; here was the demo video Nadella played for Windows Copilot:

Now this is obviously a demo video, so Copilot is almost certainly being shown in its best light (and it was odd that the live demo a few minutes later basically recreated the video with the exact same examples). The part I was surprised about, though, was actually Nadella’s introduction to the video:

Next, we’re bringing the Copilot to the biggest canvas of all: Windows. You are going to hear a lot from Panos tomorrow about it, but I think that this is going to make every user a power user of Windows.

This was a big disappointment! I didn’t want to wait until tomorrow (later today as you read this), I wanted the Windows talk right now.

This may seem like a small thing, but remember, it was only six years ago that I applauded Nadella for successfully demoting Windows to the Day 2 keynote; now I want the Windows talk front-and-center. The difference, though, is that this excitement is not based on preserving Windows centrality, but rather on the possibilities of manifesting this new paradigm in as many places as possible. To use Nadella’s terms, Windows is now a canvas for AI, not the director of the show.

Apple and the AI Shift

Go back to the “Pursuit of the Dream Machine” slide in the video above, particularly the bottom row:

Satya Nadella's slide about the evolution of the "Dream Machine"

The PC and Server era was the Windows era, when Microsoft was at its peak; the World Wide Web era started the long decline of Windows API lock-in; that dissolution of lock-in reached its nadir with the iPhone and Cloud era, when Microsoft had to go out of its way to fit in with someone else’s platform, and end Windows’ centrality to the company.

You can tell this same story in reverse, from Apple’s perspective: the PC and Server era was the Mac-in-the-corner era; yes, it was nice to use, particularly for design, but there were fewer programs and challenging compatibility issues. The Internet made it easier to own a Mac, particularly with the rise of web apps; the iPhone, meanwhile, was a completely new paradigm that, crucially, was driven by consumers, not enterprises. That was when Apple truly became dominant, and exerted total control over the associated ecosystem — including over Microsoft.

What, though, if AI is the platform shift Nadella thinks it is? It’s already compelling enough that I can’t wait for a keynote about Windows, for the first time in over a decade. At the same time, I have much lower expectations for Apple’s developer conference next month, at least as far as AI is concerned.1 Of course, given Apple’s secrecy, it’s possible a “Copilot”-type product is in the works, but that seems unlikely given that most of the smoke is centered around the company’s long-rumored headset announcement.

I am, to be clear, quite excited about Apple’s headset; yes, the rumor is that it will cost $3,000, with a bill of materials that runs to around $1,500, but I think that is a smart move: costs always come down over time, while delivering a compelling experience for a brand new product category should take priority. Still, even $1,500 in hardware could very well be let down by software, particularly Siri. The Information reported last month:

Inside Apple, Siri remains widely derided for its lack of functionality and improvements since Giannandrea took over, say multiple former Siri employees. For example, the team building Apple’s mixed-reality headset, including its leader Mike Rockwell, has expressed disappointment in the demonstrations the Siri team created to showcase how the voice assistant could control the headset, according to two people familiar with the matter. At one point, Rockwell’s team considered building alternative methods for controlling the device using voice commands, the people said (the headset team ultimately ditched that idea).

Apple’s dominance of the smartphone era, the overall experience of which is delineated by software quality, hardware excellence, and a superior ecosystem, hasn’t been bothered by Siri’s disappointing performance. And, to the extent a headset era is beginning, it’s reasonable to expect that Apple’s usual advantages, particularly in terms of performance and industrial design, will be major factors. Moreover, even if Apple doesn’t announce major LLM-based features at this year’s developer conference, the smartphone — and by extension, the iPhone — isn’t going anywhere anytime soon.

Still, the very fact that Windows is suddenly interesting again, while a new Apple product faces a major software question, is evidence for Nadella’s argument that AI is a platform shift, and for the first time in a long time it is Microsoft that actually has a clear path to not just leveraging its base but actually expanding it.

Apple, meanwhile, still dominates the platforms where AI will be used for the foreseeable future — ChatGPT released its app on iPhone first, after all — but then again, Windows was still the dominant platform for the first decade-and-a-half of the Internet. Ultimately, though, the Internet eroded Windows’ dominance and set the stage for the smartphone; surely Apple knows it ought not risk a similar erosion of differentiation at the hands of AI, particularly as it courageously builds products beyond the iPhone.


  1. I am optimistic that there will be an announcement about embracing open source models

Google I/O and the Coming AI Battles

Some things in tech are shocking, but not surprising — think of a CEO of a struggling company losing their job. Sure, the news is unexpected, but it makes sense if you think about it. Other news, though, is shocking and surprising, and Google’s February keynote in Paris — which appeared to be a panicked response to Microsoft’s GPT-powered Bing announcement — was both.

The shocking part was just how poor the presentation was: there was very little content that was new, the slides and speakers were out of sync, and the nadir came when one of the presenters started a demo and only then realized they didn’t have a phone to demo with.

The surprising part was that Google would be caught so flat-footed on AI, and not just because AI would seem to be in Google’s sweet spot: in fact Google has been talking about AI at Google I/O in particular for years now, and I’ve consistently found the company’s framing of its work very impressive. Go back to 2016, when I excerpted CEO Sundar Pichai’s long digression into how machine learning is used in its products and wrote:

Note the specificity — it may seem too much for a keynote, but it is absolutely not BS. And no surprise: everything Pichai is talking about is exactly what Google was created to do. It’s no different than Ballmer exclaiming how much he loves Windows: that was the product representation of Microsoft’s mission, a perspective that perhaps grants the deposed CEO just a hint of grace for his inability to move on.

The next 30 minutes were awesome: Google Now, particularly Now on Tap, was exceptionally impressive, and Google Photos looks amazing. And, I might add, it has a killer tagline: Gmail for Photos. It’s so easy to be clear when you’re doing exactly what you were meant to do, and what you are the best in the world at.

Two years later I called Google I/O boring, and I meant it as a compliment:

This is why I think that Pichai’s “boring” opening was a great thing. No, there wasn’t the belligerence of early Google I/Os, insisting that Android could take on the iPhone. And no, there wasn’t the grand vision of Nadella last week, or the excitement of an Apple product unveiling. What there was was a sense of certainty and almost comfort: Google is about organizing the world’s information, and given that Pichai believes the future is about artificial intelligence, specifically the machine learning variant that runs on data, that means that Google will succeed in this new world simply by being itself. That is the best place to be, for a person and for a company.

That was the year that Google invented the transformer, the key invention undergirding the large language models that power ChatGPT, the product that seemed to fluster Google so much over the past six months. What was impressive about this year’s Google I/O is that it managed to combine what was compelling about Google’s last several I/Os — its clear AI capabilities and the products in which to manifest them — with the urgency and aggressiveness that are exactly what you would hope to see from a company feeling threatened for the first time in years.

Google’s AI Evolution

I noted above that Google introduced Google Photos as being Gmail for photos; two of my favorite slides from Pichai’s opening remarks showed how both products are evidence of Google’s evolving AI capabilities. Gmail evolved from “Smart Reply” to “Smart Compose” to “Help Me Write”:

Google's Gmail AI progression

Google Photos evolved from “Find Photos” to “Magic Eraser” to “Magic Editor”:

Google Photos AI progression

This was a very clever way to reinforce the idea that Google has been at this AI stuff for a while, and it’s true! It was also a reminder that one of Google’s big advantages mirrors Microsoft’s: the company has a bunch of user-facing products in which to surface AI capabilities in genuinely useful ways. Pichai noted that Google had 15 products with over 500 million users:

Google's 15 products with more than 500 million users

And six products with 2 billion users:

Google's six products with more than 2 billion users

Perhaps the most notable part of the keynote, though, was Pichai’s opening statement:

Seven years into our journey as an AI-first company, we’re at an exciting inflection point. We have an opportunity to make AI even more helpful for people, for businesses, for communities, for everyone. We’ve been applying AI to make our products radically more helpful for a while. With generative AI, we’re taking the next step. With a bold and responsible approach, we are re-imagining all of our core products, including search.

It seems notable that “responsible” was preceded by “bold”; the overriding message from Google I/O is that Google is in it to win it and, as Pichai noted, that includes search.

Generative AI and Search

After Google I/O 2019 — which I also found impressive — I raised the issue of Google’s long-term business model:

More importantly, while Google Assistant continues to impress — putting everything on device promises a major breakthrough in speed, a major limiting factor for Assistants today — it is not at all clear what Google’s business model is. It is hard to imagine anything as profitable as search ads, which benefit not only from precise targeting — the user explicitly says what they want! — but also an auction format that leverages the user to pick winners, and incentivizes those winners to overpay for the chance of forming an ongoing relationship with that user.

Google, as I expected, is taking a hybrid approach: most searches are not commercial, and so Google is going to place generated text right at the top:

Google's generative AI search results

For searches that do have commercial possibilities, ads will still get top billing:

Google's generative AI search results with ads at the top

This seems like a reasonable approach, aided by the fact that non-commercial searches are probably more likely to benefit from AI anyways; this is also an approach that already looks more compelling and yes, bold, than Microsoft’s grafting on of Bing Chat to the side of traditional search.

Of course Microsoft has actually launched its new search experience, and Satya Nadella’s eagerness to chip away at both Google’s marketshare and its margins remains a threat: generating these answers costs money, and Google’s models may still trail GPT-4 on an apples-to-apples basis. Still, this demo specifically and this year’s Google I/O generally are a pretty strong response to what Sam Altman told me in a Stratechery Interview after Bing’s launch:

I think it’s fabulous for both of us. I think there’s so much upside for both of us here. We’re going to discover what these new models can do, but if I were sitting on a lethargic search monopoly and had to think about a world where there was going to be a real challenge to the way that monetization of this works and new ad units, and maybe even a temporary downward pressure, I would not feel great about that.

Those challenges remain, but at least the “lethargic search monopoly” has woken up.

Sustaining and Disruptive Innovation

If there is one thing everyone is sure about, it is that AI is going to be very disruptive; in January’s AI and the Big Five, though, I noted that it seemed more likely that AI would be a sustaining innovation:

The story of 2022 was the emergence of AI, first with image generation models, including DALL-E, MidJourney, and the open source Stable Diffusion, and then ChatGPT, the first text-generation model to break through in a major way. It seems clear to me that this is a new epoch in technology.

To determine how that epoch might develop, though, it is useful to look back 26 years to one of the most famous strategy books of all time: Clayton Christensen’s The Innovator’s Dilemma, particularly this passage on the different kinds of innovations:

Most new technologies foster improved product performance. I call these sustaining technologies. Some sustaining technologies can be discontinuous or radical in character, while others are of an incremental nature. What all sustaining technologies have in common is that they improve the performance of established products, along the dimensions of performance that mainstream customers in major markets have historically valued. Most technological advances in a given industry are sustaining in character…

Disruptive technologies bring to a market a very different value proposition than had been available previously. Generally, disruptive technologies underperform established products in mainstream markets. But they have other features that a few fringe (and generally new) customers value. Products based on disruptive technologies are typically cheaper, simpler, smaller, and, frequently, more convenient to use.

It seems easy to look backwards and determine if an innovation was sustaining or disruptive by looking at how incumbent companies fared after that innovation came to market: if the innovation was sustaining, then incumbent companies became stronger; if it was disruptive then presumably startups captured most of the value.

My conclusion in that Article was that AI would be a sustaining innovation for Apple, Amazon, Meta, and Microsoft; the big question was Google and search:

That Article assumed that Google Assistant was going to be used to differentiate Google phones as an exclusive offering; that ended up being wrong, but the underlying analysis remains valid. Over the past seven years Google’s primary business model innovation has been to cram ever more ads into Search, a particularly effective tactic on mobile. And, to be fair, the sort of searches where Google makes the most money — travel, insurance, etc. — may not be well-suited for chat interfaces anyways.

That, though, ought only increase the concern for Google’s management that generative AI may, in the specific context of search, represent a disruptive innovation instead of a sustaining one. Disruptive innovation is, at least in the beginning, not as good as what already exists; that’s why it is easily dismissed by managers who can avoid thinking about the business model challenges by (correctly!) telling themselves that their current product is better. The problem, of course, is that the disruptive product gets better, even as the incumbent’s product becomes ever more bloated and hard to use — and that certainly sounds a lot like Google Search’s current trajectory.

I’m not calling the top for Google; I did that previously and was hilariously wrong. Being wrong, though, is more often than not a matter of timing: yes, Google has its cloud and YouTube’s dominance only seems to be increasing, but the outline of Search’s peak seems clear even if it throws off cash and profits for years.

Or maybe not. I tend to believe that disruptive innovations are actually quite rare, but when they come, they are basically impossible for the incumbent company to respond to: their business models, shareholders, and most important customers make it impossible for management to do so. If that is true, though, then an incumbent responding is in fact evidence that an innovation is actually not disruptive, but sustaining.

To that end, I take this Google I/O as evidence that AI is in fact a sustaining technology for all of Big Tech, including Google. Moreover, if that is the case, then that is a reason to be less bearish on the search company, because all of the reasons to expect them to have a leadership position — from capabilities to data to infrastructure to a plethora of consumer touch points — remain. Still, the challenges facing search as presently constructed — particularly its ad model — remain.

Revolution or Alignment

Another question I have been puzzling over is if AI is a Technological Revolution of the sort chronicled by Carlota Perez in Technological Revolutions and Financial Capital.

Once again, the conventional wisdom is that AI represents an entirely new paradigm; no less a luminary than Bill Gates wrote:

The development of AI is as fundamental as the creation of the microprocessor, the personal computer, the Internet, and the mobile phone. It will change the way people work, learn, travel, get health care, and communicate with each other. Entire industries will reorient around it. Businesses will distinguish themselves by how well they use it…

I’m lucky to have been involved with the PC revolution and the Internet revolution. I’m just as excited about this moment. This new technology can help people everywhere improve their lives. At the same time, the world needs to establish the rules of the road so that any downsides of artificial intelligence are far outweighed by its benefits, and so that everyone can enjoy those benefits no matter where they live or how much money they have. The Age of AI is filled with opportunities and responsibilities.

Gates implies that the PC revolution, the Internet revolution, and the AI revolution are discrete events, but they can also be viewed as three applications of the defining economic feature of digitization — zero marginal costs — to information:

  • The PC allowed for zero marginal cost duplication of information; this is what undergirded breakthroughs like word processors and spreadsheets and the other productivity applications Gates specialized in.
  • The Internet allows for zero marginal cost distribution of information. This led to markets based on abundance, not scarcity, giving rise to Aggregators like Google.
  • AI is zero marginal cost generation of information (well, nearly zero, relative to humans). As I wrote last year, generative models unbundle idea creation from idea substantiation, which can then be duplicated and distributed at zero marginal cost.

Moreover, these three revolutions had to come in the order they did: the very concept of the Internet doesn’t make sense without there being disparate computers, and these AI models are trained on the Internet.

I would also note that this progression is in line with the argument I made in 2020’s The End of the Beginning: the various tech revolutions were all manifestations of the same trend towards continuous computing everywhere. I didn’t mention AI in that Article, but the fact that AI appears to be a sustaining innovation supports the idea that the big winners of tech’s beginning will be the foundation of what happens in tech in the future.

Perez, for her part, has argued that the current revolution is still in the installation phase (I lay out her argument in this Article); for her the missing ingredient is coordination with and by governments.

Bard and the E.U.

One of Google’s other I/O announcements was the widespread availability of Bard, its ChatGPT competitor. The more interesting news was where it was not available; from Android Authority:

Google announced at its I/O developer conference that its Bard chatbot would be broadly available in 180 markets. It marks a major expansion for the platform, which saw a very limited release at first. Canada and Europe are missing from the list of supported markets, though. Now, Google has hinted at a possible reason for these omissions in an emailed response to an Android Authority query. A Google spokesperson noted the following:

Bard will soon be able to support the 40 top languages, and while we haven’t finalized the timeline for expansion plans, we will roll it out gradually and responsibly, and continue to be a helpful and engaged partner to regulators as we navigate these new technologies together.

The company’s assertion that it was a “helpful and engaged partner to regulators” suggests that Bard is skipping the E.U. and Canada for now due to regulatory concerns.

Once again there is a conventional wisdom take: “Haha silly Europe and its regulations means it will miss out on AI”, and, for now, that’s obviously true. It seems like a safe bet, though, that Google and Microsoft and Meta and other tech giants will indeed be a “helpful and engaged partner to regulators” to their ultimate benefit. After all, consider what those regulations might look like, starting with Canada and this bit from that same article:

Canadian lawmakers recently introduced legislation aimed at regulating AI. The Artificial Intelligence and Data Act (AIDA) mandates assessments, risk management, monitoring, data anonymization, transparency, and record-keeping practices around AI systems. AIDA would also introduce penalties of up to 3% of a company’s global revenue or $10 million.

That is a lot of red tape that will certainly be annoying for Google et al to manage, but also eminently manageable given their scale and resources; a proposed AI law in the E.U. will have an even larger regulatory load.

What is notable is that the technological arc I traced above is bending towards more government control: PCs granted incredible freedom and capabilities to individuals, but the Internet’s devolution into Aggregator-mediated networks gave governments distinct chokepoints on which to push for control of distribution, whether that be explicit as in China or implicit as in much of the West. AI, meanwhile, to the extent it is centralized in the major players, means there are direct loci of control on the actual generation of information.

This gives credence to Perez’s argument that the IT revolution has not yet achieved government alignment: it just wasn’t structurally possible previously. Whether said alignment does in fact mean an imminent “Golden Era” as Perez forecasts remains to be seen. What is worth noting is that it is very much in Google’s interest that this alignment becomes concrete: the best way to forestall truly disruptive technologies is to regulate them away.

The AI Radical Reformation

There is another aspect of the E.U. regulations that seems a bit more sinister. From Technomancers.ai:

In a bold stroke, the E.U.’s amended AI Act would ban American companies such as OpenAI, Amazon, Google, and IBM from providing API access to generative AI models. The amended act, voted out of committee on Thursday, would sanction American open-source developers and software distributors, such as GitHub, if unlicensed generative models became available in Europe. While the act includes open source exceptions for traditional machine learning models, it expressly forbids safe-harbor provisions for open source generative systems.

Any model made available in the E.U., without first passing extensive, and expensive, licensing, would subject companies to massive fines of the greater of €20,000,000 or 4% of worldwide revenue. Opensource developers, and hosting services such as GitHub – as importers – would be liable for making unlicensed models available. The E.U. is, essentially, ordering large American tech companies to put American small businesses out of business – and threatening to sanction important parts of the American tech ecosystem.

If enacted, enforcement would be out of the hands of E.U. member states. Under the AI Act, third parties could sue national governments to compel fines. The act has extraterritorial jurisdiction. A European government could be compelled by third parties to seek conflict with American developers and businesses.

This is a pretty explosive allegation, but Delos Prime, the author, backs it up with citations to the proposed law, and I think it’s a reasonable interpretation. As is ever the case with proposals like this, there isn’t explicit language about, say, banning API access; rather, Prime’s conclusion is that that is the effective outcome of, for example, API providers being made liable for all uses of their API, just as open source authors and distributors would be held liable for all uses of their models.
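For a sense of scale, the penalty structure quoted above is a simple “greater of” computation; here is a minimal sketch in Python, with invented revenue figures used purely for illustration:

```python
# The AI Act penalty quoted above is the greater of a flat floor and a
# percentage of worldwide revenue; the revenue figures below are invented.
def max_ai_act_fine(worldwide_revenue_eur: float) -> float:
    return max(20_000_000, 0.04 * worldwide_revenue_eur)

print(max_ai_act_fine(2_000_000))        # small developer: the €20M floor dominates
print(max_ai_act_fine(280_000_000_000))  # large platform: 4% works out to €11.2B
```

The floor is part of what makes the open source question so fraught: for a small developer the minimum exposure can exceed their entire revenue, while for the giants it is a large but survivable percentage.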

The question of how the U.S. would respond to such a law is obviously a hugely important one: there is a case to be made that making U.S. companies liable for simply open-sourcing models is a flagrant violation of sovereignty; I’m sure the E.U. would argue that U.S. Internet companies effectively exporting U.S. values like free speech was the exact same thing.

This is where history is interesting to consider, particularly the invention that I have long held is most analogous to the Internet: the printing press. I wrote in The Internet and the Third Estate:

In the Middle Ages the principal organizing entity for Europe was the Catholic Church. Relatedly, the Catholic Church also held a de facto monopoly on the distribution of information: most books were in Latin, copied laboriously by hand by monks. There was some degree of ethnic affinity between various members of the nobility and the commoners on their lands, but underneath the umbrella of the Catholic Church were primarily independent city-states.

The printing press changed all of this. Suddenly Martin Luther, whose critique of the Catholic Church was strikingly similar to Jan Hus 100 years earlier, was not limited to spreading his beliefs to his local area (Prague in the case of Hus), but could rather see those beliefs spread throughout Europe; the nobility seized the opportunity to interpret the Bible in a way that suited their local interests, gradually shaking off the control of the Catholic Church.

Meanwhile, the economics of printing books was fundamentally different from the economics of copying by hand. The latter was purely an operational expense: output was strictly determined by the input of labor. The former, though, was mostly a capital expense: first, to construct the printing press, and second, to set the type for a book. The best way to pay for these significant up-front expenses was to produce as many copies of a particular book that could be sold.

How, then, to maximize the number of copies that could be sold? The answer was to print using the most widely used dialect of a particular language, which in turn incentivized people to adopt that dialect, standardizing language across Europe. That, by extension, deepened the affinities between city-states with shared languages, particularly over decades as a shared culture developed around books and later newspapers. This consolidation occurred at varying rates — England and France several hundred years before Germany and Italy — but in nearly every case the First Estate became not the clergy of the Catholic Church but a national monarch, even as the monarch gave up power to a new kind of meritocratic nobility epitomized by Burke.

The culmination of this upheaval was the Peace of Westphalia, from whence we get the name Westphalian System; to quote Wikipedia:

The Westphalian system is the political order in international law that each state has exclusive sovereignty over its territory and are the monopolists of the ability to conduct warfare. The principle developed in Europe after the Peace of Westphalia in 1648, based on the state theory of Jean Bodin and the natural law teachings of Hugo Grotius. It underlies the modern international system of sovereign states and is enshrined in the United Nations Charter, which states that “nothing … shall authorize the United Nations to intervene in matters which are essentially within the domestic jurisdiction of any state.”

The Westphalian system faces any number of challenges, from globalization to humanitarian interventions to the Internet. The E.U.’s attempts to regulate AI are a perfect example: given there are no borders on the Internet — outside the Great Firewall, anyways1 — the E.U. appears poised to hold American companies liable for models released on U.S. servers; in the case of Google, it has, for now, found it prudent to unilaterally not serve the E.U., lest it face the same challenges confronting OpenAI.

This fight can, in a sense, be analogized to the Protestant and Catholic battles in Europe; in this case U.S. tech companies are the universal Internet, while Europe is seeking to protect its sovereignty. Or, perhaps, you prefer the opposite analogy, wherein Europe is seeking to export its beliefs to the rest of the world and, given the economic incentives towards having one product everywhere, very well may succeed (see cookie banners).

The open source part, though, is distinct: open source models running locally might be a big boon to Apple, but they are the truly disruptive threat to centralized companies like Google and OpenAI. In other words, they are a third force, distinct from regulators and centralized operators; they are the Radical Reformation.

This thought occurred to me while reading this fascinating thread from Owen Cyclops about American religion. It’s hard to find one single tweet to capture the thread, but Cyclops’ point is that the printing press led to three distinct religious groups: the Catholics, the Protestants, and then a whole host of fringe groups that were persecuted by both, and which, by extension, played a prominent role in American history.

In this view the application of the printing press’s impact to the formation of modern Europe is incomplete; you also have to consider the fringe, which is to say the United States.

And, by extension, if the digital transformation, from PC to Internet to AI, is of similar impact to the printing press, then the question at hand is not simply the nature of nation states going forward, but also the potential on the fringe.


This is, admittedly, a rather speculative and far-reaching Article, particularly given I started with Google I/O. I think it is meaningful, though, that Google made clear it views AI as a sustaining innovation, and that it intends to fully implement generative AI across its business, including search. Of course that means there are battles to come within that context: the aggressiveness and competitiveness we’ve seen from these large tech companies is a refreshing change from the stasis of the previous decade.

At the same time, the fact that all of Big Tech is on board and, given their supranational nature, will inevitably be incentivized to be a “helpful and engaged partner to regulators”, suggests that the true fight will be between centralized models and open source: the universal Catholic Church and the national Protestant churches had their conflicts, but they were unified in their disdain for the Anabaptists et al.

In this view these proposed E.U. regulations are simply the first salvo in what may be the defining war of the digital era: will centralized — and thus controllable — entities win, or will there be a flowering on the fringe of open models that truly explore the potential of AI, for better or for worse?


  1. One surprising conclusion of these regulatory battles is that it is China which is arguably the greatest advocate for the Westphalian system today, and that the Great Firewall, while deleterious for Chinese freedom, may have been a gift in terms of Western freedom 

The Unified Content Business Model

In 2017 BuzzFeed CEO Jonah Peretti declared that “if you’re thinking about an electorate and you’re thinking about the public and you’re thinking about people being informed, the subscription model in media does not help inform the broad public”; in 2017 The Athletic CEO Alex Mather went in the opposite direction, telling me in a Stratechery Interview that his publication would be differentiated by “less clickbait, no ads, no game recaps, no hot takes, really focusing in on the deeper stories, the insider stuff, the minutiae for the really diehard fan.”

Today in 2023 BuzzFeed has shuttered its News team and The Athletic has been acquired by the New York Times, whose top priority for its new property was adding advertising.

BuzzFeed’s bet on news was a bet on Facebook and Google’s willingness to subsidize free news, a bet that didn’t pay off. The Athletic had a happier ending, even if there are arguments that the New York Times overpaid, given that the sports publication had never made a profit; it’s also the case that neither BuzzFeed News nor The Athletic was running the proper business model for content on the Internet. The question going forward should not be advertising or subscriptions; the answer, in meme form:

"Why not both?" meme

This is, of course, a return to form for content production; it’s also both an evolution and refutation of a point I argued in 2015’s Popping the Publishing Bubble:

It is easy to feel sorry for publishers: before the Internet most were swimming in money, and for the first few years online it looked like online publications with lower costs of production would be profitable as well. The problem, though, was the assumption that advertising money would always be there, resulting in a “build it and they will come” mentality that focused almost exclusively on content production and far too little on sustainable business models.

In fact, publishers going forward need to have the exact opposite attitude from publishers in the past: instead of focusing on journalism and getting the business model for free, publishers need to start with a sustainable business model and focus on journalism that works hand-in-hand with the business model they have chosen. First and foremost that means publishers need to answer the most fundamental question required of any enterprise: are they a niche or scale business?

  • Niche businesses make money by maximizing revenue per user on a (relatively) small user base
  • Scale businesses make money by maximizing the number of users they reach

The truth is most publications are trying to do a little bit of everything: gain more revenue per user here, reach more users over there. However, unless you’re the New York Times (and even then it’s questionable), trying to do everything is a recipe for failing at everything; these two strategies require different revenue models, different journalistic focuses, and even different presentation styles.

I think my position today is more of an evolution than a refutation, because I do still think it is essential for a content entity to understand if it is in the niche or scale game; the refutation is twofold: first, everything is a niche, and second, nearly all content businesses should have both subscriptions and advertising.

Three Success Stories

Three companies have helped convince me that “both” is the best business model for content businesses.

New York Times: I shouldn’t have been so quick in that 2015 Article to dismiss the New York Times: last year the company had $2.3 billion in revenue, $1.6 billion of it from subscriptions and $523 million from advertising (the rest was from “Other”, including licensing, affiliate referrals, live events, etc.). Moreover, the advertising business, at least according to New York Times CEO Meredith Kopit Levien, is compelling precisely because it’s attached to a subscription business; from a Stratechery Interview:

We have painstakingly built an ad business that we as a growing subscription-first business can feel really proud of, and painstakingly built a business where the ad business runs on the same high octane gas as the subscription business. That is registered, logged in, highly engaged qualified audience who spend a lot of time with our product and where we get a lot of signal in privacy-forward ways, non-intrusive ways about what’s interesting to them.

In other words, the New York Times has a big advantage in terms of first party data, in addition to serving a premium advertising segment, precisely because it has focused first-and-foremost on having a subscription-driven editorial approach. This gets back to my “evolution” point: what the New York Times got right is that while it has both business models, it has been very clear-eyed that the subscription model aligns with its editorial approach (and vice-versa), and subsequently made clear that advertising is valuable as long as it is subservient to that model.

YouTube: Speaking of 2015, that was the year that YouTube launched YouTube Premium (it was called YouTube Red at the time), and I didn’t get it at first. I wrote in an Update:

My initial reaction to YouTube Red was befuddlement; while the “pay-to-remove-ads” business model may make sense for a small independent developer just slapping an ad network inside their app, for a company operating at YouTube’s scale the model has a potentially fatal contradiction: the people who are most likely to be willing to pay to remove ads are usually the exact same people advertisers most want to reach. True, few if any customers are likely to generate $120/year worth of advertising revenue, but the entire point of an advertising business model is to sell access to a wide audience at scale; YouTube Red potentially limits the size of that audience even as it makes it less valuable on an average user basis.

Moreover, this seems a particularly strange time to reduce the focus on advertising for YouTube in particular: for years we have been waiting for a significant shift in brand advertising dollars from TV to digital, and with TV viewership suddenly declining significantly, particularly amongst millennials, it seems that time is finally nigh. Why not double-down on advertising?

All discussions of YouTube need to include the very large caveat that Google still — in what I believe is a violation of SEC rules — refuses to disclose YouTube’s costs (and thus profit); the company also doesn’t break out YouTube Premium subscriptions. However, it was notable on the company’s last earnings call that management called out the fact that YouTube subscriptions drove 9% growth in “Google Other” revenue, even as YouTube ad revenue once again declined (because of, I suspect, Apple’s App Tracking Transparency changes). That ad revenue, though, is still up massively from 2015; there doesn’t seem to be any tension in having both models at the same time.

What is notable is that YouTube Premium is a much simpler product than YouTube Red tried to be: the latter invested in original content, both from Hollywood and from YouTube creators; the rebrand to YouTube Premium, though, led to the end of most of those initiatives. Instead the value proposition was as simple as could be: YouTube without ads (and some additional functionality, including downloads and background listening, for music in particular). This, in the end, is the subscription model that I think makes the most sense for a scale product whose primary business model is ads: pay to remove them, but otherwise leave the product the same.

Netflix: The company I should probably use here is Hulu; one of the reasons I argued Why Netflix Should Sell Ads is that Hulu had already shown that a streaming service could have a higher average revenue per customer on a cheaper ad-supported plan than on a higher-priced no-ad plan. Netflix has already achieved exactly that; last quarter the company’s $6.99 “Basic + Ads” plan had a higher average revenue per member than the company’s $15.49 “Standard” plan in North America.
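To make the arithmetic concrete, here is a minimal sketch; the ad revenue figure is a hypothetical placeholder, since Netflix does not disclose per-member ad earnings:

```python
# Hypothetical illustration of how a cheaper ad-supported plan can produce
# higher average revenue per member (ARM) than a pricier ad-free plan.
# The ad revenue per member below is an assumption, not a disclosed figure.
STANDARD_PRICE = 15.49                 # ad-free plan, monthly
ADS_PLAN_PRICE = 6.99                  # ad-supported plan, monthly
ASSUMED_AD_REVENUE_PER_MEMBER = 9.00   # hypothetical monthly ad revenue

ads_plan_arm = ADS_PLAN_PRICE + ASSUMED_AD_REVENUE_PER_MEMBER
print(f"Ad-free ARM:      ${STANDARD_PRICE:.2f}")
print(f"Ad-supported ARM: ${ads_plan_arm:.2f}")
print(f"The ads plan wins whenever ad revenue per member exceeds "
      f"${STANDARD_PRICE - ADS_PLAN_PRICE:.2f}")
```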

Netflix has already adjusted, changing “Basic + Ads” to “Standard + Ads”; the only difference with the regular Standard plan is that, as with YouTube, there are no downloads with the ad-supported plan (which makes total sense: if a device is offline then it can’t be served ads). I think, though, the streaming service should go further. Work backwards from the proposition that the only difference between the ad-supported plan and the Standard plan is the absence of ads, and layer on the fact that Netflix is a scale service that seeks to have content for everyone, and it follows that Netflix should probably end up in the same place that YouTube is: have one free ad-supported plan and one paid plan, with the only difference being the presence or lack thereof of ads.

Leverage and Content Costs

The irony of Netflix being both ad supported and subscription supported is that that was the business model of TV; customers paid for cable (which passed along affiliate fees to cable networks and retransmission fees to broadcast networks) and also had to endure advertisements during their favorite shows.

TV, meanwhile, was only one piece of a larger content strategy for entertainment companies: the products that required the most investment — movies — were subject to a windowing approach, where a film was first released in theaters, then pay-per-view, then premium TV, then cable, and finally free TV. The logic behind this approach was to utilize time to maximize leverage on the costs necessary to create the content in the first place. Netflix isn’t particularly interested in windowing (which I think is understandable in the case of movies, even if I think they should do weekly releases for their most popular shows), but offering the choice of whether or not ads are included is leveraging convenience and the overall user experience to achieve a similar sort of segmentation.

What is important to note is that leverage is still very important: Netflix has an advantage over other streaming services because it has the most subscribers, which means its per-subscriber cost for new content is lower. That advantage grows with the new subscribers Netflix is able to attract to its ad-supported tier; if Netflix had a free ad-supported plan that advantage would be even larger. YouTube, meanwhile, gets its content for free, but has to make major investments in infrastructure, moderation, etc. (the amount of which we don’t — but should — know). The key takeaway, though, is that while the means of leveraging content may have changed with the Internet, the importance of doing so remains the same.
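The leverage point is simple division; a quick sketch with invented (if order-of-magnitude plausible) numbers:

```python
# Illustrative only: content is a fixed cost spread across subscribers,
# so the same budget costs a larger service far less per subscriber.
def content_cost_per_subscriber(content_budget: float, subscribers: float) -> float:
    return content_budget / subscribers

# Invented figures: a $17B budget spread over 230M vs. 45M subscribers.
print(content_cost_per_subscriber(17e9, 230e6))  # ~$74 per subscriber
print(content_cost_per_subscriber(17e9, 45e6))   # ~$378 per subscriber
```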

The Hole in the Funnel

Of course YouTube isn’t the only “free” content on the Internet: social networks like Facebook or Twitter and user-generated content networks like TikTok have tons of content as well. There are also things like video games, and even traditional TV — the amount of content available to end users is infinite, and covers every possible niche; AI is only going to make the abundance more overwhelming, and perfectly customized to boot.

In this world the only scarce resource is attention: even if a user is “second-screening” — on their phone while watching TV, for example — they are only ever paying attention to one piece of content at any given moment. It follows, then, that value is a function of attention, because value is always downstream from scarcity.

This has further implications for content companies: to maximize leverage on their costs, they must have well-considered funnels for acquiring customers and monetizing them. One example I’ve been thinking a lot about is the NBA: the league’s social media presence is very strong, but there was a major disconnect between broad awareness of players and teams and the league’s business model, which, as I explained earlier this year, was anchored in the cable bundle, with a looming weak spot in terms of local rights and regional sports networks. The conundrum facing the league was clear:

The NBA's hole in the funnel

In a world where everyone has cable then having a strong social media presence is great; people can simply turn on the TV to check out a game. If a huge number of households have cut the cord, though, particularly in younger demographics that will hopefully grow into lifelong fans, then the price of entry is signing up for Pay TV, a major commitment that hardly seems worth it for a sport you are only vaguely aware of.

The Phoenix Suns, though, are trying something different: the team is (pending litigation from its former regional sports network) going to make its local games available over-the-air and on a team-branded streaming service. As I discussed yesterday this fills in the missing piece of the funnel:

This is why I have been concerned about the long-term outlook for the league: it’s hard enough to get attention in the modern era, but it’s even harder if your product isn’t even available to half of your addressable market. That was the reality, though, of being on an RSN: social media gave the NBA top of the funnel awareness, but there wasn’t an obvious next step for potential new fans who weren’t yet willing to pay for pay TV.

However, if this deal goes through, that next step will exist in Phoenix: potential fans can check out games over the air or through a Suns/Mercury app; if they like what they see they will soon be disappointed that they can’t see the best games, which are reserved for the national TV networks. That, though, is a good thing: now the Suns/Mercury are giving fans a reason to get cable again (or something like YouTube TV), increasing the value of the NBA to those networks along the way. And, of course, there is the most obvious way to monetize content on the Internet: deliver a real-world experience that can’t be replicated online — in this case attending a game in person.

All of this is good news for the long term value of [Suns owner Mat] Ishbia’s teams, and, by extension, good news for the NBA and the networks that buy its rights. Yes, forgone RSN money will hurt, but not having an effective customer acquisition funnel hurts even more.

This exact reasoning applies to Netflix, too: the company used to offer free trials, but given how easy they were to abuse, it ended them in 2020; that, though, meant Netflix had its own hole in the funnel. Potential users might hear about shows that interested them, but the only way to check them out was to actually pull out their credit card; that is, to be fair, still the case today, but at least the ad-supported plan is cheaper. This is also the argument as to why the ad-supported plan should eventually be free: the best way to get customers interested in your premium subscription is to get them watching your content and getting annoyed that ads are in the way of consuming more of it!

Creator Services

All of this is not, for the record, a prelude to introducing ads on Stratechery. While Stratechery arguably competes with the New York Times for user attention, I don’t have the resources to sell ads, even if I think it would be a perfectly fine thing to do (although my approach to ethics would preclude accepting advertising from any company I cover).

Things would be different if and when I ever launch video: one of the things that makes YouTube such a brilliant platform is that from the very early days the service has had a revenue share with creators. This lets creators benefit from the capabilities of the largest digital ad company in the world without any additional work; the creator’s only job is to create content that is compelling enough to earn views (and thus advertising revenue), which benefits YouTube.
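The mechanics are straightforward: the platform sells and serves the ads, then splits the proceeds with the uploader. A minimal sketch follows; the 55% creator share is the commonly cited figure for long-form video ads, but treat it as an assumption rather than a statement of YouTube’s current terms:

```python
# Sketch of a platform/creator ad revenue split; the 55% creator share is
# the commonly cited long-form video figure, treated here as an assumption.
CREATOR_SHARE = 0.55

def creator_payout(ad_revenue: float, share: float = CREATOR_SHARE) -> float:
    return ad_revenue * share

# A video earning $1,000 in ad revenue nets the creator $550, with the
# platform handling all of the ad sales, targeting, and serving.
print(creator_payout(1_000.0))  # 550.0
```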

Spotify is working to build out something similar for podcasting: yes, there is podcast advertising, but the market is fundamentally limited to high-price or high-LTV products, and the barrier to being a viable podcast is fairly high as far as total listeners go (and the market, at least anecdotally, seems to be struggling). What Spotify is trying to build is something much more akin to YouTube: targeted advertising at scale, where the only responsibility of content creators is to make something that users want to listen to.

I think that Substack is making a mistake in not doing the same thing for writing: it’s simply not viable for one-person or small-team publishers to sell ads effectively; Substack, though, has aggregated a whole slew of them, and thus is uniquely positioned to create value for its writers collectively by building out an ad product. This, by extension, could help Substack authors with their own hole in the funnel: moving readers from links in tweets to paying subscribers would be easier if there were a money-making layer in the middle.

Of course the freemium strategy is a good alternative; that has always been the business model here on Stratechery, both for my writing and my podcasts. Indeed, that is the real name for the “Unified Content Model”: everything, in the end, is on its way to freemium. The right place to draw the line will and should differ based on whether a product is niche or scaled, and on the cost structure of the content producer; what seems less necessary, given the need to both leverage content costs and acquire customers effectively, is being religious about only making money in one specific way.

AI, NIL, and Zero Trust Authenticity

Drake and The Weeknd collaborated on a new song and I am told it is a banger:

This video blew up on social media the day after Drake declared in a soon-deleted Instagram Story — written in response to another AI-generated song — that “This is the final straw AI”:

Drake's Instagram story expressing displeasure with AI

This is, needless to say, not the final straw, nor should Drake want it to be: he may be one of the biggest winners of the AI revolution.

The Music Value Question

The video above is both more and less amazing than it seems: the AI component is the conversion of someone’s voice to sound like Drake and The Weeknd, respectively; the music was made by a human. This isn’t pure AI generation, although services like Uberduck are working on that. That, though, is the amazing part: whoever made this video was talented enough to basically create a Drake song but for the particular sound of their voice, which happens to be exactly what current AI technology is capable of recreating.

This raises an interesting question as to where the value is coming from. We know there is no value in music simply for existing: like any piece of digital media the song is nothing more than a collection of bits, endlessly copied at zero marginal cost. This was the lesson of the shift from CDs to mp3s: it turned out record labels were not selling music, but rather plastic discs, and when the need for plastic discs went away, so did their business model.

What saved the music industry was working with the Internet instead of against it: if it was effectively free to distribute music, then why not distribute all of it, and charge a monthly price for convenience? Of course it took Daniel Ek and Spotify to drag the music industry to an Internet-first business model, but all’s well that ends well, at least as far as record labels are concerned:

U.S. music revenue

Still, artists continue to grumble about Spotify, in part because per-stream payments seem low; that metric, though, is first and foremost a function of how much music is listened to: the revenue pot is set by the number of subscribers (and music-related ad revenue), which means that the per-stream payout is a function of how many streams there are. In other words, a lower per-stream number is a good thing, because that means there was more listening overall, which is a positive as far as factors like churn or non-streaming revenue generation opportunities are concerned.
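A toy model of the pro-rata pool, with invented numbers, shows why: the per-stream rate falls as listening rises, even though the total paid out is fixed by subscriber revenue:

```python
# Toy model of a pro-rata streaming payout pool; all numbers are invented.
def per_stream_rate(subscribers: int, monthly_price: float,
                    royalty_share: float, total_streams: int) -> float:
    pool = subscribers * monthly_price * royalty_share
    return pool / total_streams

# Same subscriber revenue, more listening: a lower per-stream rate, but the
# total flowing to rights holders is unchanged.
print(per_stream_rate(100_000_000, 10.0, 0.70, 150_000_000_000))  # ~$0.0047
print(per_stream_rate(100_000_000, 10.0, 0.70, 200_000_000_000))  # ~$0.0035
```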

Of course the other factor driving artist earnings is competition: music streaming is a zero sum game — when you’re listening to one song, you can’t listen to another — which is precisely why Drake can be so successful churning out so many albums that, to this old man, seem to mostly sound the same. Not only do listeners have access to nearly all recorded music, but the barrier to entry for new music is basically non-existent, which means Spotify’s library is rapidly increasing in size; in this world of overwhelming content it’s easy to default to music from an artist you already know and have some affinity for.

This, then, answers the question of value: as talented as the maker of this song might be, the value is, without question, Drake’s voice, not for its intrinsic musical value, but because it’s Drake.

NIL: Name, Image, and Likeness

In 1995, Ed O’Bannon led the UCLA Bruins to a national basketball championship; more than a decade later O’Bannon was the lead plaintiff in O’Bannon v. NCAA after his likeness appeared in NCAA Basketball 09, a video game from EA Sports. O’Bannon alleged that the NCAA’s restrictions on a collegiate athlete’s ability to monetize their name, image, and likeness (NIL) were a violation of his publicity rights and an illegal restraint of trade under the Sherman Antitrust Act. O’Bannon and his fellow athletes won that case and a number of cases that followed, culminating in a unanimous Supreme Court decision in National Collegiate Athletic Association v. Alston that by-and-large affirmed the underlying arguments in O’Bannon’s case.

Most of the specifics in O’Bannon’s case have to do with the peculiarities of the American collegiate sports system, which is right now in a weird state of flux as athletic programs figure out how to navigate a world in which athletes have the right to benefit from NIL: ideally this means something like local endorsements, although it’s easy to see how NIL becomes a shortcut for effectively paying athletes to attend a particular university. The relative morality of that question is beyond the purview of a blog about sports and technology, other than to observe that the value in college athletics is mostly about athletic performance; NIL is a way to pay for that value without being explicit about it.

Notice both the similarities and differences between Drake and O’Bannon: O’Bannon was doing something that was unique and valuable (playing basketball), and seeking compensation for that activity, which he — and now many more college athletes — received in the form of compensation for his name, image, and likeness. For Drake, though, it is precisely his name, image, and likeness that lends value to what he does, or at least in the case of this video, could realistically be assumed to have done.

I am of course overstating things: just as a popular college athlete could absolutely provide value as an endorser, certainly Drake or any other artist’s music has value in its own right. The relative contribution of value, though, continues to tip away from the record itself towards the recording artist’s NIL — which is precisely why Drake could be such a big winner from AI: imagine a future where Drake licenses his voice, and gets royalties or the rights to songs from anyone who uses it.

This isn’t as far-fetched as it might seem; Drake has openly admitted that at this stage in his career songwriting is a collective process. From an interview in Fader after Meek Mill accused the star of not writing his own lyrics:

The fact that most of Drake’s fans seemed not to care about the particulars of how his songs were made proved something important: that Drake was no longer just operating as a popular rapper, but as a pop star, full stop, in a category with Beyoncé, Kanye West, Taylor Swift, and the many boundary-pushing mainstream acts from the past that transcended their genres and reached positions of historic influence in culture. At that altitude, it’s well known that the vast majority of great songs are cooked in groups and workshopped before being brought to life by one singular talent. That is the altitude where Drake lives now.

“I need, sometimes, individuals to spark an idea so that I can take off running,” he says. “I don’t mind that. And those recordings—they are what they are. And you can use your own judgment on what they mean to you.” “There’s not necessarily a context to them,” he adds, when I ask him to provide some. “And I don’t know if I’m really here to even clarify it for you.” Instead, he tells me he is ready and willing to be the flashpoint for a debate about originality in hip-hop. “If I have to be the vessel for this conversation to be brought up—you know, God forbid we start talking about writing and references and who takes what from where—I’m OK with it being me,” he says.

He then makes a bigger point—one that sums up why the experience of being publicly targeted left him in a position of greater strength than he went into it with: “It’s just, music at times can be a collaborative process, you know? Who came up with this, who came up with that—for me, it’s like, I know that it takes me to execute every single thing that I’ve done up until this point. And I’m not ashamed.”

Other stars are even less involved in the songwriting process, relying on songwriting camps to put an album together; from Vulture:

The camps, or at least the collaborative songwriting process, have fundamentally changed the way pop music sounds — Beyoncé’s Lemonade was a strikingly personal album, full of scorned-lover songs, but it was conceived by teams of writers (with the singer’s input and oversight). Key moments came from indie rockers, including Father John Misty, who fleshed out “Hold Up” after Beyoncé sent him the hook. Similarly, West’s Ye deals with mental illness and other intimate themes, but numerous writers, from Benny Blanco to Ty Dolla $ign, helped him turn those issues into songs. (Father John Misty, Parker, Vernon, Koenig, and other indie-rock stars refused interview requests.)

“Those artists still have a heavy hand in what songs they pick,” says Ingrid Andress, a Nashville singer-songwriter who is readying new solo material and regularly attends camps for pop stars. “But people forget that not just Beyoncé feels like Beyoncé. I guarantee all the people who wrote for Beyoncé’s record are coming from a place of also being cheated on, or angry, or wanting to find redemption in their culture.”

The value of a Beyoncé song comes first and foremost from the fact it is a Beyoncé song, not a Father John Misty song; there’s no reason the principle wouldn’t extend to AI: the more abundance there is, the more value accrues to whatever it is that can break through — and superstars can break through more than anyone.

Musical Chimeras

The record labels, unsurprisingly, learned nothing from the evolution of digital music: their first instinct is to reach for the ban hammer. Today that means leaning on centralized services like Spotify; from the Financial Times:

Universal Music Group has told streaming platforms, including Spotify and Apple, to block artificial intelligence services from scraping melodies and lyrics from their copyrighted songs, according to emails viewed by the Financial Times. UMG, which controls about a third of the global music market, has become increasingly concerned about AI bots using their songs to train themselves to churn out music that sounds like popular artists. AI-generated songs have been popping up on streaming services and UMG has been sending takedown requests “left and right”, said a person familiar with the matter.

The company is asking streaming companies to cut off access to their music catalogue for developers using it to train AI technology. “We will not hesitate to take steps to protect our rights and those of our artists,” UMG wrote to online platforms in March, in emails viewed by the FT. “This next generation of technology poses significant issues,” said a person close to the situation. “Much of [generative AI] is trained on popular music. You could say: compose a song that has the lyrics to be like Taylor Swift, but the vocals to be in the style of Bruno Mars, but I want the theme to be more Harry Styles. The output you get is due to the fact the AI has been trained on those artists’ intellectual property.” 

In case it’s not clear, I’m not exactly an expert on pop music, but I’m going to put my money on that Swift-Mars-Styles song being a dud on Spotify, no matter how good it may end up sounding, for one very obvious reason: Spotify will not allow the song to be labeled as being written by Taylor Swift, sung by Bruno Mars, with a Harry Styles theme, because it’s not. Sure, someone may be able to create such a chimera, but it will, like the video above, be a novelty item, the interest in which will decrease as similar chimeras flood the Internet. To put it a different way, as AI-generated content proliferates, authenticity will matter all the more, both commercially (because the AI-generated content won’t be commercializable) and in terms of popular valence.

This is a good thing, because it points to a solution that is aligned with the reality of AI, just as streaming was aligned with the Internet: call it Zero Trust Authenticity.

Zero Trust Authenticity

Back in 2020, when COVID emerged, I wrote an Article entitled Zero Trust Information that built on the ideas behind zero trust networking.

The problem, though, was the Internet: connecting any one computer on the local area network to the Internet effectively connected all of the computers and servers on the local area network to the Internet. The solution was perimeter-based security, aka the “castle-and-moat” approach: enterprises would set up firewalls that prevented outside access to internal networks. The implication was binary: if you were on the internal network, you were trusted, and if you were outside, you were not.

A drawing of Castle and Moat Network Security

This, though, presented two problems: first, if any intruder made it past the firewall, they would have full access to the entire network. Second, if any employee were not physically at work, they were blocked from the network. The solution to the second problem was a virtual private network, which utilized encryption to let a remote employee’s computer operate as if it were physically on the corporate network, but the larger point is the fundamental contradiction represented by these two problems: enabling outside access while trying to keep outsiders out.

These problems were dramatically exacerbated by the three great trends of the last decade: smartphones, software-as-a-service, and cloud computing. Now instead of the occasional salesperson or traveling executive who needed to connect their laptop to the corporate network, every single employee had a portable device that was connected to the Internet all of the time; now, instead of accessing applications hosted on an internal network, employees wanted to access applications operated by a SaaS provider; now, instead of corporate resources being on-premises, they were in public clouds run by AWS or Microsoft. What kind of moat could possibly contain all of these use cases?

The answer is to not even try: instead of trying to put everything inside of a castle, put everything in the castle outside the moat, and assume that everyone is a threat. Thus the name: zero-trust networking.

A drawing of Zero Trust Networking

In this model trust is at the level of the verified individual: access (usually) depends on multi-factor authentication (such as a password and a trusted device, or temporary code), and even once authenticated an individual only has access to granularly-defined resources or applications. This model solves all of the issues inherent to a castle-and-moat approach:

  • If there is no internal network, there is no longer any concept of an outside intruder or a remote worker.
  • Individual-based authentication scales on the user side across devices and on the application side across on-premises resources, SaaS applications, and the public cloud (particularly when implemented with single sign-on services like Okta or Azure Active Directory).

In short, zero trust computing starts with Internet assumptions: everyone and everything is connected, both good and bad, and leverages the power of zero transaction costs to make continuous access decisions at a far more distributed and granular level than would ever be possible when it comes to physical security, rendering the fundamental contradiction at the core of castle-and-moat security moot.
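To make the model concrete, here is a minimal sketch (in Python, with purely illustrative names and policy rules, not any particular vendor’s API) of what a zero trust access decision looks like: every request is evaluated on its own merits, based on identity, authentication strength, device state, and the specific resource, rather than on whether it originates inside a perimeter.

```python
# A sketch of a zero trust access decision; all names and policy rules are
# illustrative, not any particular vendor's implementation.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_id: str
    mfa_verified: bool    # e.g. password plus trusted device or one-time code
    device_trusted: bool  # device posture check
    resource: str         # the specific application or resource requested

# Policy maps each verified individual to the granular set of resources they
# may access; there is no notion of being "inside the network".
POLICY = {
    "alice": {"payroll-app", "crm"},
    "bob": {"crm"},
}

def authorize(req: AccessRequest) -> bool:
    """Evaluate every request on its own merits: identity, authentication
    strength, device state, and the specific resource requested."""
    if not (req.mfa_verified and req.device_trusted):
        return False
    return req.resource in POLICY.get(req.user_id, set())

# A castle-and-moat check, by contrast, reduces to a single question:
# is the request coming from inside the perimeter?
print(authorize(AccessRequest("alice", True, True, "payroll-app")))  # True
print(authorize(AccessRequest("bob", True, True, "payroll-app")))    # False
```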

The point of that Article was to argue that trying to censor misinformation and disinformation was to fruitlessly pursue a castle-and-moat strategy that was not only doomed to fail, but which would actually make the problem worse: the question of “Who decides?” looms over every issue subject to top-down control, and the risk of inevitably getting fraught questions wrong is to empower bad actors who have some morsels of truth that were wrongly censored mixed in with more malicious falsehoods.

A better solution is Zero Trust Information: as I documented in that Article, young people are by and large appropriately skeptical of what they read online; what they need are trusted resources that do their best to get things right and, critically, take accountability and explain themselves when they change their mind. That is the only way to harvest the massive benefits of the “information superhighway” that is the Internet while avoiding roads to nowhere, or worse.

A similar principle is the way forward for content as well: one can make the case that most of the Internet, given the zero marginal cost of distribution, ought already to be considered fake; once content creation itself is a zero marginal cost activity, almost all of it will be. The solution isn’t to try to eliminate that content, but rather to find ways to verify that which is still authentic. As I noted above, I expect Spotify to do just that with regards to music: now the value of the service won’t simply be convenience, but also the knowledge that if a song on Spotify is labeled “Drake” it will in fact be by Drake (or licensed by him!).

This will present a challenge for sites like YouTube that are further towards the user-generated content end of the spectrum: right now you can upload a video that says whatever you want in its title and description; YouTube could screen for trademarked names and block known rip-offs, but that’s going to be hard to scale as celebrities-within-their-ever-smaller-niches proliferate, and it’s going to have a lot of false positives. What seems better is leaning heavily into verified profiles, artist names, etc.: it should be clear at a glance if a video is authentic or not, because only the authentic person or their representatives could have put it there.

What YouTube does deserve credit for is how it ultimately solved the licensed music problem: user-generated videos that included licensed music used to be subject to takedown notices, and eventually YouTube removed them unilaterally once it gained the ability to scan everything for known music signatures. Today, though, the videos can stay on YouTube: any monetization of those videos goes to the record labels instead. This is very much in line with what I am proposing: in this case the authenticity is the music itself, which YouTube ascertains and compensates accordingly, while in the future the authenticity will be the name, image, and likeness of artists and creators.
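For illustration, here is a rough sketch of that approach; the fingerprinting function and label mapping are hypothetical stand-ins, not YouTube’s actual Content ID system. The key point is that a match changes who gets paid, not whether the video stays up.

```python
# A rough sketch of the approach described above: match uploaded audio
# against known fingerprints and redirect monetization to the rights holder
# instead of taking the video down. Everything here is a hypothetical
# stand-in, not YouTube's actual system.
KNOWN_FINGERPRINTS = {
    "fp-0001": "Universal Music Group",  # fingerprint of a licensed song
}

def fingerprint(audio: bytes) -> str:
    # Stand-in for an audio-fingerprinting algorithm.
    return "fp-0001" if audio else "fp-unknown"

def handle_upload(audio: bytes, uploader: str) -> dict:
    rights_holder = KNOWN_FINGERPRINTS.get(fingerprint(audio))
    if rights_holder:
        # The video stays up; ad revenue is routed to the label.
        return {"status": "published", "monetized_by": rights_holder}
    return {"status": "published", "monetized_by": uploader}

print(handle_upload(b"\x01\x02", "some-creator"))
# {'status': 'published', 'monetized_by': 'Universal Music Group'}
```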

What is compelling about this model of affirmatively asserting authenticity is the room it leaves for innovation and experimentation and, should a similar attribution/licensing regime be worked out, even greater benefits to those with the name, image, and likeness capable of breaking through the noise. What would be far less lucrative — and, for society broadly, far more destructive — is believing that scrambling to stop the free creation of content by AI will somehow go better than the same failed approaches to stopping free distribution on the Internet.

I wrote a follow-up to this Article in this Daily Update.

ChatGPT Gets a Computer

Ten years ago (from last Saturday) I launched Stratechery with an image of sailboats:

A picture of Sailboats

A simple image. Two boats, and a big ocean. Perhaps it’s a race, and one boat is winning — until it isn’t, of course. Rest assured there is breathless coverage of every twist and turn, and skippers are alternately held as heroes and villains, and nothing in between.

Yet there is so much more happening. What are the winds like? What have they been like historically, and can we use that to better understand what will happen next? Is there a major wave just off the horizon that will reshape the race? Are there fundamental qualities in the ships themselves that matter far more than whatever skipper is at hand? Perhaps this image is from the America’s Cup, and the trailing boat is quite content to mirror the leading boat all the way to victory; after all, this is but one leg in a far larger race.

It’s these sorts of questions that I’m particularly keen to answer about technology. There are lots of (great!) sites that cover the day-to-day. And there are some fantastic writers who divine what it all means. But I think there might be a niche for context. What is the historical angle on today’s news? What is happening on the business side? Where is value being created? How does this translate to normals?

ChatGPT seems to affirm that I have accomplished my goal; Mike Conover ran an interesting experiment where he asked ChatGPT to identify the author of my previous Article, The End of Silicon Valley (Bank), based solely on the first four paragraphs:1

Conover asked ChatGPT to expound on its reasoning:

ChatGPT was not, of course, expounding on its reasoning, at least in a technical sense: ChatGPT has no memory; rather, when Conover asked the bot to explain what it meant, his question included all of the session’s previous questions and answers, which provided the context necessary for the bot to simulate an ongoing conversation, and then statistically predict the answer, word-by-word, that satisfied the query.
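A minimal sketch makes the point; `ask_model` below is a hypothetical stand-in for whatever completion API is being called, not any specific library. The apparent memory lives entirely in the transcript that the client resends with every request.

```python
# Why a chat bot appears to "remember": the client resends the entire
# transcript with every request. `ask_model` is a hypothetical stand-in
# for whatever completion API is being called.
def ask_model(messages: list[dict]) -> str:
    # Canned reply for illustration; a real call would go to the model here.
    return f"(a reply predicted from all {len(messages)} prior messages)"

history = [{"role": "system", "content": "You are a helpful assistant."}]

def chat(user_input: str) -> str:
    history.append({"role": "user", "content": user_input})
    # The model sees every previous question and answer on each call;
    # it retains nothing of its own between calls.
    reply = ask_model(history)
    history.append({"role": "assistant", "content": reply})
    return reply

chat("Who wrote this Article?")
chat("Explain your reasoning.")  # this request includes the whole session so far
```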

This observation of how ChatGPT works is often wielded by those skeptical about assertions of intelligence; sure, the prediction is impressive, and nearly always right, but it’s not actually thinking — and besides, it’s sometimes wrong.

Prediction and Hallucination

In 2004, Jeff Hawkins, at that point best known as the founder of Palm and Handspring, released a book with Sandra Blakeslee called On Intelligence; the first chapter is about Artificial Intelligence, which Hawkins declared to be a flawed construct:

Computers and brains are built on completely different principles. One is programmed, one is self-learning. One has to be perfect to work at all, one is naturally flexible and tolerant of failures. One has a central processor, one has no centralized control. The list of differences goes on and on. The biggest reason I thought computers would not be intelligent is that I understood how computers worked, down to the level of the transistor physics, and this knowledge gave me a strong intuitive sense that brains and computers were fundamentally different. I couldn’t prove it, but I knew it as much as one can intuitively know anything.

Over the rest of the book Hawkins laid out a theory of intelligence that he has continued to develop over the last two decades; in 2021 he published A Thousand Brains: A New Theory of Intelligence, which distilled the theory to its essence:

The brain creates a predictive model. This just means that the brain continuously predicts what its inputs will be. Prediction isn’t something that the brain does every now and then; it is an intrinsic property that never stops, and it serves an essential role in learning. When the brain’s predictions are verified, that means the brain’s model of the world is accurate. A mis-prediction causes you to attend to the error and update the model.

Hawkins’ theory is not, to the best of my knowledge, accepted fact, in large part because it’s not even clear how it would be proven experimentally. It is notable, though, that the go-to dismissal of ChatGPT’s intelligence is, at least in broad strokes, exactly what Hawkins says intelligence actually is: the ability to make predictions.

Moreover, as Hawkins notes, this means sometimes getting things wrong. Hawkins writes in A Thousand Brains:

The model can be wrong. For example, people who lose a limb often perceive that the missing limb is still there. The brain’s model includes the missing limb and where it is located. So even though the limb no longer exists, the sufferer perceives it and feels that it is still attached. The phantom limb can “move” into different positions. Amputees may say that their missing arm is at their side, or that their missing leg is bent or straight. They can feel sensations, such as an itch or pain, located at particular locations on the limb. These sensations are “out there” where the limb is perceived to be, but, physically, nothing is there. The brain’s model includes the limb, so, right or wrong, that is what is perceived…

A false belief is when the brain’s model believes that something exists that does not exist in the physical world. Think about phantom limbs again. A phantom limb occurs because there are columns in the neocortex that model the limb. These columns have neurons that represent the location of the limb relative to the body. Immediately after the limb is removed, these columns are still there, and they still have a model of the limb. Therefore, the sufferer believes the limb is still in some pose, even though it does not exist in the physical world. The phantom limb is an example of a false belief. (The perception of the phantom limb typically disappears over a few months as the brain adjusts its model of the body, but for some people it can last years.)

This is an example of “a perception in the absence of an external stimulus that has the qualities of a real perception”; that quote is from the Wikipedia page for hallucination. “Hallucination (artificial intelligence)” has its own Wikipedia entry:

In artificial intelligence (AI), a hallucination or artificial hallucination (also occasionally called delusion) is a confident response by an AI that does not seem to be justified by its training data. For example, a hallucinating chatbot with no knowledge of Tesla’s revenue might internally pick a random number (such as “$13.6 billion”) that the chatbot deems plausible, and then go on to falsely and repeatedly insist that Tesla’s revenue is $13.6 billion, with no sign of internal awareness that the figure was a product of its own imagination.

Such phenomena are termed “hallucinations”, in analogy with the phenomenon of hallucination in human psychology. Note that while a human hallucination is a percept by a human that cannot sensibly be associated with the portion of the external world that the human is currently directly observing with sense organs, an AI hallucination is instead a confident response by an AI that cannot be grounded in any of its training data. AI hallucination gained prominence around 2022 alongside the rollout of certain large language models (LLMs) such as ChatGPT. Users complained that such bots often seemed to “sociopathically” and pointlessly embed plausible-sounding random falsehoods within its generated content. Another example of hallucination in artificial intelligence is when the AI or chatbot forget that they are one and claim to be human.

Like Sydney, for example.

The Sydney Surprise

It has been six weeks now, and I still maintain that my experience with Sydney was the most remarkable computing experience of my life; what made my interaction with Sydney so remarkable was that it didn’t feel like I was interacting with a computer at all:

I am totally aware that this sounds insane. But for the first time I feel a bit of empathy for Lemoine. No, I don’t think that Sydney is sentient, but for reasons that are hard to explain, I feel like I have crossed the Rubicon. My interaction today with Sydney was completely unlike any other interaction I have had with a computer, and this is with a primitive version of what might be possible going forward.

Here is another way to think about hallucination: if the goal is to produce a correct answer like a better search engine, then hallucination is bad. Think about what hallucination implies though: it is creation. The AI is literally making things up. And, in this example with LaMDA, it is making something up to make the human it is interacting with feel something. To have a computer attempt to communicate not facts but emotions is something I would have never believed had I not experienced something similar.

Computers are, at their core, incredibly dumb; transistors, billions of which lie at the heart of the fastest chips in the world, are simple on-off switches, the state of which is represented by a 1 or a 0. What makes them useful is that they are dumb at incomprehensible speed; the Apple A16 in the current iPhone turns transistors on and off up to 3.46 billion times a second.

The reason why these 1s and 0s can manifest themselves in your reading this Article has its roots in philosophy, as explained in this wonderful 2016 article by Chris Dixon entitled How Aristotle Created the Computer:

The history of computers is often told as a history of objects, from the abacus to the Babbage engine up through the code-breaking machines of World War II. In fact, it is better understood as a history of ideas, mainly ideas that emerged from mathematical logic, an obscure and cult-like discipline that first developed in the 19th century. Mathematical logic was pioneered by philosopher-mathematicians, most notably George Boole and Gottlob Frege, who were themselves inspired by Leibniz’s dream of a universal “concept language,” and the ancient logical system of Aristotle.

Dixon’s article is about the history of mathematical logic; Dixon notes:

Mathematical logic was initially considered a hopelessly abstract subject with no conceivable applications. As one computer scientist commented: “If, in 1901, a talented and sympathetic outsider had been called upon to survey the sciences and name the branch which would be least fruitful in [the] century ahead, his choice might well have settled upon mathematical logic.” And yet, it would provide the foundation for a field that would have more impact on the modern world than any other.

It is mathematical logic that reduces all of math to a series of logical statements, which allows them to be computed using transistors; again from Dixon:

[George] Boole’s goal was to do for Aristotelean logic what Descartes had done for Euclidean geometry: free it from the limits of human intuition by giving it a precise algebraic notation. To give a simple example, when Aristotle wrote:

All men are mortal.

Boole replaced the words “men” and “mortal” with variables, and the logical words “all” and “are” with arithmetical operators:

x = x * y

Which could be interpreted as “Everything in the set x is also in the set y”…

[Claude] Shannon’s insight was that Boole’s system could be mapped directly onto electrical circuits. At the time, electrical circuits had no systematic theory governing their design. Shannon realized that the right theory would be “exactly analogous to the calculus of propositions used in the symbolic study of logic.” He showed the correspondence between electrical circuits and Boolean operations in a simple chart:

Claude Shannon's circuit interpretation table

This correspondence allowed computer scientists to import decades of work in logic and mathematics by Boole and subsequent logicians. In the second half of his paper, Shannon showed how Boolean logic could be used to create a circuit for adding two binary digits.

Claude Shannon's circuit design

By stringing these adder circuits together, arbitrarily complex arithmetical operations could be constructed. These circuits would become the basic building blocks of what are now known as arithmetical logic units, a key component in modern computers.
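Shannon’s construction is easy to express in code: a half adder is nothing but an XOR and an AND, a full adder is two half adders and an OR, and chaining full adders adds binary numbers of arbitrary length. This is a sketch in Python rather than relays, but the logic is the same.

```python
# Shannon's construction in code: XOR and AND form a half adder, two half
# adders and an OR form a full adder, and chained full adders add binary
# numbers of arbitrary length (bits are least-significant first).
def half_adder(a: int, b: int) -> tuple[int, int]:
    return a ^ b, a & b  # (sum, carry)

def full_adder(a: int, b: int, carry_in: int) -> tuple[int, int]:
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, carry_in)
    return s2, c1 | c2

def add_binary(x: list[int], y: list[int]) -> list[int]:
    result, carry = [], 0
    for a, b in zip(x, y):
        s, carry = full_adder(a, b, carry)
        result.append(s)
    return result + [carry]

print(add_binary([1, 1], [1, 0]))  # 3 + 1 = 4, i.e. [0, 0, 1]
```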

The implication of this approach is that computers are deterministic: if circuit X is open, then the proposition represented by X is true; 1 plus 1 is always 2; clicking “back” on your browser will exit this page. There are, of course, a huge number of abstractions and massive amounts of logic between an individual transistor and any action we might take with a computer — and an effectively infinite number of places for bugs — but the appropriate mental model for a computer is that they do exactly what they are told (indeed, a bug is not the computer making a mistake, but rather a manifestation of the programmer telling the computer to do the wrong thing). Sydney, though, was not at all what Microsoft intended.

ChatGPT’s Computer

I’ve already mentioned Bing Chat and ChatGPT; on March 14 Anthropic released another AI assistant named Claude: while the announcement doesn’t say so explicitly, I assume the name is in honor of the aforementioned Claude Shannon.

This is certainly a noble sentiment — Shannon’s contributions to information theory extend far beyond what Dixon laid out above — but it also feels misplaced: while technically speaking everything an AI assistant does is ultimately composed of 1s and 0s, the manner in which they operate is emergent from their training, not prescribed, which leads to the experience feeling fundamentally different from logical computers — something nearly human — which takes us back to hallucinations; Sydney was interesting, but what about homework?

Here are three questions that GPT-4 got wrong:

Questions GPT4 got wrong

All three of these examples come from Stephen Wolfram, who noted that there are some kinds of questions that large language models just aren’t well-suited to answer:

Machine learning is a powerful method, and particularly over the past decade, it’s had some remarkable successes—of which ChatGPT is the latest. Image recognition. Speech to text. Language translation. In each of these cases, and many more, a threshold was passed—usually quite suddenly. And some task went from “basically impossible” to “basically doable”.

But the results are essentially never “perfect”. Maybe something works well 95% of the time. But try as one might, the other 5% remains elusive. For some purposes one might consider this a failure. But the key point is that there are often all sorts of important use cases for which 95% is “good enough”. Maybe it’s because the output is something where there isn’t really a “right answer” anyway. Maybe it’s because one’s just trying to surface possibilities that a human—or a systematic algorithm—will then pick from or refine…

And yes, there’ll be plenty of cases where “raw ChatGPT” can help with people’s writing, make suggestions, or generate text that’s useful for various kinds of documents or interactions. But when it comes to setting up things that have to be perfect, machine learning just isn’t the way to do it—much as humans aren’t either.

And that’s exactly what we’re seeing in the examples above. ChatGPT does great at the “human-like parts”, where there isn’t a precise “right answer”. But when it’s “put on the spot” for something precise, it often falls down. But the whole point here is that there’s a great way to solve this problem—by connecting ChatGPT to Wolfram|Alpha and all its computational knowledge “superpowers”.

That’s exactly what OpenAI has done. From The Verge:

OpenAI is adding support for plug-ins to ChatGPT — an upgrade that massively expands the chatbot’s capabilities and gives it access for the first time to live data from the web.

Up until now, ChatGPT has been limited by the fact it can only pull information from its training data, which ends in 2021. OpenAI says plug-ins will not only allow the bot to browse the web but also interact with specific websites, potentially turning the system into a wide-ranging interface for all sorts of services and sites. In an announcement post, the company says it’s almost like letting other services be ChatGPT’s “eyes and ears.”

Stephen Wolfram’s Wolfram|Alpha is one of the official plugins, and now ChatGPT gets the above answers right — and quickly:2

ChatGPT gets the right answer from Wolfram|Alpha

Wolfram wrote in the post that requested this integration:

For decades there’s been a dichotomy in thinking about AI between “statistical approaches” of the kind ChatGPT uses, and “symbolic approaches” that are in effect the starting point for Wolfram|Alpha. But now—thanks to the success of ChatGPT—as well as all the work we’ve done in making Wolfram|Alpha understand natural language—there’s finally the opportunity to combine these to make something much stronger than either could ever achieve on their own.

The fact this works so well is itself a testament to what Assistant AIs are, and are not: they are not computing as we have previously understood it; they are shockingly human in their way of “thinking” and communicating. And frankly, I would have had a hard time solving those three questions as well — that’s what computers are for! And now ChatGPT has a computer of its own.
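The mechanics behind that computer are, at least in outline, straightforward: the model does not calculate the answer itself; the host watches for the model to request a tool, runs the call (to Wolfram|Alpha, in this case), and feeds the result back into the context so the model can phrase the final answer. Here is a rough sketch of that loop, with a hypothetical request format and stand-in functions rather than OpenAI’s or Wolfram’s actual interfaces.

```python
# A rough sketch of the plugin loop: the model delegates precise computation
# to an external tool, then phrases the result. The request format and both
# stand-in functions below are hypothetical, not actual APIs.
import json

def ask_model(messages: list[dict]) -> str:
    # Canned stand-in: request the tool first, then phrase its result.
    if any(m["role"] == "tool" for m in messages):
        return f"According to Wolfram|Alpha, {messages[-1]['content']}."
    return json.dumps({"tool": "wolfram_alpha", "query": "distance to Mercury"})

def call_wolfram_alpha(query: str) -> str:
    # Placeholder output, not a real query result.
    return "the current Earth-Mercury distance, freshly computed"

def answer(question: str) -> str:
    messages = [{"role": "user", "content": question}]
    reply = ask_model(messages)
    try:
        request = json.loads(reply)
        if isinstance(request, dict) and request.get("tool") == "wolfram_alpha":
            result = call_wolfram_alpha(request["query"])
            # Hand the tool's output back to the model as more context.
            messages.append({"role": "tool", "content": result})
            reply = ask_model(messages)
    except json.JSONDecodeError:
        pass  # the model answered directly, no tool needed
    return reply

print(answer("How far away is Mercury right now?"))
```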

Opportunity and Risk

One implication of this plug-in architecture is that someone needs to update Wikipedia: the hallucination example above is now moot, because ChatGPT isn’t making up revenue numbers — it’s using its computer:

Tesla's revenue in ChatGPT

This isn’t perfect — for some reason Wolfram|Alpha’s data is behind, but it did get the stock price correct:

Tesla's stock price in ChatGPT

Wolfram|Alpha isn’t the only plugin, of course: right now there are 11 plugins in categories like Travel (Expedia and Kayak), restaurant reservations (OpenTable), and Zapier, which opens the door to 5,000+ other apps (the plugin to search the web isn’t currently available); they are all presented in what is being called the “Plugin store.” The Instacart integration was particularly delightful:

ChatGPT adds a shopping list to Instacart

Here’s where the link takes you:

My ChatGPT-created shopping cart

ChatGPT isn’t actually delivering me groceries — but it’s not far off! One limitation is that I actually had to select the Instacart plugin; you can only have three loaded at a time. Still, that is a limitation that will be overcome, and it seems certain that there will be many more plugins to come; one could imagine OpenAI both allowing customers to choose and selling default plugin status for particular categories on an auction basis, using the knowledge it gains about users.

This is also rather scary, and here I hope that Hawkins is right in his theory. He writes in A Thousand Brains in the context of AI risk:

Intelligence is the ability of a system to learn a model of the world. However, the resulting model by itself is valueless, emotionless, and has no goals. Goals and values are provided by whatever system is using the model. It’s similar to how the explorers of the sixteenth through the twentieth centuries worked to create an accurate map of Earth. A ruthless military general might use the map to plan the best way to surround and murder an opposing army. A trader could use the exact same map to peacefully exchange goods. The map itself does not dictate these uses, nor does it impart any value to how it is used. It is just a map, neither murderous nor peaceful. Of course, maps vary in detail and in what they cover. Therefore, some maps might be better for war and others better for trade. But the desire to wage war or trade comes from the person using the map.

Similarly, the neocortex learns a model of the world, which by itself has no goals or values. The emotions that direct our behaviors are determined by the old brain. If one human’s old brain is aggressive, then it will use the model in the neocortex to better execute aggressive behavior. If another person’s old brain is benevolent, then it will use the model in the neocortex to better achieve its benevolent goals. As with maps, one person’s model of the world might be better suited for a particular set of aims, but the neocortex does not create the goals.

The old brain Hawkins references is our animal brain, the part that drives emotions, our drive for survival and procreation, and the subsystems of our body; it’s the neocortex that is capable of learning and thinking and predicting. Hawkins’ argument is that absent the old brain our intelligence has no ability to act, either in terms of volition or impact, and that machine intelligence will be similarly benign; the true risk of machine intelligence is the intentions of the humans that wield it.

To which I say, we shall see! I agree with Tyler Cowen’s argument about Existential Risk, AI, and the Inevitable Turn in Human History: AI is coming, and we simply don’t know what the outcomes will be, so our duty is to push for the positive outcome in which AI makes life markedly better. We are all, whether we like it or not, enrolled in something like the grand experiment Hawkins has long sought — the sailboats are on truly uncharted seas — and whether or not he is right is something we won’t know until we get to whatever destination awaits.

The follow-up to this Article analyzing the strategic implications of ChatGPT Plugins is in this Update, which is free-to-read.


  1. GPT-4 was trained on Internet data up to 2021, so it did not include this Article 

  2. The Mercury question is particularly interesting; you can see the “conversation” between ChatGPT and Wolfram|Alpha here, here, here, and here as it negotiates exactly what it is asking for.