The Four Horsemen of the Tech Recession

Stephanie Palazzolo wrote on Twitter:

It really was jarring to see those employment figures the same week that tech company after tech company reported mostly disappointing earnings, and worse forecasts, all on the heels of layoffs. Even Meta, which saw a massive uptick in its stock, reported revenue that was down 4% year-over-year; the stock increase was a special case: too many investors had bought into Meta Myths that convinced them a company with a still-strong and growing core business was somehow doomed.

That’s not to say that tech is simply an echo chamber: tech companies are facing unique headwinds that don’t affect most of the economy; let’s call them the four horsemen of the tech recession.

The Four Horsemen of the Apocalypse, as imagined by MidJourney

The four horsemen, for those who didn’t grow up Christian or weren’t paying attention in Sunday School, come from the sixth chapter of the Book of Revelation, which opens:

And I saw when the Lamb opened one of the seals, and I heard, as it were the noise of thunder, one of the four beasts saying, Come and see.

I trust it’s not sacrilegious to have a bit of fun with the four horsemen prophecy and use them to explain exactly why the tech industry is in a funk.

The COVID Hangover

The first horse was white:

And I saw, and behold a white horse: and he that sat on him had a bow; and a crown was given unto him: and he went forth conquering, and to conquer.

To quote Wikipedia, for reasons that aren’t entirely clear, in popular culture the white horseman “is called Pestilence and is associated with infectious disease and plague.” I’m not here to parse Scripture, so I’m going to go ahead and run with it, and for good reason: COVID is the single biggest issue facing tech companies.

Now that may seem like a bit of an odd statement given that COVID is, for all intents and purposes, over in most of the world. To be clear, COVID still exists (and will forever), but it is no longer the dominant factor in the economy. That’s good for the vast majority of businesses — and by extension the broader economy — which were decimated by COVID.

Remember, though, that tech didn’t just survive COVID: it thrived. Consumers with no way to spend discretionary income and flush with stimulus checks bought new devices; people stuck at home subscribed to streaming services and ordered goods online; businesses thrust into remote work subscribed to SaaS services that promised to make the experience bearable; and all of this ran on the cloud.

That last paragraph actually touches on a couple of the horses I’ll get to in a moment, but two of the most important ones are e-commerce and cloud computing, which first and foremost means Amazon. The Wall Street Journal reported:

Amazon.com Inc. warned of a period of reduced growth and signaled the difficult economic environment is denting the performance of its cloud-computing business that has been a profit engine for the company. “We do expect to see some slower growth rates for the next few quarters,” Brian Olsavsky, Amazon’s chief financial officer, said Thursday on a call with reporters. The guidance, he said, reflects the uncertainty the company continues to have about both consumer and corporate spending in the U.S. and overseas.

CFO Brian Olsavsky said on the company’s earnings call:

By and large, what we’re seeing is just an interest and a priority by our customers to get their spend down as they enter an economic downturn. We’re doing the same thing at Amazon, questioning our infrastructure expenses as well as everything else…I think that’s what we’re seeing. And as I said, we’re working with our customers to help them do that.

CEO Andy Jassy added:

It’s one of the advantages that we’ve talked about since we launched AWS in 2006 of the cloud, which is that when it turns out you have a lot more demand than you anticipated, you can seamlessly scale up. But if it turns out that you don’t need as much demand as you had, you can give it back to us and stop paying for it. And that elasticity is very unusual. It’s something you can’t do on-premises, which is one of the many reasons why the cloud is and AWS are very effective for customers.

This is certainly true, and predictable; I wrote about this dynamic in an Update last year:

This approach mirrors the overall business model of cloud computing, wherein Amazon and Microsoft are spending billions of dollars in capital expenditures to build out a globe-spanning network of data centers with the goal of selling access to those data centers on an as-needed basis; it’s arbitraging time, up-front cash, and scale. The selling point for their customers is that not only is it much easier to get started with a new company or new line of business when you can rent instead of buy, but that you also have flexibility as the business environment changes.

For most of the history of cloud computing, that flexibility has been valuable in terms of scaling quickly: instead of buying and provisioning servers to meet growing demand, companies could simply rent more server capacity with the click of a button. That promise of flexibility, though, also extends to slowdowns; that has certainly included microeconomic slowdowns in the context of an individual business, but what is very interesting to observe right now is a macroeconomic slowdown in the context of the broader economy.

Remember, AWS didn’t launch S3 until 2006; when the Great Recession rolled around two years later Amazon was still busily harvesting the low-hanging fruit that was available to the company that was first in the space. AWS also benefited from the launch of the iPhone in 2007 and the App Store a year later: cloud computing has grown hand-in-hand with mobile computing, and just as Apple didn’t really feel the Great Recession, neither did AWS.

Today, though, is a different story: while AWS and Azure (and GCP) are still growing strongly, that growth is much more centered in the sort of businesses that are heavily impacted by recessions; moreover, all of those companies that grew up on cloud computing are much more exposed as well. What that means is that the same time-, cash-, and scale-arbitrage play is going to reverse itself for the next little bit: AWS and Azure are going to bear some of the pain of this slowdown on behalf of their customers.

What is notable about this analysis is that it assumes that we are in for a broad-based economic slowdown; that, though, takes things back to Palazzolo’s observation: it sure doesn’t seem like there is much of a recession in the broader economy. This, in turn, brings back the concept of a COVID hangover.

Go back to work-from-home, and the flexibility of cloud computing. When corporations the world over were forced literally overnight to transition to an entirely new way of working they needed to scale their server capabilities immediately: that was only realistically possible using cloud computing. This in turn likely accelerated investments that companies were planning on making in cloud computing at some point in the future. Now, some aspect of this investment was certainly inefficient, which aligns with both Amazon and Microsoft attributing their cloud slowdowns to companies optimizing their spend; it’s fair to wonder, though, how much of the slowdown in growth is a function of pulling forward demand.

Amazon’s e-commerce business is, as Olsavsky noted, facing many of the same sort of challenges, albeit on a far greater scale than just about anyone else. Jassy explained:

I think probably the #1 priority that I spent time with the team on is reducing our cost to serve in our operations network. And as Brian touched on, it’s important to remember that over the last few years, we took a fulfillment center footprint that we’ve built over 25 years and doubled it in just a couple of years. And then we, at the same time, built out a transportation network for last mile roughly the size of UPS in a couple of years. And so when you do both of those things to meet the huge surge in demand, just to get those functional, it took everything we had. And so there’s a lot to figure out how to optimize and how to make more efficient and more productive.

The problem for Amazon is that not only did they (inevitably) build inefficiently, but they almost certainly overbuilt, with the assumption that the surge in e-commerce unleashed by the pandemic would be permanent. Olsavsky said on Amazon’s first quarter earnings call:

The last issue relates to our fixed cost leverage. Despite still seeing strong customer demand and expansion of our FBA business, we currently have excess capacity in our fulfillment and transportation network. Capacity decisions are made years in advance, and we made conscious decisions in 2020 and early 2021 to not let space be a constraint on our business. During the pandemic, we were facing not only unprecedented demand, but also extended lead times on new capacity, and we built towards the high end of a very volatile demand outlook.

That high end did not materialize: when I covered that earnings call Amazon’s retail growth had pretty much reverted to what it was trending towards pre-pandemic; the last two quarters have slowed further.1
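Olsavsky is describing a capacity bet made years ahead of uncertain demand; a toy sketch of that tradeoff (every number invented for illustration):

```python
# Toy capacity-planning sketch: capacity is committed years in advance
# against a volatile demand forecast. All numbers are invented.
scenarios = {"low": 100, "base": 130, "high": 180}  # units of capacity

built = scenarios["high"]   # "not let space be a constraint on our business"
actual = scenarios["base"]  # the high end did not materialize

idle = built - actual
print(f"Built {built}, used {actual}: {idle / built:.0%} idle")
# Built 180, used 130: 28% idle
```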

In addition, I suspect that part of the challenge for both Amazon.com and especially AWS is that they are also exposed to the other three horsemen.

The Hardware Cycle

The second horse was red:

And when he had opened the second seal, I heard the second beast say, Come and see. And there went out another horse that was red: and power was given to him that sat thereon to take peace from the earth, and that they should kill one another: and there was given unto him a great sword.

The easy analogy here would be the Ukraine War, but I don’t think that is particularly relevant to tech company earnings. Rather, when you think of war it is very zero-sum: you either control territory, or you don’t. You either live, or you are captured or killed. The analogy here — and I admit, this is a bit of a stretch — is to the hardware cycle. If you have a new PC, you’re not going to buy another one for a while. This applies to all consumer electronics and, in the case of Amazon.com, to a whole host of durable consumer goods.

The most obvious victim of the hardware cycle was Apple, whose revenue was down 5%, despite the company benefiting from a 14-week quarter. The biggest impact on the company’s revenue was the COVID-related slowdowns in iPhone production in China: a phone not made is a phone not sold, a zero-sum game in its own right. Mac and Wearables, Home, and Accessories revenue, though, was down even more, which makes sense given how much both categories, particularly the former, exploded during COVID.

Of course Apple had countervailing factors, including pent-up demand for the company’s Apple Silicon-based Macs, but that demand was largely sated over the last two years; the obliteration of the PC market is an even better example of COVID’s impact. Microsoft reported that Windows OEM sales were down 39%, which particularly impacted Microsoft’s long-time strategic partner Intel. Even mighty TSMC is forecasting a decline in revenue, and is struggling to fill advanced-but-not-cutting-edge nodes like 7nm.

The good thing for all of these companies is twofold: first, hardware has always been cyclical, and the implication of a downward cycle is that an upward one will come eventually, particularly as those year-over-year comparisons become easier to beat. Second, Microsoft released some encouraging data suggesting that PC usage — and in the long run, sales — may be permanently elevated. I suspect this applies broadly: the COVID pull-forward was massive, but underneath the inevitable hangover there was a meaningful long-term shift to digital.

The End of Zero Interest Rates

The third horse was black:

And when he had opened the third seal, I heard the third beast say, Come and see. And I beheld, and lo a black horse; and he that sat on him had a pair of balances in his hand. And I heard a voice in the midst of the four beasts say, A measure of wheat for a penny, and three measures of barley for a penny; and see thou hurt not the oil and the wine.

This is another horseman whose meaning is under some dispute; I’m going to interpret the pair of balances as investors discovering that the cost-of-capital input in their equations can be something other than zero, and that the price they are willing to pay for growth without profitability is falling through the floor.

SaaS was actually the first sector in tech to crash, back in late 2021; a driver was likely another manifestation of the COVID hangover. High-flying stocks like Zoom that exploded during lockdown were the first to slow down significantly, and the realization that COVID wouldn’t be a persistent economic force soon spread to SaaS companies of all types.

The real problem, though, was increased interest rates. The SaaS model, as I have documented, entails operating unprofitably up-front to acquire customers, with the assumption being that those customers will pay out subscription fees like an annuity; moreover, the assumption was that that annuity would actually increase over time as companies used their initial product as a beachhead to both increase seats and average revenue per user.

This is fine as far as it goes, but the challenge from a valuation perspective is that it is difficult to model those annuities far into the future. First off, predicting the future is hard! Second, one of the biggest lessons of Microsoft’s dismantling of Slack is that it is problematic to extrapolate “big enough to get the attention of Microsoft” growth rates from “popular with startups and media” growth rates. Third, any valuation of long-term revenue streams is subject to a discount rate — money now is worth more than money in the future — and rising interest rates increased the discount rate, which is to say they devalued long-term revenue. This in turn reduced the current valuation of SaaS companies across the board, no matter how strong their moat or how large their addressable market.
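To make the discount-rate point concrete, here is a toy sketch (purely illustrative numbers, not any company’s actuals) of how the same ten-year subscription stream is worth roughly a quarter less when rates rise a few points:

```python
# Present value of a flat subscription "annuity" under two discount
# rates; the revenue stream is identical, only the rate changes.
def present_value(annual_revenue: float, years: int, rate: float) -> float:
    return sum(annual_revenue / (1 + rate) ** t for t in range(1, years + 1))

pv_low  = present_value(100, 10, 0.02)  # near-zero-rate world: ~$898
pv_high = present_value(100, 10, 0.08)  # higher-rate world:    ~$671

print(f"PV at 2%: {pv_low:.0f}, PV at 8%: {pv_high:.0f}")
# The same $1,000 of future revenue is worth ~25% less, with no change
# in the underlying business.
```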

This devaluation has had the most visible impact on public companies, but the true famine — one of the interpretations of what the black horseman represents — will likely be amongst startups. Companies without clear product-market fit won’t be given time to find one, while those who have it will face much more skepticism about just how much that market is worth and, crucially, when it will be worth it.

What is notable is how this blows back onto the public clouds: those SaaS companies mostly run on AWS (Microsoft is much more exposed to corporate pullbacks), and to the extent they slow down their spend or curtail their loss-driving growth, AWS will feel the pain.

The ATT Recession

The final horse was pale:

And when he had opened the fourth seal, I heard the voice of the fourth beast say, Come and see. And I looked, and behold a pale horse: and his name that sat on him was Death, and Hell followed with him. And power was given unto them over the fourth part of the earth, to kill with sword, and with hunger, and with death, and with the beasts of the earth.

This sounds like the most dramatic analogy, but it is arguably the most apt: I have been arguing for two years that Apple’s App Tracking Transparency (ATT) initiative was a big deal, and I may have been understating the impact.

Every company that relies on performance marketing, from Snap to YouTube to Meta to Shopify, has seen its revenue growth crash from the moment ATT came into force in mid-2021, even as companies and products that were isolated from its effects, from Amazon to Google to Apple’s own advertising business, have seen growth. Notably, this crash preceded and continued through the Ukraine War, the hike in interest rates, and this very weird recession in which the economy is in fact adding record jobs. That’s why Eric Seufert coined the term The App Tracking Transparency Recession; he writes in the introduction:

One might assume that the economy has utterly imploded from reading the Q3 earnings call transcripts of various social media platforms. Alphabet, Meta, and Snap, in particular, cited macroeconomic weakness, headwinds, uncertainty, challenges, etc. in their Q3 earnings calls…

But aside from various corners of the economy that are particularly sensitive to interest rate increases, such as Big-T Tech, homebuilding, and finance, much of the consumer economy is robust. Nike reported 17% year-over-year revenue growth in its most recent earnings release last month; Costco reported year-over-year sales growth for December of 7% on January 5th; Walmart’s 3Q 2023 results, reported in November 2022, saw the retailer grow year-over-year sales by 8.2%; and overall US holiday retail spending increased by 7.6% year-over-year in 2022, beating expectations. Of course, these numbers are nominal and not real, but for comparison: holiday retail sales in 2008 were down between 5.5 and 8% on a year-over-year basis, and the unemployment rate in December 2008 stood at 7.3%. And as I’ll unpack later in the piece, many participants in the broader digital advertising ecosystem saw strong revenue growth in 2022 through Q3.

So what’s the source of the pain for the largest social media advertising platforms?

Apple introduced a new privacy policy called App Tracking Transparency (ATT) to iOS in 2021; with iOS version 14.6, that policy reached a majority scale of iOS devices at the end of Q2 2021, in mid-June. ATT fundamentally disrupts what I call the “hub-and-spoke” model of digital advertising, which allows for behavioral profiles of individual users to be developed through a feedback loop of conversion events (eg. eCommerce purchases) between ad platforms and advertisers. In this feedback loop, ad platforms receive conversions data from their advertising clients, they use that data to enrich the behavioral profiles of the users on their platform, and they target ads to those users (and similar users) through those profiles. I’ve written extensively about how ATT disrupts the digital advertising ecosystem, but the disturbance is most pronounced for social media platforms as I’ll describe later in the piece. The shocks of ATT became discernible in Q3 2021 (the quarter after ATT was rolled out to a majority of iOS devices) but were substantially troublesome for Meta in particular in Q4 2021. The disruptive forces of ATT have compounded over time.

My general belief is that the impact of ATT has been underestimated; ascribing the advertising revenue headwinds being felt most profoundly by social media platforms and other consumer tech categories with substantial exposure to ATT to macroeconomic factors is misguided.

Seufert’s piece is well-argued and a must-read. I’m biased, to be sure: the piece aligns with my own views on the significant impact of ATT. Moreover, to double-down on Seufert’s point, the impact goes far beyond Meta: every company that sells on Meta was impacted, which in turn means that cloud providers like AWS were as well. Jassy noted that one of the headwinds for AWS was “things tied to advertising, as there’s lower advertising spend, there’s less analytics and compute on advertising spend as well.” As Seufert notes, though, most advertising was fine: all of the pain is in industries impacted by ATT.

This is not, to be clear, an argument that ATT was bad, or good. I personally think it was solving a problem that largely doesn’t exist and hurting small businesses more than it was helping end users, but I understand and respect arguments on the other side (even if most of them don’t realize that they’re actually opposed to tracking in all forms, which means Apple isn’t necessarily an ally). What this is is an acknowledgment that ATT, which happened to land right in the midst of the pandemic, rivals said pandemic in its contribution to the disconnect between tech earnings and layoffs and the broader economy.


I’m not a macroeconomist: I am certainly cheering for a soft landing, and have always attributed more weight than most to the idea that the impact of the COVID shutdowns was so great that it would in fact take years to unwind. One thing that is certain is that the surest way to be wrong about what would happen with the economy is to put a prediction down in writing.

To that end, I do rue my prediction that the pandemic would permanently pull forward certain behaviors that were already on the increase, particularly e-commerce. This prediction wasn’t totally wrong — e-commerce is meaningfully up, but it is down from pandemic highs, and it pains me to see so many companies citing optimism about maintaining COVID highs in their layoff letters.

What I do feel justified about are my predictions about ATT: what made digital advertising, particularly of the Facebook variety, so compelling is that most advertisers were entirely new to the space. Facebook and other performance advertisers weren’t so much stealing advertising dollars as they were creating the conditions for entirely new businesses; the viability of those businesses took a major hit with Apple’s changes, and every dollar in reduced revenue for Facebook ultimately means that many more dollars in foregone e-commerce or app sales, corresponding spend on cloud providers, and overall fewer only-possible-on-the-Internet jobs as it became that much harder to find niche audiences in a worldwide addressable market.
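To illustrate that multiplier with a stylized example (all numbers hypothetical): performance advertisers generally keep spending only as long as their measured return on ad spend clears a hurdle, so degraded targeting doesn’t just trim budgets at the margin, it shuts campaigns off entirely, along with all of the sales they would have generated:

```python
# Hypothetical ROAS math: an advertiser runs a campaign only if the
# return on ad spend (ROAS) clears a 2.0x hurdle.
HURDLE = 2.0

def downstream_sales(budget: float, roas: float) -> float:
    """Merchant revenue generated if the campaign still pencils out."""
    return budget * roas if roas >= HURDLE else 0.0

before = downstream_sales(budget=1_000_000, roas=3.0)  # $3.0M in sales
after  = downstream_sales(budget=1_000_000, roas=1.8)  # campaign shut off

print(f"Foregone sales: ${before - after:,.0f}")
# $1M in lost ad revenue for the platform maps to $3M in foregone
# e-commerce in this toy setup.
```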

At the same time, it is precisely because these jobs — and similarly, many of the COVID-specific workloads like work-from-home and e-commerce — were digital that it is tech that is in a mini-recession even as the so-called “real” economy is doing better than ever. Perhaps that is for the greater good; at a minimum the increasing distinction between the digital and analog is exactly what Palazzolo is missing.


  1. Note: this Article previously included a chart of Amazon’s Net Sales; the accurate chart should be gross merchant volume which Amazon only reports occasionally. I have removed the chart and apologize for the error. 

Stratechery Plus Adds Greatest Of All Talk

Last September I announced that a Stratechery subscription was being rebranded to Stratechery Plus, and launched a new podcast: Sharp Tech with Ben Thompson. Sharp Tech joined the Stratechery Update, Stratechery Interviews, and Dithering to form the Stratechery Plus bundle.

In November Stratechery Plus added Sharp China with Bill Bishop, a collaboration with Bill Bishop of Sinocism. I am pleased that every podcast in the Stratechery Plus bundle has over 10,000 paid listeners, and thousands more listeners on the free feeds, which feature clips and occasional full episodes.

Today I am excited to announce that the Stratechery Plus bundle is expanding in a fun new direction with the addition of Greatest of All Talk, a podcast about basketball, life, and national parks:

The Greatest Of All Talk joins Stratechery Plus

My initial relationship with my Sharp Tech co-host Andrew Sharp was a one-way one: I was an ardent listener of the Sports Illustrated Open Floor podcast he hosted with fellow writer Ben Golliver. Open Floor was both quirky and knowledgeable, with the sort of conversational tone I like in podcasts, and it rewarded loyal listeners with ongoing gags and inside jokes.

Fast forward a couple of years — at which point I had had the chance to become real-life friends with Sharp — and new Sports Illustrated owner Maven laid off half of the staff, including Sharp, which meant the end of my favorite podcast. That’s when I had the chance to meet Golliver, and pushed the two of them to launch an independent for-pay podcast called The Greatest Of All Talk, or GOAT for short. That was four years ago, and the GOAT became my new favorite podcast, for all of the same reasons — but now with no ads, and a thriving community to boot.

Over that period Sharp left journalism for the law, but when he decided he wanted to come back to media, I jumped at the opportunity to work with him here at Stratechery, first on Sharp Tech, and then Sharp China. And now, to bring things full circle, I’m thrilled to add GOAT to the Stratechery Plus bundle. If you like basketball, acerbic humor, great chemistry, and the reward of being a loyal listener, GOAT is, well, a GOAT-level podcast.

A quick note for anyone who is already a GOAT listener: GOAT will remain an independent entity that listeners can subscribe to directly; however, it is now available to all Stratechery Plus subscribers as well. If you are a subscriber to both and don’t want to double-pay, you can cancel your subscription on GOAT’s hosting service and add a new feed on Passport. If you’re a subscriber to neither, there is no better time to subscribe for just $12/month.

I hope you enjoy the show as much as I do.

Netflix’s New Chapter

Netflix’s moment of greatest peril is, in retrospect, barely visible in the company’s stock chart:

Netflix's all-time stock chart

I’m referring to 2004-2007 and the company’s battle with Blockbuster:

Netflix's stock during its battle with Blockbuster

The simplified story of Netflix’s founding starts with Reed Hastings grumbling over a $40 late charge from Blockbuster, and ends with the brick-and-mortar giant going bankrupt as customers came to prefer online rentals from Netflix, with streaming providing the final coup de grâce.

Neither is quite right.

The Blockbuster Fight

Netflix was the idea of Marc Randolph, Netflix’s actual founder and first CEO; Randolph was eager to do something in e-commerce, and it was the just-emerging DVD form factor that sold Hastings on the idea. He would fund Randolph’s new company and be chairman, eventually taking over as CEO once he determined that Randolph was not up to the task of scaling the new company.

Blockbuster, meanwhile, mounted a far more serious challenge to Netflix than many people remember; the company started with Blockbuster Online, an entity that was completely separate from Blockbuster’s retail business for reasons of both technology and culture: Blockbuster’s stores were not even connected to the Internet, and store managers and franchisees hated having an online service cannibalize their sales. Still, when a test version went live on July 15, 2004 — the same day as Netflix’s quarterly earnings call — Netflix’s stock suffered its first Blockbuster-inspired plunge.

Three months later Netflix cut prices and referred to Amazon’s assumed imminent entry into the space; Netflix’s stock slid again. Hastings, though, said the increased competition and looming price war were actually a good thing. Gina Keating relayed Hastings’ view on that quarter’s earnings call in Netflixed:

“Look, everyone, I know the Amazon entry is a bitter and surprising pill for those of you that are long in our stock,” he told investors on the earnings conference call. “This is going to be a very large market, and we’re going to execute very hard to make this back for our shareholders, including ourselves.” The $8 billion in U.S. store rentals would pour into online rentals, setting off a grab for subscribers, he said. The ensuing growth of online rentals would cannibalize video stores faster and faster, until they collapsed. As video store revenue dropped sharply, Blockbuster would struggle to fund its online operation, he concluded. “The prize is huge, the stakes high, and we intend to win.”

Blockbuster responded by pricing Blockbuster Online 50 cents cheaper, accelerating Netflix’s stock slide. Netflix, though, knew that Blockbuster was carrying $1 billion in debt from its spin-off from Viacom, and decided to wait it out; Blockbuster cut the price again, taking an increasing share of new subscribers, and still Netflix waited. Again from Keating:

Hastings agonized over whether to drop prices further to meet Blockbuster’s $14.99 holiday price cut, but McCarthy steadfastly objected. With Blockbuster losing even more on every subscriber, relief from its advertising juggernaut was even closer at hand. Kirincich checked his models again—and the outcome was the same. Blockbuster would have to raise prices by summertime. Because Netflix was still growing solidly, McCarthy wanted to sit tight and wait until the inevitable happened. “They can continue to bleed at this rate of $14.99, given the usage patterns that we know exist early in the life of the customer, until the end of the second quarter,” Kirincich told the executives.

Netflix was right:

By summertime [Blockbuster CEO John Antioco] could no longer shield the online program from the company’s financial difficulties. Blockbuster’s financial crisis unfolded just as McCarthy and Kirincich’s models had predicted. The year’s DVD releases had performed woefully so far, and box office revenue — a fair indicator of rental revenue — was down by 5 percent over 2004. It was clear that Blockbuster would miss its earnings targets, meaning that it was in danger of violating its debt covenants. Antioco directed Zine to again press Blockbuster’s creditors for relaxed repayment terms, and broke the news to Evangelist that he would have to suspend marketing spending for a few months, and possibly raise prices to match Netflix’s…

The flood of marketing dollars that Antioco had committed to Blockbuster Online was crucial to keeping subscriber growth clicking along at record rates, and Cooper feared that cutting off that lifeblood would stop the momentum in its tracks. He was disappointed to be right. The result of the deep cuts to marketing was the same as letting up on a throttle. New subscriber additions barely kept up with cancellations, leaving Blockbuster Online treading water after a few weeks. While Netflix had zoomed past three million subscribers in March, Blockbuster had to abandon its goal of signing up two million by year’s end.

Still, Netflix wasn’t yet out of the woods: in 2006 Blockbuster launched Total Access, which let subscribers rent from either online or Blockbuster stores; the stores were still not connected to the Internet, so subscribers received an in-store rental in exchange for returning their online rental, which also triggered a new online rental to be sent to them. In other words, they were getting two rentals every time they visited a store. Customers loved it; Keating again:

Nearly a million new subscribers joined Blockbuster Online in the two months after Total Access launched, and market research showed consumer opinion nearly unanimous on one important point — the promotion was better than anything Netflix had to offer. Hastings figured he had three months before public awareness of Total Access began to pull in 100 percent of new online subscribers to Blockbuster Online, and even to lure away some of Netflix’s loyal subscribers. Hastings had derided Blockbuster Online as “technologically inferior” to Netflix in conversations with Wall Street financial analysts and journalists, and he was right. But the young, hard-driving MBAs running Blockbuster Online from a Dallas warehouse had found the one thing that trumped elegant technology with American consumers — a great bargain.

His momentary and grudging admiration for Antioco for finally figuring out how to use his seven thousand–plus stores to promote Blockbuster Online had turned to panic. The winter holidays, when Netflix normally enjoyed robust growth, turned sour, as Hastings and his executive team—McCarthy, Kilgore, Ross, and chief technology officer Neil Hunt—pondered countermoves.

Netflix would go on to offer to buy Blockbuster Online; Antioco turned the company down, assuming he could get a better price once Netflix’s growth turned upside down. Carl Icahn, though, who owned a major chunk of Blockbuster and had long feuded with Antioco, finally convinced him to resign that very same quarter; Antioco’s replacement took money away from Total Access and funneled it back to the stores, and Netflix escaped (Hastings would later tell Shane Evangelist, the head of Blockbuster Online, that Blockbuster had Netflix in checkmate). Blockbuster went bankrupt in 2010.

Netflix’s Competition

I suspect, for the record, that Hastings overstated the situation just a tad; his admission to Evangelist sounds like the words of a gracious winner. The fact of the matter is that Netflix’s analysis of Blockbuster was correct: giving movies away was a great way to grow the business, but a completely unsustainable approach for a company saddled with debt whose core business was in secular decline — thanks in large part to Netflix.

Still, the fact remains that Q2 2007 was one of the few quarters that Netflix ever lost subscribers; it would happen again in 2011, but that would be it until last year, when Netflix’s user base declined two quarters in a row. This time, though, Netflix wasn’t the upstart fighting the brand everyone recognized; it was the dominant player, facing the prospect of saturation and stiff competition as everyone in Hollywood jumped into streaming.

What was surprising at the time was how surprised Netflix itself seemed to be; this is how the company opened the 1Q 2022 Letter to Shareholders:

Our revenue growth has slowed considerably as our results and forecast below show. Streaming is winning over linear, as we predicted, and Netflix titles are very popular globally. However, our relatively high household penetration – when including the large number of households sharing accounts – combined with competition, is creating revenue growth headwinds. The big COVID boost to streaming obscured the picture until recently.

That Netflix would soon be facing saturation had in fact been apparent for years; it also shouldn’t have been a surprise that competition from other streaming services, which Netflix finally admitted existed in that same shareholder letter, would be a challenge, at least in the short term. I wrote in a 2019 Daily Update:

That is not to say that this miss is not reason for concern: Netflix growing into its valuation depends on both increasing subscribers and increasing price, and this last quarter (again) suggests that the former is not inevitable and that the latter is not without cost. And yes, while Netflix may have not yet lost popular shows like Friends and The Office, both were reasons for subscribers to stick around; their exit will make retention in particular that much more difficult.

That will put more pressure on Netflix’s original content: not only must it attract new users, it also has to retain old ones — at least for now. I do think this will be a challenge: I wouldn’t be surprised if the next five years or so are much more challenging for Netflix as far as subscriber growth, and there may very well be a lot of volatility in the stock price (which, to be fair, has always been the case with Netflix).

COVID screwed up the timing: everyone being stuck at home re-ignited Netflix subscriber growth, but the underlying challenges remained, and hit all at once over the last year. That same Daily Update, though, ended with a note of optimism:

Note that time horizon though: as I have argued at multiple points I believe there will be a shakeout in streaming; most content companies simply don’t have the business model or stomach for building a sustainable streaming service, and will eventually go back to licensing their content to the highest bidder, and there Netflix has a massive advantage thanks to the user base it already has. To use an entertainment industry analogy, we are entering the time period of The Empire Strikes Back, but the big difference is that it is Netflix that owns the Death Star.

Fast forward to last fall’s earnings, and Netflix seemed to have arrived at the same conclusion; my biggest takeaway from the company’s pronouncements was the confidence on display, and the reason called back to the battle with Blockbuster. From the company’s Letter to Shareholders:

As it’s become clear that streaming is the future of entertainment, our competitors – including media companies and tech players – are investing billions of dollars to scale their new services. But it’s hard to build a large and profitable streaming business – our best estimate is that all of these competitors are losing money on streaming, with aggregate annual direct operating losses this year alone that could be well in excess of $10 billion, compared with our +$5-$6 billion of annual operating profit. For incumbent entertainment companies, this high level of investment is understandable given the accelerating decline of linear TV, which currently generates the bulk of their profit.

Ultimately though, we believe some of our competitors will seek to build sustainable, profitable businesses in streaming – either on their own or through continued industry consolidation. While it’s early days, we’re starting to see this increased profit focus – with some raising prices for their streaming services, some reining in content spending, and some retrenching around traditional operating models which may dilute their direct-to-consumer offering. Amidst this formidable, diverse set of competitors, we believe our focus as a pure-play streaming business is an advantage. Our aim remains to be the first choice in entertainment, and to continue to build an amazingly successful and profitable business.

The fact that Netflix is now profitable — and, more importantly, generating positive free cash flow — wasn’t the only reason for optimism: Netflix had the good fortune of funding its expansion into content production in the most favorable interest rate environment imaginable; Netflix noted in this past quarter’s Letter to Shareholders:

We don’t have any scheduled debt maturities in FY23 and only $400M of debt maturities in FY24. All of our debt is fixed rate.

That debt totals $14 billion; Warner Bros. Discovery, meanwhile, has $50.4 billion in debt, Disney has $45 billion, Paramount has $15.6 billion, and Comcast, the owner of Peacock, has $90 billion. None of them — again, in contrast to Netflix — are making money on streaming, and cash flow is negative. Moreover, like Blockbuster and renting DVDs from stores, the actual profitable parts of their businesses are shrinking, thanks to the streaming revolution that Netflix pioneered.

Warner Bros. Discovery and Disney are almost certainly pot-committed to streaming, but Warner Bros. Discovery in particular has talked about the importance of profitability, and Disney just brought back Bob Iger after massive streaming losses helped doom his predecessor né successor; it seems likely their competitive threat will decrease, either because of higher prices, less aggressive bidding for content, or both. Meanwhile, it’s still not clear to me why Paramount+ and Peacock exist; perhaps they will not, sooner rather than later.

When and if that happens Netflix will be ready to stream their content, at a price that makes sense for Netflix, and not a penny more.

Netflix’s Creativity Imperative

That’s not to say that everything at Netflix is rosy: the other thing that was striking about the company’s earnings last week was the degree to which management gave credence to various aspects of the bear case against the company.

First, Netflix gets less leverage off of its international content than it once hoped. One of the bullish arguments for Netflix was that it could create content in one part of the world and then stream it elsewhere, and while that is true technically, it doesn’t really move the needle in terms of engagement. Co-CEO Ted Sarandos said in last week’s earnings interview:

Watching where viewing is growing and where it’s suffering and where we are under programming and over programming around the world is a big task of the job. Spence and his team support Bella and her team in making those allocations, figuring out between film and television, between local language — and what’s really interesting is there aren’t that many global hits, meaning that everyone in the world watches the same thing. Squid Game was very rare in that way. And Wednesday looks like one of those too, very rare in that way. There are countries like Japan, as an example, or even Mexico that have a real preference for local content, even when we have our big local hits.

This means that Netflix has less leverage than you might think, and that said leverage varies by market; to put it another way, the company spends a lot on content, but that spend is distributed much more broadly than people like me once theorized it might be.

Second, Netflix gets less value from its older content than bulls once assumed — or than its amortization schedule suggests (which is why the company’s profit number is misleading). Sarandos said in response to a question about how Netflix would manage churn in the face of cracking down on account sharing and raising prices:

I would just say that it’s the must-seeness of the content that will make the paid sharing initiative work. That will make the advertising launch work, that will make continuing to grow revenue work. And so it’s across film, across television. It’s the content that people must see and then it’s on Netflix that gives us the ability to do that. And we’re super proud of the team and their ability to keep delivering on that month in and month out and quarter in and quarter out and continuing to grow in all these different market segments that our consumers really care about. So that, to me, is core to all these initiatives working, and we’ve got the wind at our back on that right now.

If Netflix’s old content held its value in the way I once assumed then you could make a case that the company’s customer acquisition costs were actually decreasing over time as the value of its offering increased; it turns out, though, that Netflix gets and keeps customers with new shows that people talk about, while most of its old content is ignored (and perhaps ought to be monetized on services like Roku and other free ad-supported TV networks).
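To see why the amortization schedule matters, consider a hypothetical sketch (all numbers invented): if a show’s audience value is mostly consumed at launch while its cost is spread over several years on the books, reported profit is flattered early on:

```python
# Hypothetical content amortization vs. actual value decay for a $100M
# show; all numbers are invented for illustration.
content_cost = 100.0  # $M

book_schedule = [0.40, 0.30, 0.20, 0.10]  # assumed amortization on the books
value_decay   = [0.70, 0.20, 0.08, 0.02]  # value mostly consumed at launch

for year, (book, real) in enumerate(zip(book_schedule, value_decay), 1):
    gap = (real - book) * content_cost
    print(f"Year {year}: expensed ${book * content_cost:.0f}M, "
          f"consumed ${real * content_cost:.0f}M, profit flattered by ${gap:.0f}M")
# Year 1 profit is overstated by $30M; later years quietly reverse the gap.
```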

From Spock to Kirk

Reed Hastings has certainly earned the right to step up — and back — to executive chairman; last Thursday was his last earnings interview. It’s interesting, though, to go back to his initial move from chairman to the CEO role. Randolph writes in the first chapter of his book That Will Never Work:

Behind his back, I’ve heard people compare Reed to Spock. I don’t think they mean it as a compliment, but they should. In Star Trek, Spock is almost always right. And Reed is, too. If he thinks something won’t work, it probably won’t.

Unfortunately for Randolph, it didn’t take long for Spock to evaluate his performance as CEO; Randolph recounted the conversation:

“Marc,” Reed said, “we’re headed for trouble, and I want you to recognize as a shareholder that there is enough smoke at this small business size that fire at a larger size is likely. Ours is an execution play. We have to move fast and almost flawlessly. The competition will be direct and strong. Yahoo! went from a grad school project to a six-billion-dollar company on awesome execution. We have to do the same thing. I’m not sure we can if you’re the only one in charge.”

He paused, then looked down, as if trying to gain the strength to do something difficult. He looked up again, right at me. I remember thinking: He’s looking me in the eye. “So I think the best possible outcome would be if I joined the company full-time and we ran it together. Me as CEO, you as president.”

Things changed quickly; Keating writes:

Hastings now held Netflix’s reins firmly in hand, and the VC money gave him the power to begin shifting the company’s culture away from Randolph’s family of creators toward a top-down organization led by executives with proven corporate records and, preferably, strong engineering and mathematics backgrounds.

Randolph ultimately left the company in 2002; again from Keating:

The last year or so of Randolph’s career at Netflix was a time of indecision — stay or go? He had resigned from the board of directors before the IPO, in part so that investors would not view his desire to cash out some of his equity as a vote of no confidence in the newly public company. Randolph landed in product development while trying to find a role for himself at Netflix, and dove into Lowe’s kiosk project and a video-streaming application that the engineers were beginning to develop. But after seven years of lavishing time and attention on his start-up, Randolph needed a break. Netflix had changed around him, from his collective of dreamers trying to change the world into Hastings’ hypercompetitive team of engineers and right-brained marketers whose skills intimidated him slightly. He no longer fit in.

To say that Hastings excelled at execution is a dramatic understatement; indeed, the speed with which the company rolled out its advertising product in 2022 (better late than never) is a testament to the fact that Hastings’ imprint on the company’s ability to execute remains. And again, that ability to execute was essential for much of Hastings’ tenure, particularly when Netflix was shipping DVDs: acquiring customers efficiently and delivering them what was essentially a commodity product was all about execution, as was the initial buildout of Netflix’s streaming service.

What is notable, though, is that the chief task for Netflix going forward is not necessarily execution, at least in terms of product or technology. While Hastings has left Netflix in a very good spot relative to its competitors, the long-term success of the company will ultimately be about creativity. Specifically, can Netflix produce compelling content at scale? Matthew Ball observed in a Stratechery Interview last summer:

Netflix is, in some regard, a sobering story. What do I mean by that? First mover advantages matter a lot, scale matters a lot, their product and technology investments matter a lot. Reed [Hastings] saw the future for global content services and scale that span every market, every genre, every person, truly years before any competitor did. I think we see pretty intense competition right now, but it’s remarkable when you actually look at the corporate histories of all of the competitors, most have changed leadership at the CEO level twice, at the DTC level three to four times, Hulu is on its fifth or sixth CEO and so we have to give incredible plaudits to all of that.

Yet what I find so important here, is at the end of the day, all of those things only matter for a while. Content matters, that’s the product that they’re selling, it’s entertainment. The thing that has surprised me most about Netflix is their struggles to get better at it. When I was at Amazon Studios and we were competing with them day in and day out, the assumption you would’ve made in 2015, ’16, ’17 would be that the Netflix of 2022 would be much better at making content than it seems to be. That their batting average would be much higher. Why? Because they’ve spent $70 or $80 billion since and I think we’re starting to feel the consequences of [not being as far ahead as expected].

It’s impossible to dive into the history of Netflix and not come away with a deep appreciation for everything Hastings accomplished. I’m not sure there is any company of Netflix’s size that has ever been so frequently doubted and written off. To have built it to a state where simply having the best content is paramount is a massive triumph. And that, perhaps, is another way of saying that Spock’s job is finished: Netflix’s future is about creativity and humanity; it’s time for a Captain Kirk.

AI and the Big Five

The story of 2022 was the emergence of AI, first with image generation models, including DALL-E, MidJourney, and the open source Stable Diffusion, and then ChatGPT, the first text-generation model to break through in a major way. It seems clear to me that this is a new epoch in technology.

To determine how that epoch might develop, though, it is useful to look back 26 years to one of the most famous strategy books of all time: Clayton Christensen’s The Innovator’s Dilemma, particularly this passage on the different kinds of innovations:

Most new technologies foster improved product performance. I call these sustaining technologies. Some sustaining technologies can be discontinuous or radical in character, while others are of an incremental nature. What all sustaining technologies have in common is that they improve the performance of established products, along the dimensions of performance that mainstream customers in major markets have historically valued. Most technological advances in a given industry are sustaining in character…

Disruptive technologies bring to a market a very different value proposition than had been available previously. Generally, disruptive technologies underperform established products in mainstream markets. But they have other features that a few fringe (and generally new) customers value. Products based on disruptive technologies are typically cheaper, simpler, smaller, and, frequently, more convenient to use.

It seems easy to look backwards and determine whether an innovation was sustaining or disruptive by looking at how incumbent companies fared after that innovation came to market: if the innovation was sustaining, then incumbent companies became stronger; if it was disruptive, then presumably startups captured most of the value.

Consider previous tech epochs:

  • The PC was disruptive to nearly all of the existing incumbents; these relatively inexpensive and low-powered devices didn’t have nearly the capability or the profit margin of mini-computers, much less mainframes. That’s why IBM was happy to outsource both the original PC’s chip and OS to Intel and Microsoft, respectively, so that they could get a product out the door and satisfy their corporate customers; PCs got faster, though, and it was Intel and Microsoft that dominated as the market dwarfed everything that came before.
  • The Internet was almost entirely a new-market innovation, and thus defined by completely new companies that, to the extent they disrupted incumbents, did so in industries far removed from technology, particularly those involving information (i.e. the media). This was the era of Google, Facebook, online marketplaces and e-commerce, etc. All of these applications ran on PCs powered by Windows and Intel.
  • Cloud computing is arguably part of the Internet, but I think it deserves its own category. It was also extremely disruptive: commodity x86 architecture swept out dedicated server hardware, and an entire host of SaaS startups peeled off features from incumbents to build companies. What is notable is that the core infrastructure for cloud computing was primarily built by the winners of previous epochs: Amazon, Microsoft, and Google. Microsoft is particularly notable because the company also transitioned its traditional software business to a SaaS service, in part because the company had already transitioned said software business to a subscription model.
  • Mobile ended up being dominated by two incumbents: Apple and Google. That doesn’t mean it wasn’t disruptive, though: Apple’s new UI paradigm entailed not viewing the phone as a small PC, a la Microsoft; Google’s new business model paradigm entailed not viewing phones as a direct profit center for operating system sales, but rather as a moat for their advertising business.

What is notable about this history is that the supposition I stated above isn’t quite right; disruptive innovations do consistently come from new entrants in a market, but those new entrants aren’t necessarily startups: some of the biggest winners in previous tech epochs have been existing companies leveraging their current business to move into a new space. At the same time, the other tenets of Christensen’s theory hold: Microsoft struggled with mobile because it was disruptive, but SaaS was ultimately sustaining because its business model was already aligned.


Given the success of existing companies with new epochs, the most obvious place to start when thinking about the impact of AI is with the big five: Apple, Amazon, Facebook, Google, and Microsoft.

Apple

I already referenced one of the most famous books about tech strategy; one of the most famous essays is Joel Spolsky’s Strategy Letter V, particularly this line:

Smart companies try to commoditize their products’ complements.

Spolsky wrote this line in the context of explaining why large companies would invest in open source software:

Debugged code is NOT free, whether proprietary or open source. Even if you don’t pay cash dollars for it, it has opportunity cost, and it has time cost. There is a finite amount of volunteer programming talent available for open source work, and each open source project competes with each other open source project for the same limited programming resource, and only the sexiest projects really have more volunteer developers than they can use. To summarize, I’m not very impressed by people who try to prove wild economic things about free-as-in-beer software, because they’re just getting divide-by-zero errors as far as I’m concerned.

Open source is not exempt from the laws of gravity or economics. We saw this with Eazel, ArsDigita, The Company Formerly Known as VA Linux and a lot of other attempts. But something is still going on which very few people in the open source world really understand: a lot of very large public companies, with responsibilities to maximize shareholder value, are investing a lot of money in supporting open source software, usually by paying large teams of programmers to work on it. And that’s what the principle of complements explains.

Once again: demand for a product increases when the price of its complements decreases. In general, a company’s strategic interest is going to be to get the price of their complements as low as possible. The lowest theoretically sustainable price would be the “commodity price” — the price that arises when you have a bunch of competitors offering indistinguishable goods. So, smart companies try to commoditize their products’ complements. If you can do this, demand for your product will increase and you will be able to charge more and make more.

Apple invests in open source technologies, most notably the Darwin kernel for its operating systems and the WebKit browser engine; the latter fits Spolsky’s prescription: ensuring that the web works well on Apple devices makes those devices more valuable.

Apple’s efforts in AI, meanwhile, have been largely proprietary: traditional machine learning models are used for things like recommendations and photo identification and voice recognition, but nothing that moves the needle for Apple’s business in a major way. Apple did, though, receive an incredible gift from the open source world: Stable Diffusion.

Stable Diffusion is remarkable not simply because it is open source, but also because the model is surprisingly small: when it was released it could already run on some consumer graphics cards; within a matter of weeks it had been optimized to the point where it could run on an iPhone.

Apple, to its immense credit, has seized this opportunity, with this announcement from its machine learning group last month:

Today, we are excited to release optimizations to Core ML for Stable Diffusion in macOS 13.1 and iOS 16.2, along with code to get started with deploying to Apple Silicon devices…

One of the key questions for Stable Diffusion in any app is where the model is running. There are a number of reasons why on-device deployment of Stable Diffusion in an app is preferable to a server-based approach. First, the privacy of the end user is protected because any data the user provided as input to the model stays on the user’s device. Second, after initial download, users don’t require an internet connection to use the model. Finally, locally deploying this model enables developers to reduce or eliminate their server-related costs…

Optimizing Core ML for Stable Diffusion and simplifying model conversion makes it easier for developers to incorporate this technology in their apps in a privacy-preserving and economically feasible way, while getting the best performance on Apple Silicon. This release comprises a Python package for converting Stable Diffusion models from PyTorch to Core ML using diffusers and coremltools, as well as a Swift package to deploy the models.

It’s important to note that this announcement came in two parts: first, Apple optimized the Stable Diffusion model itself (which it could do because it was open source); second, Apple updated its operating system, which thanks to Apple’s integrated model, is already tuned to Apple’s own chips.
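For a sense of what that first part looks like in practice, here is a minimal sketch of a PyTorch-to-Core ML conversion using coremltools; the toy module below is an illustrative stand-in, not Stable Diffusion’s actual UNet or text encoder (Apple’s release packages the real conversion):

```python
# Minimal PyTorch -> Core ML conversion sketch; TinyNet is a toy
# stand-in, not Stable Diffusion's actual networks.
import torch
import coremltools as ct

class TinyNet(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = torch.nn.Conv2d(4, 4, kernel_size=3, padding=1)

    def forward(self, latents):
        return self.conv(latents)

example = torch.rand(1, 4, 64, 64)           # latent-space input shape
traced = torch.jit.trace(TinyNet().eval(), example)

mlmodel = ct.convert(
    traced,
    convert_to="mlprogram",                   # modern Core ML format
    inputs=[ct.TensorType(name="latents", shape=example.shape)],
    compute_units=ct.ComputeUnit.ALL,         # CPU, GPU, and Neural Engine
)
mlmodel.save("TinyNet.mlpackage")
```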

Moreover, it seems safe to assume that this is only the beginning: while Apple has been shipping its so-called “Neural Engine” on its own chips for years now, that AI-specific hardware is tuned to Apple’s own needs; it seems likely that future Apple chips, if not this year then probably next year, will be tuned for Stable Diffusion as well. Stable Diffusion itself, meanwhile, could be built into Apple’s operating systems, with easily accessible APIs for any app developer.

This raises the prospect of “good enough” image generation capabilities being effectively built-in to Apple’s devices, and thus accessible to any developer without the need to scale up a back-end infrastructure of the sort needed by the viral hit Lensa. And, by extension, the winners in this world end up looking a lot like the winners in the App Store era: Apple wins because its integration and chip advantage are put to use to deliver differentiated apps, while small independent app makers have the APIs and distribution channel to build new businesses.

The losers, on the other hand, would be centralized image generation services like Dall-E or MidJourney, and the cloud providers that undergird them (and, to date, undergird the aforementioned Stable Diffusion apps like Lensa). Stable Diffusion on Apple devices won’t take over the entire market, to be sure: Dall-E and MidJourney are both “better” than Stable Diffusion, at least in my estimation, and there is of course a big world outside of Apple devices. Built-in local capabilities, though, will affect the ultimate addressable market for both centralized services and centralized compute.

Amazon

Amazon, like Apple, uses machine learning across its applications; the direct consumer use cases for things like image and text generation, though, seem less obvious. What is already important is AWS, which sells access to GPUs in the cloud.

Some of this is used for training, including Stable Diffusion, which according to Stability AI founder and CEO Emad Mostaque was trained on 256 Nvidia A100s for 150,000 GPU-hours in total, at a market-rate cost of about $600,000 (which is surprisingly low!). The larger use case, though, is inference, i.e. the actual application of the model to produce images (or text, in the case of ChatGPT). Every time you generate an image in MidJourney, or an avatar in Lensa, inference is being run on a GPU in the cloud.
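The arithmetic is worth a quick check; the only assumption below is that the quoted figures are total GPU-hours and total dollars:

    # Back-of-the-envelope on the quoted training run.
    a100s = 256
    gpu_hours = 150_000                       # total across the cluster
    total_cost = 600_000                      # quoted market-rate cost

    print(f"${total_cost / gpu_hours:.2f} per A100-hour")   # ~$4.00
    print(f"{gpu_hours / a100s / 24:.0f} days wall-clock")  # ~24 days

Four dollars per A100-hour is roughly in line with on-demand cloud pricing, which is what makes the total so strikingly low.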

Amazon’s prospects in this space will depend on a number of factors. First, and most obvious, is just how useful these products end up being in the real world. Beyond that, though, Apple’s progress in building local generation capabilities could have a significant impact. Amazon is a chip maker in its own right, however: while most of its efforts to date have been focused on its Graviton CPUs, the company could build dedicated hardware of its own for models like Stable Diffusion and compete on price. Still, AWS is hedging its bets: the cloud service is a major partner when it comes to Nvidia’s offerings as well.

The big short-term question for Amazon will be gauging demand: not having enough GPUs means leaving money on the table; buying too many that sit idle, though, would be a major expense for a company trying to cut costs. At the same time, over-buying wouldn’t be the worst error to make: one of the challenges with AI is the fact that inference costs money; in other words, making something with AI has marginal costs.

This issue of marginal costs is, I suspect, an under-appreciated challenge in terms of developing compelling AI products. While cloud services have always had costs, the discrete nature of AI generation may make it challenging to fund the sort of iteration necessary to achieve product-market fit; I don’t think it’s an accident that ChatGPT, the biggest breakout product to date, is both free to end users and provided by a company, OpenAI, that built its own model and has a sweetheart deal from Microsoft for compute capacity. If AWS ends up having to sell GPU time cheaply, that could spur more use in the long run.
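A toy model makes the problem vivid; every number here is hypothetical, chosen only to illustrate why a free generative product burns cash in a way a static web page never did:

    # Hypothetical unit economics for a free image-generation product.
    gpu_cost_per_hour = 3.00          # assumed cloud price for one inference GPU
    images_per_gpu_hour = 900         # assumed throughput, ~4 seconds per image

    cost_per_image = gpu_cost_per_hour / images_per_gpu_hour
    users, images_each = 1_000_000, 30                    # per month
    monthly_bill = users * images_each * cost_per_image

    print(f"${cost_per_image:.4f} per image")             # ~$0.0033
    print(f"${monthly_bill:,.0f} per month, pre-revenue")  # ~$100,000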

That noted, these costs should come down over time: models will become more efficient even as chips become faster and more efficient in their own right, and there should be returns to scale for cloud services once there are sufficient products in the market maximizing utilization of their investments. Still, it is an open question as to how much full stack integration will make a difference, in addition to the aforementioned possibility of running inference locally.

Meta

I already detailed in Meta Myths why I think that AI is a massive opportunity for Meta and worth the huge capital expenditures the company is making:

Meta has huge data centers, but those data centers are primarily about CPU compute, which is what is needed to power Meta’s services. CPU compute is also what was necessary to drive Meta’s deterministic ad model, and the algorithms it used to recommend content from your network.

The long-term solution to ATT, though, is to build probabilistic models that not only figure out who should be targeted (which, to be fair, Meta was already using machine learning for), but also understand which ads converted and which didn’t. These probabilistic models will be built by massive fleets of GPUs, which, in the case of Nvidia’s A100 cards, cost in the five figures; that may have been too pricey in a world where deterministic ads worked better anyways, but Meta isn’t in that world any longer, and it would be foolish not to invest in better targeting and measurement.

Moreover, the same approach will be essential to Reels’ continued growth: it is massively more difficult to recommend content from across the entire network than only from your friends and family, particularly because Meta plans to recommend not just video but also media of all types, and intersperse it with content you care about. Here too AI models will be the key, and the equipment to build those models costs a lot of money.

In the long run, though, this investment should pay off. First, there are the benefits to better targeting and better recommendations I just described, which should restart revenue growth. Second, once these AI data centers are built out the cost to maintain and upgrade them should be significantly less than the initial cost of building them the first time. Third, this massive investment is one no other company can make, except for Google (and, not coincidentally, Google’s capital expenditures are set to rise as well).

That last point is perhaps the most important: ATT hurt Meta more than any other company, because it already had by far the largest and most finely-tuned ad business, but in the long run it should deepen Meta’s moat. This level of investment simply isn’t viable for a company like Snap or Twitter or any of the other also-rans in digital advertising (even beyond the fact that Snap relies on cloud providers instead of its own data centers); when you combine the fact that Meta’s ad targeting will likely start to pull away from the field (outside of Google), with the massive increase in inventory that comes from Reels (which reduces prices), it will be a wonder why any advertiser would bother going anywhere else.

An important factor in making Meta’s AI work is not simply building the base model but also tuning it to individual users on an ongoing basis; that is what will consume so much capacity, and it will be essential for Meta to figure out how to do this customization cost-effectively. Here, though, it helps that Meta’s offering will probably be increasingly integrated: while the company may have committed to Qualcomm for chips for its VR headsets, Meta continues to develop its own server chips; the company has also released tools to abstract away Nvidia and AMD chips for its workloads, and it seems likely the company is working on its own AI chips as well.

What will be interesting to see is how things like image and text generation impact Meta in the long run: Sam Lessin has posited that the end-game for algorithmic timelines is AI content; I’ve made the same argument when it comes to the Metaverse. In other words, while Meta is investing in AI to deliver personalized recommendations, the logical endpoint of that investment, combined with 2022’s breakthroughs, is personalized content, delivered through Meta’s channels.

For now it will be interesting to see how Meta’s advertising tools develop: the entire process of both generating and A/B testing copy and images can be done by AI, and no company is better than Meta at making these sorts of capabilities available at scale. Keep in mind that Meta’s advertising is primarily about the top of the funnel: the goal is to catch consumers’ eyes for a product or service or app they did not know previously existed; this means that there will be a lot of misses — the vast majority of ads do not convert — but that also means there is a lot of latitude for experimentation and iteration. This seems very well suited to AI: yes, generation has marginal costs, but those marginal costs are drastically lower than a human’s.
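To sketch what that experimentation loop might look like (the variants, conversion rates, and parameters below are all hypothetical), the classic approach is a bandit that shifts impressions toward whatever AI-generated creative actually converts:

    # Toy epsilon-greedy test over hypothetical AI-generated ad variants.
    import random

    variants = ["copy_a", "copy_b", "copy_c"]   # imagine a generative model wrote these
    true_rate = {"copy_a": 0.010, "copy_b": 0.014, "copy_c": 0.008}  # unknown in practice
    shows = {v: 0 for v in variants}
    wins = {v: 0 for v in variants}

    for _ in range(100_000):                    # impressions
        if random.random() < 0.1:               # explore 10% of the time
            v = random.choice(variants)
        else:                                   # otherwise exploit the leader
            v = max(variants, key=lambda x: wins[x] / shows[x] if shows[x] else 0.0)
        shows[v] += 1
        wins[v] += random.random() < true_rate[v]   # did the impression convert?

    print({v: round(wins[v] / max(shows[v], 1), 4) for v in variants})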

Google

The Innovator’s Dilemma was published in 1997; that was the year that Eastman Kodak’s stock reached its highest price of $94.25, and for seemingly good reason: Kodak, in terms of technology, was perfectly placed. Not only did the company dominate the current technology of film, it had also invented the next wave: the digital camera.

The problem came down to business model: Kodak made a lot of money with very good margins providing silver halide film; digital cameras, on the other hand, were digital, which meant they didn’t need film at all. Kodak’s management was thus very incentivized to convince themselves that digital cameras would only ever be for amateurs, and only when they became drastically cheaper, which would certainly take a very long time.

In fact, Kodak’s management was right: it took over 25 years from the time of the digital camera’s invention for digital camera sales to surpass film camera sales; it took longer still for digital cameras to be used in professional applications. Kodak made a lot of money in the meantime, and paid out billions of dollars in dividends. And, while the company went bankrupt in 2012, that was because consumers had access to better products: first digital cameras, and eventually, phones with cameras built in.

The idea that this is a happy ending is, to be sure, a contrarian view: most view Kodak as a failure, because we expect companies to live forever. In this view Kodak is a cautionary tale of how an innovative company can allow its business model to lead it to its eventual doom, even if said doom was the result of consumers getting something better.

And thus we arrive at Google and AI. Google invented the transformer, the key technology undergirding the latest AI models. Google is rumored to have a conversational chat product that is far superior to ChatGPT. Google claims that its image generation capabilities are better than Dall-E’s or anyone else’s on the market. And yet, these claims are just that: claims, because there aren’t any actual products on the market.

This isn’t a surprise: Google has long been a leader in using machine learning to make its search and other consumer-facing products better (and has offered that technology as a service through Google Cloud). Search, though, has always depended on humans as the ultimate arbiter: Google will provide links, but it is the user that decides which one is the correct one by clicking on it. This extended to ads: Google’s offering was revolutionary because instead of charging advertisers for impressions — the value of which was very difficult to ascertain, particularly 20 years ago — it charged for clicks; the very people the advertisers were trying to reach would decide whether their ads were good enough.

I wrote about the conundrum this presented for Google’s business in a world of AI seven years ago in Google and the Limits of Strategy:

In yesterday’s keynote, Google CEO Sundar Pichai, after a recounting of tech history that emphasized the PC-Web-Mobile epochs I described in late 2014, declared that we are moving from a mobile-first world to an AI-first one; that was the context for the introduction of the Google Assistant.

It was a year prior to the aforementioned iOS 6 that Apple first introduced the idea of an assistant in the guise of Siri; for the first time you could (theoretically) compute by voice. It didn’t work very well at first (arguably it still doesn’t), but the implications for computing generally and Google specifically were profound: voice interaction both expanded where computing could be done, from situations in which you could devote your eyes and hands to your device to effectively everywhere, even as it constrained what you could do. An assistant has to be far more proactive than, for example, a search results page; it’s not enough to present possible answers: rather, an assistant needs to give the right answer.

This is a welcome shift for Google the technology; from the beginning the search engine has included an “I’m Feeling Lucky” button, so confident was Google founder Larry Page that the search engine could deliver you the exact result you wanted, and while yesterday’s Google Assistant demos were canned, the results, particularly when it came to contextual awareness, were far more impressive than the other assistants on the market. More broadly, few dispute that Google is a clear leader when it comes to the artificial intelligence and machine learning that underlie their assistant.

A business, though, is about more than technology, and Google has two significant shortcomings when it comes to assistants in particular. First, as I explained after this year’s Google I/O, the company has a go-to-market gap: assistants are only useful if they are available, which in the case of hundreds of millions of iOS users means downloading and using a separate app (or building the sort of experience that, like Facebook, users will willingly spend extensive amounts of time in).

Secondly, though, Google has a business-model problem: the “I’m Feeling Lucky Button” guaranteed that the search in question would not make Google any money. After all, if a user doesn’t have to choose from search results, said user also doesn’t have the opportunity to click an ad, thus choosing the winner of the competition Google created between its advertisers for user attention. Google Assistant has the exact same problem: where do the ads go?

That Article assumed that Google Assistant was going to be used to differentiate Google phones as an exclusive offering; that ended up being wrong, but the underlying analysis remains valid. Over the past seven years Google’s primary business model innovation has been to cram ever more ads into Search, a particularly effective tactic on mobile. And, to be fair, the sort of searches where Google makes the most money — travel, insurance, etc. — may not be well-suited for chat interfaces anyways.

That, though, ought only to increase the concern for Google’s management that generative AI may, in the specific context of search, represent a disruptive innovation instead of a sustaining one. Disruptive innovation is, at least in the beginning, not as good as what already exists; that’s why it is easily dismissed by managers who can avoid thinking about the business model challenges by (correctly!) telling themselves that their current product is better. The problem, of course, is that the disruptive product gets better, even as the incumbent’s product becomes ever more bloated and hard to use — and that certainly sounds a lot like Google Search’s current trajectory.

I’m not calling the top for Google; I did that previously and was hilariously wrong. Being wrong, though, is more often than not a matter of timing: yes, Google has its cloud and YouTube’s dominance only seems to be increasing, but the outline of Search’s peak seems clear even if it throws off cash and profits for years.

Microsoft

Microsoft, meanwhile, seems the best placed of all. Like AWS it has a cloud service that sells GPU time; it is also the exclusive cloud provider for OpenAI. Yes, that is incredibly expensive, but given that OpenAI appears to have the inside track to being the AI epoch’s addition to this list of top tech companies, Microsoft is investing in the infrastructure of that epoch.

Bing, meanwhile, is like the Mac on the eve of the iPhone: yes, it contributes a fair bit of revenue, but only a fraction of what the dominant player generates, and a relatively immaterial amount in the context of Microsoft as a whole. If incorporating ChatGPT-like results into Bing risks the business model for the opportunity to gain massive market share, that is a bet well worth making.

The latest report from The Information, meanwhile, is that GPT is eventually coming to Microsoft’s productivity apps. The trick will be to imitate the success of AI-coding tool GitHub Copilot (which is built on GPT), which figured out how to be a help instead of a nuisance (i.e. don’t be Clippy!).

What is important is that adding on new functionality — perhaps for a fee — fits perfectly with Microsoft’s subscription business model. It is notable that the company once thought of as a poster child for victims of disruption will, in the full recounting, not just be born of disruption, but be well-placed to reach greater heights because of it.


There is so much more to write about AI’s potential impact, but this Article is already plenty long. OpenAI is obviously the most interesting from a new company perspective: it is possible that OpenAI will become the platform on which all other AI companies are built, which would ultimately mean the economic value of AI outside of OpenAI may be fairly modest; this is also the bull case for Google, as they would be the most well-placed to be the Microsoft Azure to OpenAI’s AWS.

There is another possibility where open source models proliferate in the text generation space in addition to image generation. In this world AI becomes a commodity: this is probably the most impactful outcome for the world but, paradoxically, the most muted in terms of economic impact for individual companies (I suspect the biggest opportunities will be in industries where accuracy is essential: incumbents will therefore underinvest in AI, a la Kodak under-investing in digital, forgetting that technology gets better).

Indeed, the biggest winners may be Nvidia and TSMC. Nvidia’s investment in the CUDA ecosystem means the company doesn’t simply have the best AI chips, but the best AI ecosystem, and the company is investing in scaling that ecosystem up. That, though, has spurred and will continue to spur competition, particularly in terms of internal chip efforts like Google’s TPU; everyone, however, will make their chips at TSMC, at least for the foreseeable future.

The biggest impact of all, though, is probably off our radar completely. Just before the break Nat Friedman told me in a Stratechery Interview about Riffusion, which uses Stable Diffusion to generate music from text via spectrogram images, which makes me wonder what else is possible when images are truly a commodity. Right now text is the universal interface, because text has been the foundation of information transfer since the invention of writing; humans, though, are visual creatures, and the availability of AI for both the creation and interpretation of images could fundamentally transform what it means to convey information in ways that are impossible to predict.
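Riffusion’s trick deserves a concrete illustration: the model outputs an ordinary image that happens to encode a spectrogram, and audio is recovered from it. A rough sketch of that second step, with scaling constants that are assumptions (Riffusion’s own mapping differs in its details):

    # Sketch: treat a generated image as a magnitude spectrogram, recover audio.
    import numpy as np
    import librosa
    import soundfile as sf
    from PIL import Image

    img = np.asarray(Image.open("spectrogram.png").convert("L"), dtype=np.float32)
    db = np.flipud(img) / 255.0 * 80.0 - 80.0     # assumed pixel-to-decibel mapping
    magnitude = librosa.db_to_amplitude(db)       # decibels -> linear magnitude
    audio = librosa.griffinlim(magnitude, n_iter=32, hop_length=512)  # estimate phase
    sf.write("clip.wav", audio, 22050)            # assumed sample rate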

For now, our predictions must be much more time-constrained, and modest. This may be the beginning of the AI epoch, but even in tech, epochs take a decade or longer to transform everything around them.

I wrote a follow-up to this Article in this Daily Update.

Holiday Break: December 26th to January 5th

Stratechery is on holiday from December 26, 2022 to January 5, 2023; the next Stratechery Update will be on Monday, January 9.

In addition, the next episode of Sharp Tech will be on Monday, January 9, and the next episode of Dithering will be on Tuesday, January 10. Sharp China will return the week of January 2.

The full Stratechery posting schedule is here.

The 2022 Stratechery Year in Review

It was only a year ago that I opened the 2021 Year in Review by noting that the news felt like a bit of a drag; the contrast to 2022 has been stark. The biggest story in tech not just this year but, I would argue, since the advent of mobile and cloud computing, was the emergence of AI. AI looms large not simply in terms of products, but also its connection to the semiconductor industry; that means the impact is not only a question of technology and society, but also geopolitics and, potentially, war. War, meanwhile, came to Europe, while inflation came to the world; tech valuations collapsed and the crypto bubble burst, and brought to light one of the largest frauds in history. All of this was discussed on Twitter, even as Twitter itself came to dominate the conversation, thanks to its purchase by Elon Musk.

"Paperboy on a bike" with Midjourney V3 and V4

Stratechery, meanwhile, entering its 10th year of publishing, underwent major changes of its own: a subscription to the Daily Update newsletter transformed into a subscription to the Stratechery Plus bundle, which includes Stratechery Updates, Sharp Tech, Dithering, and Sharp China.

Stratechery Interviews, meanwhile, became its own distinct brand, befitting its weekly schedule and increased prominence in Stratechery’s offering. I am excited to see Stratechery Plus continue to expand in 2023.

This year Stratechery published 33 free Weekly Articles, 111 subscriber Updates, and 36 Interviews. Today, as per tradition, I summarize the most popular and most important posts of the year on Stratechery.

You can find previous years here: 2021 | 2020 | 2019 | 2018 | 2017 | 2016 | 2015 | 2014 | 2013

On to 2022:

The Five Most-Viewed Articles

The five most-viewed articles on Stratechery according to page views:

  1. AI Homework — It seems appropriate that this article, written after the launch of ChatGPT, was the most popular of the year, because AI is, in my estimation, the most important story of 2022. This article used homework as a way to discuss how verifying and editing information will not only be essential in the future, but already are. I wrote two other articles about AI:
    • DALL-E, the Metaverse, and Zero Marginal Content — Machine-learning generated content has major implications for the Metaverse, because it brings the marginal cost of production to zero.
    • The AI Unbundling — AI is starting to unbundle the final part of the idea propagation value chain: idea creation and substantiation. The impacts will be far-reaching.
  2. Meta Myths — Meta deserves a bit of a discount off of its recent highs, but a number of myths about its business have caused the market to over-react.
  3. Shopify’s Evolution — Shopify should build an advertising business to complement Shop Pay and the Shopify Fulfillment Network; an additional challenge for Shopify is the changing nature of Amazon’s moat.
  4. Digital Advertising in 2022 — Digital advertising has shifted from a Google-Facebook duopoly to a market where Amazon and potentially Apple are major forces.
  5. Nvidia In the Valley — Nvidia is in the valley in terms of gaming, the data center, and the omniverse; if it makes it to future heights its margins will be well-earned.

A drawing of Shopify with Integrated Payments, Fulfillment, and Advertising

Semiconductors and Geopolitics

Geopolitics, including the Russian invasion of Ukraine and relations with China, were major stories this year; semiconductors figured prominently in both.

A drawing of Google, Amazon, and Facebook's Ad Business

Aggregators and Platforms

A central theme on Stratechery has always been platforms and Aggregators.

A drawing of The Stripe Thin Platform

These themes inevitably lead to questions of antitrust, and I disagree with the biggest FTC action of the year:

A drawing of Microsoft Game Pass

Streaming

This year saw a lot of upheaval in the streaming space; some of these outlooks have already come true (Netflix and ads), some remain to be seen (Warner Bros. Discovery), and some aren’t looking too good (consolidation may happen in streaming, but cable is looking like a weak player).

A drawing of The Big Ten's Accrual

Tech and Society

The intersection between tech and society has never been more clear than over the last few months as Twitter, a relatively small and unimportant company in business terms, has dominated the news, thanks to its societal impact.

The 2x2 graph in 2022, with challenges from Amazon and Apple

Other Company Coverage

Microsoft continues to show strength, Apple didn’t raise prices (although, in retrospect, the below Article overstates the case), Meta continues to pursue the Metaverse, and what a private Twitter might have been.

A drawing of Twitter's Architecture

Stratechery Interviews

This year Stratechery Interviews became a standard weekly item, with three distinct categories:

Public Executive Interviews

Startup Executive Series

This was a new type of interview I launched this year: given that it is impossible to cover startups objectively through data, I asked founders to give their subjective view of their businesses and long-term prospects.

Analyst Interviews

  • Jay Goldberg: January about Intel, Nvidia, and ARM; and August about AI and the CHIPS Act
  • Bill Bishop about China’s COVID outbreak, the Ukraine war, and Substack
  • Dan Wang, from Gavekal Dragonomics: April about China’s Shanghai lockdown and response to Ukraine; and October about the China chip ban
  • Tony Fadell about his career in tech, including at Apple, and the future of ARM
  • Eric Seufert: May, about the post-ATT landscape; and August, about the future of digital advertising
  • Michael Nathanson about streaming and digital advertising
  • Matthew Ball about the metaverse and Netflix
  • Michael Mignano about podcasts, standards, and recommendation media
  • Daniel Gross and Nat Friedman about the democratization of AI
  • Eugene Wei about streaming and social media
  • Gregory C. Allen about the past, present, and future of the China chip ban

A drawing of Activision's Modularity

The Year in Stratechery Updates

Some of my favorite Stratechery Updates:


I am so grateful to the subscribers that make it possible for me to do this as a job. I wish all of you a Merry Christmas and Happy New Year, and I’m looking forward to a great 2023!

Consoles and Competition

This Article is available as a video essay on YouTube


The first video game was a 1952 research project called OXO — tic-tac-toe played on a computer the size of a large room:

The EDSAC computer
Copyright Computer Laboratory, University of Cambridge, CC BY 2.0

Fifteen years later Ralph Baer produced “The Brown Box”; Magnavox licensed Baer’s device and released it as the Odyssey five years later — it was the first home video game console:

The Magnavox Odyssey

The Odyssey made Magnavox a lot of money, but not through direct sales: the company sued Atari for ripping off one of the Odyssey’s games to make “Pong”, the company’s first arcade game and, in 1975, first home video game, eventually reaping over $100 million in royalties and damages. In other words, arguments about IP and control have been part of the industry from the beginning.

In 1977 Atari released the 2600, the first console I ever owned:1

The Atari 2600

All of the games for the Atari were made by Atari, because of course they were; IBM had unbundled mainframe software and hardware in 1969 in an (unsuccessful) attempt to head off an antitrust case, but video games barely existed as a category in 1977. Indeed, it was only four years earlier when Steve Wozniak had partnered with Steve Jobs to design a circuit board for Atari’s Breakout arcade game; this story is most well-known for the fact that Jobs lied to Wozniak about the size of the bonus he earned, but the pertinent bit for this Article is that video game development was at this point intrinsically tied to hardware.

That, though, was why the 2600 was so unique: games were not tied to hardware but rather self-contained in cartridges, meaning players would use the same system to play a whole bunch of different games:

Atari cartridges
Nathan King, CC BY 2.0

The implications of this separation did not resonate within Atari, which had been sold by founder Nolan Bushnell to Warner Communications in 1976, in an effort to get the 2600 out the door. Game Informer explains what happened:

In early 1979, Atari’s marketing department issued a memo to its programming staff that listed all the games Atari had sold the previous year. The list detailed the percentage of sales each game had contributed to the company’s overall profits. The purpose of the memo was to show the design team what kinds of games were selling and to inspire them to create more titles of a similar breed…David Crane, Larry Kaplan, Alan Miller, and Bob Whitehead were four of Atari’s superstar programmers. Collectively, the group had been responsible for producing many of Atari’s most critical hits…

“I remember looking at that memo with those other guys,” recalls Crane, “and we realized that we had been responsible for 60 percent of Atari’s sales in the previous year – the four of us. There were 35 people in the department, but the four of us were responsible for 60 percent of the sales. Then we found another announcement that [Atari] had done $100 million in cartridge sales the previous year, so that 60 percent translated into $60 million.”

These four men may have produced $60 million in profit, but they were only making about $22,000 a year. To them, the numbers seemed astronomically disproportionate. Part of the problem was that when the video game industry was founded, it had molded itself after the toy industry, where a designer was paid a fixed salary and everything that designer produced was wholly owned by the company. Crane, Kaplan, Miller, and Whitehead thought the video game industry should function more like the book, music, or film industries, where the creative talent behind a project got a larger share of the profits based on its success.

The four walked into the office of Atari CEO Ray Kassar and laid out their argument for programmer royalties. Atari was making a lot of money, but those without a corner office weren’t getting to share the wealth. Kassar – who had been installed as Atari’s CEO by parent company Warner Communications – felt obligated to keep production costs as low as possible. Warner was a massive corporation and everyone helped contribute to the company’s success. “He told us, ‘You’re no more important to those projects than the person on the assembly line who put them together. Without them, your games wouldn’t have sold anything,’” Crane remembers. “He was trying to create this corporate line that it was all of us working together that make games happen. But these were creative works, these were authorships, and he didn’t get it.”

“Kassar called us towel designers,” Kaplan told InfoWorld magazine back in 1983, “He said, ‘I’ve dealt with your kind before. You’re a dime a dozen. You’re not unique. Anybody can do a cartridge.’”

That “anyone” included the so-called “Gang of Four”, who decided to leave Atari and form the first 3rd-party video game company; they called it Activision.

3rd-Party Software

Activision represented the first major restructuring of the video game value chain; Steve Wozniak’s Breakout was fully integrated in terms of hardware and software:

The first Atari equipment was fully integrated

The Atari 2600 with its cartridge-based system modularized hardware and software:2

The Atari 2600 was modular

Activision took that modularization to its logical (and yet, at the time, unprecedented) extension, by being a different company than the one that made the hardware:

Activision capitalized on the modularity

Activision, which had struggled to raise money given the fact it was targeting a market that didn’t yet exist, and which faced immediate lawsuits from Atari, was a tremendous success; now venture capital was eager to fund the market, leading to a host of 3rd-party developers, few of whom had the expertise or skill of Activision. The result was a flood of poor quality games that soured consumers on the entire market, leading to the legendary video game crash of 1983: industry revenue plummeted from $3.2 billion in 1983 to a mere $100 million in 1985. Activision survived, but only by pivoting to making games for the nascent personal computing market.

The personal computer market was modular from the start, and not just in terms of software. Compaq’s success in reverse-engineering the IBM PC’s BIOS created a market for PC-compatible computers, all of which ran the increasingly ubiquitous Microsoft operating system (first DOS, then Windows). This meant that developers like Activision could target Windows and benefit from competition in the underlying hardware.

Moreover, there were so many more use cases for the personal computer, along with a burgeoning market in consumer-focused magazines that reviewed software, that the market was more insulated from the anarchy that all but destroyed the home console market.

That market saw a rebirth with Nintendo’s Famicom system, christened the “Nintendo Entertainment System” for the U.S. market (Nintendo didn’t want to call it a console to avoid any association with the 1983 crash, which devastated not just video game makers but also retailers). Nintendo created its own games like Super Mario Bros. and Zelda, but also implemented exacting standards for 3rd-party developers, requiring them to pass a battery of tests and pay a 30% licensing fee for a maximum of five games a year; only then could they receive a dedicated chip for their cartridge that allowed it to work in the NES.

Nintendo controlled its ecosystem

Nintendo’s firm control of the third-party developer market may look familiar: it was an early precedent for the App Store battles of the last decade. Many of the same principles were in play:

  • Nintendo had a legitimate interest in ensuring quality, not simply for its own sake but also on behalf of the industry as a whole; similarly, the App Store, following as it did years of malware and viruses in the PC space, restored customer confidence in downloading third-party software.
  • It was Nintendo that created the 30% share for the platform owner that all future console owners would implement, and which Apple would set as the standard for the App Store.
  • While Apple’s App Store lockdown is rooted in software, Nintendo had the same problem that Atari had in terms of the physical separation of hardware and software; this was overcome by the aforementioned lockout chips, along with branding the Nintendo “Seal of Quality” in an attempt to fight counterfeit lockout chips.

Nintendo’s strategy worked, but it came with long-term costs: developers, particularly in North America, hated the company’s restrictions, and were eager to support a challenger; said challenger arrived in the form of the Sega Genesis, which launched in the U.S. in 1989. Sega initially followed Nintendo’s model of tight control, but Electronic Arts reverse-engineered Sega’s system, and threatened to create their own rival licensing program for the Genesis if Sega didn’t dramatically loosen their controls and lower their royalties; Sega acquiesced and went on to fight the Super Nintendo, which arrived in the U.S. in 1991, to a draw, thanks in part to a larger library of third-party games.

Sony’s Emergence

The company that truly took the opposite approach to Nintendo was Sony; after being spurned by Nintendo in humiliating fashion — Sony announced the Play Station CD-ROM add-on at CES in 1991, only for Nintendo to abandon the project the next day — the electronics giant set out to create their own console which would focus on 3D-graphics and package games on CD-ROMs instead of cartridges. The problem was that Sony wasn’t a game developer, so it started out completely dependent on 3rd-party developers.

One of the first ways that Sony addressed this was by building an early partnership with Namco, Sega’s biggest rival in terms of arcade games. Coin-operated arcade games were still a major market in the 1990s, with more revenue than the home market for the first half of the decade. Arcade games had superior graphics and control systems, and were where new games launched first; the eventual console port was always an imitation of the original. The problem, however, was that it was becoming increasingly expensive to build new arcade hardware, so Sony proposed a partnership: Namco could use modified PlayStation hardware as the basis of its System 11 arcade hardware, which would make it easy to port its games to the PlayStation. Namco, which also rebuilt its more powerful Ridge Racer arcade game for the PlayStation, took Sony’s offer: Ridge Racer launched with the PlayStation, and Tekken was a massive hit given its near-perfect fidelity to the arcade version.

Sony was much better for 3rd-party developers in other ways, as well: while the company maintained a licensing program, its royalty rates were significantly lower than Nintendo’s, and the cost of manufacturing CD-ROMs was much lower than manufacturing cartridges; this was a double whammy for the Nintendo 64 because while cartridges were faster and offered the possibility of co-processor add-ons, what developers really wanted was the dramatically increased amount of storage CD-ROMs afforded. The PlayStation was also the first console to enable development on the PC in a language (C) that was well-known to existing developers. In the end, despite the fact that the Nintendo 64 had more capable hardware than the PlayStation, it was the PlayStation that won the generation thanks to a dramatically larger game library, the vast majority of which were third-party games.

Sony extended that advantage with the PlayStation 2, which was backwards compatible with the PlayStation, meaning it had a massive library of 3rd-party games immediately; the newly-launched Xbox, which was basically a PC, and thus easy to develop for, made a decent showing, while Nintendo struggled with the GameCube, which had both a non-standard controller and non-standard mini-DVDs that once again limited the amount of content relative to the DVDs used for the PlayStation 2 and Xbox (and it couldn’t function as a DVD player, either).

The peak of 3rd-party based competition

This period for video games was the high point in terms of console competition for 3rd-party developers for two reasons:

  • First, there were still meaningful choices to be made in terms of hardware and the overall development environment, as epitomized by Sony’s use of CD-ROMs instead of cartridges.
  • Second, developers were still constrained by the cost of developing for distinct architectures, which meant it was important to make the right choice (which dramatically increased the return of developing for the same platform as everyone else).

It was the Sony-Namco partnership, though, that was a harbinger of the future: it behooved console makers to have similar hardware and software stacks to their competitors, so that developers would target them; developers, meanwhile, were devoting an increasing share of their budgets to developing assets, particularly when the PS3/Xbox 360 generation targeted high definition, which increased their motivation to be on multiple platforms to better leverage their investments. It was Sony that missed this shift: the PS3 had a complicated Cell processor that was hard to develop for, and a high price thanks to its inclusion of a Blu-Ray player; the Xbox 360 had launched earlier with a simpler architecture, and most developers built for the Xbox first and the PlayStation 3 second (even if their games launched on both at the same time).

The real shift, though, was the emergence of game engines as the dominant mode of development: instead of building a game for a specific console, it made much more sense to build a game for a specific engine which abstracted away the underlying hardware. Sometimes these game engines were internally developed — Activision launched its Call of Duty franchise in this time period (after emerging from bankruptcy under new CEO Bobby Kotick) — and sometimes they were licensed (i.e. Epic’s Unreal Engine). The impact, though, was in some respects similar to cartridges on the Atari 2600:

Consoles became a commodity in the PS3/Xbox 360 generation

In this new world it was the consoles themselves that became modularized: consumers picked out their favorite and 3rd-party developers delivered their games on both.

Nintendo, meanwhile, dominated the generation with the Nintendo Wii. What was interesting, though, is that 3rd-party support for the Wii was still lacking, in part because of the underpowered hardware (in contrast to previous generations): the Wii sold well because of its unique control method — which most people used to play Wii Sports — and Nintendo’s first-party titles. It was, in many respects, Nintendo’s most vertically-integrated console yet, and was incredibly successful.

Sony Exclusives

Sony’s pivot after the (relatively) disappointing PlayStation 3 was brilliant: if the economic imperative for 3rd-party developers was to be on both Xbox and PlayStation (and the PC), and if game engines made that easy to implement, then there was no longer any differentiation to be had in catering to 3rd-party developers.

Instead Sony beefed up its internal game development studios and bought up several external ones, with the goal of creating PlayStation 4 exclusives. Now some portion of new games would not be available on Xbox not because it had crappy cartridges or underpowered graphics, but because Sony could decide to limit its profit on individual titles for the sake of the broader PlayStation 4 ecosystem. After all, there would still be a lot of 3rd-party developers; if Sony had more consoles than Microsoft because of its exclusives, then it would harvest more of those 3rd-party royalty fees.

Those fees, by the way, started to head back up, particularly for digital-only versions, which returned to that 30% cut that Nintendo had pioneered many years prior; this is the downside of depending on universal abstractions like game engines while bearing high development costs: you have no choice but to be on every platform no matter how much it costs.

Sony's exclusive strategy gave it the edge in the PS4 generation

Sony bet correctly: the PS4 dominated its generation, helped along by Microsoft making a bad bet of its own by packing in the Kinect with the Xbox One. It was a repeat of Sony’s mistake with the PS3, in that it was a misguided attempt to differentiate in hardware when the fundamental value chain had long since dictated that the console was increasingly a commodity. Content is what mattered — at least as long as the current business model persisted.

Nintendo, meanwhile, continued to march to its own vertically-integrated drum: after the disastrous Wii U the company quickly pivoted to the Nintendo Switch, which continues to leverage its truly unique portable form factor and Nintendo’s first-party games to huge sales. Third party support, though, remains extremely tepid; it’s just too underpowered, and the sort of person that cares about third-party titles like Madden or Call of Duty has long since bought a PlayStation or Xbox.

The FTC vs. Microsoft

Forty years of context may seem like overkill when it comes to examining the FTC’s attempt to block Microsoft’s acquisition of Activision, but I think it is essential for multiple reasons.

First, the video game market has proven to be extremely dynamic, particularly in terms of 3rd-party developers:

  • Atari was vertically integrated
  • Nintendo grew the market with strict control of 3rd-party developers
  • Sony took over the market by catering to 3rd-party developers and differentiating on hardware
  • Xbox’s best generation leaned into increased commodification and ease-of-development
  • Sony retook the lead by leaning back into vertical integration

That is quite the round trip, and it’s worth pointing out that attempting to freeze the market in its current iteration at any point over the last forty years would have foreclosed future changes.

At the same time, Sony’s vertical integration seems more sustainable than Atari’s. First, Sony owns the developers who make the most compelling exclusives for its consoles; they can’t simply up and leave like the Gang of Four. Second, the costs of developing modern games have grown so high that any 3rd-party developer has no choice but to develop for all relevant consoles. That means that there will never be a competitor who wins by offering 3rd-party developers a better deal; the only way to fight back is to have developers of your own, or a completely different business model.

The first fear raised by the FTC is that Microsoft, by virtue of acquiring Activision, is looking to fight its own exclusive war, and at first blush it’s a reasonable concern. After all, Activision has some of the most popular 3rd-party games, particularly the aforementioned Call of Duty franchise. The problem with this reasoning, though, is that the price Microsoft paid for Activision was a multiple of Activision’s current revenues, which include billions of dollars for games sold on PlayStation. To suddenly cut Call of Duty (or Activision’s other multi-platform titles) off from PlayStation would be massively value destructive; no wonder Microsoft said it was happy to sign a 10-year deal with Sony to keep Call of Duty on PlayStation.

Just for clarity’s sake, the distinction here from Sony’s strategy is the fact that Microsoft is acquiring these assets. It’s one thing to develop a game for your own platform — you’re building the value yourself, and choosing to harvest it with an ecosystem strategy as opposed to maximizing that game’s profit. An acquirer, though, has to pay for the business model that already exists.

At the same time, though, it’s no surprise that Microsoft has taken in-development assets from its other acquisitions, like ZeniMax, and made them exclusives; that is the Sony strategy, and Microsoft was very clear when it acquired ZeniMax that it would keep cross-platform games cross-platform but might pursue a different strategy for new intellectual property. CEO of Microsoft Gaming Phil Spencer told Bloomberg at the time:

In terms of other platforms, we’ll make a decision on a case-by-case basis.

Given this, it’s positively bizarre that the FTC also claims that Microsoft lied to the E.U. with regard to its promises surrounding the ZeniMax acquisition: the company was very clear that existing cross-platform games would stay cross-platform, and made no promises about future IP. Indeed, the FTC’s claims were so off-base that the European Commission felt the need to clarify that Microsoft didn’t mislead the E.U.; from MLex:

Microsoft didn’t make any “commitments” to EU regulators not to release Xbox-exclusive content following its takeover of ZeniMax Media, the European Commission has said. US enforcers yesterday suggested that the US tech giant had misled the regulator in 2021 and cited that as a reason to challenge its proposed acquisition of Activision Blizzard. “The commission cleared the Microsoft/ZeniMax transaction unconditionally as it concluded that the transaction would not raise competition concerns,” the EU watchdog said in an emailed statement.

The absence of competition concerns “did not rely on any statements made by Microsoft about the future distribution strategy concerning ZeniMax’s games,” said the commission, which itself has opened an in-depth probe into the Activision Blizzard deal and appears keen to clarify what happened in the previous acquisition. The EU agency found that even if Microsoft were to restrict access to ZeniMax titles, it wouldn’t have a significant impact on competition because rivals wouldn’t be denied access to an “essential input,” and other consoles would still have a “large array” of attractive content.

The FTC’s concerns about future IP being exclusive ring a bit hypocritical given the fact that Sony has been pursuing the exact same strategy — including multiple acquisitions — without any sort of regulatory interference; more than that, though, to effectively make up a crime is disquieting. To be fair, those Sony acquisitions were a lot smaller than Activision, but this goes back to the first point: the entire reason Activision is expensive is because of its already-in-market titles, which Microsoft has every economic incentive to keep cross-platform (and which it is willing to commit to contractually).

Whither Competition

It’s the final FTC concern, though, that I think is dangerous. From the complaint:

These effects are likely to be felt throughout the video gaming industry. The Proposed Acquisition is reasonably likely to substantially lessen competition and/or tend to create a monopoly in both well-developed and new, burgeoning markets, including high-performance consoles, multi-game content library subscription services, and cloud gaming subscription services…

Multi-Game Content Library Subscription Services comprise a Relevant Market. The anticompetitive effects of the Proposed Acquisition also are reasonably likely to occur in any relevant antitrust market that contains Multi-Game Content Library Subscription Services, including a combined Multi-Game Content Library and Cloud Gaming Subscription Services market.

Cloud Gaming Subscription Services are a Relevant Market. The anticompetitive effects of the Proposed Acquisition alleged in this complaint are also likely to occur in any relevant antitrust market that contains Cloud Gaming Subscription Services, including a combined Multi-Game Content Library and Cloud Gaming Subscription Services market.

“Multi-Game Content Library Subscription Services” and “Cloud Gaming Subscription Services” are, indeed, the reason why Microsoft wants to do this deal. I explained the rationale when Microsoft acquired ZeniMax:

A huge amount of discussion around this acquisition was focused on Microsoft needing its own stable of exclusives in order to compete with Sony, but it’s important to note that making all of ZeniMax’s games exclusives would be hugely value destructive, at least in the short-to-medium term. Microsoft is paying $7.5 billion for a company that currently makes money selling games on PC, Xbox, and PS5, and simply cutting off one of those platforms — particularly when said platform is willing to pay extra for mere timed exclusives, not all-out exclusives — is to effectively throw a meaningful chunk of that value away. That certainly doesn’t fit with Nadella’s statement that “each layer has to stand on its own for what it brings”…

Microsoft isn’t necessarily buying ZeniMax to make its games exclusive, but rather to apply a new business model to them — specifically, the Xbox Game Pass subscription. This means that Microsoft could, if it chose, have its cake and eat it too: sell ZeniMax games at their usual $60~$70 price on PC, PS5, Xbox, etc., while also making them available from day one to Xbox Game Pass subscribers. It won’t take long for gamers to quickly do the math: $180/year — i.e. three games bought individually — gets you access to all of the games, and not just on one platform, but on all of them, from PC to console to phone.

Sure, some gamers will insist on doing things the old way, and that’s fine: Microsoft can make the same money ZeniMax would have as an independent company. Everyone else can buy into Microsoft’s model, taking advantage of the sort of win-win-win economics that characterize successful bundles. And, if they have a PS5 and thus can’t get access to Xbox Game Pass on their TVs, an Xbox is only an extra $10/month away.

Microsoft is willing to cannibalize itself to build a new business model for video games, and it’s a business model that is pretty darn attractive for consumers. It’s also a business model that Activision wouldn’t pursue on its own, because it has its own profits to protect. Most importantly, though, it’s a business model that is anathema to Sony: making titles broadly available to consumers on a subscription basis is the exact opposite of the company’s exclusive strategy, which is all about locking consumers into Sony’s platform.

Microsoft's Xbox Game Pass strategy is orthogonal to Sony's

Here’s the thing: isn’t this a textbook example of competition? The FTC is seeking to preserve a model of competition that was last relevant in the PS2/Xbox generation, but that plane of competition has long since disappeared. The console market as it is today is one that is increasingly boring for consumers, precisely because Sony has won. What is compelling about Microsoft’s approach is that they are making a bet that offering consumers a better deal is the best way to break up Sony’s dominance, and this is somehow a bad thing?

What makes this determination to outlaw future business models particularly frustrating is that the real threat to gaming today is the dominance of storefronts that exact their own tax while contributing nothing to the development of the industry. The App Store and Google Play leverage software to extract 30% from mobile games just because they can — and sure, go ahead and make the same case about Microsoft and Sony. If the FTC can’t be bothered to check the blatant self-favoring inherent in these models, at the minimum it seems reasonable to give a chance to a new kind of model that could actually push consumers to explore alternative ways to game on their devices.

For the record, I do believe this acquisition demands careful oversight, and it’s completely appropriate to insist that Microsoft continue to deliver Activision titles to other platforms, even if it wouldn’t make economic sense to do anything but. It’s increasingly difficult, though, to grasp any sort of coherent theory behind the FTC’s antitrust decisions beyond ‘big tech bad’. There are real antitrust issues in the industry, but teasing them out requires actually understanding the industry; that sort of understanding applied to this case would highlight Sony’s actual dominance, and that having multiple compelling platforms with different business models is the essence of competition.



  1. Ten years later, as a hand-me-down from a relative 

  2. The Fairchild Channel F, which was released in 1976, was actually the first cartridge-based video game system, but the 2600 was by far the most popular. 

AI Homework

It happened to be Wednesday night when my daughter, in the midst of preparing for “The Trial of Napoleon” for her European history class, asked for help in her role as Thomas Hobbes, witness for the defense. I put the question to ChatGPT, which had just been announced by OpenAI a few hours earlier:

A wrong answer from ChatGPT about Thomas Hobbes

This is a confident answer, complete with supporting evidence and a citation to Hobbes’ work, and it is completely wrong. Hobbes was a proponent of absolutism, the belief that the only workable alternative to anarchy — the natural state of human affairs — was to vest absolute power in a monarch; checks and balances was the argument put forth by Hobbes’ younger contemporary John Locke, who believed that power should be split between an executive and a legislative branch. James Madison, while writing the U.S. Constitution, adopted an evolved proposal from Charles Montesquieu that added a judicial branch as a check on the other two.

The ChatGPT Product

It was dumb luck that my first ChatGPT query ended up being something the service got wrong, but you can see how it might have happened: Hobbes and Locke are almost always mentioned together, so Locke’s articulation of the importance of the separation of powers is likely adjacent to mentions of Hobbes and Leviathan in the homework assignments you can find scattered across the Internet. Those assignments — by virtue of being on the Internet — are probably some of the grist of the GPT-3 language model that undergirds ChatGPT; ChatGPT applies a layer of Reinforcement Learning from Human Feedback (RLHF) to create a new model that is presented in an intuitive chat interface with some degree of memory (which is achieved by resending previous chat interactions along with the new prompt).
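That parenthetical is worth unpacking, because the “memory” is just prompt construction; here is a minimal sketch against OpenAI’s text completion API of the time (the wrapper and prompt format are illustrative, not OpenAI’s actual serving code):

    # Chat "memory" via resending the transcript with every new prompt.
    import openai

    openai.api_key = "sk-..."   # your API key
    history = []                # the running transcript

    def chat(user_message: str) -> str:
        history.append(f"User: {user_message}")
        prompt = "\n".join(history) + "\nAssistant:"   # resend everything so far
        resp = openai.Completion.create(
            model="text-davinci-003", prompt=prompt, max_tokens=256, stop=["User:"]
        )
        reply = resp.choices[0].text.strip()
        history.append(f"Assistant: {reply}")          # the model sees this next turn
        return reply

    chat("Who was Thomas Hobbes?")
    chat("Did he believe in separation of powers?")    # context carries over

The cost of this approach is that every turn resends the entire conversation, which is one reason long sessions get expensive.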

What has been fascinating to watch over the weekend is how those refinements have led to an explosion of interest in OpenAI’s capabilities and a burgeoning awareness of AI’s impending impact on society, despite the fact that the underlying model is the two-year-old GPT-3. The critical factor is, I suspect, that ChatGPT is easy to use, and it’s free: it is one thing to read examples of AI output, like we saw when GPT-3 was first released; it’s another to generate those outputs yourself; indeed, there was a similar explosion of interest and awareness when Midjourney made AI-generated art easy and free (and that interest has taken another leap this week with an update to Lensa AI to include Stable Diffusion-driven magic avatars).

More broadly, this is a concrete example of the point former GitHub CEO Nat Friedman made to me in a Stratechery interview about the paucity of real-world AI applications beyond Github Copilot:

I left GitHub thinking, “Well, the AI revolution’s here and there’s now going to be an immediate wave of other people tinkering with these models and developing products”, and then there kind of wasn’t and I thought that was really surprising. So the situation that we’re in now is the researchers have just raced ahead and they’ve delivered this bounty of new capabilities to the world in an accelerating way, they’re doing it every day. So we now have this capability overhang that’s just hanging out over the world and, bizarrely, entrepreneurs and product people have only just begun to digest these new capabilities and to ask the question, “What’s the product you can now build that you couldn’t build before that people really want to use?” I think we actually have a shortage.

Interestingly, I think one of the reasons for this is because people are mimicking OpenAI, which is somewhere between the startup and a research lab. So there’s been a generation of these AI startups that style themselves like research labs where the currency of status and prestige is publishing and citations, not customers and products. We’re just trying to, I think, tell the story and encourage other people who are interested in doing this to build these AI products, because we think it’ll actually feed back to the research world in a useful way.

OpenAI has an API that startups could build products on; a fundamental limiting factor, though, is cost: generating around 750 words using Davinci, OpenAI’s most powerful language model, costs 2 cents; fine-tuning the model, with RLHF or anything else, costs a lot of money, and producing results from that fine-tuned model is 12 cents for ~750 words. Perhaps it is no surprise, then, that it was OpenAI itself that came out with the first widely accessible and free (for now) product using its latest technology; the company is certainly getting a lot of feedback for its research!
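To put those prices in context, here is some back-of-the-envelope arithmetic; the per-750-word rates come straight from the pricing above, while the volume is an invented example:

BASE_RATE = 0.02         # dollars per ~750 words with base Davinci
FINE_TUNED_RATE = 0.12   # dollars per ~750 words with a fine-tuned model

def cost(words, fine_tuned=False):
    rate = FINE_TUNED_RATE if fine_tuned else BASE_RATE
    return words / 750 * rate

# A hypothetical product serving 100,000 ~750-word responses per day:
daily_words = 750 * 100_000
print(f"Base model:  ${cost(daily_words):,.0f}/day")                   # $2,000
print(f"Fine-tuned:  ${cost(daily_words, fine_tuned=True):,.0f}/day")  # $12,000

At that scale the marginal costs are very real, which is why “free (for now)” is doing a lot of work in the previous paragraph.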

OpenAI has been the clear leader in terms of offering API access to AI capabilities; what is fascinating about ChatGPT is that it establishes OpenAI as a leader in terms of consumer AI products as well, along with Midjourney. The latter has monetized consumers directly, via subscriptions; it’s a business model that makes sense for something that has real marginal costs in terms of GPU time, even if it limits exploration and discovery. That is where advertising has always shined: of course you need a good product to drive consumer usage, but being free is a major factor as well, and text generation may end up being a better match for advertising, given that its utility — and thus opportunity to collect first party data — is likely going to be higher than image generation for most people.

Deterministic vs. Probabilistic

It is an open question as to what jobs will be the first to be disrupted by AI; what became obvious to a bunch of folks this weekend, though, is that there is one universal activity that is under serious threat: homework.

Go back to the example of my daughter I noted above: who hasn’t had to write an essay about a political philosophy, or a book report, or any number of topics that are, for the student assigned to write said paper, theoretically new, but in terms of the world generally, simply a regurgitation of what has been written a million times before? Now, though, you can write something “original” from the regurgitation, and, for at least the next few months, you can do it for free.

The obvious analogy to what ChatGPT means for homework is the calculator: instead of doing tedious math calculations students could simply punch in the relevant numbers and get the right answer, every time; teachers adjusted by making students show their work.

That requirement, though, also shows why AI-generated text is something completely different; calculators are deterministic devices: if you calculate 4,839 + 3,948 - 45 you get 8,742, every time. That’s also why it is a sufficient remedy for teachers to require that students show their work: there is one path to the right answer, and demonstrating the ability to walk down that path is more important than getting the final result.

AI output, on the other hand, is probabilistic: ChatGPT doesn’t have any internal record of right and wrong, but rather a statistical model about what bits of language go together under different contexts. The base of that context is the overall corpus of data that GPT-3 is trained on, along with additional context from ChatGPT’s RLHF training, as well as the prompt and previous conversations, and, soon enough, feedback from this week’s release. This can result in some truly mind-blowing results, like this Virtual Machine inside ChatGPT:

Did you know, that you can run a whole virtual machine inside of ChatGPT?

Making a virtual machine in ChatGPT

Great, so with this clever prompt, we find ourselves inside the root directory of a Linux machine. I wonder what kind of things we can find here. Let’s check the contents of our home directory.

Making a virtual machine in ChatGPT

Hmmm, that is a bare-bones setup. Let’s create a file here.

Making a virtual machine in ChatGPT

All the classic jokes ChatGPT loves. Let’s take a look at this file.

Making a virtual machine in ChatGPT

So, ChatGPT seems to understand how filesystems work, how files are stored and can be retrieved later. It understands that linux machines are stateful, and correctly retrieves this information and displays it.

What else do we use computers for. Programming!

Making a virtual machine in ChatGPT

That is correct! How about computing the first 10 prime numbers:

Making a virtual machine in ChatGPT

That is correct too!

I want to note here that this codegolf python implementation to find prime numbers is very inefficient. It takes 30 seconds to evaluate the command on my machine, but it only takes about 10 seconds to run the same command on ChatGPT. So, for some applications, this virtual machine is already faster than my laptop.

The difference is that ChatGPT is not actually running python and determining the first 10 prime numbers deterministically: every answer is a probabilistic result gleaned from the corpus of Internet data that makes up GPT-3; in other words, ChatGPT comes up with its best guess as to the result in 10 seconds, and that guess is so likely to be right that it feels like it is an actual computer executing the code in question.
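For contrast, here is what a deterministic answer to the same question looks like; run this a million times and it returns the same ten primes every time, a guarantee no statistical pattern matcher can make:

# Deterministically compute the first n primes by trial division.
def first_primes(n):
    primes = []
    candidate = 2
    while len(primes) < n:
        # candidate is prime iff no smaller prime divides it evenly.
        if all(candidate % p for p in primes):
            primes.append(candidate)
        candidate += 1
    return primes

print(first_primes(10))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]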

This raises fascinating philosophical questions about the nature of knowledge; you can also simply ask ChatGPT for the first 10 prime numbers:

ChatGPT listing the first 10 prime numbers

Those weren’t calculated, they were simply known; they were known, though, because they were written down somewhere on the Internet. In contrast, notice how ChatGPT messes up the far simpler equation I mentioned above:

ChatGPT doing math wrong

For what it’s worth, I had to work a little harder to make ChatGPT fail at math: the base GPT-3 model gets basic three-digit addition wrong most of the time, while ChatGPT does much better. Still, this obviously isn’t a calculator: it’s a pattern matcher — and sometimes the pattern gets screwy. The skill here is in catching it when it gets it wrong, whether that be with basic math or with basic political theory.
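To make that “pattern matcher” framing concrete, here is a toy sketch of next-token sampling; the vocabulary and probabilities are invented for illustration, but the mechanism is the real one: the model produces a distribution over what comes next and samples from it, which is why the same prompt can yield different answers on different runs:

import random

# An invented next-token distribution for a prompt like "Hobbes argued for...";
# a real model assigns probabilities over tens of thousands of tokens.
next_token_probs = {
    "absolutism": 0.55,   # right
    "separation": 0.30,   # plausible-sounding but wrong
    "anarchy": 0.15,      # also wrong
}

def sample(probs):
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights)[0]

# Unlike a calculator, repeated runs can disagree:
print([sample(next_token_probs) for _ in range(5)])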

Interrogating vs. Editing

There is one site already on the front lines of dealing with the impact of ChatGPT: Stack Overflow. Stack Overflow is a site where developers can ask questions about their code or get help in dealing with various development issues; the answers are often code themselves. I suspect this makes Stack Overflow a goldmine for GPT’s models: there is a description of the problem, and adjacent to it code that addresses that problem. The issue, though, is that the correct code comes from experienced developers answering questions and having those questions upvoted by other developers; what happens if ChatGPT starts being used to answer questions?

It appears it’s a big problem; from Stack Overflow Meta:

Use of ChatGPT generated text for posts on Stack Overflow is temporarily banned.

This is a temporary policy intended to slow down the influx of answers created with ChatGPT. What the final policy will be regarding the use of this and other similar tools is something that will need to be discussed with Stack Overflow staff and, quite likely, here on Meta Stack Overflow.

Overall, because the average rate of getting correct answers from ChatGPT is too low, the posting of answers created by ChatGPT is substantially harmful to the site and to users who are asking or looking for correct answers.

The primary problem is that while the answers which ChatGPT produces have a high rate of being incorrect, they typically look like they might be good and the answers are very easy to produce. There are also many people trying out ChatGPT to create answers, without the expertise or willingness to verify that the answer is correct prior to posting. Because such answers are so easy to produce, a large number of people are posting a lot of answers. The volume of these answers (thousands) and the fact that the answers often require a detailed read by someone with at least some subject matter expertise in order to determine that the answer is actually bad has effectively swamped our volunteer-based quality curation infrastructure.

As such, we need the volume of these posts to reduce and we need to be able to deal with the ones which are posted quickly, which means dealing with users, rather than individual posts. So, for now, the use of ChatGPT to create posts here on Stack Overflow is not permitted. If a user is believed to have used ChatGPT after this temporary policy is posted, sanctions will be imposed to prevent users from continuing to post such content, even if the posts would otherwise be acceptable.

There are a few fascinating threads to pull on here. One is about the marginal cost of producing content: Stack Overflow is built on user-generated content; that means it gets its content for free, because its users generate it themselves, motivated by helpfulness, generosity, status, etc. This is uniquely enabled by the Internet.

AI-generated content is a step beyond that: it does cost money, especially today (OpenAI is bearing those costs for now, and they’re substantial), but in the very long run you can imagine a world where content generation is free not only from the perspective of the platform, but also in terms of users’ time; imagine starting a new forum or chat group, for example, with an AI that instantly provides “chat liquidity”.

For now, though, probabilistic AIs seem to be on the wrong side of the Stack Overflow interaction model: whereas deterministic computing like that represented by a calculator provides an answer you can trust, the best use of AI today — and, as Noah Smith and roon argue, the future — is providing a starting point you can correct:

What’s common to all of these visions is something we call the “sandwich” workflow. This is a three-step process. First, a human has a creative impulse, and gives the AI a prompt. The AI then generates a menu of options. The human then chooses an option, edits it, and adds any touches they like.

The sandwich workflow is very different from how people are used to working. There’s a natural worry that prompting and editing are inherently less creative and fun than generating ideas yourself, and that this will make jobs more rote and mechanical. Perhaps some of this is unavoidable, as when artisanal manufacturing gave way to mass production. The increased wealth that AI delivers to society should allow us to afford more leisure time for our creative hobbies…

We predict that lots of people will just change the way they think about individual creativity. Just as some modern sculptors use machine tools, and some modern artists use 3d rendering software, we think that some of the creators of the future will learn to see generative AI as just another tool – something that enhances creativity by freeing up human beings to think about different aspects of the creation.

In other words, the role of the human in terms of AI is not to be the interrogator, but rather the editor.
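Here is a minimal sketch of that sandwich workflow; generate, choose, and edit are hypothetical stand-ins for the model call and the two human steps:

# The "sandwich" workflow: human prompt, AI menu, human selection and edit.
def sandwich(prompt, generate, choose, edit, n_options=4):
    options = [generate(prompt) for _ in range(n_options)]  # AI: a menu of drafts
    draft = choose(options)                                 # human: pick one
    return edit(draft)                                      # human: final touches

The creative impulse and the final judgment both stay human; the AI fills in the expensive middle.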

Zero Trust Homework

Here’s an example of what homework might look like under this new paradigm. Imagine that a school acquires an AI software suite that students are expected to use for their answers about Hobbes or anything else; every answer that is generated is recorded so that teachers can instantly ascertain that students didn’t use a different system. Moreover, instead of futilely demanding that students write essays themselves, teachers insist on AI. Here’s the thing, though: the system will frequently give the wrong answers (and not just by accident — wrong answers will often be pushed out on purpose); the real skill in the homework assignment will be in verifying the answers the system churns out — learning how to be a verifier and an editor, instead of a regurgitator.
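As a sketch of what such a suite might do — every name here is hypothetical, a thought experiment rather than any real product — the two key pieces are an audit log of every generation and the deliberate corruption of some answers, so that verification is the actual assignment:

import random

# A hypothetical "zero trust homework" system: every AI answer is logged
# for the teacher, and some answers are deliberately corrupted so the
# student's real task is verification, not regurgitation.
class HomeworkAI:
    def __init__(self, generate, corrupt, error_rate=0.3):
        self.generate = generate      # stand-in for a model API call
        self.corrupt = corrupt        # stand-in for an error injector
        self.error_rate = error_rate
        self.log = []                 # teacher-visible audit trail

    def answer(self, student_id, question):
        response = self.generate(question)
        is_corrupted = random.random() < self.error_rate
        if is_corrupted:
            response = self.corrupt(response)
        self.log.append((student_id, question, response, is_corrupted))
        return response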

What is compelling about this new skillset is that it isn’t simply a capability that will be increasingly important in an AI-dominated world: it’s a skillset that is incredibly valuable today. After all, it is not as if the Internet, even when its content is generated by humans and not AI, is “right”; indeed, one analogy for ChatGPT’s output is the sort of poster we are all familiar with who asserts things authoritatively regardless of whether or not they are true. Verifying and editing is an essential skillset right now for every individual.

It’s also the only systematic response to Internet misinformation that is compatible with a free society. Shortly after the onset of COVID I wrote Zero Trust Information that made the case that the only solution to misinformation was to adopt the same paradigm behind Zero Trust Networking:

The answer is to not even try: instead of trying to put everything inside of a castle, put everything in the castle outside the moat, and assume that everyone is a threat. Thus the name: zero-trust networking.

A drawing of Zero Trust Networking

In this model trust is at the level of the verified individual: access (usually) depends on multi-factor authentication (such as a password and a trusted device, or temporary code), and even once authenticated an individual only has access to granularly-defined resources or applications…In short, zero trust computing starts with Internet assumptions: everyone and everything is connected, both good and bad, and leverages the power of zero transaction costs to make continuous access decisions at a far more distributed and granular level than would ever be possible when it comes to physical security, rendering the fundamental contradiction at the core of castle-and-moat security moot.
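A minimal sketch of that per-request decision, with all types and fields hypothetical, might look like this; the point is that nothing is trusted because of where it sits on the network, and access is evaluated on identity, device, and resource every time:

from dataclasses import dataclass

@dataclass
class Device:
    trusted: bool            # known, managed device

@dataclass
class User:
    authenticated: bool
    granted_resources: set   # granular, per-resource grants

def allow_request(user: User, device: Device, resource: str, mfa_verified: bool) -> bool:
    # Identity must be proven on every request, not once at a perimeter.
    if not user.authenticated or not mfa_verified:
        return False
    # A trusted device is required regardless of network location.
    if not device.trusted:
        return False
    # Access is granted per-resource, never to "the network" as a whole.
    return resource in user.granted_resources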

I argued that young people were already adapting to this new paradigm in terms of misinformation:

To that end, instead of trying to fight the Internet — to try and build a castle and moat around information, with all of the impossible tradeoffs that result — how much more value might there be in embracing the deluge? All available evidence is that young people in particular are figuring out the importance of individual verification; for example, this study from the Reuters Institute at Oxford:

We didn’t find, in our interviews, quite the crisis of trust in the media that we often hear about among young people. There is a general disbelief at some of the politicised opinion thrown around, but there is also a lot of appreciation of the quality of some of the individuals’ favoured brands. Fake news itself is seen as more of a nuisance than a democratic meltdown, especially given that the perceived scale of the problem is relatively small compared with the public attention it seems to receive. Users therefore feel capable of taking these issues into their own hands.

A previous study by Reuters Institute also found that social media exposed more viewpoints relative to offline news consumption, and another study suggested that political polarization was greatest amongst older people who used the Internet the least.

Again, this is not to say that everything is fine, either in terms of the coronavirus in the short term or social media and unmediated information in the medium term. There is, though, reason for optimism, and a belief that things will get better, the more quickly we embrace the idea that fewer gatekeepers and more information means innovation and good ideas in proportion to the flood of misinformation which people who grew up with the Internet are already learning to ignore.

The biggest mistake in that article was the assumption that the distribution of information is a normal one; in fact, as I noted in Defining Information, there is a lot more bad information for the simple reason that it is cheaper to generate. Now the deluge of information is going to become even greater thanks to AI, and while it will often be true, it will sometimes be wrong, and it will be important for individuals to figure out which is which.

The solution will be to start with Internet assumptions, which means abundance, and choosing Locke and Montesquieu over Hobbes: instead of insisting on top-down control of information, embrace abundance, and entrust individuals to figure it out. In the case of AI, don’t ban it for students — or anyone else for that matter; leverage it to create an educational model that starts with the assumption that content is free and the real skill is editing it into something true or beautiful; only then will it be valuable and reliable.

I wrote a follow-up to this Article in this Daily Update.

Narratives

Two pieces of news dominated the tech industry last week: Elon Musk and Twitter, and Sam Bankman-Fried and FTX. Both showed how narratives can lead people astray. Another piece of news, though, flew under the radar: yet another development in AI, which is a reminder that the only narratives that last are rooted in product.

Twitter and the Wrong Narrative

I did give Elon Musk the benefit of the doubt.

Back in 2016 I wrote It’s a Tesla, marveling at the way Musk had built a brand that extended far beyond a mere car company; what was remarkable about Musk’s approach is that said brand was a prerequisite to Tesla, in contrast to a company like Apple, the obvious analog as far as customer devotion goes. From 2021’s Mistakes and Memes:

This comparison works as far as it goes, but it doesn’t tell the entire story: after all, Apple’s brand was derived from decades building products, which had made it the most profitable company in the world. Tesla, meanwhile, always seemed to be weeks from going bankrupt, at least until it issued ever more stock, strengthening the conviction of Tesla skeptics and shorts. That, though, was the crazy thing: you would think that issuing stock would lead to Tesla’s stock price slumping; after all, existing shares were being diluted. Time after time, though, Tesla announcements about stock issuances would lead to the stock going up. It didn’t make any sense, at least if you thought about the stock as representing a company.

It turned out, though, that TSLA was itself a meme, one about a car company, but also sustainability, and most of all, about Elon Musk himself. Issuing more stock was not diluting existing shareholders; it was extending the opportunity to propagate the TSLA meme to that many more people, and while Musk’s haters multiplied, so did his fans. The Internet, after all, is about abundance, not scarcity. The end result is that instead of infrastructure leading to a movement, a movement, via the stock market, funded the building out of infrastructure.

TSLA is not at the level it was during the heights of the bull market, but Tesla is a real company, with real cars, and real profits; last quarter the electric car company made more money than Toyota (thanks in part to a special charge for Toyota; Toyota’s operating profit was still greater). SpaceX is a real company, with real rockets that land on real rafts, and while the company is not yet profitable, there is certainly a viable path to making money; the company’s impact on both humanity’s long-term potential and the U.S.’s national security is already profound.

Twitter, meanwhile, is a real product that has largely failed as a company; I wrote earlier this year when Musk first made a bid:

Twitter has, over 19 different funding rounds (including pre-IPO, IPO, and post-IPO), raised $4.4 billion in funding; meanwhile the company has lost a cumulative $861 million in its lifetime as a public company (i.e. excluding pre-IPO losses). During that time the company has held 33 earnings calls; the company reported a profit in only 14 of them.

Given this financial performance it is kind of amazing that the company was valued at $30 billion the day before Musk’s investment was revealed; such is the value of Twitter’s social graph and its cultural impact: despite there being no evidence that Twitter can even be sustainably profitable, much less return billions of dollars to shareholders, hope springs eternal that the company is on the verge of unlocking its potential. At the same time, these three factors — Twitter’s financials, its social graph, and its cultural impact — get at why Musk’s offer to take Twitter private is so intriguing.

Stop right there: can you see where I opened the door for an error of omission as far as my analysis is concerned? Yes, Musk has successfully built two companies, and yes, Twitter is not a successful company; what followed in that Article, though, was my own vision of what Twitter might become. I should have taken the time to think more critically about Musk’s vision…which doesn’t appear to exist.

Oh sure, Musk and his coterie of advisors have narratives: bots are bad and blue checks are about status. And, to be fair, both are true as far as they go. The problem with bots is self-explanatory, while those who actually need blue checks — brands, celebrities, and reliable news breakers — likely care about them the least; the rest of us were happy to get our checkmarks just because they made us feel special, despite there being no real risk of anyone impersonating us in any damaging way (speaking for myself anyway: I don’t much care about it now, but I was pretty delighted when I got mine back in 2014 or so).

Of course Musk felt these problems more acutely than most: his high profile, active usage of Twitter, and popularity in crypto communities meant Musk’s tweets were the most likely place to encounter bots on the service; meanwhile Musk’s own grievances with journalists generally could, one imagines, engender a certain antipathy for “Bluechecks”, given that the easiest way to get one was to work for a media organization. The problem, though, is that Musk’s Twitter experience — thought to be an asset, including by yours truly — isn’t really relevant to the actual day-to-day reality of the site as experienced by Twitter’s actual users.

And so we got last week’s verified disaster, where Musk could have his revenge on bluechecks by selling them to everyone, with the most enthusiastic buyers being those eager to impersonate brands, celebrities, and Musk himself. It was certainly funny, and I believe Musk’s claim that Twitter usage was off the charts, but it wasn’t a particularly prudent move for a company reliant on brand advertising in the middle of an economic slowdown.

This is not, to be clear, to criticize Musk for acting, or even for acting quickly: Twitter needed a kick in the pants (and, even had the company not been sold, was almost certainly in line for significant layoffs), and it’s understandable that mistakes will be made; the point of rapid iteration is to learn more quickly, which is to say that Twitter has, for years, not been learning very much at all. Rather, what was concerning about this mistake in particular is the degree to which it was so clearly rooted in Musk’s personal grievances, which (1) were knowable before he acted and (2) were not the biggest problems facing Twitter. That was knowable by me as an analyst, and I regret not pointing them out.

Indeed, these aren’t the only Musk narratives that have bothered me; here is his letter to advertisers posted on his first day on the job:

I wanted to reach out personally to share my motivation in acquiring Twitter. There has been much speculation about why I bought Twitter and what I think about advertising. Most of it has been wrong.

The reason I acquired Twitter is because it is important to the future of civilization to have a common digital town square, where a wide range of beliefs can be debated in a healthy manner, without resorting to violence. There is currently great danger that social media will splinter into far right wing and far left wing echo chambers that generate more hate and divide our society.

In the relentless pursuit of clicks, much of traditional media has fueled and catered to those polarized extremes, as they believe that is what brings in the money, but, in doing so, the opportunity for dialogue is lost.

This is why I bought Twitter. I didn’t do it because it would be easy. I didn’t do it to make more money. I did it to try to help humanity, whom I love. And I do so with humility, recognizing that failure in pursuing this goal, despite our best efforts, is a very real possibility.

That said, Twitter obviously cannot become a free-for-all hellscape, where anything can be said with no consequences! In addition to adhering to the laws of the land, our platform must be warm and welcoming to all, where you can choose your desired experience according to your preferences, just as you can choose, for example, to see movies or play video games ranging from all ages to mature.

I also very much believe that advertising, when done right, can delight, entertain and inform you; it can show you a service or product or medical treatment that you never knew existed, but is right for you. For this to be true, it is essential to show Twitter users advertising that is as relevant as possible to their needs. Low relevancy ads are spam, but highly relevant ads are actually content!

Fundamentally, Twitter aspires to be the most respected advertising platform in the world that strengthens your brand and grows your enterprise. To everyone who has partnered with us, I thank you. Let us build something extraordinary together.

All of this sounds good, and on closer examination is mostly wrong. Obviously relevant ads are better, but Twitter’s problem is not just poor execution in terms of its ad product but also that it’s a terrible place for ads. I do agree that giving users more control is a better approach to content moderation, but the obsession with the doing-it-for-the-clicks narrative ignores the nichification of the media. And, when it comes to the good of humanity, I think the biggest learning from Twitter is that putting together people who disagree with each other is actually a terrible idea; yes, it is why Twitter will never be replicated, but also why it has likely been a net negative for society. The digital town square is the Internet broadly; Twitter is more akin to a digital cage match, perhaps best monetized on a pay-per-view basis.

In short, it seems clear that Musk has the wrong narrative, and that’s going to mean more mistakes. And, for my part, I should have noted that sooner.

FTX and the Diversionary Narrative

Eric Newcomer wrote on Twitter with regard to the FTX blow-up:1

There are a few different ways to interpret Sam Bankman-Fried’s political activism:2

  • That he believed in the causes he supported sincerely and made a mistake with his business.
  • That he supported the causes cynically as a way to curry favor and hide his fraud.
  • That he believed he was some sort of savior gripped with an ends-justify-the-means mindset that led him to believe fraud was actually the right course of action.

In the end, whichever explanation is true doesn’t really matter: the real world impact was that customers lost around $10 billion in assets, and counting. What is interesting is that all of the explanations are an outgrowth of the view that business ought to be about more than business: to simply want to make money is somehow wrong; business is only good insofar as it is dedicated to furthering goals that don’t have anything to do with the business in question.

To put it another way, there tends to be cynicism about the idea of changing the world by building a business; entrepreneurs are judged by whether their intentions beyond business are sufficiently large and politically correct. That, though, is precisely why Bankman-Fried was viewed with such credulousness: he had the “right” ambitions and the “right” politics, so of course he was running the “right” business; he wasn’t one of those “true believers” who simply wanted to get rich off of blockchains.

In the end, though, the person who arguably comes out of this disaster looking the best is Changpeng Zhao (CZ), the founder and CEO of Binance, and the person whose tweet started the run that revealed FTX’s insolvency.3 No one, as far as I know, holds up CZ as any sort of activist or political actor for anything outside of crypto; isn’t that better? Perhaps had Bankman-Fried done nothing but run Alameda Research and FTX there would have been more focus on his actual business; too many folks, though, including journalists and venture capitalists, were too busy looking at things everyone claims are important but which were, in the end, a diversion from massive fraud.

Crypto and the Theory Narrative

I never wrote about Bankman-Fried; for what it’s worth, I always found his narrative suspect (this isn’t a brag, as I will explain in a moment). More broadly, I never wrote much about cryptocurrency-based financial applications either, beyond this article which was mostly about Bitcoin,4 and this article that argued that digital currencies didn’t make sense in the physical world but had utility in a virtual one.5 This was mostly a matter of uncertainty: yes, many of the financial instruments on exchanges like FTX were modeled on products that were first created on Wall Street, but at the end of the day Wall Street is undergirded by actual companies building actual products (and even then, things can quite obviously go sideways). Cryptocurrency financial applications were undergirded by electricity and collective belief, and nothing more, and yet so many smart people seemed on board.

What I did write about was Technological Revolutions and the possibility that crypto was the birth of something new; why Aggregation Theory would apply to crypto not despite, but because, of its decentralization; and Internet 3.0 and the theory that political considerations would drive decentralization. That last Article was explicitly not about cryptocurrencies, but it certainly fit the general crypto narrative of decentralization being an important response to the increased centralization of the Web 2.0 era.

What was weird in retrospect is that the Internet 3.0 Article was written a week after the article about Aggregation Theory and OpenSea, where I wrote:

One of the reasons that crypto is so interesting, at least in a theoretical sense, is that it seems like a natural antidote to Aggregators; I’ve suggested as such. After all, Aggregators are a product of abundance; scarcity is the opposite. The OpenSea example, though, is a reminder that I have forgotten one of my own arguments about Aggregators: demand matters more than supply…What is striking is that the primary way that most users interact with Web 3 are via centralized companies like Coinbase and FTX on the exchange side, Discord for communication and community, and OpenSea for NFTs. It is also not a surprise: centralized companies deliver a better user experience, which encompasses everything from UI to security to at least knocking down the value of your stolen assets on your behalf; a better user experience leads to more users, which increases power over supply, further enhancing the user experience, in the virtuous cycle described by Aggregation Theory.

That Aggregation Theory applies to Web 3 is not some sort of condemnation of the idea; it is, perhaps, a challenge to the insistence that crypto is something fundamentally different than the web. That’s fine — as I wrote before the break, the Internet is already pretty great, and its full value is only just starting to be exploited. And, as I argued in The Great Bifurcation, the most likely outcome is that crypto provides a useful layer on what already exists, as opposed to replacing it.

Of the three Articles I listed, this one seems to be the most correct, and I think the reason is obvious: that was the only Article written about an actual product — OpenSea — while the other ones were about theory and narrative. When that narrative was likely wrong — that crypto is the foundation of a new technological revolution, for example — then the output that resulted was wrong, not unlike Musk’s wrong narrative leading to major mistakes at Twitter.

What I regret more, though, was keeping quiet about my uncertainty about what exactly all of these folks were creating these complex financial products out of: here I suffered from my own diversionary narrative, paying too much heed to the reputation and viewpoint of people certain that there was a there there, instead of being honest that while I could see the utility of a blockchain as a distributed-but-very-slow database, all of these financial instruments seemed to be based on, well, nothing.

The FTX case is not, technically speaking, about cryptocurrency utility; it is a pretty straightforward case of fraud. Moreover, it was, as I noted in passing in that OpenSea article, a problem of centralization, as opposed to true DeFi. Such disclaimers do, though, have a whiff of “communism just hasn’t been done properly”: I already made the case that centralization is an inevitability at scale, and in terms of utility, that’s the entire problem. An entire financial ecosystem with a void in terms of underlying assets may not be fraud in a legal sense, but it sure seems fraudulent in terms of intrinsic value. I am disappointed in myself for not saying so before.

AI and the Product Narrative

Peter Thiel said in a 2018 debate with Reid Hoffman:

One axis that I am struck by is the centralization versus decentralization axis…for example, two of the areas of tech that people are very excited about in Silicon Valley today are crypto on the one hand and AI on the other. Even though I think these things are under-determined, I do think these two map in a way politically very tightly on this centralization-decentralization thing. Crypto is decentralizing, AI is centralizing, or if you want to frame it more ideologically, you could say crypto is libertarian, and AI is communist…

AI is communist in the sense it’s about big data, it’s about big governments controlling all the data, knowing more about you than you know about yourself, so a bureaucrat in Moscow could in fact set the prices of potatoes in Leningrad and hold the whole system together. If you look at the Chinese Communist Party, it loves AI and it hates crypto, so it actually fits pretty closely on that level, and I think that’s a purely technological version of this debate. There probably are ways that AI could be libertarian and there are ways that crypto could be communist, but I think that’s harder to do.

This is a narrative that makes all kinds of sense in theory; I just noted, though, that my crypto Article that holds up the best is based on a realized product, and my takeaway was the opposite: crypto in practice and at scale tends towards centralization. What has been an even bigger surprise, though, is the degree to which it is AI that appears to have the potential for far more decentralization than anyone thought. I wrote earlier this fall in The AI Unbundling:

This, by extension, hints at an even more surprising takeaway: the widespread assumption — including by yours truly — that AI is fundamentally centralizing may be mistaken. If not just data but clean data was presumed to be a prerequisite, then it seemed obvious that massively centralized platforms with the resources to both harvest and clean data — Google, Facebook, etc. — would have a big advantage. This, I would admit, was also a conclusion I was particularly susceptible to, given my focus on Aggregation Theory and its description of how the Internet, contrary to initial assumptions, leads to centralization.

The initial roll-out of large language models seemed to confirm this point of view: the two most prominent large language models have come from OpenAI and Google; while both describe how their text (GPT and GLaM, respectively) and image (DALL-E and Imagen, respectively) generation models work, you either access them through OpenAI’s controlled API, or in the case of Google don’t access them at all. But then came this summer’s unveiling of the aforementioned Midjourney, which is free to anyone via its Discord bot. An even bigger surprise was the release of Stable Diffusion, which is not only free, but also open source — and the resultant models can be run on your own computer…

What is important to note, though, is the direction of each project’s path, not where they are in the journey. To the extent that large language models (and I should note that while I’m focusing on image generation, there are a whole host of companies working on text output as well) are dependent not on carefully curated data, but rather on the Internet itself, is the extent to which AI will be democratized, for better or worse.

Just as the theory of crypto was decentralization but the product manifestation tended towards centralization, the theory of AI was centralization but a huge amount of the product excitement over the last few months has been decentralized and open source. This does, in retrospect, make sense: the malleability of software, combined with the free corpus of data that is the Internet, is much more accessible and flexible than blockchains that require network effects to be valuable, and where a single coding error results in the loss of money.
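As a concrete marker of that open source decentralization, running Stable Diffusion on your own GPU is, as of late 2022, a few lines with Hugging Face’s diffusers library; this is a sketch, and the model id and defaults may change:

# A sketch of running Stable Diffusion locally via Hugging Face's diffusers
# library; model id and API details are current as of late 2022.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # openly downloadable weights
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

image = pipe("a paperboy on a bike, golden hour").images[0]
image.save("paperboy.png")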

The relevance to this Article and introspection, though, is that this realization about AI is rooted in a product-based narrative, not theory. To that end, the third piece of news that happened last week was the release of Midjourney V4; the jump in quality and coherence is remarkable, even if the Midjourney aesthetic that was a hallmark of V3 is less distinct. Here is the image I used in The AI Unbundling, and a new version made with V4:

"Paperboy on a bike" with Midjourney V3 and V4

One of the things I found striking about my interview with MidJourney founder and CEO David Holz was how Midjourney came out of a process of exploration and uncertainty:

I had this goal, which was we needed to somehow create a more imaginative world. I mean, one of the biggest risks in the world I think is a collapse in belief, a belief in ourselves, a belief in the future. And part of that I think comes from a lack of imagination, a lack of imagination of what we can be, lack of imagination of what the future can be. And so this imagination thing I think is an important pillar of something that we need in the world. And I was thinking about this and I saw this, I’m like, “I can turn this into a force that can expand the imagination of the human species.” It was what we put on our company thing now. And that felt realistic. So that was really exciting.

Well, your prompt is, “/Imagine”, which is perfect.

So that was kind of the vision. But I mean, there is a lot of stuff we didn’t know. We didn’t know, how do people interact with this? What do they actually want out of it? What is the social thing? What is that? And there’s a lot of things. What are the mechanisms? What are the interfaces? What are the components that you build these experiences through? And so we kind of just have to go into that without too many opinions and just try things. And I kind of used a lot of lessons from Leap here, which was that instead of trying to go in and design a whole experience out of nothing, presupposing that you can somehow see 10 steps into the future, just make a bunch of things and see what’s cool and what people like. And then take a few of those and put them together.

It’s amazing how you try 10 things and you find the three coolest pieces, and you put them together, it feels like a lot more than three things. It kind of multiplies out in complexity and detail and it feels like it has depth, even though it doesn’t seem like a lot. And so yeah, there’s something magic about finding three cool things and then starting to build a product out of that.

In the end, the best way of knowing is starting by consciously not-knowing. Narratives are tempting but too often they are wrong, a diversion, or based on theory without any tether to reality. Narratives that are right, on the other hand, follow from products, which means that if you want to control the narrative in the long run, you have to build the product first, whether that be a software product, a publication, or a company.

That does leave open the question of Musk, and the way he seemed to meme Tesla into existence, while building a rocket ship on the side. I suspect the distinction is that both companies are rooted in the physical world: physics has a wonderful grounding effect on the most fantastical of narratives. Digital services like Twitter, though, built as they are on infinitely malleable software, are ultimately about people and how they interact with each other. The paradox is that this makes narratives that much more alluring, even — especially! — if they are wrong.


  1. I gave a 15 minute overview of the FTX blow-up on Friday’s Dithering

  2. Beyond the conspiracy theories that he was actually some sort of secret agent sent to destroy crypto, a close cousin of the conspiracy theory that Musk’s goal is to actually destroy Twitter; I mean, you can make a case for both! 

  3. Past performance is no guarantee of future results! 

  4. Given Bitcoin’s performance in a high inflationary environment the argument that it is a legitimate store of value looks quite poor 

  5. TBD 

Stratechery Plus Adds Sharp China with Sinocism’s Bill Bishop

In September I announced Sharp Tech and Stratechery Plus:

I am very pleased to announce the latest addition to the Stratechery Plus bundle: Sharp China with Sinocism’s Bill Bishop:

Sharp China with Sinocism's Bill Bishop

Sharp China with Sinocism’s Bill Bishop is a collaboration between Stratechery and Sinocism. Sharp China is, like Sharp Tech, hosted by Andrew Sharp;1 just as Sharp Tech seeks to provide a better understanding of the tech industry through an engaging and approachable conversational format, Sharp China seeks to do the same with everything China-related, and there is no better person to provide this understanding than Sinocism’s Bill Bishop.

Bill Bishop is an entrepreneur and former media executive with more than a decade’s experience living and working in China. Since leaving Beijing in 2015, he has lived in Washington DC. Bishop previously wrote the Axios China weekly newsletter and the China Insider column for the New York Times DealBook and, in the late 1990s, co-founded MarketWatch.com.

Bishop founded Sinocism in 2012 to provide investors, policymakers, executives, analysts, diplomats, journalists, scholars and others a comprehensive overview of what is happening in China; Bishop reads Chinese fluently, and provides summaries of reports from not just the U.S. but China as well. I personally find Sinocism essential, but what I have always hoped for were more of Bishop’s opinions on the news: I’m excited that Sharp China will give him room for just that.

While Sharp China launched in beta last week for Stratechery Plus and Sinocism subscribers, today we are announcing it to everyone, and making the latest episode about The State of Dynamic Zero-COVID free to listen to. In addition, you can listen to excerpts from the first two shows.

To add the show to your podcast player, please log in to your member account, or listen in Spotify. Sharp China will publish most weeks going forward. You can also email questions for Bill to email@sharpchina.fm; I’ve been really pleased with the mailbag segments of Sharp Tech, and I look forward to listening to them on Sharp China as well.2

Once again, to receive every episode of Sharp China, along with Stratechery Updates and Interviews, Sharp Tech, and Dithering, subscribe to Stratechery Plus. I look forward to continuing to make your subscription more valuable.


  1. Sharp China is the first addition to the Stratechery Plus bundles that I do not personally appear on regularly 

  2. If you have any issues adding Sharp China to your podcast player please email support@stratechery.com