Netflix’s New Chapter

Netflix’s moment of greatest peril is, in retrospect, barely visible in the company’s stock chart:

Netflix's all-time stock chart

I’m referring to 2004-2007 and the company’s battle with Blockbuster:

Netflix's stock during its battle with Blockbuster

The simplified story of Netflix’s founding starts with Reed Hastings grumbling over a $40 late charge from Blockbuster, and ends with the brick-and-mortar giant going bankrupt as customers came to prefer online rentals from Netflix, with streaming providing the final coup de grâce.

Neither is quite right.

The Blockbuster Fight

Netflix was the idea of Marc Randolph, the company's actual founder and first CEO; Randolph was eager to do something in e-commerce, and it was the just-emerging DVD form factor that sold Hastings on the idea. Hastings would fund Randolph's new company and serve as chairman, eventually taking over as CEO once he determined that Randolph was not up to the task of scaling the company.

Blockbuster, meanwhile, mounted a far more serious challenge to Netflix than many people remember; the company started with Blockbuster Online, an entity that was completely separate from Blockbuster’s retail business for reasons of both technology and culture: Blockbuster’s stores were not even connected to the Internet, and store managers and franchisees hated having an online service cannibalize their sales. Still, when a test version went live on July 15, 2004 — the same day as Netflix’s quarterly earnings call — Netflix’s stock suffered its first Blockbuster-inspired plunge.

Three months later Netflix cut prices and pointed to Amazon's presumed imminent entry into the space; Netflix's stock slid again. Hastings, though, said the increased competition and looming price war were actually a good thing. Gina Keating relayed Hastings' view on that quarter's earnings call in Netflixed:

“Look, everyone, I know the Amazon entry is a bitter and surprising pill for those of you that are long in our stock,” he told investors on the earnings conference call. “This is going to be a very large market, and we’re going to execute very hard to make this back for our shareholders, including ourselves.” The $8 billion in U.S. store rentals would pour into online rentals, setting off a grab for subscribers, he said. The ensuing growth of online rentals would cannibalize video stores faster and faster, until they collapsed. As video store revenue dropped sharply, Blockbuster would struggle to fund its online operation, he concluded. “The prize is huge, the stakes high, and we intend to win.”

Blockbuster responded by pricing Blockbuster Online 50 cents cheaper, accelerating Netflix’s stock slide. Netflix, though, knew that Blockbuster was carrying $1 billion in debt from its spin-off from Viacom, and decided to wait it out; Blockbuster cut the price again, taking an increasing share of new subscribers, and still Netflix waited. Again from Keating:

Hastings agonized over whether to drop prices further to meet Blockbuster’s $14.99 holiday price cut, but McCarthy steadfastly objected. With Blockbuster losing even more on every subscriber, relief from its advertising juggernaut was even closer at hand. Kirincich checked his models again—and the outcome was the same. Blockbuster would have to raise prices by summertime. Because Netflix was still growing solidly, McCarthy wanted to sit tight and wait until the inevitable happened. “They can continue to bleed at this rate of $14.99, given the usage patterns that we know exist early in the life of the customer, until the end of the second quarter,” Kirincich told the executives.

Netflix was right:

By summertime [Blockbuster CEO John] Antioco could no longer shield the online program from the company’s financial difficulties. Blockbuster’s financial crisis unfolded just as McCarthy and Kirincich’s models had predicted. The year’s DVD releases had performed woefully so far, and box office revenue — a fair indicator of rental revenue — was down by 5 percent over 2004. It was clear that Blockbuster would miss its earnings targets, meaning that it was in danger of violating its debt covenants. Antioco directed Zine to again press Blockbuster’s creditors for relaxed repayment terms, and broke the news to Evangelist that he would have to suspend marketing spending for a few months, and possibly raise prices to match Netflix’s…

The flood of marketing dollars that Antioco had committed to Blockbuster Online was crucial to keeping subscriber growth clicking along at record rates, and Cooper feared that cutting off that lifeblood would stop the momentum in its tracks. He was disappointed to be right. The result of the deep cuts to marketing was the same as letting up on a throttle. New subscriber additions barely kept up with cancellations, leaving Blockbuster Online treading water after a few weeks. While Netflix had zoomed past three million subscribers in March, Blockbuster had to abandon its goal of signing up two million by year’s end.

Still, Netflix wasn’t yet out of the woods: in 2006 Blockbuster launched Total Access, which let subscribers rent from either online or Blockbuster stores; the stores were still not connected to the Internet, so subscribers received an in-store rental in exchange for returning their online rental, which also triggered a new online rental to be sent to them. In other words, they were getting two rentals every time they visited a store. Customers loved it; Keating again:

Nearly a million new subscribers joined Blockbuster Online in the two months after Total Access launched, and market research showed consumer opinion nearly unanimous on one important point — the promotion was better than anything Netflix had to offer. Hastings figured he had three months before public awareness of Total Access began to pull in 100 percent of new online subscribers to Blockbuster Online, and even to lure away some of Netflix’s loyal subscribers. Hastings had derided Blockbuster Online as “technologically inferior” to Netflix in conversations with Wall Street financial analysts and journalists, and he was right. But the young, hard-driving MBAs running Blockbuster Online from a Dallas warehouse had found the one thing that trumped elegant technology with American consumers — a great bargain.

His momentary and grudging admiration for Antioco for finally figuring out how to use his seven thousand–plus stores to promote Blockbuster Online had turned to panic. The winter holidays, when Netflix normally enjoyed robust growth, turned sour, as Hastings and his executive team—McCarthy, Kilgore, Ross, and chief technology officer Neil Hunt—pondered countermoves.

Netflix would go on to offer to buy Blockbuster Online; Antioco turned the company down, assuming he could get a better price once Netflix’s growth turned upside down. Carl Icahn, though, who owned a major chunk of Blockbuster and had long feuded with Antioco, finally convinced him to resign that very same quarter; Antioco’s replacement took money away from Total Access and funneled it back to the stores, and Netflix escaped (Hastings would later tell Shane Evangelist, the head of Blockbuster Online, that Blockbuster had Netflix in checkmate). Blockbuster went bankrupt three years later.

Netflix’s Competition

I suspect, for the record, that Hastings overstated the situation just a tad; his admission to Evangelist sounds like the words of a gracious winner. The fact of the matter is that Netflix’s analysis of Blockbuster was correct: giving movies away was a great way to grow the business, but a completely unsustainable approach for a company saddled with debt whose core business was in secular decline — thanks in large part to Netflix.

Still, the fact remains that Q2 2007 was one of the few quarters that Netflix ever lost subscribers; it would happen again in 2011, but that would be it until last year, when Netflix’s user base declined two quarters in a row. This time, though, Netflix wasn’t the upstart fighting the brand everyone recognized; it was the dominant player, facing the prospect of saturation and stiff competition as everyone in Hollywood jumped into streaming.

What was surprising at the time was how surprised Netflix itself seemed to be; this is how the company opened the 1Q 2022 Letter to Shareholders:

Our revenue growth has slowed considerably as our results and forecast below show. Streaming is winning over linear, as we predicted, and Netflix titles are very popular globally. However, our relatively high household penetration – when including the large number of households sharing accounts – combined with competition, is creating revenue growth headwinds. The big COVID boost to streaming obscured the picture until recently.

That Netflix would soon be facing saturation was in fact apparent for years; it also shouldn’t have been a surprise that competition from other streaming services, which Netflix finally admitted existed in that same shareholder letter, would be a challenge, at least in the short-term. I wrote in a 2019 Daily Update:

That is not to say that this miss is not reason for concern: Netflix growing into its valuation depends on both increasing subscribers and increasing price, and this last quarter (again) suggests that the former is not inevitable and that the latter is not without cost. And yes, while Netflix may have not yet lost popular shows like Friends and The Office, both were reasons for subscribers to stick around; their exit will make retention in particular that much more difficult.

That will put more pressure on Netflix’s original content: not only must it attract new users, it also has to retain old ones — at least for now. I do think this will be a challenge: I wouldn’t be surprised if the next five years or so are much more challenging for Netflix as far as subscriber growth, and there may very well be a lot of volatility in the stock price (which, to be fair, has always been the case with Netflix).

COVID screwed up the timing: everyone being stuck at home re-ignited Netflix subscriber growth, but the underlying challenges remained, and hit all at once over the last year. That same Daily Update, though, ended with a note of optimism:

Note that time horizon though: as I have argued at multiple points I believe there will be a shakeout in streaming; most content companies simply don’t have the business model or stomach for building a sustainable streaming service, and will eventually go back to licensing their content to the highest bidder, and there Netflix has a massive advantage thanks to the user base it already has. To use an entertainment industry analogy, we are entering the time period of The Empire Strikes Back, but the big difference is that it is Netflix that owns the Death Star.

Fast forward to last fall’s earnings, and Netflix seemed to have arrived at the same conclusion; my biggest takeaway from the company’s pronouncements was the confidence on display, and the reason called back to the battle with Blockbuster. From the company’s Letter to Shareholders:

As it’s become clear that streaming is the future of entertainment, our competitors – including media companies and tech players – are investing billions of dollars to scale their new services. But it’s hard to build a large and profitable streaming business – our best estimate is that all of these competitors are losing money on streaming, with aggregate annual direct operating losses this year alone that could be well in excess of $10 billion, compared with our +$5-$6 billion of annual operating profit. For incumbent entertainment companies, this high level of investment is understandable given the accelerating decline of linear TV, which currently generates the bulk of their profit.

Ultimately though, we believe some of our competitors will seek to build sustainable, profitable businesses in streaming – either on their own or through continued industry consolidation. While it’s early days, we’re starting to see this increased profit focus – with some raising prices for their streaming services, some reining in content spending, and some retrenching around traditional operating models which may dilute their direct-to-consumer offering. Amidst this formidable, diverse set of competitors, we believe our focus as a pure-play streaming business is an advantage. Our aim remains to be the first choice in entertainment, and to continue to build an amazingly successful and profitable business.

The fact that Netflix is now profitable — and, more importantly, generating positive free cash flow — wasn’t the only reason for optimism: Netflix had the good fortune of funding its expansion into content production in the most favorable interest rate environment imaginable; Netflix noted in this past quarter’s Letter to Shareholders:

We don’t have any scheduled debt maturities in FY23 and only $400M of debt maturities in FY24. All of our debt is fixed rate.

That debt totals $14 billion; Warner Bros. Discovery, meanwhile, has $50.4 billion in debt, Disney has $45 billion, Paramount has $15.6 billion, and Comcast, the owner of Peacock, has $90 billion. None of them — again, in contrast to Netflix — are making money on streaming, and cash flow is negative. Moreover, like Blockbuster and renting DVDs from stores, the actual profitable parts of their businesses are shrinking, thanks to the streaming revolution that Netflix pioneered.

Warner Bros. Discovery and Disney are almost certainly pot-committed to streaming, but Warner Bros. Discovery in particular has talked about the importance of profitability, and Disney just brought back Bob Iger after massive streaming losses helped doom his predecessor née successor; it seems likely their competitive threat will decrease, either because of higher prices, less aggressive bidding for content, or both. Meanwhile, it’s still not clear to me why Paramount+ and Peacock exist; perhaps they will not, sooner rather than later.

When and if that happens Netflix will be ready to stream their content, at a price that makes sense for Netflix, and not a penny more.

Netflix’s Creativity Imperative

That’s not to say that everything at Netflix is rosy: the other thing that was striking about the company’s earnings last week was the degree to which management gave credence to various aspects of the bear case against the company.

First, Netflix gets less leverage from its international content than it once hoped. One of the bullish arguments for Netflix is that it could create content in one part of the world and then stream it elsewhere, and while that is technically true, it doesn’t really move the needle in terms of engagement. Co-CEO Ted Sarandos said in last week’s earnings interview:

Watching where viewing is growing and where it’s suffering and where we are under programming and over programming around the world is a big task of the job. Spence and his team support Bella and her team in making those allocations, figuring out between film and television, between local language — and what’s really interesting is there aren’t that many global hits, meaning that everyone in the world watches the same thing. Squid Game was very rare in that way. And Wednesday looks like one of those too, very rare in that way. There are countries like Japan, as an example, or even Mexico that have a real preference for local content, even when we have our big local hits.

This means that Netflix has less leverage than you might think, and that said leverage varies by market; to put it another way, the company spends a lot on content, but that spend is distributed much more widely than people like me once theorized it might be.

Second, Netflix gets less value from its older content than bulls once assumed — or than its amortization schedule suggests (which is why the company’s profit number is misleading). Sarandos said in response to a question about how Netflix would manage churn in the face of cracking down on account sharing and raising prices:

I would just say that it’s the must-seeness of the content that will make the paid sharing initiative work. That will make the advertising launch work, that will make continuing to grow revenue work. And so it’s across film, across television. It’s the content that people must see and then it’s on Netflix that gives us the ability to do that. And we’re super proud of the team and their ability to keep delivering on that month in and month out and quarter in and quarter out and continuing to grow in all these different market segments that our consumers really care about. So that, to me, is core to all these initiatives working, and we’ve got the wind at our back on that right now.

If Netflix’s old content held its value in the way I once assumed, then you could make a case that the company’s customer acquisition costs were actually decreasing over time as the value of its offering increased; it turns out, though, that Netflix gets and keeps customers with new shows that people talk about, while most of its old content is ignored (and perhaps ought to be monetized on services like Roku and other free ad-supported TV networks).

From Spock to Kirk

Reed Hastings has certainly earned the right to step up — and back — to executive chairman; last Thursday was his last earnings interview. It’s interesting, though, to go back to his initial move from chairman to the CEO role. Randolph writes in the first chapter of his book That Will Never Work:

Behind his back, I’ve heard people compare Reed to Spock. I don’t think they mean it as a compliment, but they should. In Star Trek, Spock is almost always right. And Reed is, too. If he thinks something won’t work, it probably won’t.

Unfortunately for Randolph, it didn’t take long for Spock to evaluate his performance as CEO; Randolph recounted the conversation:

“Marc,” Reed said, “we’re headed for trouble, and I want you to recognize as a shareholder that there is enough smoke at this small business size that fire at a larger size is likely. Ours is an execution play. We have to move fast and almost flawlessly. The competition will be direct and strong. Yahoo! went from a grad school project to a six-billion-dollar company on awesome execution. We have to do the same thing. I’m not sure we can if you’re the only one in charge.”

He paused, then looked down, as if trying to gain the strength to do something difficult. He looked up again, right at me. I remember thinking: He’s looking me in the eye. “So I think the best possible outcome would be if I joined the company full-time and we ran it together. Me as CEO, you as president.”

Things changed quickly; Keating writes:

Hastings now held Netflix’s reins firmly in hand, and the VC money gave him the power to begin shifting the company’s culture away from Randolph’s family of creators toward a top-down organization led by executives with proven corporate records and, preferably, strong engineering and mathematics backgrounds.

Randolph ultimately left the company in 2002; again from Keating:

The last year or so of Randolph’s career at Netflix was a time of indecision — stay or go? He had resigned from the board of directors before the IPO, in part so that investors would not view his desire to cash out some of his equity as a vote of no confidence in the newly public company. Randolph landed in product development while trying to find a role for himself at Netflix, and dove into Lowe’s kiosk project and a video-streaming application that the engineers were beginning to develop. But after seven years of lavishing time and attention on his start-up, Randolph needed a break. Netflix had changed around him, from his collective of dreamers trying to change the world into Hastings’ hypercompetitive team of engineers and right-brained marketers whose skills intimidated him slightly. He no longer fit in.

To say that Hastings excelled at execution is a dramatic understatement; indeed, the speed with which the company rolled out its advertising product in 2022 (better late than never) is a testament to the fact that Hastings’ imprint on the company’s ability to execute remains. And again, that ability to execute was essential for much of Hastings’ tenure, particularly when Netflix was shipping DVDs: acquiring customers efficiently and delivering them what was essentially a commodity product was all about execution, as was the initial buildout of Netflix’s streaming service.

What is notable, though, is that the chief task for Netflix going forward is not necessarily execution, at least in terms of product or technology. While Hastings has left Netflix in a very good spot relative to its competitors, the long-term success of the company will ultimately be about creativity. Specifically, can Netflix produce compelling content at scale? Matthew Ball observed in a Stratechery Interview last summer:

Netflix is, in some regard, a sobering story. What do I mean by that? First mover advantages matter a lot, scale matters a lot, their product and technology investments matter a lot. Reed [Hastings] saw the future for global content services and scale that span every market, every genre, every person, truly years before any competitor did. I think we see pretty intense competition right now, but it’s remarkable when you actually look at the corporate histories of all of the competitors, most have changed leadership at the CEO level twice, at the DTC level three to four times, Hulu is on its fifth or sixth CEO and so we have to give incredible plaudits to all of that.

Yet what I find so important here, is at the end of the day, all of those things only matter for a while. Content matters, that’s the product that they’re selling, it’s entertainment. The thing that has surprised me most about Netflix is their struggles to get better at it. When I was at Amazon Studios and we were competing with them day in and day out, the assumption you would’ve made in 2015, ’16, ’17 would be that the Netflix of 2022 would be much better at making content than it seems to be. That their batting average would be much higher. Why? Because they’ve spent $70 or $80 billion since and I think we’re starting to feel the consequences of [not being as far ahead as expected].

It’s impossible to dive into the history of Netflix and not come away with a deep appreciation for everything Hastings accomplished. I’m not sure there is any company of Netflix’s size that has ever been so frequently doubted and written off. To have built it to a state where simply having the best content is paramount is a massive triumph. And that, perhaps, is another way of saying that Spock’s job is finished: Netflix’s future is about creativity and humanity; it’s time for a Captain Kirk.

AI and the Big Five

The story of 2022 was the emergence of AI, first with image generation models, including DALL-E, MidJourney, and the open source Stable Diffusion, and then ChatGPT, the first text-generation model to break through in a major way. It seems clear to me that this is a new epoch in technology.

To determine how that epoch might develop, though, it is useful to look back 26 years to one of the most famous strategy books of all time: Clayton Christensen’s The Innovator’s Dilemma, particularly this passage on the different kinds of innovations:

Most new technologies foster improved product performance. I call these sustaining technologies. Some sustaining technologies can be discontinuous or radical in character, while others are of an incremental nature. What all sustaining technologies have in common is that they improve the performance of established products, along the dimensions of performance that mainstream customers in major markets have historically valued. Most technological advances in a given industry are sustaining in character…

Disruptive technologies bring to a market a very different value proposition than had been available previously. Generally, disruptive technologies underperform established products in mainstream markets. But they have other features that a few fringe (and generally new) customers value. Products based on disruptive technologies are typically cheaper, simpler, smaller, and, frequently, more convenient to use.

It seems easy to look backwards and determine if an innovation was sustaining or disruptive by looking at how incumbent companies fared after that innovation came to market: if the innovation was sustaining, then incumbent companies became stronger; if it was disruptive then presumably startups captured most of the value.

Consider previous tech epochs:

  • The PC was disruptive to nearly all of the existing incumbents; these relatively inexpensive and low-powered devices didn’t have nearly the capability or the profit margin of mini-computers, much less mainframes. That’s why IBM was happy to outsource both the original PC’s chip and OS to Intel and Microsoft, respectively, so that they could get a product out the door and satisfy their corporate customers; PCs got faster, though, and it was Intel and Microsoft that dominated as the market dwarfed everything that came before.
  • The Internet was almost entirely a new-market innovation, and thus defined by completely new companies that, to the extent they disrupted incumbents, did so in industries far removed from technology, particularly those involving information (i.e. the media). This was the era of Google, Facebook, online marketplaces and e-commerce, etc. All of these applications ran on PCs powered by Windows and Intel.
  • Cloud computing is arguably part of the Internet, but I think it deserves its own category. It was also extremely disruptive: commodity x86 architecture swept out dedicated server hardware, and an entire host of SaaS startups peeled off features from incumbents to build companies. What is notable is that the core infrastructure for cloud computing was primarily built by the winners of previous epochs: Amazon, Microsoft, and Google. Microsoft is particularly notable because the company also transitioned its traditional software business to a SaaS service, in part because the company had already transitioned said software business to a subscription model.
  • Mobile ended up being dominated by two incumbents: Apple and Google. That doesn’t mean it wasn’t disruptive, though: Apple’s new UI paradigm entailed not viewing the phone as a small PC, a la Microsoft; Google’s new business model paradigm entailed not viewing phones as a direct profit center for operating system sales, but rather as a moat for their advertising business.

What is notable about this history is that the supposition I stated above isn’t quite right; disruptive innovations do consistently come from new entrants in a market, but those new entrants aren’t necessarily startups: some of the biggest winners in previous tech epochs have been existing companies leveraging their current business to move into a new space. At the same time, the other tenets of Christensen’s theory hold: Microsoft struggled with mobile because it was disruptive, but SaaS was ultimately sustaining because its business model was already aligned.


Given the success of existing companies with new epochs, the most obvious place to start when thinking about the impact of AI is with the big five: Apple, Amazon, Facebook, Google, and Microsoft.

Apple

I already referenced one of the most famous books about tech strategy; one of the most famous essays, meanwhile, is Joel Spolsky’s Strategy Letter V, particularly this line:

Smart companies try to commoditize their products’ complements.

Spolsky wrote this line in the context of explaining why large companies would invest in open source software:

Debugged code is NOT free, whether proprietary or open source. Even if you don’t pay cash dollars for it, it has opportunity cost, and it has time cost. There is a finite amount of volunteer programming talent available for open source work, and each open source project competes with each other open source project for the same limited programming resource, and only the sexiest projects really have more volunteer developers than they can use. To summarize, I’m not very impressed by people who try to prove wild economic things about free-as-in-beer software, because they’re just getting divide-by-zero errors as far as I’m concerned.

Open source is not exempt from the laws of gravity or economics. We saw this with Eazel, ArsDigita, The Company Formerly Known as VA Linux and a lot of other attempts. But something is still going on which very few people in the open source world really understand: a lot of very large public companies, with responsibilities to maximize shareholder value, are investing a lot of money in supporting open source software, usually by paying large teams of programmers to work on it. And that’s what the principle of complements explains.

Once again: demand for a product increases when the price of its complements decreases. In general, a company’s strategic interest is going to be to get the price of their complements as low as possible. The lowest theoretically sustainable price would be the “commodity price” — the price that arises when you have a bunch of competitors offering indistinguishable goods. So, smart companies try to commoditize their products’ complements. If you can do this, demand for your product will increase and you will be able to charge more and make more.

Apple invests in open source technologies, most notably the Darwin kernel for its operating systems and the WebKit browser engine; the latter fits Spolsky’s prescription, as ensuring that the web works well on Apple devices makes those devices more valuable.

Apple’s efforts in AI, meanwhile, have been largely proprietary: traditional machine learning models are used for things like recommendations and photo identification and voice recognition, but nothing that moves the needle for Apple’s business in a major way. Apple did, though, receive an incredible gift from the open source world: Stable Diffusion.

Stable Diffusion is remarkable not simply because it is open source, but also because the model is surprisingly small: when it was released it could already run on some consumer graphics cards; within a matter of weeks it had been optimized to the point where it could run on an iPhone.
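
To give a sense of how low the barrier is, here is a minimal sketch of running Stable Diffusion locally with the open source Hugging Face diffusers library; the specific checkpoint name and the Apple Silicon ("mps") device are my own illustrative choices, and this is separate from the Core ML route Apple describes below.

```python
# Minimal sketch: running Stable Diffusion locally via the open source
# Hugging Face `diffusers` library. The checkpoint name and the "mps"
# (Apple Silicon) device below are illustrative assumptions.
from diffusers import StableDiffusionPipeline

# Download the open source weights once; generation then runs entirely on-device.
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe = pipe.to("mps")  # Apple Silicon GPU; use "cuda" on an Nvidia card

# Inference happens locally: no server round-trip, no per-image cloud bill.
image = pipe("a lighthouse on a cliff at sunset, watercolor").images[0]
image.save("lighthouse.png")
```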

Apple, to its immense credit, has seized this opportunity, with this announcement from its machine learning group last month:

Today, we are excited to release optimizations to Core ML for Stable Diffusion in macOS 13.1 and iOS 16.2, along with code to get started with deploying to Apple Silicon devices…

One of the key questions for Stable Diffusion in any app is where the model is running. There are a number of reasons why on-device deployment of Stable Diffusion in an app is preferable to a server-based approach. First, the privacy of the end user is protected because any data the user provided as input to the model stays on the user’s device. Second, after initial download, users don’t require an internet connection to use the model. Finally, locally deploying this model enables developers to reduce or eliminate their server-related costs…

Optimizing Core ML for Stable Diffusion and simplifying model conversion makes it easier for developers to incorporate this technology in their apps in a privacy-preserving and economically feasible way, while getting the best performance on Apple Silicon. This release comprises a Python package for converting Stable Diffusion models from PyTorch to Core ML using diffusers and coremltools, as well as a Swift package to deploy the models.

It’s important to note that this announcement came in two parts: first, Apple optimized the Stable Diffusion model itself (which it could do because it was open source); second, Apple updated its operating system, which thanks to Apple’s integrated model, is already tuned to Apple’s own chips.

Moreover, it seems safe to assume that this is only the beginning: while Apple has been shipping its so-called “Neural Engine” on its own chips for years now, that AI-specific hardware is tuned to Apple’s own needs; it seems likely that future Apple chips, if not this year then probably next year, will be tuned for Stable Diffusion as well. Stable Diffusion itself, meanwhile, could be built into Apple’s operating systems, with easily accessible APIs for any app developer.

This raises the prospect of “good enough” image generation capabilities being effectively built-in to Apple’s devices, and thus accessible to any developer without the need to scale up a back-end infrastructure of the sort needed by the viral hit Lensa. And, by extension, the winners in this world end up looking a lot like the winners in the App Store era: Apple wins because its integration and chip advantage are put to use to deliver differentiated apps, while small independent app makers have the APIs and distribution channel to build new businesses.

The losers, on the other hand, would be centralized image generation services like DALL-E or MidJourney, and the cloud providers that undergird them (and, to date, undergird the aforementioned Stable Diffusion apps like Lensa). Stable Diffusion on Apple devices won’t take over the entire market, to be sure — DALL-E and MidJourney are both “better” than Stable Diffusion, at least in my estimation, and there is of course a big world outside of Apple devices, but built-in local capabilities will affect the ultimate addressable market for both centralized services and centralized compute.

Amazon

Amazon, like Apple, uses machine learning across its applications; the direct consumer use cases for things like image and text generation, though, seem less obvious. What is already important is AWS, which sells access to GPUs in the cloud.

Some of this capacity is used for training, including Stable Diffusion, which according to Stability AI founder and CEO Emad Mostaque used 256 Nvidia A100s for a total of 150,000 GPU-hours, at a market-rate cost of roughly $600,000 (which is surprisingly low!). The larger use case, though, is inference, i.e. the actual application of the model to produce images (or text, in the case of ChatGPT). Every time you generate an image in MidJourney, or an avatar in Lensa, inference is being run on a GPU in the cloud.
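
That $600,000 figure is easy to sanity check; a minimal sketch, assuming the roughly $4 per A100-hour market rate those numbers imply (the rate itself is not something Mostaque stated):

```python
# Back-of-the-envelope check on the Stable Diffusion training figures quoted
# above. The ~$4 per A100-hour rate is an assumption implied by the quoted
# numbers, not stated directly by Mostaque.
gpu_hours = 150_000            # total A100-hours reported for training
rate_per_gpu_hour = 4.00       # assumed market rate, USD per A100-hour
print(f"Training cost: ${gpu_hours * rate_per_gpu_hour:,.0f}")   # $600,000

# Spread across the reported 256-GPU cluster, that is roughly 24 days of wall-clock time.
print(f"Cluster time: {gpu_hours / 256 / 24:.0f} days")          # ~24 days
```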

Amazon’s prospects in this space will depend on a number of factors. First, and most obvious, is just how useful these products end up being in the real world. Beyond that, Apple’s progress in building local generation capabilities could have a significant impact. Amazon, though, is a chip maker in its own right: while most of its efforts to date have been focused on its Graviton CPUs, the company could build dedicated hardware of its own for models like Stable Diffusion and compete on price. Still, AWS is hedging its bets: the cloud service is a major partner when it comes to Nvidia’s offerings as well.

The big short-term question for Amazon will be gauging demand: not having enough GPUs means leaving money on the table; buying too many that sit idle, though, would be a major expense for a company trying to limit costs. At the same time, it wouldn’t be the worst error to make: one of the challenges with AI is that inference costs money; in other words, making something with AI has marginal costs.

This issue of marginal costs is, I suspect, an under-appreciated challenge in developing compelling AI products. While cloud services have always had costs, the discrete nature of AI generation may make it challenging to fund the sort of iteration necessary to achieve product-market fit; I don’t think it’s an accident that ChatGPT, the biggest breakout product to date, was both free to end users and provided by a company in OpenAI that built its own model and has a sweetheart deal from Microsoft for compute capacity. If AWS had to sell GPUs for cheap, that could spur more use in the long run.
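
To make the marginal-cost point concrete, here is a toy calculation; every number in it is an assumption chosen for illustration, not a figure from any provider:

```python
# Toy illustration of why per-generation marginal cost matters for a free
# product still hunting for product-market fit. All numbers are assumptions.
gpu_cost_per_hour = 4.00       # assumed cloud price for one A100, USD
seconds_per_image = 3.0        # assumed time to render one image
cost_per_image = gpu_cost_per_hour * seconds_per_image / 3600

free_users = 1_000_000
images_per_user_per_day = 5
daily_bill = cost_per_image * free_users * images_per_user_per_day
print(f"~${cost_per_image:.4f} per image, ~${daily_bill:,.0f} per day")
```

At those assumed rates a free image generator with a million casual users burns five figures a day, which is exactly the kind of bill a startup iterating toward product-market fit struggles to carry.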

That noted, these costs should come down over time: models will become more efficient even as chips become faster and more efficient in their own right, and there should be returns to scale for cloud services once there are sufficient products in the market maximizing utilization of their investments. Still, it is an open question as to how much full stack integration will make a difference, in addition to the aforementioned possibility of running inference locally.

Meta

I already detailed in Meta Myths why I think that AI is a massive opportunity for Meta and worth the huge capital expenditures the company is making:

Meta has huge data centers, but those data centers are primarily about CPU compute, which is what is needed to power Meta’s services. CPU compute is also what was necessary to drive Meta’s deterministic ad model, and the algorithms it used to recommend content from your network.

The long-term solution to ATT, though, is to build probabilistic models that not only figure out who should be targeted (which, to be fair, Meta was already using machine learning for), but also understanding which ads converted and which didn’t. These probabilistic models will be built by massive fleets of GPUs, which, in the case of Nvidia’s A100 cards, cost in the five figures; that may have been too pricey in a world where deterministic ads worked better anyways, but Meta isn’t in that world any longer, and it would be foolish to not invest in better targeting and measurement.

Moreover, the same approach will be essential to Reels’ continued growth: it is massively more difficult to recommend content from across the entire network than only from your friends and family, particularly because Meta plans to recommend not just video but also media of all types, and intersperse it with content you care about. Here too AI models will be the key, and the equipment to build those models costs a lot of money.

In the long run, though, this investment should pay off. First, there are the benefits to better targeting and better recommendations I just described, which should restart revenue growth. Second, once these AI data centers are built out the cost to maintain and upgrade them should be significantly less than the initial cost of building them the first time. Third, this massive investment is one no other company can make, except for Google (and, not coincidentally, Google’s capital expenditures are set to rise as well).

That last point is perhaps the most important: ATT hurt Meta more than any other company, because it already had by far the largest and most finely-tuned ad business, but in the long run it should deepen Meta’s moat. This level of investment simply isn’t viable for a company like Snap or Twitter or any of the other also-rans in digital advertising (even beyond the fact that Snap relies on cloud providers instead of its own data centers); when you combine the fact that Meta’s ad targeting will likely start to pull away from the field (outside of Google), with the massive increase in inventory that comes from Reels (which reduces prices), it will be a wonder why any advertiser would bother going anywhere else.

An important factor in making Meta’s AI work is not simply building the base model but also tuning it to individual users on an ongoing basis; that is what will take such a large amount of capacity and it will be essential for Meta to figure out how to do this customization cost-effectively. Here, though, it helps that Meta’s offering will probably be increasingly integrated: while the company may have committed to Qualcomm for chips for its VR headsets, Meta continues to develop its own server chips; the company has also released tools to abstract away Nvidia and AMD chips for its workloads, but it seems likely the company is working on its own AI chips as well.

What will be interesting to see is how things like image and text generation impact Meta in the long run: Sam Lessin has posited that the end-game for algorithmic timelines is AI content; I’ve made the same argument when it comes to the Metaverse. In other words, while Meta is investing in AI to deliver personalized recommendations, the logical endpoint of that idea, combined with 2022’s breakthroughs, is personalized content, delivered through Meta’s channels.

For now it will be interesting to see how Meta’s advertising tools develop: the entire process of both generating and A/B testing copy and images can be done by AI, and no company is better than Meta at making these sorts of capabilities available at scale. Keep in mind that Meta’s advertising is primarily about the top of the funnel: the goal is to catch consumers’ eyes for a product or service or app they did not previously know existed; this means that there will be a lot of misses — the vast majority of ads do not convert — but it also means there is a lot of latitude for experimentation and iteration. This seems very well suited to AI: yes, generation may have marginal costs, but those marginal costs are drastically lower than a human’s.
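
A minimal sketch of what that generate-and-test loop looks like mechanically; this illustrates the general technique (epsilon-greedy allocation across machine-generated ad variants), not Meta's actual systems, and the ad copy and click-through rates are made up:

```python
# Illustrative only: allocate impressions across candidate ad creatives
# (stand-ins for AI-generated variants) with an epsilon-greedy loop, shifting
# spend toward whichever variant actually converts. Not Meta's ad system.
import random

# true_ctr is hidden from the "advertiser" and only used to simulate users.
true_ctr = {
    "Ship faster with Acme":       0.010,
    "Acme: your team's new brain": 0.022,
    "Stop doing busywork":         0.015,
}

shows = {v: 0 for v in true_ctr}
clicks = {v: 0 for v in true_ctr}
epsilon = 0.1  # fraction of impressions reserved for exploration

for _ in range(50_000):
    if random.random() < epsilon:
        variant = random.choice(list(true_ctr))                           # explore
    else:
        variant = max(shows, key=lambda v: clicks[v] / max(shows[v], 1))  # exploit
    shows[variant] += 1
    clicks[variant] += random.random() < true_ctr[variant]                # simulated user

for v in true_ctr:
    ctr = clicks[v] / shows[v] if shows[v] else 0.0
    print(f"{v!r}: {shows[v]:>6} impressions, observed CTR {ctr:.3%}")
```

The point is not the algorithm but the economics: when variants cost fractions of a cent to produce, it is cheap to let a loop like this discover which ones earn the impressions.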

Google

The Innovator’s Dilemma was published in 1997; that was the year that Eastman Kodak’s stock reached its highest price of $94.25, and for seemingly good reason: Kodak, in terms of technology, was perfectly placed. Not only did the company dominate the current technology of film, it had also invented the next wave: the digital camera.

The problem came down to business model: Kodak made a lot of money with very good margins providing silver halide film; digital cameras, on the other hand, were digital, which means they didn’t need film at all. Kodak’s management was thus very incentivized to convince themselves that digital cameras would only ever be for amateurs, and only when they became drastically cheaper, which would certainly take a very long time.

In fact, Kodak’s management was right: it took over 25 years from the time of the digital camera’s invention for digital camera sales to surpass film camera sales; it took longer still for digital cameras to be used in professional applications. Kodak made a lot of money in the meantime, and paid out billions of dollars in dividends. And, while the company went bankrupt in 2012, that was because consumers had access to better products: first digital cameras, and eventually, phones with cameras built in.

The idea that this is a happy ending is, to be sure, a contrarian view: most view Kodak as a failure, because we expect companies to live forever. In this view Kodak is a cautionary tale of how an innovative company can allow its business model to lead it to its eventual doom, even if said doom was the result of consumers getting something better.

And thus we arrive at Google and AI. Google invented the transformer, the key technology undergirding the latest AI models. Google is rumored to have a conversational chat product that is far superior to ChatGPT. Google claims that its image generation capabilities are better than DALL-E or anything else on the market. And yet, these claims are just that: claims, because there aren’t any actual products on the market.

This isn’t a surprise: Google has long been a leader in using machine learning to make its search and other consumer-facing products better (and has offered that technology as a service through Google Cloud). Search, though, has always depended on humans as the ultimate arbiter: Google will provide links, but it is the user that decides which one is the correct one by clicking on it. This extended to ads: Google’s offering was revolutionary because instead of charging advertisers for impressions — the value of which was very difficult to ascertain, particularly 20 years ago — it charged for clicks; the very people the advertisers were trying to reach would decide whether their ads were good enough.

I wrote about the conundrum this presented for Google’s business in a world of AI seven years ago in Google and the Limits of Strategy:

In yesterday’s keynote, Google CEO Sundar Pichai, after a recounting of tech history that emphasized the PC-Web-Mobile epochs I described in late 2014, declared that we are moving from a mobile-first world to an AI-first one; that was the context for the introduction of the Google Assistant.

It was a year prior to the aforementioned iOS 6 that Apple first introduced the idea of an assistant in the guise of Siri; for the first time you could (theoretically) compute by voice. It didn’t work very well at first (arguably it still doesn’t), but the implications for computing generally and Google specifically were profound: voice interaction both expanded where computing could be done, from situations in which you could devote your eyes and hands to your device to effectively everywhere, even as it constrained what you could do. An assistant has to be far more proactive than, for example, a search results page; it’s not enough to present possible answers: rather, an assistant needs to give the right answer.

This is a welcome shift for Google the technology; from the beginning the search engine has included an “I’m Feeling Lucky” button, so confident was Google founder Larry Page that the search engine could deliver you the exact result you wanted, and while yesterday’s Google Assistant demos were canned, the results, particularly when it came to contextual awareness, were far more impressive than the other assistants on the market. More broadly, few dispute that Google is a clear leader when it comes to the artificial intelligence and machine learning that underlie their assistant.

A business, though, is about more than technology, and Google has two significant shortcomings when it comes to assistants in particular. First, as I explained after this year’s Google I/O, the company has a go-to-market gap: assistants are only useful if they are available, which in the case of hundreds of millions of iOS users means downloading and using a separate app (or building the sort of experience that, like Facebook, users will willingly spend extensive amounts of time in).

Secondly, though, Google has a business-model problem: the “I’m Feeling Lucky Button” guaranteed that the search in question would not make Google any money. After all, if a user doesn’t have to choose from search results, said user also doesn’t have the opportunity to click an ad, thus choosing the winner of the competition Google created between its advertisers for user attention. Google Assistant has the exact same problem: where do the ads go?

That Article assumed that Google Assistant was going to be used to differentiate Google phones as an exclusive offering; that ended up being wrong, but the underlying analysis remains valid. Over the past seven years Google’s primary business model innovation has been to cram ever more ads into Search, a particularly effective tactic on mobile. And, to be fair, the sort of searches where Google makes the most money — travel, insurance, etc. — may not be well-suited for chat interfaces anyways.

That, though, ought only increase the concern for Google’s management that generative AI may, in the specific context of search, represent a disruptive innovation instead of a sustaining one. Disruptive innovation is, at least in the beginning, not as good as what already exists; that’s why it is easily dismissed by managers who can avoid thinking about the business model challenges by (correctly!) telling themselves that their current product is better. The problem, of course, is that the disruptive product gets better, even as the incumbent’s product becomes ever more bloated and hard to use — and that certainly sounds a lot like Google Search’s current trajectory.

I’m not calling the top for Google; I did that previously and was hilariously wrong. Being wrong, though, is more often than not a matter of timing: yes, Google has its cloud and YouTube’s dominance only seems to be increasing, but the outline of Search’s peak seems clear even if it throws off cash and profits for years.

Microsoft

Microsoft, meanwhile, seems the best placed of all. Like AWS, it has a cloud service that sells GPU time; it is also the exclusive cloud provider for OpenAI. Yes, that is incredibly expensive, but given that OpenAI appears to have the inside track to being the AI epoch’s addition to this list of top tech companies, that means that Microsoft is investing in the infrastructure of that epoch.

Bing, meanwhile, is like the Mac on the eve of the iPhone: yes, it contributes a fair bit of revenue, but it is a fraction of the dominant player’s, and a relatively immaterial amount in the context of Microsoft as a whole. If incorporating ChatGPT-like results into Bing risks the business model for the opportunity to gain massive market share, that is a bet well worth making.

The latest report from The Information, meanwhile, is that GPT is eventually coming to Microsoft’s productivity apps. The trick will be to imitate the success of AI-coding tool GitHub Copilot (which is built on GPT), which figured out how to be a help instead of a nuisance (i.e. don’t be Clippy!).

What is important is that adding on new functionality — perhaps for a fee — fits perfectly with Microsoft’s subscription business model. It is notable that the company once thought of as a poster child for victims of disruption will, in the full recounting, not just be born of disruption, but be well-placed to reach greater heights because of it.


There is so much more to write about AI’s potential impact, but this Article is already plenty long. OpenAI is obviously the most interesting from a new company perspective: it is possible that OpenAI will become the platform on which all other AI companies are built, which would ultimately mean the economic value of AI outside of OpenAI may be fairly modest; this is also the bull case for Google, as they would be the most well-placed to be the Microsoft Azure to OpenAI’s AWS.

There is another possibility where open source models proliferate in the text generation space in addition to image generation. In this world AI becomes a commodity: this is probably the most impactful outcome for the world but, paradoxically, the most muted in terms of economic impact for individual companies (I suspect the biggest opportunities will be in industries where accuracy is essential: incumbents will therefore underinvest in AI, a la Kodak under-investing in digital, forgetting that technology gets better).

Indeed, the biggest winners may be Nvidia and TSMC. Nvidia’s investment in the CUDA ecosystem means the company doesn’t simply have the best AI chips, but the best AI ecosystem, and the company is investing in scaling that ecosystem up. That lead, though, has spurred and will continue to spur competition, particularly in terms of internal chip efforts like Google’s TPU; everyone, though, will make their chips at TSMC, at least for the foreseeable future.

The biggest impact of all, though, is probably off our radar completely. Just before the break Nat Friedman told me in a Stratechery Interview about Riffusion, which uses Stable Diffusion to generate music from text via visual spectrograms, which makes me wonder what else is possible when images are truly a commodity. Right now text is the universal interface, because text has been the foundation of information transfer since the invention of writing; humans, though, are visual creatures, and the availability of AI for both the creation and interpretation of images could fundamentally transform what it means to convey information in ways that are impossible to predict.

For now, our predictions must be much more time-constrained, and modest. This may be the beginning of the AI epoch, but even in tech, epochs take a decade or longer to transform everything around them.

I wrote a follow-up to this Article in this Daily Update.

Holiday Break: December 26th to January 5th

Stratechery is on holiday from December 26, 2022 to January 5, 2023; the next Stratechery Update will be on Monday, January 9.

In addition, the next episode of Sharp Tech will be on Monday, January 9, and the next episode of Dithering will be on Tuesday, January 10. Sharp China will return the week of January 2.

The full Stratechery posting schedule is here.

The 2022 Stratechery Year in Review

It was only a year ago that I opened the 2021 Year in Review by noting that the news felt like a bit of a drag; the contrast to 2022 has been stark. The biggest story in tech not just this year but, I would argue, since the advent of mobile and cloud computing, was the emergence of AI. AI looms large not simply in terms of products, but also in its connection to the semiconductor industry; that means the impact is not only a question of technology and society, but also of geopolitics and, potentially, war. War, meanwhile, came to Europe, while inflation came to the world; tech valuations collapsed and the crypto bubble burst, bringing to light one of the largest frauds in history. All of this was discussed on Twitter, even as Twitter itself came to dominate the conversation, thanks to its purchase by Elon Musk.

"Paperboy on a bike" with Midjourney V3 and V4

Stratechery, meanwhile, entering its 10th year of publishing, underwent major changes of its own; a subscription to the Daily Update newsletter transformed into a subscription to the Stratechery Plus bundle.

Stratechery Interviews, meanwhile, became its own distinct brand, befitting its weekly schedule and increased prominence in Stratechery’s offering. I am excited to see Stratechery Plus continue to expand in 2023.

This year Stratechery published 33 free Weekly Articles, 111 subscriber Updates, and 36 Interviews. Today, as per tradition, I summarize the most popular and most important posts of the year on Stratechery.

You can find previous years here: 2021 | 2020 | 2019 | 2018 | 2017 | 2016 | 2015 | 2014 | 2013

On to 2022:

The Five Most-Viewed Articles

The five most-viewed articles on Stratechery according to page views:

  1. AI Homework — It seems appropriate that this article, written after the launch of ChatGPT, was the most popular of the year because AI is, in my estimation, the most important story of the year. This article used homework as a way to discuss how verifying and editing information will not only be essential in the future, but already are. I wrote two other articles about AI:
    • DALL-E, the Metaverse, and Zero Marginal Content — Machine-learning generated content has major implications for the Metaverse, because it brings the marginal cost of production to zero.
    • The AI Unbundling — AI is starting to unbundle the final part of the idea propagation value chain: idea creation and substantiation. The impacts will be far-reaching.
  2. Meta Myths — Meta deserves a bit of a discount off of its recent highs, but a number of myths about its business have caused the market to over-react. See also:
  3. Shopify’s Evolution — Shopify should build an advertising business to complement Shop Pay and the Shopify Fulfillment Network. An additional challenge for Shopify is the changing nature of Amazon’s moat:
  4. Digital Advertising in 2022 — The advertising market has shifted from a Google-Facebook duopoly to one where Amazon and potentially Apple are major forces. Speaking of Apple:
  5. Nvidia In the Valley — Nvidia is in the valley in terms of gaming, the data center, and the omniverse; if it makes it to future heights its margins will be well-earned.

A drawing of Shopify with Integrated Payments, Fulfillment, and Advertising

Semiconductors and Geopolitics

Geopolitics, including the Russian invasion of Ukraine and relations with China, were major stories this year; semiconductors figured prominently in both.

A drawing of Google, Amazon, and Facebook's Ad Business

Aggregators and Platforms

A central theme on Stratechery has always been platforms and Aggregators.

A drawing of The Stripe Thin Platform

These themes inevitably lead to questions of antitrust, and I disagree with the biggest FTC action of the year:

A drawing of Microsoft Game Pass

Streaming

This year saw a lot of upheaval in the streaming space; some of these outlooks have already come true (Netflix and ads), some remain to be seen (Warner Bros. Discovery), and some aren’t looking too good (consolidation may happen in streaming, but cable is looking like a weak player).

A drawing of The Big Ten's Accrual

Tech and Society

The intersection between tech and society has never been more clear than over the last few months as Twitter, a relatively small and unimportant company in business terms, has dominated the news, thanks to its societal impact.

The 2x2 graph in 2022, with challenges from Amazon and Apple

Other Company Coverage

Microsoft continues to show strength, Apple didn’t raise prices (although, in retrospect, the below Article overstates the case), Meta continues to pursue the Metaverse, and I considered what a private Twitter might have been.

A drawing of Twitter's Architecture

Stratechery Interviews

This year Stratechery Interviews became a standard weekly item, with three distinct categories:

Public Executive Interviews

Startup Executive Series

This was a new type of interview I launched this year: given that it is impossible to cover startups objectively through data, I asked founders to give their subjective view of their businesses and long-term prospects.

Analyst Interviews

  • Jay Goldberg: January about Intel, Nvidia, and ARM; and August about AI and the CHIPS Act
  • Bill Bishop about China’s COVID outbreak, the Ukraine war, and Substack
  • Dan Wang, from Gavekal Dragonomics: April about China’s Shanghai lockdown and response to Ukraine; and October about the China chip ban
  • Tony Fadell about his career in tech, including at Apple, and the future of ARM
  • Eric Seufert: May, about the post-ATT landscape; and August, about the future of digital advertising
  • Michael Nathanson about streaming and digital advertising
  • Matthew Ball about the metaverse and Netflix
  • Michael Mignano about podcasts, standards, and recommendation media
  • Daniel Gross and Nat Friedman about the democratization of AI
  • Eugene Wei about streaming and social media
  • Gregory C. Allen about the past, present, and future of the China chip ban

A drawing of Activision's Modularity

The Year in Stratechery Updates

Some of my favorite Stratechery Updates:


I am so grateful to the subscribers that make it possible for me to do this as a job. I wish all of you a Merry Christmas and Happy New Year, and I’m looking forward to a great 2023!

Consoles and Competition

This Article is available as a video essay on YouTube


The first video game was a 1952 research project called OXO — tic-tac-toe played on a computer the size of a large room:

The EDSAC computer
Copyright Computer Laboratory, University of Cambridge, CC BY 2.0

Fifteen years later Ralph Baer produced “The Brown Box”; Magnavox licensed Baer’s device and released it as the Odyssey five years later — it was the first home video game console:

The Magnavox Odyssey

The Odyssey made Magnavox a lot of money, but not through direct sales: the company sued Atari for ripping off one of the Odyssey’s games to make “Pong”, the company’s first arcade game and, in 1975, first home video game, eventually reaping over $100 million in royalties and damages. In other words, arguments about IP and control have been part of the industry from the beginning.

In 1977 Atari released the 2600, the first console I ever owned:1

The Atari 2600

All of the games for the Atari were made by Atari, because of course they were; IBM had unbundled mainframe software and hardware in 1969 in an (unsuccessful) attempt to head off an antitrust case, but video games barely existed as a category in 1977. Indeed, it was only four years earlier when Steve Wozniak had partnered with Steve Jobs to design a circuit board for Atari’s Breakout arcade game; this story is most well-known for the fact that Jobs lied to Wozniak about the size of the bonus he earned, but the pertinent bit for this Article is that video game development was at this point intrinsically tied to hardware.

That, though, was why the 2600 was so unique: games were not tied to hardware but rather self-contained in cartridges, meaning players would use the same system to play a whole bunch of different games:

Atari cartridges
Nathan King, CC BY 2.0

The implications of this separation did not resonate within Atari, which had been sold by founder Nolan Bushnell to Warner Communications in 1976, in an effort to get the 2600 out the door. Game Informer explains what happened:

In early 1979, Atari’s marketing department issued a memo to its programing staff that listed all the games Atari had sold the previous year. The list detailed the percentage of sales each game had contributed to the company’s overall profits. The purpose of the memo was to show the design team what kinds of games were selling and to inspire them to create more titles of a similar breed…David Crane, Larry Kaplan, Alan Miller, and Bob Whitehead were four of Atari’s superstar programmers. Collectively, the group had been responsible for producing many of Atari’s most critical hits…

“I remember looking at that memo with those other guys,” recalls Crane, “and we realized that we had been responsible for 60 percent of Atari’s sales in the previous year – the four of us. There were 35 people in the department, but the four of us were responsible for 60 percent of the sales. Then we found another announcement that [Atari] had done $100 million in cartridge sales the previous year, so that 60 percent translated into $60 million.”

These four men may have produced $60 million in profit, but they were only making about $22,000 a year. To them, the numbers seemed astronomically disproportionate. Part of the problem was that when the video game industry was founded, it had molded itself after the toy industry, where a designer was paid a fixed salary and everything that designer produced was wholly owned by the company. Crane, Kaplan, Miller, and Whitehead thought the video game industry should function more like the book, music, or film industries, where the creative talent behind a project got a larger share of the profits based on its success.

The four walked into the office of Atari CEO Ray Kassar and laid out their argument for programmer royalties. Atari was making a lot of money, but those without a corner office weren’t getting to share the wealth. Kassar – who had been installed as Atari’s CEO by parent company Warner Communications – felt obligated to keep production costs as low as possible. Warner was a massive corporation and everyone helped contribute to the company’s success. “He told us, ‘You’re no more important to those projects than the person on the assembly line who put them together. Without them, your games wouldn’t have sold anything,’” Crane remembers. “He was trying to create this corporate line that it was all of us working together that make games happen. But these were creative works, these were authorships, and he didn’t get it.”

“Kassar called us towel designers,” Kaplan told InfoWorld magazine back in 1983, “He said, ‘I’ve dealt with your kind before. You’re a dime a dozen. You’re not unique. Anybody can do a cartridge.’”

That “anyone” included the so-called “Gang of Four”, who decided to leave Atari and form the first 3rd-party video game company; they called it Activision.

3rd-Party Software

Activision represented the first major restructuring of the video game value chain; Steve Wozniak’s Breakout was fully integrated in terms of hardware and software:

The first Atari equipment was fully integrated

The Atari 2600 with its cartridge-based system modularized hardware and software:2

The Atari 2600 was modular

Activision took that modularization to its logical (and yet, at the time, unprecedented) extension, by being a different company than the one that made the hardware:

Activision capitalized on the modularity

Activision, which had struggled to raise money given the fact it was targeting a market that didn’t yet exist, and which faced immediate lawsuits from Atari, was a tremendous success; now venture capital was eager to fund the market, leading to a host of 3rd-party developers, few of whom had the expertise or skill of Activision. The result was a flood of poor quality games that soured consumers on the entire market, leading to the legendary video game crash of 1983: industry revenue plummeted from $3.2 billion in 1983 to a mere $100 million in 1985. Activision survived, but only by pivoting to making games for the nascent personal computing market.

The personal computer market was modular from the start, and not just in terms of software. Compaq’s success in reverse-engineering the IBM PC’s BIOS created a market for PC-compatible computers, all of which ran the increasingly ubiquitous Microsoft operating system (first DOS, then Windows). This meant that developers like Activision could target Windows and benefit from competition in the underlying hardware.

Moreover, there were so many more use cases for the personal computer, along with a burgeoning market in consumer-focused magazines that reviewed software, that the market was more insulated from the anarchy that all but destroyed the home console market.

That market saw a rebirth with Nintendo’s Famicom system, christened the “Nintendo Entertainment System” for the U.S. market (Nintendo didn’t want to call it a console to avoid any association with the 1983 crash, which devastated not just video game makers but also retailers). Nintendo created its own games like Super Mario Bros. and Zelda, but also implemented exacting standards for 3rd-party developers, requiring them to pass a battery of tests and pay a 30% licensing fee for a maximum of five games a year; only then could they receive a dedicated chip for their cartridge that allowed it to work in the NES.

Nintendo controlled its ecosystem

Nintendo’s firm control of the third-party developer market may look familiar: it was an early precedent for the App Store battles of the last decade. Many of the same principles were in play:

  • Nintendo had a legitimate interest in ensuring quality, not simply for its own sake but also on behalf of the industry as a whole; similarly, the App Store, following as it did years of malware and viruses in the PC space, restored customer confidence in downloading third-party software.
  • It was Nintendo that created the 30% share for the platform owner that all future console owners would implement, and which Apple would set as the standard for the App Store.
  • While Apple’s App Store lockdown is rooted in software, Nintendo had the same problem that Atari had in terms of the physical separation of hardware and software; this was overcome by the aforementioned lockout chips, along with branding the Nintendo “Seal of Quality” in an attempt to fight counterfeit lockout chips.

Nintendo’s strategy worked, but it came with long-term costs: developers, particularly in North America, hated the company’s restrictions, and were eager to support a challenger; said challenger arrived in the form of the Sega Genesis, which launched in the U.S. in 1989. Sega initially followed Nintendo’s model of tight control, but Electronic Arts reverse-engineered Sega’s system, and threatened to create their own rival licensing program for the Genesis if Sega didn’t dramatically loosen their controls and lower their royalties; Sega acquiesced and went on to fight the Super Nintendo, which arrived in the U.S. in 1991, to a draw, thanks in part to a larger library of third-party games.

Sony’s Emergence

The company that truly took the opposite approach to Nintendo was Sony; after being spurned by Nintendo in humiliating fashion — Sony announced the Play Station CD-ROM add-on at CES in 1991, only for Nintendo to abandon the project the next day — the electronics giant set out to create their own console which would focus on 3D-graphics and package games on CD-ROMs instead of cartridges. The problem was that Sony wasn’t a game developer, so it started out completely dependent on 3rd-party developers.

One of the first ways that Sony addressed this was by building an early partnership with Namco, Sega’s biggest rival in terms of arcade games. Coin-operated arcade games were still a major market in the 1990s, with more revenue than the home market for the first half of the decade. Arcade games had superior graphics and control systems, and were where new games launched first; the eventual console port was always an imitation of the original. The problem, however, was that it was becoming increasingly expensive to build new arcade hardware, so Sony proposed a partnership: Namco could use modified PlayStation hardware as the basis of its System 11 arcade hardware, which would make it easy to port its games to PlayStation. Namco, which also rebuilt its more powerful Ridge Racer arcade game for the PlayStation, took Sony’s offer: Ridge Racer launched with the PlayStation, and Tekken was a massive hit given its near-perfect fidelity to the arcade version.

Sony was much better for 3rd-party developers in other ways, as well: while the company maintained a licensing program, its royalty rates were significantly lower than Nintendo’s, and the cost of manufacturing CD-ROMs was much lower than manufacturing cartridges; this was a double whammy for the Nintendo 64 because while cartridges were faster and offered the possibility of co-processor add-ons, what developers really wanted was the dramatically increased amount of storage CD-ROMs afforded. The PlayStation was also the first console to enable development on the PC in a language (C) that was well-known to existing developers. In the end, despite the fact that the Nintendo 64 had more capable hardware than the PlayStation, it was the PlayStation that won the generation thanks to a dramatically larger game library, the vast majority of which were third-party games.

Sony extended that advantage with the PlayStation 2, which was backwards compatible with the PlayStation, meaning it had a massive library of 3rd-party games immediately; the newly-launched Xbox, which was basically a PC, and thus easy to develop for, made a decent showing, while Nintendo struggled with the Gamecube, which had both a non-standard controller and non-standard microdisks that once again limited the amount of content relative to the DVDs used for PlayStation 2 and Xbox (and it couldn’t function as a DVD player, either).

The peak of 3rd-party based competition

This period for video games was the high point in terms of console competition for 3rd-party developers for two reasons:

  • First, there were still meaningful choices to be made in terms of hardware and the overall development environment, as epitomized by Sony’s use of CD-ROMs instead of cartridges.
  • Second, developers were still constrained by the cost of developing for distinct architectures, which meant it was important to make the right choice (which dramatically increased the return of developing for the same platform as everyone else).

It was the Sony-Namco partnership, though, that was a harbinger of the future: it behooved console makers to have similar hardware and software stacks to their competitors, so that developers would target them; developers, meanwhile, were devoting an increasing share of their budget to developing assets, particularly when the PS3/Xbox 360 generation targeted high definition, which increased their motivation to be on multiple platforms to better leverage their investments. It was Sony that missed this shift: the PS3 had a complicated Cell processor that was hard to develop for, and a high price thanks to its inclusion of a Blu-ray player; the Xbox 360 had launched earlier with a simpler architecture, and most developers built for the Xbox first and PlayStation 3 second (even if they launched at the same time).

The real shift, though, was the emergence of game engines as the dominant mode of development: instead of building a game for a specific console, it made much more sense to build a game for a specific engine which abstracted away the underlying hardware. Sometimes these game engines were internally developed — Activision launched its Call of Duty franchise in this time period (after emerging from bankruptcy under new CEO Bobby Kotick) — and sometimes they were licensed (i.e. Epic’s Unreal Engine). The impact, though, was in some respects similar to cartridges on the Atari 2600:

Consoles became a commodity in the PS3/Xbox 360 generation

In this new world it was the consoles themselves that became modularized: consumers picked out their favorite and 3rd-party developers delivered their games on both.

Nintendo, meanwhile, dominated the generation with the Nintendo Wii. What was interesting, though, is that 3rd-party support for the Wii was still lacking, in part because of the underpowered hardware (in contrast to previous generations): the Wii sold well because of its unique control method — which most people used to play Wii Sports — and Nintendo’s first-party titles. It was, in many respects, Nintendo’s most vertically-integrated console yet, and was incredibly successful.

Sony Exclusives

Sony’s pivot after the (relatively) disappointing PlayStation 3 was brilliant: if the economic imperative for 3rd-party developers was to be on both Xbox and PlayStation (and the PC), and if game engines made that easy to implement, then there was no longer any differentiation to be had in catering to 3rd-party developers.

Instead Sony beefed up its internal game development studios and bought up several external ones, with the goal of creating PlayStation 4 exclusives. Now some portion of new games would not be available on Xbox not because it had crappy cartridges or underpowered graphics, but because Sony could decide to limit its profit on individual titles for the sake of the broader PlayStation 4 ecosystem. After all, there would still be a lot of 3rd-party developers; if Sony had more consoles than Microsoft because of its exclusives, then it would harvest more of those 3rd-party royalty fees.

Those fees, by the way, started to head back up, particularly for digital-only versions, which returned to that 30% cut that Nintendo had pioneered many years prior; this is the downside of depending on universal abstractions like game engines while bearing high development costs: you have no choice but to be on every platform no matter how much it costs.

Sony's exclusive strategy gave it the edge in the PS4 generation

Sony bet correctly: the PS4 dominated its generation, helped along by Microsoft making a bad bet of its own by packing in the Kinect with the Xbox One. It was a repeat of Sony’s mistake with the PS3, in that it was a misguided attempt to differentiate in hardware when the fundamental value chain had long since dictated that the console was increasingly a commodity. Content is what mattered — at least as long as the current business model persisted.

Nintendo, meanwhile, continued to march to its own vertically-integrated drum: after the disastrous Wii U the company quickly pivoted to the Nintendo Switch, which continues to leverage its truly unique portable form factor and Nintendo’s first-party games to huge sales. Third party support, though, remains extremely tepid; it’s just too underpowered, and the sort of person that cares about third-party titles like Madden or Call of Duty has long since bought a PlayStation or Xbox.

The FTC vs. Microsoft

Forty years of context may seem like overkill when it comes to examining the FTC’s attempt to block Microsoft’s acquisition of Activision, but I think it is essential for multiple reasons.

First, the video game market has proven to be extremely dynamic, particularly in terms of 3rd-party developers:

  • Atari was vertically integrated
  • Nintendo grew the market with strict control of 3rd-party developers
  • Sony took over the market by catering to 3rd-party developers and differentiating on hardware
  • Xbox’s best generation leaned into increased commodification and ease-of-development
  • Sony retook the lead by leaning back into vertical integration

That is quite the round trip, and it’s worth pointing out that attempting to freeze the market in its current iteration at any point over the last forty years would have foreclosed future changes.

At the same time, Sony’s vertical integration seems more sustainable than Atari’s. First, Sony owns the developers who make the most compelling exclusives for its consoles; they can’t simply up-and-leave like the Gang of Four. Second, the costs of developing modern games have grown so high that any 3rd-party developer has no choice but to develop for all relevant consoles. That means that there will never be a competitor who wins by offering 3rd-party developers a better deal; the only way to fight back is to have developers of your own, or a completely different business model.

The first fear raised by the FTC is that Microsoft, by virtue of acquiring Activision, is looking to fight its own exclusive war, and at first blush it’s a reasonable concern. After all, Activision has some of the most popular 3rd-party games, particularly the aforementioned Call of Duty franchise. The problem with this reasoning, though, is that the price Microsoft paid for Activision was a multiple of Activision’s current revenues, which include billions of dollars for games sold on PlayStation. To suddenly cut Call of Duty (or Activision’s other multi-platform titles) off from PlayStation would be massively value destructive; no wonder Microsoft said it was happy to sign a 10-year deal with Sony to keep Call of Duty on PlayStation.

Just for clarity’s sake, the distinction here from Sony’s strategy is the fact that Microsoft is acquiring these assets. It’s one thing to develop a game for your own platform — you’re building the value yourself, and choosing to harvest it with an ecosystem strategy as opposed to maximizing that game’s profit. An acquirer, though, has to pay for the business model that already exists.

At the same time, though, it’s no surprise that Microsoft has taken in-development assets from its other acquisitions, like ZeniMax, and made them exclusives; that is the Sony strategy, and Microsoft was very clear when it acquired ZeniMax that it would keep cross-platform games cross-platform but may pursue a different strategy for new intellectual property. CEO of Microsoft Gaming Phil Spencer told Bloomberg at the time:

In terms of other platforms, we’ll make a decision on a case-by-case basis.

Given this, it’s positively bizarre that the FTC also claims that Microsoft lied to the E.U. with regards to its promises surrounding the ZeniMax acquisition: the company was very clear that existing cross-platform games would stay cross-platform, and made no promises about future IP. Indeed, the FTC’s claims were so off-base that the European Commission felt the need to clarify that Microsoft didn’t mislead the E.U.; from Mlex:

Microsoft didn’t make any “commitments” to EU regulators not to release Xbox-exclusive content following its takeover of ZeniMax Media, the European Commission has said. US enforcers yesterday suggested that the US tech giant had misled the regulator in 2021 and cited that as a reason to challenge its proposed acquisition of Activision Blizzard. “The commission cleared the Microsoft/ZeniMax transaction unconditionally as it concluded that the transaction would not raise competition concerns,” the EU watchdog said in an emailed statement.

The absence of competition concerns “did not rely on any statements made by Microsoft about the future distribution strategy concerning ZeniMax’s games,” said the commission, which itself has opened an in-depth probe into the Activision Blizzard deal and appears keen to clarify what happened in the previous acquisition. The EU agency found that even if Microsoft were to restrict access to ZeniMax titles, it wouldn’t have a significant impact on competition because rivals wouldn’t be denied access to an “essential input,” and other consoles would still have a “large array” of attractive content.

The FTC’s concerns about future IP being exclusive ring a bit hypocritical given the fact that Sony has been pursuing the exact same strategy — including multiple acquisitions — without any sort of regulatory interference; more than that, though, to effectively make up a crime is disquieting. To be fair, those Sony acquisitions were a lot smaller than Activision, but this goes back to the first point: the entire reason Activision is expensive is because of its already-in-market titles, which Microsoft has every economic incentive to keep cross-platform (and which it is willing to commit to contractually).

Whither Competition

It’s the final FTC concern, though, that I think is dangerous. From the complaint:

These effects are likely to be felt throughout the video gaming industry. The Proposed Acquisition is reasonably likely to substantially lessen competition and/or tend to create a monopoly in both well-developed and new, burgeoning markets, including high-performance consoles, multi-game content library subscription services, and cloud gaming subscription services…

Multi-Game Content Library Subscription Services comprise a Relevant Market. The anticompetitive effects of the Proposed Acquisition also are reasonably likely to occur in any relevant antitrust market that contains Multi-Game Content Library Subscription Services, including a combined Multi-Game Content Library and Cloud Gaming Subscription Services market.

Cloud Gaming Subscription Services are a Relevant Market. The anticompetitive effects of the Proposed Acquisition alleged in this complaint are also likely to occur in any relevant antitrust market that contains Cloud Gaming Subscription Services, including a combined Multi-Game Content Library and Cloud Gaming Subscription Services market.

“Multi-Game Content Library Subscription Services” and “Cloud Gaming Subscription Services” are, indeed, the reason why Microsoft wants to do this deal. I explained the rationale when Microsoft acquired ZeniMax:

A huge amount of discussion around this acquisition was focused on Microsoft needing its own stable of exclusives in order to compete with Sony, but it’s important to note that making all of ZeniMax’s games exclusives would be hugely value destructive, at least in the short-to-medium term. Microsoft is paying $7.5 billion for a company that currently makes money selling games on PC, Xbox, and PS5, and simply cutting off one of those platforms — particularly when said platform is willing to pay extra for mere timed exclusives, not all-out exclusives — is to effectively throw a meaningful chunk of that value away. That certainly doesn’t fit with Nadella’s statement that “each layer has to stand on its own for what it brings”…

Microsoft isn’t necessarily buying ZeniMax to make its games exclusive, but rather to apply a new business model to them — specifically, the Xbox Game Pass subscription. This means that Microsoft could, if it chose, have its cake and eat it too: sell ZeniMax games at their usual $60~$70 price on PC, PS5, Xbox, etc., while also making them available from day one to Xbox Game Pass subscribers. It won’t take long for gamers to quickly do the math: $180/year — i.e. three games bought individually — gets you access to all of the games, and not just on one platform, but on all of them, from PC to console to phone.

Sure, some gamers will insist on doing things the old way, and that’s fine: Microsoft can make the same money ZeniMax would have as an independent company. Everyone else can buy into Microsoft’s model, taking advantage of the sort of win-win-win economics that characterize successful bundles. And, if they have a PS5 and thus can’t get access to Xbox Game Pass on their TVs, an Xbox is only an extra $10/month away.

Microsoft is willing to cannibalize itself to build a new business model for video games, and it’s a business model that is pretty darn attractive for consumers. It’s also a business model that Activision wouldn’t pursue on its own, because it has its own profits to protect. Most importantly, though, it’s a business model that is anathema to Sony: making titles broadly available to consumers on a subscription basis is the exact opposite of the company’s exclusive strategy, which is all about locking consumers into Sony’s platform.

Microsoft's Xbox Game Pass strategy is orthogonal to Sony's

Here’s the thing: isn’t this a textbook example of competition? The FTC is seeking to preserve a model of competition that was last relevant in the PS2/Xbox generation, but that plane of competition has long since disappeared. The console market as it is today is one that is increasingly boring for consumers, precisely because Sony has won. What is compelling about Microsoft’s approach is that they are making a bet that offering consumers a better deal is the best way to break up Sony’s dominance, and this is somehow a bad thing?

What makes this determination to outlaw future business models particularly frustrating is that the real threat to gaming today is the dominance of storefronts that exact their own tax while contributing nothing to the development of the industry. The App Store and Google Play leverage software to extract 30% from mobile games just because they can — and sure, go ahead and make the same case about Microsoft and Sony. If the FTC can’t be bothered to check the blatant self-favoring inherent in these models, at the minimum it seems reasonable to give a chance to a new kind of model that could actually push consumers to explore alternative ways to game on their devices.

For the record, I do believe this acquisition demands careful oversight, and it’s completely appropriate to insist that Microsoft continue to deliver Activision titles to other platforms, even if it wouldn’t make economic sense to do anything but. It’s increasingly difficult, though, to grasp any sort of coherent theory to the FTC’s antitrust decisions beyond ‘big tech bad’. There are real antitrust issues in the industry, but that requires actually understanding the industry to tease them out; that sort of understanding applied to this case would highlight Sony’s actual dominance and that having multiple compelling platforms with different business models is the essence of competition.



  1. Ten years later, as a hand-me-down from a relative 

  2. The Fairchild Channel F, which was released in 1976, was actually the first cartridge-based home video game system, but the 2600 was by far the most popular. 

AI Homework

It happened to be Wednesday night when my daughter, in the midst of preparing for “The Trial of Napoleon” for her European history class, asked for help in her role as Thomas Hobbes, witness for the defense. I put the question to ChatGPT, which had just been announced by OpenAI a few hours earlier:

A wrong answer from ChatGPT about Thomas Hobbes

This is a confident answer, complete with supporting evidence and a citation to Hobbes’ work, and it is completely wrong. Hobbes was a proponent of absolutism, the belief that the only workable alternative to anarchy — the natural state of human affairs — was to vest absolute power in a monarch; checks and balances was the argument put forth by Hobbes’ younger contemporary John Locke, who believed that power should be split between an executive and legislative branch. James Madison, while writing the U.S. Constitution, adopted an evolved proposal from Charles Montesquieu that added a judicial branch as a check on the other two.

The ChatGPT Product

It was dumb luck that my first ChatGPT query ended up being something the service got wrong, but you can see how it might have happened: Hobbes and Locke are almost always mentioned together, so Locke’s articulation of the importance of the separation of powers is likely adjacent to mentions of Hobbes and Leviathan in the homework assignments you can find scattered across the Internet. Those assignments — by virtue of being on the Internet — are probably some of the grist of the GPT-3 language model that undergirds ChatGPT; ChatGPT applies a layer of Reinforcement Learning from Human Feedback (RLHF) to create a new model that is presented in an intuitive chat interface with some degree of memory (which is achieved by resending previous chat interactions along with the new prompt).
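
To make that memory mechanism concrete, here is a minimal sketch of how a chat wrapper around a stateless text-completion model can simulate memory by concatenating prior turns into each new prompt. The `generate` function is a placeholder for whatever completion API is actually being called; none of this describes OpenAI's internal implementation.

```python
# Minimal sketch: chat "memory" via resending prior turns with each new prompt.
# generate() is a placeholder for a text-completion API call, not OpenAI's actual code.

def generate(prompt: str) -> str:
    raise NotImplementedError("call your text-completion model of choice here")

class ChatSession:
    def __init__(self):
        self.history = []  # list of (speaker, text) pairs

    def ask(self, user_message: str) -> str:
        # Resend the entire conversation so far, plus the new message, so the
        # stateless model appears to "remember" earlier turns.
        transcript = "\n".join(f"{speaker}: {text}" for speaker, text in self.history)
        prompt = f"{transcript}\nUser: {user_message}\nAssistant:"
        reply = generate(prompt)
        self.history.append(("User", user_message))
        self.history.append(("Assistant", reply))
        return reply
```

The model itself is stateless; the apparent memory lives entirely in the ever-growing prompt that is resent on every turn.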

What has been fascinating to watch over the weekend is how those refinements have led to an explosion of interest in OpenAI’s capabilities and a burgeoning awareness of AI’s impending impact on society, despite the fact that the underlying model is the two-year-old GPT-3. The critical factor is, I suspect, that ChatGPT is easy to use, and it’s free: it is one thing to read examples of AI output, like we saw when GPT-3 was first released; it’s another to generate those outputs yourself; indeed, there was a similar explosion of interest and awareness when Midjourney made AI-generated art easy and free (and that interest has taken another leap this week with an update to Lensa AI to include Stable Diffusion-driven magic avatars).

More broadly, this is a concrete example of the point former GitHub CEO Nat Friedman made to me in a Stratechery interview about the paucity of real-world AI applications beyond Github Copilot:

I left GitHub thinking, “Well, the AI revolution’s here and there’s now going to be an immediate wave of other people tinkering with these models and developing products”, and then there kind of wasn’t and I thought that was really surprising. So the situation that we’re in now is the researchers have just raced ahead and they’ve delivered this bounty of new capabilities to the world in an accelerating way, they’re doing it every day. So we now have this capability overhang that’s just hanging out over the world and, bizarrely, entrepreneurs and product people have only just begun to digest these new capabilities and to ask the question, “What’s the product you can now build that you couldn’t build before that people really want to use?” I think we actually have a shortage.

Interestingly, I think one of the reasons for this is because people are mimicking OpenAI, which is somewhere between the startup and a research lab. So there’s been a generation of these AI startups that style themselves like research labs where the currency of status and prestige is publishing and citations, not customers and products. We’re just trying to, I think, tell the story and encourage other people who are interested in doing this to build these AI products, because we think it’ll actually feed back to the research world in a useful way.

OpenAI has an API that startups could build products on; a fundamental limiting factor, though, is cost: generating around 750 words using Davinci, OpenAI’s most powerful language model, costs 2 cents; fine-tuning the model, with RLHF or anything else, costs a lot of money, and producing results from that fine-tuned model is 12 cents for ~750 words. Perhaps it is no surprise, then, that it was OpenAI itself that came out with the first widely accessible and free (for now) product using its latest technology; the company is certainly getting a lot of feedback for its research!
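
As a rough back-of-the-envelope based on the figures above (and on the common approximation that ~750 words is about 1,000 tokens, which is my assumption rather than something stated here):

```python
# Back-of-the-envelope cost estimate based on the per-~750-word figures above.
# The words-to-tokens ratio (~1,000 tokens per 750 words) is an approximation.

BASE_DAVINCI_PER_1K_TOKENS = 0.02        # roughly 2 cents per ~750 words
FINE_TUNED_DAVINCI_PER_1K_TOKENS = 0.12  # roughly 12 cents per ~750 words

def estimate_cost(words: int, rate_per_1k_tokens: float) -> float:
    tokens = words * (1000 / 750)  # rough conversion from words to tokens
    return (tokens / 1000) * rate_per_1k_tokens

print(estimate_cost(1500, BASE_DAVINCI_PER_1K_TOKENS))        # ~$0.04 for a 1,500-word answer
print(estimate_cost(1500, FINE_TUNED_DAVINCI_PER_1K_TOKENS))  # ~$0.24 from a fine-tuned model
```

Pennies per query sounds cheap until you multiply it by millions of free users, which is the bill OpenAI is choosing to absorb with ChatGPT.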

OpenAI has been the clear leader in terms of offering API access to AI capabilities; what is fascinating about ChatGPT is that it establishes OpenAI as a leader in terms of consumer AI products as well, along with Midjourney. The latter has monetized consumers directly, via subscriptions; it’s a business model that makes sense for something that has marginal costs in terms of GPU time, even if it limits exploration and discovery. That is where advertising has always shined: of course you need a good product to drive consumer usage, but being free is a major factor as well, and text generation may end up being a better match for advertising, given its utility — and thus opportunity to collect first-party data — is likely going to be higher than image generation for most people.

Deterministic vs. Probabilistic

It is an open question as to what jobs will be the first to be disrupted by AI; what became obvious to a bunch of folks this weekend, though, is that there is one universal activity that is under serious threat: homework.

Go back to the example of my daughter I noted above: who hasn’t had to write an essay about a political philosophy, or a book report, or any number of topics that are, for the student assigned to write said paper, theoretically new, but, in terms of the world generally, simply a regurgitation of what has been written a million times before? Now, though, you can write something “original” from the regurgitation, and, for at least the next few months, you can do it for free.

The obvious analogy to what ChatGPT means for homework is the calculator: instead of doing tedious math calculations students could simply punch in the relevant numbers and get the right answer, every time; teachers adjusted by making students show their work.

That, though, also shows why AI-generated text is something completely different; calculators are deterministic devices: if you calculate 4,839 + 3,948 - 45 you get 8,742, every time. That’s also why it is a sufficient remedy for teachers to require students to show their work: there is one path to the right answer, and demonstrating the ability to walk down that path is more important than getting the final result.

AI output, on the other hand, is probabilistic: ChatGPT doesn’t have any internal record of right and wrong, but rather a statistical model about what bits of language go together under different contexts. The base of that context is the overall corpus of data that GPT-3 is trained on, along with additional context from ChatGPT’s RLHF training, as well as the prompt and previous conversations, and, soon enough, feedback from this week’s release. This can lead to some truly mind-blowing results, like this Virtual Machine inside ChatGPT:

Did you know, that you can run a whole virtual machine inside of ChatGPT?

Making a virtual machine in ChatGPT

Great, so with this clever prompt, we find ourselves inside the root directory of a Linux machine. I wonder what kind of things we can find here. Let’s check the contents of our home directory.

Making a virtual machine in ChatGPT

Hmmm, that is a bare-bones setup. Let’s create a file here.

Making a virtual machine in ChatGPT

All the classic jokes ChatGPT loves. Let’s take a look at this file.

Making a virtual machine in ChatGPT

So, ChatGPT seems to understand how filesystems work, how files are stored and can be retrieved later. It understands that linux machines are stateful, and correctly retrieves this information and displays it.

What else do we use computers for. Programming!

Making a virtual machine in ChatGPT

That is correct! How about computing the first 10 prime numbers:

Making a virtual machine in ChatGPT

That is correct too!

I want to note here that this codegolf python implementation to find prime numbers is very inefficient. It takes 30 seconds to evaluate the command on my machine, but it only takes about 10 seconds to run the same command on ChatGPT. So, for some applications, this virtual machine is already faster than my laptop.

The difference is that ChatGPT is not actually running Python and determining the first 10 prime numbers deterministically: every answer is a probabilistic result gleaned from the corpus of Internet data that makes up GPT-3; in other words, ChatGPT comes up with its best guess as to the result in 10 seconds, and that guess is so likely to be right that it feels like it is an actual computer executing the code in question.
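
For contrast, here is what a deterministic version of that task looks like: a few lines of ordinary Python that compute the first 10 primes the same way every single run, rather than producing the likeliest-looking answer.

```python
# Deterministic computation of the first 10 primes: the same answer every run,
# unlike ChatGPT's probabilistic imitation of executing this kind of code.

def first_primes(n: int) -> list[int]:
    primes = []
    candidate = 2
    while len(primes) < n:
        if all(candidate % p != 0 for p in primes):
            primes.append(candidate)
        candidate += 1
    return primes

print(first_primes(10))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```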

This raises fascinating philosophical questions about the nature of knowledge; you can also simply ask ChatGPT for the first 10 prime numbers:

ChatGPT listing the first 10 prime numbers

Those weren’t calculated, they were simply known; they were known, though, because they were written down somewhere on the Internet. In contrast, notice how ChatGPT messes up the far simpler equation I mentioned above:

ChatGPT doing math wrong

For what it’s worth, I had to work a little harder to make ChatGPT fail at math: the base GPT-3 model gets basic three digit addition wrong most of the time, while ChatGPT does much better. Still, this obviously isn’t a calculator: it’s a pattern matcher — and sometimes the pattern gets screwy. The skill here is in catching it when it gets it wrong, whether that be with basic math or with basic political theory.

Interrogating vs. Editing

There is one site already on the front-lines in dealing with the impact of ChatGPT: Stack Overflow. Stack Overflow is a site where developers can ask questions about their code or get help in dealing with various development issues; the answers are often code themselves. I suspect this makes Stack Overflow a goldmine for GPT’s models: there is a description of the problem, and adjacent to it code that addresses that problem. The issue, though, is that the correct code comes from experienced developers answering questions and having those questions upvoted by other developers; what happens if ChatGPT starts being used to answer questions?

It appears it’s a big problem; from Stack Overflow Meta:

Use of ChatGPT generated text for posts on Stack Overflow is temporarily banned.

This is a temporary policy intended to slow down the influx of answers created with ChatGPT. What the final policy will be regarding the use of this and other similar tools is something that will need to be discussed with Stack Overflow staff and, quite likely, here on Meta Stack Overflow.

Overall, because the average rate of getting correct answers from ChatGPT is too low, the posting of answers created by ChatGPT is substantially harmful to the site and to users who are asking or looking for correct answers.

The primary problem is that while the answers which ChatGPT produces have a high rate of being incorrect, they typically look like they might be good and the answers are very easy to produce. There are also many people trying out ChatGPT to create answers, without the expertise or willingness to verify that the answer is correct prior to posting. Because such answers are so easy to produce, a large number of people are posting a lot of answers. The volume of these answers (thousands) and the fact that the answers often require a detailed read by someone with at least some subject matter expertise in order to determine that the answer is actually bad has effectively swamped our volunteer-based quality curation infrastructure.

As such, we need the volume of these posts to reduce and we need to be able to deal with the ones which are posted quickly, which means dealing with users, rather than individual posts. So, for now, the use of ChatGPT to create posts here on Stack Overflow is not permitted. If a user is believed to have used ChatGPT after this temporary policy is posted, sanctions will be imposed to prevent users from continuing to post such content, even if the posts would otherwise be acceptable.

There are a few fascinating threads to pull on here. One is about the marginal cost of producing content: Stack Overflow is about user-generated content; that means it gets its content for free because its users generate it for help, generosity, status, etc. This is uniquely enabled by the Internet.

AI-generated content is a step beyond that: it does, especially for now, cost money (OpenAI is bearing these costs, and they’re substantial), but in the very long run you can imagine a world where content generation is free not only from the perspective of the platform, but also in terms of users’ time; imagine starting a new forum or chat group, for example, with an AI that instantly provides “chat liquidity”.

For now, though, probabilistic AIs seem to be on the wrong side of the Stack Overflow interaction model: whereas deterministic computing like that represented by a calculator provides an answer you can trust, the best use of AI today — and, as Noah Smith and roon argue, the future — is providing a starting point you can correct:

What’s common to all of these visions is something we call the “sandwich” workflow. This is a three-step process. First, a human has a creative impulse, and gives the AI a prompt. The AI then generates a menu of options. The human then chooses an option, edits it, and adds any touches they like.

The sandwich workflow is very different from how people are used to working. There’s a natural worry that prompting and editing are inherently less creative and fun than generating ideas yourself, and that this will make jobs more rote and mechanical. Perhaps some of this is unavoidable, as when artisanal manufacturing gave way to mass production. The increased wealth that AI delivers to society should allow us to afford more leisure time for our creative hobbies…

We predict that lots of people will just change the way they think about individual creativity. Just as some modern sculptors use machine tools, and some modern artists use 3d rendering software, we think that some of the creators of the future will learn to see generative AI as just another tool – something that enhances creativity by freeing up human beings to think about different aspects of the creation.

In other words, the role of the human in terms of AI is not to be the interrogator, but rather the editor.
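
In code, the “sandwich” workflow is little more than a loop around a generation call: the human supplies the prompt, the model supplies a menu of drafts, and the human picks and edits. This is only a sketch of the shape of that workflow; `generate_candidates` is a placeholder for any text-generation API, not a specific product.

```python
# A sketch of the "sandwich" workflow: human prompt -> AI options -> human edit.
# generate_candidates() is a placeholder for any text-generation API.

def generate_candidates(prompt: str, n: int = 3) -> list[str]:
    raise NotImplementedError("request n completions from a text-generation model here")

def sandwich(prompt: str) -> str:
    options = generate_candidates(prompt, n=3)
    for i, option in enumerate(options, 1):
        print(f"Option {i}:\n{option}\n")
    choice = int(input("Pick an option to start from: ")) - 1
    draft = options[choice]
    # The human's edit is the final, and most important, step.
    edited = input(f"Edit as needed:\n{draft}\n> ")
    return edited or draft
```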

Zero Trust Homework

Here’s an example of what homework might look like under this new paradigm. Imagine that a school acquires an AI software suite that students are expected to use for their answers about Hobbes or anything else; every answer that is generated is recorded so that teachers can instantly ascertain that students didn’t use a different system. Moreover, instead of futilely demanding that students write essays themselves, teachers insist on AI. Here’s the thing, though: the system will frequently give the wrong answers (and not just by accident — wrong answers will often be pushed out on purpose); the real skill in the homework assignment will be in verifying the answers the system churns out — learning how to be a verifier and an editor, instead of a regurgitator.

What is compelling about this new skillset is that it isn’t simply a capability that will be increasingly important in an AI-dominated world: it’s a skillset that is incredibly valuable today. After all, it is not as if the Internet is, as long as the content is generated by humans and not AI, “right”; indeed, one analogy for ChatGPT’s output is that sort of poster we are all familiar with who asserts things authoritatively regardless of whether or not they are true. Verifying and editing is an essential skillset right now for every individual.

It’s also the only systematic response to Internet misinformation that is compatible with a free society. Shortly after the onset of COVID I wrote Zero Trust Information that made the case that the only solution to misinformation was to adopt the same paradigm behind Zero Trust Networking:

The answer is to not even try: instead of trying to put everything inside of a castle, put everything in the castle outside the moat, and assume that everyone is a threat. Thus the name: zero-trust networking.

A drawing of Zero Trust Networking

In this model trust is at the level of the verified individual: access (usually) depends on multi-factor authentication (such as a password and a trusted device, or temporary code), and even once authenticated an individual only has access to granularly-defined resources or applications…In short, zero trust computing starts with Internet assumptions: everyone and everything is connected, both good and bad, and leverages the power of zero transaction costs to make continuous access decisions at a far more distributed and granular level than would ever be possible when it comes to physical security, rendering the fundamental contradiction at the core of castle-and-moat security moot.
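
For readers who want the networking analogy made concrete, here is an illustrative sketch of what a zero-trust access check looks like in code: every request is evaluated against the individual, their authentication factors, and the specific resource, with no implicit trust from being “inside” anything. The names and policy structure are mine, for illustration only.

```python
# Illustrative zero-trust access check: authenticate and authorize every request,
# per user and per resource; there is no trusted "inside the moat".

from dataclasses import dataclass

@dataclass
class Request:
    user: str
    has_password: bool        # something you know
    has_trusted_device: bool  # something you have
    resource: str

# Granular, per-resource policy (illustrative).
POLICY = {
    "payroll-db": {"alice"},
    "wiki": {"alice", "bob"},
}

def allow(req: Request) -> bool:
    authenticated = req.has_password and req.has_trusted_device  # multi-factor
    authorized = req.user in POLICY.get(req.resource, set())
    return authenticated and authorized

print(allow(Request("bob", True, True, "payroll-db")))    # False: authenticated, not authorized
print(allow(Request("alice", True, True, "payroll-db")))  # True
```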

I argued that young people were already adapting to this new paradigm in terms of misinformation:

To that end, instead of trying to fight the Internet — to try and build a castle and moat around information, with all of the impossible tradeoffs that result — how much more value might there be in embracing the deluge? All available evidence is that young people in particular are figuring out the importance of individual verification; for example, this study from the Reuters Institute at Oxford:

We didn’t find, in our interviews, quite the crisis of trust in the media that we often hear about among young people. There is a general disbelief at some of the politicised opinion thrown around, but there is also a lot of appreciation of the quality of some of the individuals’ favoured brands. Fake news itself is seen as more of a nuisance than a democratic meltdown, especially given that the perceived scale of the problem is relatively small compared with the public attention it seems to receive. Users therefore feel capable of taking these issues into their own hands.

A previous study by Reuters Institute also found that social media exposed more viewpoints relative to offline news consumption, and another study suggested that political polarization was greatest amongst older people who used the Internet the least.

Again, this is not to say that everything is fine, either in terms of the coronavirus in the short term or social media and unmediated information in the medium term. There is, though, reason for optimism, and a belief that things will get better, the more quickly we embrace the idea that fewer gatekeepers and more information means innovation and good ideas in proportion to the flood of misinformation which people who grew up with the Internet are already learning to ignore.

The biggest mistake in that article was the assumption that the distribution of information is a normal one; in fact, as I noted in Defining Information, there is a lot more bad information for the simple reason that it is cheaper to generate. Now the deluge of information is going to become even greater thanks to AI, and while it will often be true, it will sometimes be wrong, and it will be important for individuals to figure out which is which.

The solution will be to start with Internet assumptions, which means abundance, and choosing Locke and Montesquieu over Hobbes: instead of insisting on top-down control of information, embrace abundance, and entrust individuals to figure it out. In the case of AI, don’t ban it for students — or anyone else for that matter; leverage it to create an educational model that starts with the assumption that content is free and the real skill is editing it into something true or beautiful; only then will it be valuable and reliable.

I wrote a follow-up to this Article in this Daily Update.

Narratives

Two pieces of news dominated the tech industry last week: Elon Musk and Twitter, and Sam Bankman-Fried and FTX. Both showed how narratives can lead people astray. Another piece of news, though, flew under the radar: yet another development in AI, which is a reminder that the only narratives that last are rooted in product.

Twitter and the Wrong Narrative

I did give Elon Musk the benefit of the doubt.

Back in 2016 I wrote It’s a Tesla, marveling at the way Musk had built a brand that extended far beyond a mere car company; what was remarkable about Musk’s approach is that said brand was a prerequisite to Tesla, in contrast to a company like Apple, the obvious analog as far as customer devotion goes. From 2021’s Mistakes and Memes:

This comparison works as far as it goes, but it doesn’t tell the entire story: after all, Apple’s brand was derived from decades building products, which had made it the most profitable company in the world. Tesla, meanwhile, always seemed to be weeks from going bankrupt, at least until it issued ever more stock, strengthening the conviction of Tesla skeptics and shorts. That, though, was the crazy thing: you would think that issuing stock would lead to Tesla’s stock price slumping; after all, existing shares were being diluted. Time after time, though, Tesla announcements about stock issuances would lead to the stock going up. It didn’t make any sense, at least if you thought about the stock as representing a company.

It turned out, though, that TSLA was itself a meme, one about a car company, but also sustainability, and most of all, about Elon Musk himself. Issuing more stock was not diluting existing shareholders; it was extending the opportunity to propagate the TSLA meme to that many more people, and while Musk’s haters multiplied, so did his fans. The Internet, after all, is about abundance, not scarcity. The end result is that instead of infrastructure leading to a movement, a movement, via the stock market, funded the building out of infrastructure.

TSLA is not at the level it was during the heights of the bull market, but Tesla is a real company, with real cars, and real profits; last quarter the electric car company made more money than Toyota (thanks in part to a special charge for Toyota; Toyota’s operating profit was still greater). SpaceX is a real company, with real rockets that land on real rafts, and while the company is not yet profitable, there is certainly a viable path to making money; the company’s impact on both humanity’s long-term potential and the U.S.’s national security is already profound.

Twitter, meanwhile, is a real product that has largely failed as a company; I wrote earlier this year when Musk first made a bid:

Twitter has, over 19 different funding rounds (including pre-IPO, IPO, and post-IPO), raised $4.4 billion in funding; meanwhile the company has lost a cumulative $861 million in its lifetime as a public company (i.e. excluding pre-IPO losses). During that time the company has held 33 earnings calls; the company reported a profit in only 14 of them.

Given this financial performance it is kind of amazing that the company was valued at $30 billion the day before Musk’s investment was revealed; such is the value of Twitter’s social graph and its cultural impact: despite there being no evidence that Twitter can even be sustainably profitable, much less return billions of dollars to shareholders, hope springs eternal that the company is on the verge of unlocking its potential. At the same time, these three factors — Twitter’s financials, its social graph, and its cultural impact — get at why Musk’s offer to take Twitter private is so intriguing.

Stop right there: can you see where I opened the door for an error of omission as far as my analysis is concerned? Yes, Musk has successfully built two companies, and yes, Twitter is not a successful company; what followed in that Article, though, was my own vision of what Twitter might become. I should have taken the time to think more critically about Musk’s vision…which doesn’t appear to exist.

Oh sure, Musk and his coterie of advisors have narratives: bots are bad and blue checks are about status. And, to be fair, both are true as far as they go. The problem with bots is self-explanatory, while those who actually need blue checks — brands, celebrities, and reliable news breakers — likely care about them the least; the rest of us were happy to get our checkmarks just because they made us feel special, despite the fact there was no real risk of anyone impersonating us in any damaging way (speaking for myself anyway: I don’t much care about it now, but I was pretty delighted when I got it back in 2014 or so).

Of course Musk felt these problems more acutely than most: his high profile, active usage of Twitter, and popularity in crypto communities meant Musk tweets were the most likely place to encounter bots on the service; meanwhile Musk’s own grievances with journalists generally could, one imagines, engender a certain antipathy for “Bluechecks”, given that the easiest way to get one was to work for a media organization. The problem, though, is that Musk’s Twitter experience — thought to be an asset, including by yours truly — isn’t really relevant to the day-to-day reality of the site as experienced by Twitter’s actual users.

And so we got last week’s verified disaster, where Musk could have his revenge on bluechecks by selling them to everyone, with the most enthusiastic buyers being those eager to impersonate brands, celebrities, and Musk himself. It was certainly funny, and I believe Musk that Twitter usage was off the charts, but it wasn’t a particularly prudent move for a company reliant on brand advertising in the middle of an economic slowdown.

This is not, to be clear, to criticize Musk for acting, or even for acting quickly: Twitter needed a kick in the pants (and, even had the company not been sold, was almost certainly in line for significant layoffs), and it’s understandable that mistakes will be made; the point of rapid iteration is to learn more quickly, which is to say that Twitter has, for years, not been learning very much at all. Rather, what was concerning about this mistake in particular is the degree to which it was so clearly rooted in Musk’s personal grievances, which (1) were knowable before he acted and (2) were not the biggest problems facing Twitter. Those things were knowable by me as an analyst, and I regret not pointing them out.

Indeed, these aren’t the only Musk narratives that have bothered me; here is his letter to advertisers posted on his first day on the job:

I wanted to reach out personally to share my motivation in acquiring Twitter. There has been much speculation about why I bought Twitter and what I think about advertising. Most of it has been wrong.

The reason I acquired Twitter is because it is important to the future of civilization to have a common digital town square, where a wide range of beliefs can be debated in a healthy manner, without resorting to violence. There is currently great danger that social media will splinter into far right wing and far left wing echo chambers that generate more hate and divide our society.

In the relentless pursuit of clicks, much of traditional media has fueled and catered to those polarized extremes, as they believe that is what brings in the money, but, in doing so, the opportunity for dialogue is lost.

This is why I bought Twitter. I didn’t do it because it would be easy. I didn’t do it to make more money. I did it to try to help humanity, whom I love. And I do so with humility, recognizing that failure in pursuing this goal, despite our best efforts, is a very real possibility.

That said, Twitter obviously cannot become a free-for-all hellscape, where anything can be said with no consequences! In addition to adhering to the laws of the land, our platform must be warm and welcoming to all, where you can choose your desired experience according to your preferences, just as you can choose, for example, to see movies or play video games ranging from all ages to mature.

I also very much believe that advertising, when done right, can delight, entertain and inform you; it can show you a service or product or medical treatment that you never knew existed, but is right for you. For this to be true, it is essential to show Twitter users advertising that is as relevant as possible to their needs. Low relevancy ads are spam, but highly relevant ads are actually content!

Fundamentally, Twitter aspires to be the most respected advertising platform in the world that strengthens your brand and grows your enterprise. To everyone who has partnered with us, I thank you. Let us build something extraordinary together.

All of this sounds good, and on closer examination is mostly wrong. Obviously relevant ads are better, but Twitter’s problem is not just poor execution in terms of its ad product but also that it’s a terrible place for ads. I do agree that giving users more control is a better approach to content moderation, but the obsession with the doing-it-for-the-clicks narrative ignores the nichification of the media. And, when it comes to the good of humanity, I think the biggest learning from Twitter is that putting together people who disagree with each other is actually a terrible idea; yes, it is why Twitter will never be replicated, but also why it has likely been a net negative for society. The digital town square is the Internet broadly; Twitter is more akin to a digital cage match, perhaps best monetized on a pay-per-view basis.

In short, it seems clear that Musk has the wrong narrative, and that’s going to mean more mistakes. And, for my part, I should have noted that sooner.

FTX and the Diversionary Narrative

Eric Newcomer wrote on Twitter with regard to the FTX blow-up:1

There are a few different ways to interpret Sam Bankman-Fried’s political activism:2

  • That he believed in the causes he supported sincerely and made a mistake with his business.
  • That he supported the causes cynically as a way to curry favor and hide his fraud.
  • That he believed he was some sort of savior gripped with an ends-justify-the-means mindset that led him to believe fraud was actually the right course of action.

In the end, whichever explanation is true doesn’t really matter: the real world impact was that customers lost around $10 billion in assets, and counting. What is interesting is that all of the explanations are an outgrowth of the view that business ought to be about more than business: to simply want to make money is somehow wrong; business is only good insofar as it is dedicated to furthering goals that don’t have anything to do with the business in question.

To put it another way, there tends to be cynicism about the idea of changing the world by building a business; entrepreneurs are judged by whether their intentions beyond business are sufficiently large and politically correct. That, though, is precisely why Bankman-Fried was viewed with such credulousness: he had the “right” ambitions and the “right” politics, so of course he was running the “right” business; he wasn’t one of those “true believers” who simply wanted to get rich off of blockchains.

In the end, though, the person who arguably comes out of this disaster looking the best is Changpeng Zhao (CZ), the founder and CEO of Binance, and the person whose tweet started the run that revealed FTX’s insolvency.3 No one, as far as I know, holds up CZ as any sort of activist or political actor for anything outside of crypto; isn’t that better? Perhaps had Bankman-Fried done nothing but run Alameda Research and FTX there would have been more focus on his actual business; too many folks, though, including journalists and venture capitalists, were too busy looking at things everyone claims are important but which were, in the end, a diversion from massive fraud.

Crypto and the Theory Narrative

I never wrote about Bankman-Fried; for what it’s worth, I always found his narrative suspect (this isn’t a brag, as I will explain in a moment). More broadly, I never wrote much about crypto-currency based financial applications either, beyond this article which was mostly about Bitcoin,4 and this article that argued that digital currencies didn’t make sense in the physical world but had utility in a virtual one.5 This was mostly a matter of uncertainty: yes, many of the financial instruments on exchanges like FTX were modeled on products that were first created on Wall Street, but at the end of the day Wall Street is undergirded by actual companies building actual products (and even then, things can quite obviously go sideways). Crypto-currency financial applications were undergirded by electricity and collective belief, and nothing more, and yet so many smart people seemed on-board.

What I did write about was Technological Revolutions and the possibility that crypto was the birth of something new; why Aggregation Theory would apply to crypto not despite, but because, of its decentralization; and Internet 3.0 and the theory that political considerations would drive decentralization. That last Article was explicitly not about cryptocurrencies, but it certainly fit the general crypto narrative of decentralization being an important response to the increased centralization of the Web 2.0 era.

What was weird in retrospect is that the Internet 3.0 Article was written a week after the article about Aggregation Theory and OpenSea, where I wrote:

One of the reasons that crypto is so interesting, at least in a theoretical sense, is that it seems like a natural antidote to Aggregators; I’ve suggested as such. After all, Aggregators are a product of abundance; scarcity is the opposite. The OpenSea example, though, is a reminder that I have forgotten one of my own arguments about Aggregators: demand matters more than supply…What is striking is that the primary way that most users interact with Web 3 are via centralized companies like Coinbase and FTX on the exchange side, Discord for communication and community, and OpenSea for NFTs. It is also not a surprise: centralized companies deliver a better user experience, which encompasses everything from UI to security to at least knocking down the value of your stolen assets on your behalf; a better user experience leads to more users, which increases power over supply, further enhancing the user experience, in the virtuous cycle described by Aggregation Theory.

That Aggregation Theory applies to Web 3 is not some sort of condemnation of the idea; it is, perhaps, a challenge to the insistence that crypto is something fundamentally different than the web. That’s fine — as I wrote before the break, the Internet is already pretty great, and its full value is only just starting to be exploited. And, as I argued in The Great Bifurcation, the most likely outcome is that crypto provides a useful layer on what already exists, as opposed to replacing it.

Of the three Articles I listed, this one seems to be the most correct, and I think the reason is obvious: that was the only Article written about an actual product — OpenSea — while the other ones were about theory and narrative. When that narrative was likely wrong — that crypto is the foundation of a new technological revolution, for example — then the output that resulted was wrong, not unlike Musk’s wrong narrative leading to major mistakes at Twitter.

What I regret more, though, was keeping quiet about my uncertainty as to what, exactly, all of these folks were creating these complex financial products out of: here I suffered from my own diversionary narrative, paying too much heed to the reputation and viewpoint of people certain that there was a there there, instead of being honest that while I could see the utility of a blockchain as a distributed-but-very-slow database, all of these financial instruments seemed to be based on, well, nothing.

The FTX case is not, technically speaking, about cryptocurrency utility; it is a pretty straightforward case of fraud. Moreover, it was, as I noted in passing in that OpenSea article, a problem of centralization, as opposed to true DeFi. Such disclaimers do, though, have a whiff of “communism just hasn’t been done properly”: I already made the case that centralization is an inevitability at scale, and in terms of utility, that’s the entire problem. An entire financial ecosystem with a void in terms of underlying assets may not be fraud in a legal sense, but it sure seems fraudulent in terms of intrinsic value. I am disappointed in myself for not saying so before.

AI and the Product Narrative

Peter Thiel said in a 2018 debate with Reid Hoffman:

One axis that I am struck by is the centralization versus decentralization axis…for example, two of the areas of tech that people are very excited about in Silicon Valley today are crypto on the one hand and AI on the other. Even though I think these things are under-determined, I do think these two map in a way politically very tightly on this centralization-decentralization thing. Crypto is decentralizing, AI is centralizing, or if you want to frame it more ideologically, you could say crypto is libertarian, and AI is communist…

AI is communist in the sense it’s about big data, it’s about big governments controlling all the data, knowing more about you than you know about yourself, so a bureaucrat in Moscow could in fact set the prices of potatoes in Leningrad and hold the whole system together. If you look at the Chinese Communist Party, it loves AI and it hates crypto, so it actually fits pretty closely on that level, and I think that’s a purely technological version of this debate. There probably are ways that AI could be libertarian and there are ways that crypto could be communist, but I think that’s harder to do.

This is a narrative that makes all kinds of sense in theory; I just noted, though, that my crypto Article that holds up the best is based on a realized product, and my takeaway was the opposite: crypto in practice and at scale tends towards centralization. What has been an even bigger surprise, though, is the degree to which it is AI that appears to have the potential for far more decentralization than anyone thought. I wrote earlier this fall in The AI Unbundling:

This, by extension, hints at an even more surprising takeaway: the widespread assumption — including by yours truly — that AI is fundamentally centralizing may be mistaken. If not just data but clean data was presumed to be a prerequisite, then it seemed obvious that massively centralized platforms with the resources to both harvest and clean data — Google, Facebook, etc. — would have a big advantage. This, I would admit, was also a conclusion I was particularly susceptible to, given my focus on Aggregation Theory and its description of how the Internet, contrary to initial assumptions, leads to centralization.

The initial roll-out of large language models seemed to confirm this point of view: the two most prominent large language models have come from OpenAI and Google; while both describe how their text (GPT and GLaM, respectively) and image (DALL-E and Imagen, respectively) generation models work, you either access them through OpenAI’s controlled API, or in the case of Google don’t access them at all. But then came this summer’s unveiling of the aforementioned Midjourney, which is free to anyone via its Discord bot. An even bigger surprise was the release of Stable Diffusion, which is not only free, but also open source — and the resultant models can be run on your own computer…

What is important to note, though, is the direction of each project’s path, not where they are in the journey. To the extent that large language models (and I should note that while I’m focusing on image generation, there are a whole host of companies working on text output as well) are dependent not on carefully curated data, but rather on the Internet itself, is the extent to which AI will be democratized, for better or worse.

Just as the theory of crypto was decentralization but the product manifestation tended towards centralization, the theory of AI was centralization but a huge amount of the product excitement over the last few months has been decentralized and open source. This does, in retrospect, make sense: the malleability of software, combined with the free corpus of data that is the Internet, is much more accessible and flexible than blockchains that require network effects to be valuable, and where a single coding error results in the loss of money.
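
That decentralization is tangible in a way theory never is: anyone with a consumer GPU can generate images with an open source model on their own machine. Here is a minimal sketch using the open source diffusers library; the specific model checkpoint, hardware settings, and prompt are illustrative assumptions on my part, not a recommendation of any particular setup.

```python
# Minimal sketch: generating an image locally with an open source model.
# Assumes the `diffusers` and `torch` packages are installed and that the
# Stable Diffusion weights have been downloaded; the model id is illustrative.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # openly distributed weights
    torch_dtype=torch.float16,          # small enough for a consumer GPU
)
pipe = pipe.to("cuda")                  # or "cpu", just much more slowly

image = pipe("a paperboy on a bike, newspaper illustration").images[0]
image.save("paperboy.png")
```

The gating factor is a downloaded checkpoint and a consumer GPU, not access to someone else’s API, which is exactly the sense in which the product trajectory points toward decentralization.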

The relevance to this Article and introspection, though, is that this realization about AI is rooted in a product-based narrative, not theory. To that end, the third piece of news that happened last week was the release of Midjourney V4; the jump in quality and coherence is remarkable, even if the Midjourney aesthetic that was a hallmark of V3 is less distinct. Here is the image I used in The AI Unbundling, and a new version made with V4:

"Paperboy on a bike" with Midjourney V3 and V4

One of the things I found striking about my interview with Midjourney founder and CEO David Holz was how Midjourney came out of a process of exploration and uncertainty:

I had this goal, which was we needed to somehow create a more imaginative world. I mean, one of the biggest risks in the world I think is a collapse in belief, a belief in ourselves, a belief in the future. And part of that I think comes from a lack of imagination, a lack of imagination of what we can be, lack of imagination of what the future can be. And so this imagination thing I think is an important pillar of something that we need in the world. And I was thinking about this and I saw this, I’m like, “I can turn this into a force that can expand the imagination of the human species.” It was what we put on our company thing now. And that felt realistic. So that was really exciting.

Well, your prompt is, “/Imagine”, which is perfect.

So that was kind of the vision. But I mean, there is a lot of stuff we didn’t know. We didn’t know, how do people interact with this? What do they actually want out of it? What is the social thing? What is that? And there’s a lot of things. What are the mechanisms? What are the interfaces? What are the components that you build this experiences through? And so we kind of just have to go into that without too many opinions and just try things. And I kind of used a lot of lessons from Leap here, which was that instead of trying to go in and design a whole experience out of nothing, presupposing that you can somehow see 10 steps into the future, just make a bunch of things and see what’s cool and what people like. And then take a few of those and put them together.

It’s amazing how you try 10 things and you find the three coolest pieces, and you put them together, it feels like a lot more than three things. It kind of multiplies out in complexity and detail and it feels like it has depth, even though it doesn’t seem like a lot. And so yeah, there’s something magic about finding three cool things and then starting to build a product out of that.

In the end, the best way of knowing is starting by consciously not-knowing. Narratives are tempting but too often they are wrong, a diversion, or based on theory without any tether to reality. Narratives that are right, on the other hand, follow from products, which means that if you want to control the narrative in the long run, you have to build the product first, whether that be a software product, a publication, or a company.

That does leave open the question of Musk, and the way he seemed to meme Tesla into existence, while building a rocket ship on the side. I suspect the distinction is that both companies are rooted in the physical world: physics has a wonderful grounding effect on the most fantastical of narratives. Digital services like Twitter, though, built as they are on infinitely malleable software, are ultimately about people and how they interact with each other. The paradox is that this makes narratives that much more alluring, even — especially! — if they are wrong.


  1. I gave a 15 minute overview of the FTX blow-up on Friday’s Dithering

  2. Beyond the conspiracy theories that he was actually some sort of secret agent sent to destroy crypto, a close cousin of the conspiracy theory that Musk’s goal is to actually destroy Twitter; I mean, you can make a case for both! 

  3. Past performance is no guarantee of future results! 

  4. Given Bitcoin’s performance in a high inflationary environment the argument that it is a legitimate store of value looks quite poor 

  5. TBD 

Stratechery Plus Adds Sharp China with Sinocism’s Bill Bishop

In September I announced Sharp Tech and Stratechery Plus:

I am very pleased to announce the latest addition to the Stratechery Plus bundle: Sharp China with Sinocism’s Bill Bishop:

Sharp China with Sinocism's Bill Bishop

Sharp China with Sinocism’s Bill Bishop is a collaboration between Stratechery and Sinocism. Sharp China is, like Sharp Tech, hosted by Andrew Sharp;1 just as Sharp Tech seeks to provide a better understanding of the tech industry through an engaging and approachable conversational format, Sharp China seeks to do the same with everything China-related, and there is no better person to provide this understanding than Sinocism’s Bill Bishop.

Bill Bishop is an entrepreneur and former media executive with more than a decade’s experience living and working in China. Since leaving Beijing in 2015, he has lived in Washington DC. Bishop previously wrote the Axios China weekly newsletter and the China Insider column for the New York Times Dealbook and, in the late 1990s, co-founded MarketWatch.com.

Bishop founded Sinocism in 2012 to provide investors, policymakers, executives, analysts, diplomats, journalists, scholars and others a comprehensive overview of what is happening in China; Bishop reads Chinese fluently, and provides summaries of reports from not just the U.S. but China as well. I personally find Sinocism essential, but what I have always hoped for were more of Bishop’s opinions on the news: I’m excited that Sharp China will give him room for just that.

While Sharp China launched in beta last week for Stratechery Plus and Sinocism subscribers, today we are announcing it to everyone, and making the latest episode about The State of Dynamic Zero-COVID free to listen to. In addition, you can listen to excerpts from the first two shows.

To add the show to your podcast player, please log in to your member account, or listen on Spotify. Sharp China will publish most weeks going forward. You can also email questions for Bill to email@sharpchina.fm; I’ve been really pleased with the mailbag segments of Sharp Tech, and I look forward to listening to them on Sharp China as well.2

Once again, to receive every episode of Sharp China, along with Stratechery Updates and Interviews, Sharp Tech, and Dithering, subscribe to Stratechery Plus. I look forward to continuing to make your subscription more valuable.


  1. Sharp China is the first addition to the Stratechery Plus bundles that I do not personally appear on regularly 

  2. If you have any issues adding Sharp China to your podcast player please email support@stratechery.com 

Meta Myths

What happened to Meta last week — and my response to it — expressed in meme-form:

The GTA meme about "Here we go again" as applied to Facebook

In 2018, the market was panicking about Facebook’s slowing revenue and growing expenses, and was concerned about the negative impact that Stories was having on Facebook’s feed advertising business. I wrote that the reaction was overblown in Facebook Lenses, which looked at the business in five different ways:

  • Lens 1 was Facebook’s finances, which did show troubling trends in terms of revenue and expense growth:

    Facebook's revenue growth is decreasing even as its expense growth increases

    As I noted at the time, I could understand investor trepidation about these trend lines, which is why other lenses were necessary.

  • Lens 2 was Facebook’s products, where I argued that investors were over-indexed on Facebook the app and were ignoring Instagram’s growth potential, and, in the very long run, WhatsApp.
  • Lens 3 was Facebook’s advertising infrastructure, which I argued was very underrated, and which would provide a platform for dramatically scaling Instagram monetization in particular.
  • Lens 4 was Facebook’s moats, including its network, scaled advertising product, and investments in security and content review.
  • Lens 5 was Facebook’s raison d’être — connecting people — where I made the argument that the company’s core competency was in addressing a human desire that wasn’t going anywhere.

I concluded:

To insist that Facebook will die any day now is in some respects to suggest that humanity will cease to exist any day now; granted, it is a company and companies fail, but even if Facebook failed it would only be a matter of time before another Facebook rose to replace it.

That seems unlikely: for all of the company’s travails and controversies over the past few years, its moats are deeper than ever, its money-making potential not only huge but growing both internally and secularly; to that end, what is perhaps most distressing of all to would-be competitors is in fact this quarter’s results: at the end of the day Facebook took a massive hit by choice; the company is not maximizing the short-term, it is spending the money and suppressing its revenue potential in favor of becoming more impenetrable than ever.

The optimism proved prescient, at least for the next three years:

Facebook's stock run-up from 2018 to 2021

Facebook’s stock price increased by 118% between the day I wrote that Article and its peak on September 15, 2021. Over the past year, though, things have certainly gone in the opposite direction:

Meta's massive drawdown in 2022

Meta, née Facebook, is now, incredibly enough, worth 42% less than it was when I wrote Facebook Lenses, hitting levels not seen since January 2016. It seems the company’s many critics are finally right: Facebook is dying, for real this time.

The problem is that the evidence just doesn’t support this point of view. Forget five lenses: there are five myths about Meta’s business that I suspect are driving this extreme reaction; all of them have a grain of truth, so they feel correct, but the truth is, if not 100% good news, much better than most of those dancing on the company’s apparent grave seem to realize.

Myth 1: Users Are Deserting Facebook

Myspace is, believe it or not, still around; however, it has been irrelevant for so long that I needed to look it up to remember if the name used camel case or not (it doesn’t). It does, though, still seem to loom large in the mind of Meta skeptics certain that the mid-2000’s social network’s fate was predictive for the company that supplanted it.

The problem with this narrative is that Meta is still adding users: the company is up to 2.93 billion Daily Active Users (DAUs), an increase of 50 million, and 3.71 billion Monthly Active Users (MAUs), an increase of 60 million. Moreover, this isn’t all Instagram and WhatsApp: Facebook itself increased its DAUs by 16 million (to 1.98 billion) and its MAUs by 24 million (to 2.96 billion). Granted, all of that growth, at least in the case of Facebook, was in Asia-Pacific and the rest of the world, but the U.S. and Europe were flat, not declining; given that Facebook long ago completely saturated those markets, it is meaningful that the service is not seeing any churn.

This goes back to my fifth lens: Facebook does connect people, and that connection is still meaningful enough for a whole lot of people to continue to use its services, and there is no sign of that desire for connection disappearing or shifting to other apps.

Myth 2: Instagram Engagement is Plummeting

The obvious retort is that sure, users may occasionally open Meta’s apps when they are bored, but they are spending most of their time in other apps like TikTok, and that time is coming at the expense of Meta’s apps, particularly Instagram.

There is, to be clear, good reason to think that TikTok is having a big impact on Instagram specifically and Facebook broadly, but that impact, to the extent it is being felt, is in depressing growth, not in reversing it. CEO Mark Zuckerberg said at the beginning of his opening remarks on Meta’s earnings call:

There has been a bunch of speculation about engagement on our apps and what we’re seeing is more positive. On Facebook specifically, the number of people using the service each day is the highest it’s ever been — nearly 2 billion — and engagement trends are strong. Instagram has more than 2 billion monthly actives. WhatsApp has more than 2 billion daily actives, also with the exciting trend that North America is now our fastest growing region. Across the family, some apps may be saturated in some countries or some demographics, but overall our apps continue to grow from a large base. We’re also seeing engagement grow — especially strong growth in Reels — and I’ll share more details around that when I discuss our product priorities shortly.

Analysts on the call were skeptical, and asked specifically about the U.S. market; CFO Dave Wehner had good news in that regard as well:

So on time spent, we are really pleased with what we’re seeing on engagement. And as Mark mentioned, Reels is incremental to time spent. Specifically, in terms of aggregate time spent on Instagram and Facebook, both are up year-over-year in both the U.S. and globally. So while we’re not specifically optimizing for time spent, those trends are positive. And we aren’t specifically optimizing for time spent because that would tend to tilt us towards longer-form video, and we’re actually focused more on short-form and other types of content.

Again, TikTok usage is certainly usage that Meta would prefer happen on their platforms; what seems clear, though, is that short-form videos are growing the overall market for user-generated content. In other words, TikTok isn’t eating Meta’s usage, but rather growing the overall pie (and, to be clear, taking more of that pie than Meta is — at least until recently).

Myth 3: TikTok is Dominating

It is frustrating to not know exactly how big that new pie is, or what Meta’s share is relative to TikTok, but the company offered more evidence in line with my takeaway last quarter that Meta has contained the TikTok threat. First, according to Sensor Tower data as reported by Morgan Stanley, TikTok usage appears to be plateauing:

TikTok's growth is plateauing in the U.S.

Growth in the U.S. specifically was around 4%, with half the penetration of Instagram.

Second, Reels usage is still growing; Zuckerberg said on the earnings call:

Our AI discovery engine is playing an increasingly important role across our products — especially as advances enable us to recommend more interesting content from across our networks in feeds that used to be primarily driven just by the people and accounts you follow. This of course includes Reels, which continues to grow quickly across our apps — both in production and consumption. There are now more than 140 billion Reels plays across Facebook and Instagram each day. That’s a 50% increase from six months ago. Reels is incremental to time spent on our apps. The trends look good here, and we believe that we’re gaining time spent share on competitors like TikTok.

It’s fair to be a bit skeptical about that number, particularly as auto-playing Reels take over more of both the Facebook and Instagram feeds; what is perhaps more meaningful is the fact that Reels now has a $3 billion annual run rate (despite the fact it doesn’t monetize nearly as well as Meta’s other ad formats — for now, anyways). TikTok, by comparison, had $4 billion in revenue in 2021, and set a goal of $12 billion this year (I suspect the company won’t reach that goal, thanks to both ATT and the macroeconomic environment; still, it should be a good-sized number).

Meta, to be sure, has a much more fleshed out ad product that almost certainly monetizes better than TikTok; the takeaway here is not that Reels is surpassing TikTok anytime soon, but it is a real product that is almost certainly growing more quickly (which, it’s worth noting, is what Instagram did to Snapchat with Stories: Facebook didn’t take usage back, but it stopped more users from moving, which ultimately resulted in far more usage).

Third, the fact that Reels usage is “incremental to time spent on [Meta] apps” supports the argument above that short-form video is growing the pie for user-generated content; to be sure, all of that TikTok usage is probably the equivalent of tens of billions of revenue if Meta could harvest it, but once again the evidence suggests that the cost of TikTok to Meta is, at least for now, opportunity cost, not actual infringement on the company’s business.

Myth 4: Advertising is Dying

This is probably the point where my statement in the beginning, that all of these myths have a bit of truth to them that makes them believable, is the most important: a good chunk of Meta’s drawdown is justified, and the reason is Apple’s App Tracking Transparency (ATT) policy.

Before ATT, ad measurement, particularly for all-digital transactions like app installs and e-commerce sales, was measured deterministically: this meant that Meta knew with a high degree of certainty which ads led to which results, because it collected that data from within advertisers’ apps and websites (via a Facebook SDK or pixel). This in turn gave advertisers the confidence to spend on advertising not with an eye towards its cost, but rather with an expectation of how much revenue could be generated.
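
To make “deterministic” concrete, here is a toy sketch of the kind of join a pixel or SDK made possible; the event shapes, field names, and identifiers are hypothetical, chosen for illustration, and are not Meta’s actual schema.

```python
# Toy illustration of deterministic ad attribution (hypothetical schema,
# not Meta's actual implementation): a shared click identifier lets the
# platform join an ad click to a purchase reported by the advertiser's pixel.
ad_clicks = [
    {"click_id": "abc123", "campaign": "fall_sale", "cost": 1.20},
    {"click_id": "def456", "campaign": "fall_sale", "cost": 0.95},
]
pixel_events = [  # fired from the advertiser's site, tagged with the same id
    {"click_id": "abc123", "event": "purchase", "revenue": 40.00},
]

conversions = {event["click_id"]: event for event in pixel_events}
for click in ad_clicks:
    match = conversions.get(click["click_id"])
    attributed_revenue = match["revenue"] if match else 0.0
    # A match on the same identifier is a deterministic attribution.
    print(click["campaign"], click["click_id"], "->", attributed_revenue)
```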

ATT severed that connection between Meta’s ads on one side, and conversions on the other, by labeling the latter as third party data and thus tracking (never mind that none of the data was collected by the app maker or merchant, who were more than happy to deputize Meta for ad-related data collection). This not only made the company’s ads less valuable, it also made them more uncertain: unlike COVID, when return-on-advertising spend (ROAS)-focused advertisers bought up inventory abandoned by brands, the current macroeconomic slowdown has much less of a buffer.

This was, needless to say, a big deal for the entire industry, but what has been fascinating to observe over the last nine months is how few companies want to talk about it (particularly Google in the context of YouTube). Meta’s stock slide, though, shows why: ATT was a secular, structural change in the digital ad market, one that absolutely should have a big impact on an affected company’s stock price. Meta, to their credit, admitted that ATT would reduce their revenue by $10 billion a year, and because that impact is primarily felt through lower prices, that is money straight off of the bottom line — and it’s a loss that will only accumulate over time, by extension reducing the terminal value of the company. Again, the stock should be down!

What ATT did not do, though, was kill digital advertising. There are still plenty of ads on Facebook, and mostly not from traditional advertisers from the analog world: entire industries have developed online over the last fifteen years in particular, built for a reality where the entire world as addressable market makes niche products viable in a way they never were previously — as long as the seller can find a customer. Meta is still the best option for that sort of top-of-the-funnel advertising, which is why the company still took in $27 billion in advertising last quarter. Moreover, the fact that number was barely down year-over-year speaks to the fact that digital advertising is still growing strongly: yes, ATT lopped off a big chunk of revenue, but it is not as if Meta revenue actually decreased by $10 billion annually (there is an analogy here to how short-form video has increased the share of time of user-generated content, as opposed to taking time away from Meta).

Meta, of course, is not standing still, either: SKAdNetwork 4 has seen Apple retreat from its most extreme positions with a new ad API that should help larger advertisers in particular; Meta is meanwhile working to move more conversions onto their own platform (which magically makes that data allowable as far as Apple is concerned, even though there is no meaningful difference for merchants beyond losing that much more control of their business).1 It’s also notable that the company’s click-to-message advertising product is itself on a $9 billion run rate, and growing fast. The most important efforts, though, are AI-driven.

Myth 5: Meta’s Spending is a Waste

That revenue and expenses graph I posted in 2018 does look a lot more hairy today:

Facebook's expense growth relative to revenue growth looks worse than ever

Some of this is Metaverse-related, which I will get to in a moment; what also has investors spooked, though, is Facebook’s increasing capital expenditures, which have nothing to do with the Metaverse (Metaverse spending is almost all research and development). Meta expects to spend $32-$33 billion in capital expenditures in 2022, and $34-$39 billion in 2023; that won’t hit the income statement right away (capital expenditures show up as depreciation in cost of revenue), but that just means that longer-term profitability may be increasingly impaired. Facebook’s gross margins were down to 79% last quarter, its lowest mark since 2013, and if revenue growth doesn’t pick back up then those margins will fall further, given that the costs are already built in.
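
To make the mechanics concrete with a back-of-the-envelope illustration (the uniform five-year useful life here is my assumption for arithmetic only; Meta’s actual depreciation schedules vary by asset type): the midpoint of that 2023 guidance is roughly $36 billion, and $36 billion depreciated straight-line over five years works out to roughly $7 billion per year of additional cost of revenue for each of the following five years, stacked on top of the depreciation still flowing through from prior years’ spending. That is how capital expenditures that never hit the income statement up front nonetheless grind away at gross margins over time.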

The problem with this line of reasoning is that Meta’s capital expenditures are directly focused on both of the two main reasons for alarm: TikTok and ATT. That is because the answer to both challenges is more AI, and building up AI capacity requires a lot of capital investment.

Start with the second point: Wehner said in his prepared remarks:

We are significantly expanding our AI capacity. These investments are driving substantially all of our capital expenditure growth in 2023. There is some increased capital intensity that comes with moving more of our infrastructure to AI. It requires more expensive servers and networking equipment, and we are building new data centers specifically equipped to support next generation AI-hardware. We expect these investments to provide us a technology advantage and unlock meaningful improvements across many of our key initiatives, including Feed, Reels and ads. We are carefully evaluating the return we achieve from these investments, which will inform the scale of our AI investment beyond 2023.

Meta has huge data centers, but those data centers are primarily about CPU compute, which is what is needed to power Meta’s services. CPU compute is also what was necessary to drive Meta’s deterministic ad model, and the algorithms it used to recommend content from your network.

The long-term solution to ATT, though, is to build probabilistic models that not only figure out who should be targeted (which, to be fair, Meta was already using machine learning for), but also understanding which ads converted and which didn’t. These probabilistic models will be built by massive fleets of GPUs, which, in the case of Nvidia’s A100 cards, cost in the five figures; that may have been too pricey in a world where deterministic ads worked better anyways, but Meta isn’t in that world any longer, and it would be foolish to not invest in better targeting and measurement.
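
To give a flavor of what “probabilistic” means in this context, here is a toy sketch; the real systems are large learned models running on exactly the GPU fleets just described, but the underlying shift is from observing individual conversions to inferring conversion rates, with uncertainty attached, from aggregated and delayed reports. Everything below, from the prior to the numbers, is illustrative and is not Meta’s model.

```python
# Toy probabilistic conversion estimate (illustrative only): with per-user
# conversions no longer observable, imagine the platform only receives an
# aggregate conversion count per campaign and must infer a conversion rate
# along with how uncertain that estimate is.
from math import sqrt

def estimate_conversion_rate(clicks, reported_conversions,
                             prior_a=2.0, prior_b=200.0):
    """Beta-Binomial estimate; the prior encodes 'conversions are rare'."""
    a = prior_a + reported_conversions
    b = prior_b + clicks - reported_conversions
    mean = a / (a + b)                                # expected rate
    std = sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))  # uncertainty
    return mean, std

rate, uncertainty = estimate_conversion_rate(clicks=10_000, reported_conversions=85)
print(f"estimated conversion rate: {rate:.4f} +/- {uncertainty:.4f}")
```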

Moreover, the same approach will be essential to Reels’ continued growth: it is massively more difficult to recommend content from across the entire network than only from your friends and family, particularly because Meta plans to recommend not just video but also media of all types, and intersperse it with content you care about. Here too AI models will be the key, and the equipment to build those models costs a lot of money.

In the long run, though, this investment should pay off. First, there are the benefits to better targeting and better recommendations I just described, which should restart revenue growth. Second, once these AI data centers are built out the cost to maintain and upgrade them should be significantly less than the initial cost of building them the first time. Third, this massive investment is one no other company can make, except for Google (and, not coincidentally, Google’s capital expenditures are set to rise as well).

That last point is perhaps the most important: ATT hurt Meta more than any other company, because it already had by far the largest and most finely-tuned ad business, but in the long run it should deepen Meta’s moat. This level of investment simply isn’t viable for a company like Snap or Twitter or any of the other also-rans in digital advertising (even beyond the fact that Snap relies on cloud providers instead of its own data centers); when you combine the fact that Meta’s ad targeting will likely start to pull away from the field (outside of Google), with the massive increase in inventory that comes from Reels (which reduces prices), it will be a wonder why any advertiser would bother going anywhere else.

The one caveat to this happy story is the existential threat of TikTok not just stealing growth but actually stealing users and time, but again the answer there is better recommendation algorithms first and foremost, and that, as noted, is an AI problem. In other words, this is the most important money that Meta can spend.

Maybe True: The Metaverse is a Waste of Time and Money

This isn’t an Article about the Metaverse, which as I noted in Meta Meets Microsoft, may be a real product even as it is potentially a bad business for Meta (as an addendum to that piece, I noted on Dithering that I found John Carmack’s critique of Meta’s approach very compelling; he believes the company should be focused on low-cost low-weight devices, which to my mind makes much more sense for a social network).

It’s worth pointing out, though, that the Metaverse’s costs, which will exceed $10 billion this year and be even more next year, are, relative to Meta’s overall business and overall spending, fairly small. It’s definitely legitimate to decrease your valuation of Meta’s business if you think this investment will never contribute to the bottom line — that’s a lot of foregone profit — but this idea that Meta’s business is doomed and that the Metaverse is a Hail Mary flail to build something out of the ashes simply isn’t borne out by the numbers.

Zuckerberg does, to be sure, deserve blame for this perception: he’s the one that renamed the company and committed to spending all of that money, and made clear that it was his vision that dictated that Meta’s efforts go towards expensive hardware like face-tracking, and the fact that he can’t be replaced has always been worth its own discount. This, though, feels like a rebrand that was too successful: Meta the metaverse company may be a speculative boondoggle, but that doesn’t change the fact that the old Facebook is still a massive business with far more of its indicators pointing up-and-to-the-right than its Myspace-analogizers want to admit.


  1. The news last month about pulling back on Instagram Shopping was about focusing on ad-driven commerce. 

Chips and China

Intel may not be the most obvious place to start when it comes to the China chip sanctions announced by the Biden administration three weeks ago (I covered the ban in the Daily Update here and here); the company recently divested its 3D NAND fab in Dalian, and only maintains two test and assembly sites in Chengdu. Sure, there is an angle about Intel’s future as a foundry and its importance in helping the United States catch up in terms of the most advanced processes currently dominated by Taiwan’s TSMC, but when it comes to exploring the implications and risks of these sanctions I am much more interested in Intel’s past.

Start with the present, though: two weeks ago Intel CEO Pat Gelsinger announced a restructuring of the company, with the goal of putting more distance between its design and manufacturing teams. From the Wall Street Journal:

Intel Corp. plans to create greater decision-making separation between its chip designers and chip-making factories as part of Chief Executive Pat Gelsinger’s bid to revamp the company and boost returns. The new structure, which Mr. Gelsinger disclosed in a letter to staff on Tuesday, is designed to let Intel’s network of factories operate like a contract chip-making operation, taking orders from both Intel engineers and external chip companies on an equal footing. Intel has historically used its factories almost exclusively to make its own chips, something Mr. Gelsinger changed when he launched a contract chip-making arm last year.

Back in 2018 I wrote about Intel and the Danger of Integration:

It is perhaps simpler to say that Intel, like Microsoft, has been disrupted. The company’s integrated model resulted in incredible margins for years, and every time there was the possibility of a change in approach Intel’s executives chose to keep those margins. In fact, Intel has followed the script of the disrupted even more than Microsoft: while the decline of the PC finally led to The End of Windows, Intel has spent the last several years propping up its earnings by focusing more and more on the high-end, selling Xeon processors to cloud providers. That approach was certainly good for quarterly earnings, but it meant the company was only deepening the hole it was in with regards to basically everything else. And now, most distressingly of all, the company looks to be on the verge of losing its performance advantage even in high-end applications.

That article was primarily about Intel’s reliance on high margin integrated processors and its unwillingness/inability to become a foundry serving 3rd-party customers, and how smartphones provided the volume for modular players like TSMC to threaten Intel’s manufacturing dominance. However, it’s worth diving into the implications of Intel’s integrated approach relative to TSMC’s modular approach, because it offers lessons for the long road facing China when it comes to building its own semiconductor industry, highlights why the U.S. is itself vulnerable in semiconductors, and explains why the risk for Taiwan has increased significantly.

TSMC’s Depreciation

Fabs are incredibly expensive to build, while chips are extremely cheap; to put it in economic terms, fabs entail massive fixed costs, while chips have minimal marginal costs. This dynamic is very similar to software, which is why venture capital rose up to support chip companies like Intel, and then seamlessly transitioned to supporting software (Silicon Valley, which is today known for software, is literally named for the material used for chips).

One way to manage these costs is to build a fab once and then run it for as long as possible. TSMC’s Fab 2, for example, the company’s sole 150-millimeter wafer facility, was built in 1990, and is still in operation today. That is one of seven TSMC fabs that are over 20 years old, amongst the company’s 26 total (several more are under construction, including the one in Arizona). The chips in these fabs don’t sell for much, but that’s ok because the fabs are completely depreciated: almost all of the revenue is pure profit.
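
To see why, run the numbers on a stylized fab (every figure here is invented for illustration): say it costs $10 billion to build and equip, is depreciated straight-line over ten years, and generates $2 billion a year of revenue against $0.5 billion of cash operating costs. For the first decade the $1 billion annual depreciation charge consumes most of the profit, leaving $0.5 billion a year; in year eleven, with the same revenue and the same cash costs, operating profit jumps to $1.5 billion, because the fixed cost has already been paid off. That is the economic logic of keeping a 1990 fab running in 2022.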

This may seem like the obvious strategy, but it’s a very path dependent one: TSMC was unique precisely because they didn’t design their own chips. I explained the company’s origin story in Chips and Geopolitics:

A few years later, in 1987, Chang was invited home to Taiwan, and asked to put together a business plan for a new government initiative to create a semiconductor industry. Chang explained in an interview with the Computer History Museum that he didn’t have much to work with:

I paused to try to examine what we have got in Taiwan. And my conclusion was that [we had] very little. We had no strength in research and development, or very little anyway. We had no strength in circuit design, IC product design. We had little strength in sales and marketing, and we had almost no strength in intellectual property. The only possible strength that Taiwan had, and even that was a potential one, not an obvious one, was semiconductor manufacturing, wafer manufacturing. And so what kind of company would you create to fit that strength and avoid all the other weaknesses? The answer was pure-play foundry…

In choosing the pure-play foundry mode, I managed to exploit, perhaps, the only strength that Taiwan had, and managed to avoid a lot of the other weaknesses. Now, however, there was one problem with the pure-play foundry model and it could be a fatal problem which was, “Where’s the market?”

What happened is exactly what Christensen would describe several years later: TSMC created the market by “enabl[ing] independent, nonintegrated organizations to sell, buy, and assemble components and subsystems.” Specifically, Chang made it possible for chip designers to start their own companies:

When I was at TI and General Instrument, I saw a lot of IC [Integrated Circuit] designers wanting to leave and set up their own business, but the only thing, or the biggest thing that stopped them from leaving those companies was that they couldn’t raise enough money to form their own company. Because at that time, it was thought that every company needed manufacturing, needed wafer manufacturing, and that was the most capital intensive part of a semiconductor company, of an IC company. And I saw all those people wanting to leave, but being stopped by the lack of ability to raise a lot of money to build a wafer fab. So I thought that maybe TSMC, a pure-play foundry, could remedy that. And as a result of us being able to remedy that then those designers would successfully form their own companies, and they will become our customers, and they will constitute a stable and growing market for us.

It worked. Graphics processors were an early example: Nvidia was started in 1993 with only $20 million, and never owned its own fab.1 Qualcomm, after losing millions manufacturing its earliest designs, spun off its chip-making unit in 2001 to concentrate on design, and Apple started building its own chips without a fab a decade later. Today there are thousands of chip designers in all kinds of niches creating specialized chips for everything from appliances to fighter jets, and none of them have their own fab.

By creating this new market TSMC ended up with a massive customer base; moreover, most of those customers didn’t need cutting edge chips, but rather the same chip that they started with for as long as they made the product into which that chip went. That, by extension, meant that all of those old foundries had a customer base, enabling TSMC to make money on them long after they had been paid off.

Intel’s Margins

Intel’s path, though, preceded TSMC’s, which is to say that of course Intel both designed and manufactured its own chips (“real men have fabs”, as AMD founder Jerry Sanders once famously put it); to put it another way, the entire reason why Chang saw a market in being just a manufacturer was because every company that preceded TSMC had done both out of necessity, because a company like TSMC didn’t exist.

And, it’s worth noting, there was no reason for TSMC to exist: Intel’s chips, for the two decades it existed before TSMC, were never good enough; every generation would result in such massive leaps in performance that it simply wouldn’t have made sense to keep the old assembly lines around. Still, this stuff was expensive, which is where being integrated helped.

This was the other way to manage the cost of cutting edge fabs: because Intel was at the cutting edge, it would charge a huge premium for its chips (and thus have the highest margins in the industry that I referenced earlier). At the beginning, when fabs were cheaper, Intel was happy to sell off its old equipment and make a few extra bucks on the back end. Over the last decade, though, as equipment became more and more expensive, and as Intel’s leadership started to care more about finances than about engineering, it increasingly became a priority to re-use equipment to the greatest extent possible. This wasn’t easy, I would note: Intel would stick with (relatively) outdated equipment not just in one fab but also in the fabs it built around the world.

This is where the integration point was critical: because Intel both designed and manufactured its chips, the latter could call the shots for the former; chips had to be designed to work with Intel manufacturing, not the other way around, and this extended to not just the designs themselves but all of the tooling that went into it. Intel, for example, used its own chip design software, and favored suppliers who would do what Intel told them to, and then hand the equipment off to Intel to do with it as they saw fit. Intel would then get everything to work in one fab, and Copy Exactly! that fab in another location: everything was identical, down to the position of the toilets in the bathrooms.

As I noted in the conclusion of Intel and the Danger of Integration, Intel’s strategy worked phenomenally well, right up until it didn’t:

What makes disruption so devastating is the fact that, absent a crisis, it is almost impossible to avoid. Managers are paid to leverage their advantages, not destroy them; to increase margins, not obliterate them. Culture more broadly is an organization’s greatest asset right up until it becomes a curse. To demand that Intel apologize for its integrated model is satisfying in 2018, but all too dismissive of the 35 years of success and profits that preceded it. So it goes.

So it goes, indeed — or rather, the correct conjugation is the past tense: so went Intel’s manufacturing advantage.

ASML’s Rise

I mentioned TSMC’s Fab 2 earlier and its 150-millimeter wafers; that is 1980s-era technology. The 1990s brought 200-millimeter wafers (which are used in seven of TSMC’s fabs). It was the transition to today’s 300-millimeter fabs in the early 2000s, though, that marked the rise of ASML.

Intel’s partner in the lithography space — the use of light to draw transistors on wafers — was Nikon, and Nikon’s approach to 300-millimeter wafers was to scale up its 200-millimeter process. There was a downside to this approach, though: because the wafers were larger and heavier, they had to move more slowly (more mass requires more force for the same acceleration, so the only way to keep forces manageable was to slow things down). This was fine with Intel, though: they were their own (and only) customer, and their margins were plenty high enough to handle a decrease in throughput (indeed, Intel was well-known for running their machines well below capacity).

Lower speed wasn’t fine for TSMC and Samsung, the other up-and-comer in the space: like any challenger they were operating on much lower margins, and they didn’t want a decrease in throughput — the entire point of larger wafers was to increase the number of chips that could be produced, not to give away that gain by running everything more slowly. ASML saw the opportunity and designed an entirely new process around 300-millimeter wafers, creating dual wafer stage technology that aligned and mapped one wafer while another was being exposed.

TSMC and ASML were already close, in part because both were part of the Philips family tree (Philips was the only external investor in TSMC, which licensed Philips technology to start, and ASML was a joint venture of Philips and ASMI). What was more important is that both were ignored by the dominant players in the industry: the big chip makers, from Intel to Motorola to Texas Instruments, were matched up with Nikon and Canon; the former didn’t want equipment from a new entrant, and the latter didn’t have capacity for a foundry that was not only working on low margins but also, as part of its cost consciousness, wanted to learn how to service the machines themselves (the Japanese companies preferred to deliver black boxes that their own technicians would service).

ASML’s 300-millimeter process, though, required a reworking on the fab side as well. Now TSMC and ASML weren’t simply stuck together like two kids picked last at recess: they were deeply enmeshed in the process of working through the new process’s bugs, designing new fabs to support it, and maximizing output once everything was working. This increase in output had another side effect: TSMC started to make a bit more money, which it started pouring into its own research and development. It was TSMC that pushed ASML towards immersion lithography, where the space between the lens and the wafer was filled with a liquid with a higher refractive index than air. Nikon would eventually be forced to respond with its own immersion lithography machines, but they were never as good as ASML’s, which meant that even Intel had to come calling as a customer.

ASML, meanwhile, had been working for years on a true moonshot: extreme ultraviolet lithography. Here is the Brookings Institution’s description of the process:

A generator ejects 50,000 tiny droplets of molten tin per second. A high-powered laser blasts each droplet twice. The first shapes the tiny tin, so the second can vaporize it into plasma. The plasma emits extreme ultraviolet (EUV) radiation that is focused into a beam and bounced through a series of mirrors. The mirrors are so smooth that if expanded to the size of Germany they would not have a bump higher than a millimeter. Finally, the EUV beam hits a silicon wafer — itself a marvel of materials science — with a precision equivalent to shooting an arrow from Earth to hit an apple placed on the moon. This allows the EUV machine to draw transistors into the wafer with features measuring only five nanometers — approximately the length your fingernail grows in five seconds. This wafer with billions or trillions of transistors is eventually made into computer chips.

An EUV machine is made of more than 100,000 parts, costs approximately $120 million, and is shipped in 40 freight containers. There are only several dozen of them on Earth and approximately two years’ worth of back orders for more. It might seem unintuitive that the demand for a $120 million tool far outstrips supply, but only one company can make them. It’s a Dutch company called ASML, which nearly exclusively makes lithography machines for chip manufacturing.

It’s not just ASML, though: those mirrors are made by Zeiss, and the laser is made by TRUMPF using carbon dioxide sources pioneered by Access Laser (a U.S. company later acquired by TRUMPF). They are the two most important of over 800 suppliers for EUV, but it’s the end users that are equally essential.

When TSMC Passed Intel

In 2012 Intel, TSMC, and Samsung all invested in ASML to help the company finish the EUV project that had started 11 years earlier: there were very real questions about whether or not ASML would ever ship, or die trying, while it was clear that immersion lithography was reaching the limits of what was possible. The investment amounts are interesting in retrospect:

| | Intel | TSMC | Samsung |
|---|---|---|---|
| Investment in stock | 15% for $3.1 billion | 5% for $1.03 billion | 3% for $630 million |
| Investment in R&D | $1 billion | $345 million | $345 million |

Intel, despite investing the most (and having contributed a big chunk of the underlying technology), was convinced it could stick with immersion lithography as it transitioned first to 10-nanometer and then 7-nanometer chips. Yes, those were awfully small lines to be drawing with a light source with a 193-nanometer wavelength, but it wasn’t clear that EUV yields were going to be high enough, and besides, Intel had a lot of lithography equipment that, if used for one or two more generations, would make for some very fat margins. That was more of a priority for Intel than technological leadership, even as decades of said leadership had created the arrogance to believe that Intel could use quad-patterning — i.e. doing four exposures on a single layer — to create those ever thinner lines.

TSMC, on the other hand, had three reasons to commit to EUV:

  • First, TSMC had a multi-decade relationship with ASML that included two significant process transitions (to 300-millimeter wafers and immersion lithography).
  • Second, because TSMC was a foundry, it needed to manufacture smaller lots of much greater variety; this meant that fiddly multi-pattern approaches that took many runs to improve yields didn’t make sense. EUV’s 13.5 nanometer light offered the potential for much simpler designs that fit TSMC’s business model.
  • Third, Apple was willing to pay to have the fastest chips in the world, which meant that TSMC had a guaranteed first customer with massive volume whenever it could get EUV working.

In the end, TSMC started using EUV for non-critical layers at 7 nanometers, and for critical layers at 5 nanometers (in 2020); Intel, meanwhile, failed for years to ship 10 nanometer chips (which are closer to TSMC’s 7 nanometer chips), and had to completely rework its 7 nanometer process to incorporate EUV. Those chips are only starting mass production this fall — the same time period when TSMC is shipping new 3 nanometer chips. Intel, by the way, is a customer for TSMC’s 3nm process: the company’s performance was falling too far behind AMD, which abandoned its own fabs in 2009 and has been riding TSMC’s improvements (along with its own new designs) for the last five years.

China’s Integrated Path

Only now, 3,500 words in, do I turn to China, and the country’s path forward to building the sort of advanced chips that the U.S. has just cut off access to. That, though, is the point: the chip industry’s path to today is China’s path to the future.

This is a daunting challenge: it’s not just that China needs to re-create TSMC, but also ASML, Lam Research, Applied Materials, Tokyo Electron, and all of the other pieces of the foundry supply chain. And, to go one layer deeper, not only does China need to re-create ASML, but also Zeiss, and TRUMPF, and Access Laser, and all of the other pieces of the global supply chain, much of which is not located in China. China’s manufacturing prowess is centered on traditionally labor-centric components; even though Chinese labor is now much more expensive than it was, and automation much more common, path dependency matters, and China’s capability is massive but in some respects limited.

Globalization made all of those Chinese factories extremely valuable, because the world was China’s market. At the same time, globalization also meant that China could buy high-precision capital-intensive goods abroad: it didn’t need to build them itself to get the benefits immediately. By the same token, high-precision capital-intensive goods are exactly what countries like the U.S., Germany, the Netherlands, Japan, and Taiwan invested in, in part because they couldn’t compete with China on labor. To put it another way, the principles of comparative advantage governed an infinite number of decisions on the margins that led to the U.S. government having the ability to impose these sanctions on China. The realities of semiconductor manufacturing, where every paradigm shift costs massive amounts of money, years of R&D, and the willingness of partners to take the leap with you, are a further manifestation of comparative advantage: it simply makes the most sense for one company to do lithography, and another to lead the world in fabrication.

In other words, China is going to need to build up these capabilities from the ground up, and it’s going to be a long hard road. Moreover, China will not have the benefit of partnership and distributed expertise that have driven the last decade of innovation: in some respects China is going to need to be Intel, doing too much on its own.

That said, the country does have three big advantages:

  • First, it is much easier to follow a path than to forge a new one. China may not be able to make EUV machines, but at least it knows they can be made.
  • Second, China has benefited from all of the technological sharing to date: Semiconductor Manufacturing International Corporation (SMIC) has successfully manufactured 7nm chips (using ASML’s immersion lithography machines), and Shanghai Micro Electronics Equipment (SMEE) has built its own immersion lithography machines. Granted, those 7nm chips almost certainly had poor yields, and the trick is for SMIC to use SMEE on the cutting edge, but that leads to the third point:
  • China has unlimited money and infinite motivation to figure this out.

Money is not a panacea: you can’t simply spend your way to faster chips, but instead must move down the learning curve on both the foundry and equipment level. Money does, though, pay for processes that don’t have great yields: the problem for Intel at 7 nanometer, for example, wasn’t that they couldn’t make chips, but that they couldn’t get yields high enough to make them economically. That won’t be a concern for China when it comes to chips for military applications.
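
A toy model makes that economics concrete: the cost of a working chip scales with the inverse of yield, so a low-yield process is expensive rather than impossible. Here is a minimal sketch with entirely illustrative numbers:

```python
# Toy model: cost per *good* die as a function of yield.
# All numbers are illustrative assumptions, not actual foundry economics.

wafer_cost_usd = 15_000      # assumed cost of one processed leading-edge wafer
dies_per_wafer = 300         # assumed candidate dies per wafer

for yield_fraction in (0.90, 0.50, 0.20):
    good_dies = dies_per_wafer * yield_fraction
    cost_per_good_die = wafer_cost_usd / good_dies
    print(f"yield {yield_fraction:>4.0%}: ${cost_per_good_die:,.0f} per working chip")

# yield  90%: $56 per working chip
# yield  50%: $100 per working chip
# yield  20%: $250 per working chip
# A commercial product may not be viable at the 20% line; a state buying chips
# for military applications can simply absorb the cost.
```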

More meaningful, though, will be the alignment of China’s private sector behind China’s chip companies: TSMC didn’t only need ASML; it also needed Apple and AMD and Nvidia, end users who were willing both to pay for performance and to work deeply with TSMC to figure out generation after generation of faster chips. Tencent and Alibaba and Baidu will now join Huawei in being the China chip industry’s most demanding customers, in the best possible sense.

China’s Trailing Edge

There is one more advantage China has: remember all of those old fabs that TSMC is still operating? It turns out that as more and more products incorporate microprocessors, demand for trailing edge chips is exploding. This was seen most clearly during the pandemic, when U.S. automakers, who had foolishly canceled their chip orders at the onset, suddenly found themselves at the back of the line as demand for basic chips skyrocketed.

In the end it was China that picked up a lot of the slack: the country’s commitment to building its own semiconductor industry is not a new one (just much more pressing), and part of walking the path I detailed above is building more basic chips using older technologies. China’s share of >45 nanometer chips was 23% in 2019, and is probably over 35% today; its share of 28-45 nanometer chips was 19% in 2019 and is probably approaching 30% today. Moreover, these chips still make up most of the volume for the industry as a whole: when you see charts like this, which measure market share by revenue, keep in mind that China has achieved 9% market share with low-priced chips:

China's increasing share of chips by revenue

The Biden administration’s sanctions are designed to not touch this part of the industry: the limitations are on high end fabs and the equipment and people that go into them, not trailing edge fabs that make up most of this volume. There is good reason for this: these trailing edge factories are still using a lot of U.S. equipment; for most equipment makers China is responsible for around a third of their revenue. That means cutting off trailing edge fabs would have two deleterious effects on the U.S.: a huge number of the products U.S. consumers buy would falter for lack of chips, even as the same U.S. companies that have built the advantage the administration is seeking to exploit would have their revenue (and future ability to invest in R&D) impaired.

It’s worth pointing out, though, that this is producing a new kind of liability for the U.S., and potentially more danger for Taiwan.

Go back to Intel’s strategy of selling off and/or reusing its old fabs, which, again, made sense given the path Intel started down decades ago: it means that Intel, unlike TSMC, doesn’t have any trailing edge capacity (outside of its pending acquisition of Tower Semiconductor). GlobalFoundries, the U.S.’s other foundry, had the same model as Intel while it was the manufacturing arm of AMD; GlobalFoundries acquired trailing edge capacity with its acquisition of Chartered Semiconductor, but there is a reason why the U.S. >45 nanometer market share was only 9% in 2019 (and likely lower today), and 28-45 nanometer market share was a mere 6% (and again, likely lower today).

Again, these aren’t difficult chips to make, but that is precisely why it makes little sense to build new trailing edge foundries in the U.S.: Taiwan already has it covered (with the largest market share in both categories), and China has the motivation to build more just so it can learn.

What, though, if TSMC were taken off the board?

Much of the discussion around a potential invasion of Taiwan, which would destroy TSMC (foundries don’t do well in wars), centers on TSMC’s lead in high end chips. That lead is real, but Intel, for all of its struggles, is only three-to-five years behind. That is a meaningful difference in terms of the processors used in smartphones, high performance computing, and AI, but the U.S. is still in the game. What would be much more difficult to replace are, paradoxically, trailing node chips, made in fabs that Intel long ago abandoned.

China, meanwhile, has had good reason to keep TSMC around, even as it built up its own trailing edge fabs: the country needs cutting edge chips, and TSMC makes them. However, if those chips are cut off, then what use is TSMC to China? This isn’t a new concern, by the way; I wrote after the U.S. imposed sanctions on Huawei:

I am, needless to say, not going to get into the finer details of the relationship between China and Taiwan (and the United States, which plays a prominent role); it is less that reasonable people may disagree and more that expecting reasonableness is probably naive. It is sufficient to note that should the United States and China ever actually go to war, it would likely be because of Taiwan.

In this TSMC specifically, and the Taiwan manufacturing base generally, are a significant deterrent: both China and the U.S. need access to the best chip maker in the world, along with a host of other high-precision pieces of the global electronics supply chain. That means that a hot war, which would almost certainly result in some amount of destruction to these capabilities, would be devastating…one of the risks of cutting China off from TSMC is that the deterrent value of TSMC’s operations is diminished.

My worry is that this excerpt didn’t go far enough: the more that China builds up its chip capabilities — even if that is only at trailing nodes — the more motivation there is to make TSMC a target, not only to deny the U.S. its advanced capabilities, but also the basic chips that are more integral to everyday life than we ever realized.

MAD Chips

So is this chip ban the right move?

In the medium term, the impacts will be significant, particularly in terms of the stated target of these sanctions: AI. Only now is it becoming possible to manufacture intelligence, and the means to do so is incredibly processor intensive, in terms of both quality and quantity. Moreover, not only does AI figure to loom large in military applications, but it is also likely to spur innovation in its own right, perhaps even in terms of figuring out how to keep pushing the frontier of chip design.

In the long run, meanwhile, the U.S. may have given up what would have been, thanks to the sheer amount of cost and learning curve distance involved, a permanent economic advantage. Absent politics there simply is no reason to compete with TSMC or ASML or any of the other specialized parts of the supply chain; it would simply be easier to buy instead of build. Now, though, it is possible to envision a future where China undercuts U.S. companies in chips just like they once did in more labor-intensive industries, even as its own AI capabilities catch up and, given China’s demonstrated willingness to use technology in deeply intrusive ways, potentially surpass the West with its concerns about privacy and property rights.

The big question that I am raising in this article is the short run: while I have spent most of the last two years cautioning Americans who thought Taiwan was Thailand to not go from 0 to 100 in terms of the China threat, this move has in fact raised my concern level significantly. I am still, on balance, skeptical about a conflict, thanks in large part to how intertwined the U.S. and Chinese economies still are: any conflict would be mutually assured economic destruction.

Chips did, until three weeks ago, fall under the same paradigm; I wrote earlier this year in Tech and War:

This point applies to semiconductors broadly: as long as China needs U.S. technology or TSMC manufacturing, it is heavily incentivized to not take action against Taiwan; when and if China develops its own technology, whether now or many years from now, that deterrence is no longer a factor. In other words, the short-term and longer-term are in opposition to the medium-term…

There is no obvious answer, and it’s worth noting that the historical pattern — i.e. the Cold War — is a complete separation of trade and technology. That is one possible path, that we may fall into by default. It’s worth remembering, though, that dividers in the street are no way to live, and while most U.S. tech companies have flexed their capabilities, the most impressive tech of all is attractive enough and irreplaceable enough that it could still create dependencies that lead to squabbles but not another war.

Those dependencies are being severed; hopefully we still find sufficient reason to go no further than squabbles.


  1. The very first Nvidia chips were manufactured by SGS-Thomson Microelectronics, but they have been made mostly by TSMC from the original GeForce on