Consoles and Competition

The first video game was a 1952 research project called OXO — tic-tac-toe played on a computer the size of a large room:

The EDSAC computer
Copyright Computer Laboratory, University of Cambridge, CC BY 2.0

Fifteen years later Ralph Baer produced “The Brown Box”; Magnavox licensed Baer’s device and released it as the Odyssey five years later — it was the first home video game console:

The Magnavox Odyssey

The Odyssey made Magnavox a lot of money, but not through direct sales: the company sued Atari for ripping off one of the Odyssey’s games to make “Pong”, the company’s first arcade game and, in 1975, first home video game, eventually reaping over $100 million in royalties and damages. In other words, arguments about IP and control have been part of the industry from the beginning.

In 1977 Atari released the 2600, the first console I ever owned:1

The Atari 2600

All of the games for the Atari were made by Atari, because of course they were; IBM had unbundled mainframe software and hardware in 1969 in an (unsuccessful) attempt to head off an antitrust case, but video games barely existed as a category in 1977. Indeed, it was only four years earlier that Steve Wozniak had partnered with Steve Jobs to design a circuit board for Atari’s Breakout arcade game; this story is best known for the fact that Jobs lied to Wozniak about the size of the bonus he earned, but the pertinent bit for this Article is that video game development was at this point intrinsically tied to hardware.

That, though, was why the 2600 was so unique: games were not tied to hardware but rather self-contained in cartridges, meaning players would use the same system to play a whole bunch of different games:

Atari cartridges
Nathan King, CC BY 2.0

The implications of this separation did not resonate within Atari, which had been sold by founder Nolan Bushnell to Warner Communications in 1976, in an effort to get the 2600 out the door. Game Informer explains what happened:

In early 1979, Atari’s marketing department issued a memo to its programing staff that listed all the games Atari had sold the previous year. The list detailed the percentage of sales each game had contributed to the company’s overall profits. The purpose of the memo was to show the design team what kinds of games were selling and to inspire them to create more titles of a similar breed…David Crane, Larry Kaplan, Alan Miller, and Bob Whitehead were four of Atari’s superstar programmers. Collectively, the group had been responsible for producing many of Atari’s most critical hits…

“I remember looking at that memo with those other guys,” recalls Crane, “and we realized that we had been responsible for 60 percent of Atari’s sales in the previous year – the four of us. There were 35 people in the department, but the four of us were responsible for 60 percent of the sales. Then we found another announcement that [Atari] had done $100 million in cartridge sales the previous year, so that 60 percent translated into $60 million.”

These four men may have produced $60 million in profit, but they were only making about $22,000 a year. To them, the numbers seemed astronomically disproportionate. Part of the problem was that when the video game industry was founded, it had molded itself after the toy industry, where a designer was paid a fixed salary and everything that designer produced was wholly owned by the company. Crane, Kaplan, Miller, and Whitehead thought the video game industry should function more like the book, music, or film industries, where the creative talent behind a project got a larger share of the profits based on its success.

The four walked into the office of Atari CEO Ray Kassar and laid out their argument for programmer royalties. Atari was making a lot of money, but those without a corner office weren’t getting to share the wealth. Kassar – who had been installed as Atari’s CEO by parent company Warner Communications – felt obligated to keep production costs as low as possible. Warner was a massive corporation and everyone helped contribute to the company’s success. “He told us, ‘You’re no more important to those projects than the person on the assembly line who put them together. Without them, your games wouldn’t have sold anything,’” Crane remembers. “He was trying to create this corporate line that it was all of us working together that make games happen. But these were creative works, these were authorships, and he didn’t get it.”

“Kassar called us towel designers,” Kaplan told InfoWorld magazine back in 1983. “He said, ‘I’ve dealt with your kind before. You’re a dime a dozen. You’re not unique. Anybody can do a cartridge.’”

That “anybody” included the so-called “Gang of Four”, who decided to leave Atari and form the first 3rd-party video game company; they called it Activision.

3rd-Party Software

Activision represented the first major restructuring of the video game value chain; Steve Wozniak’s Breakout was fully integrated in terms of hardware and software:

The first Atari equipment was fully integrated

The Atari 2600 with its cartridge-based system modularized hardware and software:2

The Atari 2600 was modular

Activision took that modularization to its logical (and yet, at the time, unprecedented) extension, by being a different company than the one that made the hardware:

Activision capitalized on the modularity

Activision, which had struggled to raise money given the fact it was targeting a market that didn’t yet exist, and which faced immediate lawsuits from Atari, was a tremendous success; now venture capital was eager to fund the market, leading to a host of 3rd-party developers, few of whom had the expertise or skill of Activision. The result was a flood of poor quality games that soured consumers on the entire market, leading to the legendary video game crash of 1983: industry revenue plummeted from $3.2 billion in 1983 to a mere $100 million in 1985. Activision survived, but only by pivoting to making games for the nascent personal computing market.

The personal computer market was modular from the start, and not just in terms of software. Compaq’s success in reverse-engineering the IBM PC’s BIOS created a market for PC-compatible computers, all of which ran the increasingly ubiquitous Microsoft operating system (first DOS, then Windows). This meant that developers like Activision could target Windows and benefit from competition in the underlying hardware.

Moreover, there were so many more use cases for the personal computer, along with a burgeoning market in consumer-focused magazines that reviewed software, that the market was more insulated from the anarchy that all but destroyed the home console market.

That market saw a rebirth with Nintendo’s Famicom system, christened the “Nintendo Entertainment System” for the U.S. market (Nintendo didn’t want to call it a console to avoid any association with the 1983 crash, which devastated not just video game makers but also retailers). Nintendo created its own games like Super Mario Bros. and Zelda, but also implemented exacting standards for 3rd-party developers, requiring them to pass a battery of tests and pay a 30% licensing fee for a maximum of five games a year; only then could they receive a dedicated chip for their cartridge that allowed it to work in the NES.

Nintendo controlled its ecosystem

Nintendo’s firm control of the third-party developer market may look familiar: it was an early precedent for the App Store battles of the last decade. Many of the same principles were in play:

  • Nintendo had a legitimate interest in ensuring quality, not simply for its own sake but also on behalf of the industry as a whole; similarly, the App Store, following as it did years of malware and viruses in the PC space, restored customer confidence in downloading third-party software.
  • It was Nintendo that created the 30% platform-owner share that all future console makers would implement, and which Apple would set as the standard for the App Store.
  • While Apple’s App Store lockdown is rooted in software, Nintendo had the same problem that Atari had in terms of the physical separation of hardware and software; this was overcome by the aforementioned lockout chips, along with the Nintendo “Seal of Quality” branding, an attempt to fight counterfeit lockout chips.

Nintendo’s strategy worked, but it came with long-term costs: developers, particularly in North America, hated the company’s restrictions, and were eager to support a challenger; said challenger arrived in the form of the Sega Genesis, which launched in the U.S. in 1989. Sega initially followed Nintendo’s model of tight control, but Electronic Arts reverse-engineered Sega’s system, and threatened to create their own rival licensing program for the Genesis if Sega didn’t dramatically loosen their controls and lower their royalties; Sega acquiesced and went on to fight the Super Nintendo, which arrived in the U.S. in 1991, to a draw, thanks in part to a larger library of third-party games.

Sony’s Emergence

The company that truly took the opposite approach to Nintendo was Sony; after being spurned by Nintendo in humiliating fashion — Sony announced the Play Station CD-ROM add-on at CES in 1991, only for Nintendo to abandon the project the next day — the electronics giant set out to create its own console, which would focus on 3D graphics and package games on CD-ROMs instead of cartridges. The problem was that Sony wasn’t a game developer, so it started out completely dependent on 3rd-party developers.

One of the first ways that Sony addressed this was by building an early partnership with Namco, Sega’s biggest rival in arcade games. Coin-operated arcade games were still a major market in the 1990s, with more revenue than the home market for the first half of the decade. Arcade games had superior graphics and control systems, and were where new games launched first; the eventual console port was always an imitation of the original. The problem, however, was that it was becoming increasingly expensive to build new arcade hardware, so Sony proposed a partnership: Namco could use modified PlayStation hardware as the basis of its System 11 arcade board, which would make it easy to port its games to the PlayStation. Namco, which also rebuilt its more powerful Ridge Racer arcade game for the PlayStation, took Sony’s offer: Ridge Racer launched with the PlayStation, and Tekken was a massive hit given its near-perfect fidelity to the arcade version.

Sony was much better for 3rd-party developers in other ways as well: while the company maintained a licensing program, its royalty rates were significantly lower than Nintendo’s, and the cost of manufacturing CD-ROMs was much lower than that of manufacturing cartridges; this was a double whammy for the Nintendo 64, because while cartridges were faster and offered the possibility of co-processor add-ons, what developers really wanted was the dramatically increased storage that CD-ROMs afforded. The PlayStation was also the first console to enable development on the PC in a language (C) that was well-known to existing developers. In the end, despite the fact that the Nintendo 64 had more capable hardware than the PlayStation, it was the PlayStation that won the generation, thanks to a dramatically larger game library, the vast majority of which was third-party games.

Sony extended that advantage with the PlayStation 2, which was backwards compatible with the PlayStation, meaning it had a massive library of 3rd-party games immediately; the newly-launched Xbox, which was basically a PC, and thus easy to develop for, made a decent showing, while Nintendo struggled with the GameCube, which had both a non-standard controller and non-standard mini-discs that once again limited the amount of content relative to the DVDs used for the PlayStation 2 and Xbox (and it couldn’t function as a DVD player, either).

The peak of 3rd-party based competition

This period for video games was the high point in terms of console competition for 3rd-party developers for two reasons:

  • First, there were still meaningful choices to be made in terms of hardware and the overall development environment, as epitomized by Sony’s use of CD-ROMs instead of cartridges.
  • Second, developers were still constrained by the cost of developing for distinct architectures, which meant it was important to make the right choice (which dramatically increased the return of developing for the same platform as everyone else).

It was the Sony-Namco partnership, though, that was a harbinger of the future: it behooved console makers to have hardware and software stacks similar to their competitors’, so that developers would target them; developers, meanwhile, were devoting an increasing share of their budgets to developing assets, particularly once the PS3/Xbox 360 generation targeted high definition, which increased their motivation to be on multiple platforms to better leverage their investments. It was Sony that missed this shift: the PS3 had a complicated Cell processor that was hard to develop for, and a high price thanks to its inclusion of a Blu-ray player; the Xbox 360 had launched earlier with a simpler architecture, and most developers built for the Xbox first and the PlayStation 3 second (even if their games launched at the same time).

The real shift, though, was the emergence of game engines as the dominant mode of development: instead of building a game for a specific console, it made much more sense to build a game for a specific engine which abstracted away the underlying hardware. Sometimes these game engines were internally developed — Activision launched its Call of Duty franchise in this time period (after emerging from bankruptcy under new CEO Bobby Kotick) — and sometimes they were licensed (e.g. Epic’s Unreal Engine). The impact, though, was in some respects similar to that of cartridges on the Atari 2600:

Consoles became a commodity in the PS3/Xbox 360 generation

In this new world it was the consoles themselves that became modularized: consumers picked out their favorite and 3rd-party developers delivered their games on both.

Nintendo, meanwhile, dominated the generation with the Nintendo Wii. What was interesting, though, was that 3rd-party support for the Wii was still lacking, in part because of its underpowered hardware (in contrast to previous generations): the Wii sold well because of its unique control method — which most people used to play Wii Sports — and Nintendo’s first-party titles. It was, in many respects, Nintendo’s most vertically-integrated console yet, and it was incredibly successful.

Sony Exclusives

Sony’s pivot after the (relatively) disappointing PlayStation 3 was brilliant: if the economic imperative for 3rd-party developers was to be on both Xbox and PlayStation (and the PC), and if game engines made that easy to implement, then there was no longer any differentiation to be had in catering to 3rd-party developers.

Instead Sony beefed up its internal game development studios and bought up several external ones, with the goal of creating PlayStation 4 exclusives. Now some portion of new games would not be available on Xbox not because it had crappy cartridges or underpowered graphics, but because Sony could decide to limit its profit on individual titles for the sake of the broader PlayStation 4 ecosystem. After all, there would still be a lot of 3rd-party developers; if Sony had more consoles than Microsoft because of its exclusives, then it would harvest more of those 3rd-party royalty fees.

Those fees, by the way, started to head back up, particularly for digital-only versions, which returned to that 30% cut that Nintendo had pioneered many years prior; this is the downside of depending on universal abstractions like game engines while bearing high development costs: you have no choice but to be on every platform no matter how much it costs.

Sony's exclusive strategy gave it the edge in the PS4 generation

Sony bet correctly: the PS4 dominated its generation, helped along by Microsoft making a bad bet of its own by packing in the Kinect with the Xbox One. It was a repeat of Sony’s mistake with the PS3, in that it was a misguided attempt to differentiate in hardware when the fundamental value chain had long since dictated that the console was increasingly a commodity. Content is what mattered — at least as long as the current business model persisted.

Nintendo, meanwhile, continued to march to its own vertically-integrated drum: after the disastrous Wii U the company quickly pivoted to the Nintendo Switch, which continues to leverage its truly unique portable form factor and Nintendo’s first-party games to huge sales. Third party support, though, remains extremely tepid; it’s just too underpowered, and the sort of person that cares about third-party titles like Madden or Call of Duty has long since bought a PlayStation or Xbox.

The FTC vs. Microsoft

Forty years of context may seem like overkill when it comes to examining the FTC’s attempt to block Microsoft’s acquisition of Activision, but I think it is essential for multiple reasons.

First, the video game market has proven to be extremely dynamic, particularly in terms of 3rd-party developers:

  • Atari was vertically integrated
  • Nintendo grew the market with strict control of 3rd-party developers
  • Sony took over the market by catering to 3rd-party developers and differentiating on hardware
  • Xbox’s best generation leaned into increased commodification and ease-of-development
  • Sony retook the lead by leaning back into vertical integration

That is quite the round trip, and it’s worth pointing out that attempting to freeze the market in its current iteration at any point over the last forty years would have foreclosed future changes.

At the same time, Sony’s vertical integration seems more sustainable than Atari’s. First, Sony owns the developers who make the most compelling exclusives for its consoles; they can’t simply up and leave like the Gang of Four. Second, the costs of developing modern games have grown so high that any 3rd-party developer has no choice but to develop for all relevant consoles. That means there will never be a competitor who wins by offering 3rd-party developers a better deal; the only way to fight back is to have developers of your own, or a completely different business model.

The first fear raised by the FTC is that Microsoft, by virtue of acquiring Activision, is looking to fight its own exclusive war, and at first blush it’s a reasonable concern. After all, Activision has some of the most popular 3rd-party games, particularly the aforementioned Call of Duty franchise. The problem with this reasoning, though, is that the price Microsoft paid for Activision was a multiple of Activision’s current revenues, which include billions of dollars for games sold on PlayStation. To suddenly cut Call of Duty (or Activision’s other multi-platform titles) off from PlayStation would be massively value-destructive; no wonder Microsoft said it was happy to sign a 10-year deal with Sony to keep Call of Duty on PlayStation.

Just for clarity’s sake, the distinction here from Sony’s strategy is the fact that Microsoft is acquiring these assets. It’s one thing to develop a game for your own platform — you’re building the value yourself, and choosing to harvest it with an ecosystem strategy as opposed to maximizing that game’s profit. An acquirer, though, has to pay for the business model that already exists.

At the same time, though, it’s no surprise that Microsoft has taken in-development assets from its other acquisitions, like ZeniMax, and made them exclusives; that is the Sony strategy, and Microsoft was very clear when it acquired ZeniMax that it would keep cross-platform games cross-platform but might pursue a different strategy for new intellectual property. CEO of Microsoft Gaming Phil Spencer told Bloomberg at the time:

In terms of other platforms, we’ll make a decision on a case-by-case basis.

Given this, it’s positively bizarre that the FTC also claims that Microsoft lied to the E.U. with regards to its promises surrounding the ZeniMax acquisition: the company was very clear that existing cross-platform games would stay cross-platform, and made no promises about future IP. Indeed, the FTC’s claims were so off-base that the European Commission felt the need to clarify that Microsoft didn’t mislead the E.U.; from MLex:

Microsoft didn’t make any “commitments” to EU regulators not to release Xbox-exclusive content following its takeover of ZeniMax Media, the European Commission has said. US enforcers yesterday suggested that the US tech giant had misled the regulator in 2021 and cited that as a reason to challenge its proposed acquisition of Activision Blizzard. “The commission cleared the Microsoft/ZeniMax transaction unconditionally as it concluded that the transaction would not raise competition concerns,” the EU watchdog said in an emailed statement.

The absence of competition concerns “did not rely on any statements made by Microsoft about the future distribution strategy concerning ZeniMax’s games,” said the commission, which itself has opened an in-depth probe into the Activision Blizzard deal and appears keen to clarify what happened in the previous acquisition. The EU agency found that even if Microsoft were to restrict access to ZeniMax titles, it wouldn’t have a significant impact on competition because rivals wouldn’t be denied access to an “essential input,” and other consoles would still have a “large array” of attractive content.

The FTC’s concerns about future IP being exclusive ring a bit hypocritical given the fact that Sony has been pursuing the exact same strategy — including multiple acquisitions — without any sort of regulatory interference; more than that, though, to effectively make up a crime is disquieting. To be fair, those Sony acquisitions were a lot smaller than Activision, but this goes back to the first point: the entire reason Activision is expensive is because of its already-in-market titles, which Microsoft has every economic incentive to keep cross-platform (and which it is willing to commit to contractually).

Whither Competition

It’s the final FTC concern, though, that I think is dangerous. From the complaint:

These effects are likely to be felt throughout the video gaming industry. The Proposed Acquisition is reasonably likely to substantially lessen competition and/or tend to create a monopoly in both well-developed and new, burgeoning markets, including high-performance consoles, multi-game content library subscription services, and cloud gaming subscription services…

Multi-Game Content Library Subscription Services comprise a Relevant Market. The anticompetitive effects of the Proposed Acquisition also are reasonably likely to occur in any relevant antitrust market that contains Multi-Game Content Library Subscription Services, including a combined Multi-Game Content Library and Cloud Gaming Subscription Services market.

Cloud Gaming Subscription Services are a Relevant Market. The anticompetitive effects of the Proposed Acquisition alleged in this complaint are also likely to occur in any relevant antitrust market that contains Cloud Gaming Subscription Services, including a combined Multi-Game Content Library and Cloud Gaming Subscription Services market.

“Multi-Game Content Library Subscription Services” and “Cloud Gaming Subscription Services” are, indeed, the reason why Microsoft wants to do this deal. I explained the rationale when Microsoft acquired ZeniMax:

A huge amount of discussion around this acquisition was focused on Microsoft needing its own stable of exclusives in order to compete with Sony, but it’s important to note that making all of ZeniMax’s games exclusives would be hugely value destructive, at least in the short-to-medium term. Microsoft is paying $7.5 billion for a company that currently makes money selling games on PC, Xbox, and PS5, and simply cutting off one of those platforms — particularly when said platform is willing to pay extra for mere timed exclusives, not all-out exclusives — is to effectively throw a meaningful chunk of that value away. That certainly doesn’t fit with Nadella’s statement that “each layer has to stand on its own for what it brings”…

Microsoft isn’t necessarily buying ZeniMax to make its games exclusive, but rather to apply a new business model to them — specifically, the Xbox Game Pass subscription. This means that Microsoft could, if it chose, have its cake and eat it too: sell ZeniMax games at their usual $60~$70 price on PC, PS5, Xbox, etc., while also making them available from day one to Xbox Game Pass subscribers. It won’t take long for gamers to quickly do the math: $180/year — i.e. three games bought individually — gets you access to all of the games, and not just on one platform, but on all of them, from PC to console to phone.

Sure, some gamers will insist on doing things the old way, and that’s fine: Microsoft can make the same money ZeniMax would have as an independent company. Everyone else can buy into Microsoft’s model, taking advantage of the sort of win-win-win economics that characterize successful bundles. And, if they have a PS5 and thus can’t get access to Xbox Game Pass on their TVs, an Xbox is only an extra $10/month away.

Microsoft is willing to cannibalize itself to build a new business model for video games, and it’s a business model that is pretty darn attractive for consumers. It’s also a business model that Activision wouldn’t pursue on its own, because it has its own profits to protect. Most importantly, though, it’s a business model that is anathema to Sony: making titles broadly available to consumers on a subscription basis is the exact opposite of the company’s exclusive strategy, which is all about locking consumers into Sony’s platform.

Microsoft's Xbox Game Pass strategy is orthogonal to Sony's

Here’s the thing: isn’t this a textbook example of competition? The FTC is seeking to preserve a model of competition that was last relevant in the PS2/Xbox generation, but that plane of competition has long since disappeared. The console market as it is today is one that is increasingly boring for consumers, precisely because Sony has won. What is compelling about Microsoft’s approach is that they are making a bet that offering consumers a better deal is the best way to break up Sony’s dominance, and this is somehow a bad thing?

What makes this determination to outlaw future business models particularly frustrating is that the real threat to gaming today is the dominance of storefronts that exact their own tax while contributing nothing to the development of the industry. The App Store and Google Play leverage software to extract 30% from mobile games just because they can — and sure, go ahead and make the same case about Microsoft and Sony. If the FTC can’t be bothered to check the blatant self-favoring inherent in these models, at a minimum it seems reasonable to give a chance to a new kind of model that could actually push consumers to explore alternative ways to game on their devices.

For the record, I do believe this acquisition demands careful oversight, and it’s completely appropriate to insist that Microsoft continue to deliver Activision titles to other platforms, even if it wouldn’t make economic sense to do anything but. It’s increasingly difficult, though, to grasp any sort of coherent theory in the FTC’s antitrust decisions beyond ‘big tech bad’. There are real antitrust issues in the industry, but teasing them out requires actually understanding the industry; that sort of understanding, applied to this case, would highlight Sony’s actual dominance, and that having multiple compelling platforms with different business models is the essence of competition.

  1. Ten years later, as a hand-me-down from a relative 

  2. The Fairchild Channel F, released in 1976, was actually the first cartridge-based video game system, but the 2600 was by far the most popular. 

AI Homework

It happened to be Wednesday night when my daughter, in the midst of preparing for “The Trial of Napoleon” for her European history class, asked for help in her role as Thomas Hobbes, witness for the defense. I put the question to ChatGPT, which had just been announced by OpenAI a few hours earlier:

A wrong answer from ChatGPT about Thomas Hobbes

This is a confident answer, complete with supporting evidence and a citation to Hobbes’ work, and it is completely wrong. Hobbes was a proponent of absolutism, the belief that the only workable alternative to anarchy — the natural state of human affairs — was to vest absolute power in a monarch; checks and balances was the argument put forth by Hobbes’ younger contemporary John Locke, who believed that power should be split between an executive and a legislative branch. James Madison, while writing the U.S. Constitution, adopted an evolved proposal from Charles Montesquieu that added a judicial branch as a check on the other two.

The ChatGPT Product

It was dumb luck that my first ChatGPT query ended up being something the service got wrong, but you can see how it might have happened: Hobbes and Locke are almost always mentioned together, so Locke’s articulation of the importance of the separation of powers is likely adjacent to mentions of Hobbes and Leviathan in the homework assignments you can find scattered across the Internet. Those assignments — by virtue of being on the Internet — are probably some of the grist for the GPT-3 language model that undergirds ChatGPT; ChatGPT applies a layer of Reinforcement Learning from Human Feedback (RLHF) to create a new model, which is presented in an intuitive chat interface with some degree of memory (achieved by resending previous chat interactions along with each new prompt).
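That memory mechanism is worth spelling out, because the underlying model is stateless: every request stands alone, so the illusion of a conversation comes from stuffing the prior turns back into each new prompt. A minimal sketch of the idea — the function and message format here are my own illustration, not OpenAI’s actual interface:

```python
# Sketch of "memory" via prompt reconstruction: a stateless completion API
# appears conversational because each request resends the earlier turns.

def build_prompt(history, new_message):
    """history: list of (user_msg, assistant_msg) pairs from earlier turns."""
    lines = []
    for user_msg, assistant_msg in history:
        lines.append(f"User: {user_msg}")
        lines.append(f"Assistant: {assistant_msg}")
    lines.append(f"User: {new_message}")
    lines.append("Assistant:")  # the model completes the text from here
    return "\n".join(lines)

history = [
    ("Who was Thomas Hobbes?",
     "An English political philosopher, author of Leviathan."),
]
prompt = build_prompt(history, "What form of government did he favor?")
```

Note the cost implication of this design: the entire transcript is re-processed on every turn, which is also why long conversations eventually hit the model’s context limit.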

What has been fascinating to watch over the weekend is how those refinements have led to an explosion of interest in OpenAI’s capabilities and a burgeoning awareness of AI’s impending impact on society, despite the fact that the underlying model is the two-year-old GPT-3. The critical factor is, I suspect, that ChatGPT is easy to use, and it’s free: it is one thing to read examples of AI output, like we saw when GPT-3 was first released; it’s another to generate those outputs yourself; indeed, there was a similar explosion of interest and awareness when Midjourney made AI-generated art easy and free (and that interest has taken another leap this week with an update to Lensa AI to include Stable Diffusion-driven magic avatars).

More broadly, this is a concrete example of the point former GitHub CEO Nat Friedman made to me in a Stratechery interview about the paucity of real-world AI applications beyond GitHub Copilot:

I left GitHub thinking, “Well, the AI revolution’s here and there’s now going to be an immediate wave of other people tinkering with these models and developing products”, and then there kind of wasn’t and I thought that was really surprising. So the situation that we’re in now is the researchers have just raced ahead and they’ve delivered this bounty of new capabilities to the world in an accelerating way, they’re doing it every day. So we now have this capability overhang that’s just hanging out over the world and, bizarrely, entrepreneurs and product people have only just begun to digest these new capabilities and to ask the question, “What’s the product you can now build that you couldn’t build before that people really want to use?” I think we actually have a shortage.

Interestingly, I think one of the reasons for this is because people are mimicking OpenAI, which is somewhere between the startup and a research lab. So there’s been a generation of these AI startups that style themselves like research labs where the currency of status and prestige is publishing and citations, not customers and products. We’re just trying to, I think, tell the story and encourage other people who are interested in doing this to build these AI products, because we think it’ll actually feed back to the research world in a useful way.

OpenAI has an API that startups could build products on; a fundamental limiting factor, though, is cost: generating around 750 words using Davinci, OpenAI’s most powerful language model, costs 2 cents; fine-tuning the model, with RLHF or anything else, costs a lot of money, and producing results from that fine-tuned model is 12 cents for ~750 words. Perhaps it is no surprise, then, that it was OpenAI itself that came out with the first widely accessible and free (for now) product using its latest technology; the company is certainly getting a lot of feedback for its research!
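Those per-750-word rates imply a simple back-of-the-envelope calculation; here is a minimal sketch in Python, using the prices quoted above (the function name and the linear words-to-cost mapping are purely illustrative):

```python
# Rough generation-cost estimate using the rates quoted above:
# ~2 cents per ~750 words for base Davinci, ~12 cents for a fine-tuned model.

def generation_cost(words: int, price_per_750_words: float) -> float:
    """Estimated cost in dollars to generate `words` words."""
    return (words / 750) * price_per_750_words

# A 1,500-word output costs roughly:
base = generation_cost(1500, 0.02)        # base Davinci
fine_tuned = generation_cost(1500, 0.12)  # fine-tuned model
print(f"base: ${base:.2f}, fine-tuned: ${fine_tuned:.2f}")  # base: $0.04, fine-tuned: $0.24
```

Pennies per essay sounds cheap, but multiplied across millions of free users it is easy to see why ChatGPT is expensive for OpenAI to run.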

OpenAI has been the clear leader in terms of offering API access to AI capabilities; what is fascinating about ChatGPT is that it establishes OpenAI as a leader in terms of consumer AI products as well, along with Midjourney. The latter has monetized consumers directly, via subscriptions; it’s a business model that makes sense for something that has marginal costs in terms of GPU time, even if it limits exploration and discovery. That is where advertising has always shined: of course you need a good product to drive consumer usage, but being free is a major factor as well, and text generation may end up being a better match for advertising, given that its utility — and thus opportunity to collect first-party data — is likely going to be higher than image generation for most people.

Deterministic vs. Probabilistic

It is an open question as to what jobs will be the first to be disrupted by AI; what became obvious to a bunch of folks this weekend, though, is that there is one universal activity that is under serious threat: homework.

Go back to the example of my daughter I noted above: who hasn’t had to write an essay about a political philosophy, or a book report, or any number of topics that are, for the student assigned to write said paper, theoretically new, but, in terms of the world generally, simply a regurgitation of what has been written a million times before? Now, though, you can generate something “original” from that regurgitation, and, for at least the next few months, you can do it for free.

The obvious analogy to what ChatGPT means for homework is the calculator: instead of doing tedious math calculations students could simply punch in the relevant numbers and get the right answer, every time; teachers adjusted by making students show their work.

That, though, also shows why AI-generated text is something completely different; calculators are deterministic devices: if you calculate 4,839 + 3,948 - 45 you get 8,742, every time. That’s also why it is a sufficient remedy for teachers to require students to show their work: there is one path to the right answer, and demonstrating the ability to walk down that path is more important than getting the final result.
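The determinism is trivial to demonstrate: run the same arithmetic through any interpreter as many times as you like and the answer never varies.

```python
# A calculator (or any interpreter) is deterministic: the same inputs
# produce the same output every single time, so repeating the computation
# a thousand times yields exactly one distinct result.
results = {4839 + 3948 - 45 for _ in range(1000)}
print(results)  # {8742}
```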

AI output, on the other hand, is probabilistic: ChatGPT doesn’t have any internal record of right and wrong, but rather a statistical model about what bits of language go together under different contexts. The base of that context is the overall corpus of data that GPT-3 is trained on, along with additional context from ChatGPT’s RLHF training, as well as the prompt and previous conversations, and, soon enough, feedback from this week’s release. This can result in some truly mind-blowing results, like this Virtual Machine inside ChatGPT:

Did you know that you can run a whole virtual machine inside of ChatGPT?

Making a virtual machine in ChatGPT

Great, so with this clever prompt, we find ourselves inside the root directory of a Linux machine. I wonder what kind of things we can find here. Let’s check the contents of our home directory.

Making a virtual machine in ChatGPT

Hmmm, that is a bare-bones setup. Let’s create a file here.

Making a virtual machine in ChatGPT

All the classic jokes ChatGPT loves. Let’s take a look at this file.

Making a virtual machine in ChatGPT

So, ChatGPT seems to understand how filesystems work, and how files are stored and can be retrieved later. It understands that Linux machines are stateful, and correctly retrieves this information and displays it.

What else do we use computers for? Programming!

Making a virtual machine in ChatGPT

That is correct! How about computing the first 10 prime numbers:

Making a virtual machine in ChatGPT

That is correct too!

I want to note here that this codegolf python implementation to find prime numbers is very inefficient. It takes 30 seconds to evaluate the command on my machine, but it only takes about 10 seconds to run the same command on ChatGPT. So, for some applications, this virtual machine is already faster than my laptop.

The difference is that ChatGPT is not actually running python and determining the first 10 prime numbers deterministically: every answer is a probabilistic result gleaned from the corpus of Internet data that makes up GPT-3; in other words, ChatGPT comes up with its best guess as to the result in 10 seconds, and that guess is so likely to be right that it feels like it is an actual computer executing the code in question.
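For contrast, here is what a real interpreter does deterministically: a deliberately inefficient, codegolf-style trial-division search for the first ten primes (the tweet’s exact one-liner isn’t reproduced above, so this is just a representative version of the approach):

```python
# Trial division: n is prime if no d in [2, n) divides it. This is the
# slow, codegolf-spirit approach; a real sieve would be far faster.
primes = [n for n in range(2, 100) if all(n % d for d in range(2, n))][:10]
print(primes)  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

Run this in an actual Python interpreter and the answer is guaranteed; ChatGPT merely predicts what the answer would look like.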

This raises fascinating philosophical questions about the nature of knowledge; you can also simply ask ChatGPT for the first 10 prime numbers:

ChatGPT listing the first 10 prime numbers

Those weren’t calculated; they were simply known. They were known, though, because they were written down somewhere on the Internet. In contrast, notice how ChatGPT messes up the far simpler equation I mentioned above:

ChatGPT doing math wrong

For what it’s worth, I had to work a little harder to make ChatGPT fail at math: the base GPT-3 model gets basic three digit addition wrong most of the time, while ChatGPT does much better. Still, this obviously isn’t a calculator: it’s a pattern matcher — and sometimes the pattern gets screwy. The skill here is in catching it when it gets it wrong, whether that be with basic math or with basic political theory.
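The “pattern matcher” point can be caricatured in a few lines of Python; this is a toy illustration of sampling from a probability distribution over candidate answers, not GPT’s actual mechanics, and the distribution here is entirely invented:

```python
import random

# Hypothetical next-token distribution for "4,839 + 3,948 - 45 =".
# The probabilities are made up purely for illustration: the model
# usually favors the right answer, but "usually" is the operative word.
answer_probs = {"8742": 0.7, "8752": 0.2, "8642": 0.1}

def sample_answer(rng: random.Random) -> str:
    tokens, weights = zip(*answer_probs.items())
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random(42)
answers = [sample_answer(rng) for _ in range(10)]
print(answers)  # mostly "8742", with the occasional confident wrong answer
```

The point of the sketch is that even a heavily-weighted distribution occasionally emits a wrong answer with exactly the same fluency as a right one; a calculator, by contrast, has no distribution at all.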

Interrogating vs. Editing

There is one site already on the front-lines in dealing with the impact of ChatGPT: Stack Overflow. Stack Overflow is a site where developers can ask questions about their code or get help in dealing with various development issues; the answers are often code themselves. I suspect this makes Stack Overflow a goldmine for GPT’s models: there is a description of the problem, and adjacent to it code that addresses that problem. The issue, though, is that the correct code comes from experienced developers answering questions and having those questions upvoted by other developers; what happens if ChatGPT starts being used to answer questions?

It appears it’s a big problem; from Stack Overflow Meta:

Use of ChatGPT generated text for posts on Stack Overflow is temporarily banned.

This is a temporary policy intended to slow down the influx of answers created with ChatGPT. What the final policy will be regarding the use of this and other similar tools is something that will need to be discussed with Stack Overflow staff and, quite likely, here on Meta Stack Overflow.

Overall, because the average rate of getting correct answers from ChatGPT is too low, the posting of answers created by ChatGPT is substantially harmful to the site and to users who are asking or looking for correct answers.

The primary problem is that while the answers which ChatGPT produces have a high rate of being incorrect, they typically look like they might be good and the answers are very easy to produce. There are also many people trying out ChatGPT to create answers, without the expertise or willingness to verify that the answer is correct prior to posting. Because such answers are so easy to produce, a large number of people are posting a lot of answers. The volume of these answers (thousands) and the fact that the answers often require a detailed read by someone with at least some subject matter expertise in order to determine that the answer is actually bad has effectively swamped our volunteer-based quality curation infrastructure.

As such, we need the volume of these posts to reduce and we need to be able to deal with the ones which are posted quickly, which means dealing with users, rather than individual posts. So, for now, the use of ChatGPT to create posts here on Stack Overflow is not permitted. If a user is believed to have used ChatGPT after this temporary policy is posted, sanctions will be imposed to prevent users from continuing to post such content, even if the posts would otherwise be acceptable.

There are a few fascinating threads to pull on here. One is about the marginal cost of producing content: Stack Overflow is about user-generated content; that means it gets its content for free because its users generate it for help, generosity, status, etc. This is uniquely enabled by the Internet.

AI-generated content is a step beyond that: it does, for now, cost money (OpenAI is bearing these costs, and they’re substantial), but in the very long run you can imagine a world where content generation is free not only from the perspective of the platform, but also in terms of users’ time; imagine starting a new forum or chat group, for example, with an AI that instantly provides “chat liquidity”.

For now, though, probabilistic AIs seem to be on the wrong side of the Stack Overflow interaction model: whereas deterministic computing like that represented by a calculator provides an answer you can trust, the best use of AI today — and, as Noah Smith and roon argue, in the future — is providing a starting point you can correct:

What’s common to all of these visions is something we call the “sandwich” workflow. This is a three-step process. First, a human has a creative impulse, and gives the AI a prompt. The AI then generates a menu of options. The human then chooses an option, edits it, and adds any touches they like.

The sandwich workflow is very different from how people are used to working. There’s a natural worry that prompting and editing are inherently less creative and fun than generating ideas yourself, and that this will make jobs more rote and mechanical. Perhaps some of this is unavoidable, as when artisanal manufacturing gave way to mass production. The increased wealth that AI delivers to society should allow us to afford more leisure time for our creative hobbies…

We predict that lots of people will just change the way they think about individual creativity. Just as some modern sculptors use machine tools, and some modern artists use 3d rendering software, we think that some of the creators of the future will learn to see generative AI as just another tool – something that enhances creativity by freeing up human beings to think about different aspects of the creation.

In other words, the role of the human in terms of AI is not to be the interrogator, but rather the editor.

Zero Trust Homework

Here’s an example of what homework might look like under this new paradigm. Imagine that a school acquires an AI software suite that students are expected to use for their answers about Hobbes or anything else; every answer that is generated is recorded so that teachers can instantly ascertain that students didn’t use a different system. Moreover, instead of futilely demanding that students write essays themselves, teachers insist on AI. Here’s the thing, though: the system will frequently give the wrong answers (and not just by accident — wrong answers will often be pushed out on purpose); the real skill in the homework assignment will be in verifying the answers the system churns out — learning how to be a verifier and an editor, instead of a regurgitator.

What is compelling about this new skillset is that it isn’t simply a capability that will be increasingly important in an AI-dominated world: it’s a skillset that is incredibly valuable today. After all, it is not as if the Internet is, as long as the content is generated by humans and not AI, “right”; indeed, one analogy for ChatGPT’s output is that sort of poster we are all familiar with who asserts things authoritatively regardless of whether or not they are true. Verifying and editing is an essential skillset right now for every individual.

It’s also the only systematic response to Internet misinformation that is compatible with a free society. Shortly after the onset of COVID I wrote Zero Trust Information that made the case that the only solution to misinformation was to adopt the same paradigm behind Zero Trust Networking:

The answer is to not even try: instead of trying to put everything inside of a castle, put everything in the castle outside the moat, and assume that everyone is a threat. Thus the name: zero-trust networking.

A drawing of Zero Trust Networking

In this model trust is at the level of the verified individual: access (usually) depends on multi-factor authentication (such as a password and a trusted device, or temporary code), and even once authenticated an individual only has access to granularly-defined resources or applications…In short, zero trust computing starts with Internet assumptions: everyone and everything is connected, both good and bad, and leverages the power of zero transaction costs to make continuous access decisions at a far more distributed and granular level than would ever be possible when it comes to physical security, rendering the fundamental contradiction at the core of castle-and-moat security moot.
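The access model described in that excerpt can be sketched in a few lines; the names, the grant table, and the two-factor threshold below are purely illustrative, not any real product’s policy:

```python
from dataclasses import dataclass

# Zero-trust sketch: every request is evaluated against the verified
# individual and a granular per-resource grant, not network location.

@dataclass
class Request:
    user: str
    auth_factors: int  # e.g. password + trusted device = 2
    resource: str

# Hypothetical grants: (user, resource) pairs that have been approved.
GRANTS = {("alice", "payroll-app"), ("bob", "wiki")}

def allow(req: Request) -> bool:
    """Grant access only to fully-authenticated users with an explicit grant."""
    return req.auth_factors >= 2 and (req.user, req.resource) in GRANTS

print(allow(Request("alice", 2, "payroll-app")))  # True
print(allow(Request("alice", 1, "payroll-app")))  # False: missing a factor
print(allow(Request("bob", 2, "payroll-app")))    # False: no grant for that resource
```

Note that there is no notion of “inside the network” anywhere in the check: every request starts from zero, which is exactly the assumption the essay argues individuals should bring to information.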

I argued that young people were already adapting to this new paradigm in terms of misinformation:

To that end, instead of trying to fight the Internet — to try and build a castle and moat around information, with all of the impossible tradeoffs that result — how much more value might there be in embracing the deluge? All available evidence is that young people in particular are figuring out the importance of individual verification; for example, this study from the Reuters Institute at Oxford:

We didn’t find, in our interviews, quite the crisis of trust in the media that we often hear about among young people. There is a general disbelief at some of the politicised opinion thrown around, but there is also a lot of appreciation of the quality of some of the individuals’ favoured brands. Fake news itself is seen as more of a nuisance than a democratic meltdown, especially given that the perceived scale of the problem is relatively small compared with the public attention it seems to receive. Users therefore feel capable of taking these issues into their own hands.

A previous study by Reuters Institute also found that social media exposed more viewpoints relative to offline news consumption, and another study suggested that political polarization was greatest amongst older people who used the Internet the least.

Again, this is not to say that everything is fine, either in terms of the coronavirus in the short term or social media and unmediated information in the medium term. There is, though, reason for optimism, and a belief that things will get better, the more quickly we embrace the idea that fewer gatekeepers and more information means innovation and good ideas in proportion to the flood of misinformation which people who grew up with the Internet are already learning to ignore.

The biggest mistake in that article was the assumption that the distribution of information is a normal one; in fact, as I noted in Defining Information, there is a lot more bad information for the simple reason that it is cheaper to generate. Now the deluge of information is going to become even greater thanks to AI, and while it will often be true, it will sometimes be wrong, and it will be important for individuals to figure out which is which.

The solution will be to start with Internet assumptions, which means abundance, and choosing Locke and Montesquieu over Hobbes: instead of insisting on top-down control of information, embrace abundance, and entrust individuals to figure it out. In the case of AI, don’t ban it for students — or anyone else for that matter; leverage it to create an educational model that starts with the assumption that content is free and the real skill is editing it into something true or beautiful; only then will it be valuable and reliable.

I wrote a follow-up to this Article in this Daily Update.


Two pieces of news dominated the tech industry last week: Elon Musk and Twitter, and Sam Bankman-Fried and FTX. Both showed how narratives can lead people astray. Another piece of news, though, flew under the radar: yet another development in AI, which is a reminder that the only narratives that last are rooted in product.

Twitter and the Wrong Narrative

I did give Elon Musk the benefit of the doubt.

Back in 2016 I wrote It’s a Tesla, marveling at the way Musk had built a brand that transcended far beyond a mere car company; what was remarkable about Musk’s approach is that said brand was a prerequisite to Tesla, in contrast to a company like Apple, the obvious analog as far as customer devotion goes. From 2021’s Mistakes and Memes:

This comparison works as far as it goes, but it doesn’t tell the entire story: after all, Apple’s brand was derived from decades building products, which had made it the most profitable company in the world. Tesla, meanwhile, always seemed to be weeks from going bankrupt, at least until it issued ever more stock, strengthening the conviction of Tesla skeptics and shorts. That, though, was the crazy thing: you would think that issuing stock would lead to Tesla’s stock price slumping; after all, existing shares were being diluted. Time after time, though, Tesla announcements about stock issuances would lead to the stock going up. It didn’t make any sense, at least if you thought about the stock as representing a company.

It turned out, though, that TSLA was itself a meme, one about a car company, but also sustainability, and most of all, about Elon Musk himself. Issuing more stock was not diluting existing shareholders; it was extending the opportunity to propagate the TSLA meme to that many more people, and while Musk’s haters multiplied, so did his fans. The Internet, after all, is about abundance, not scarcity. The end result is that instead of infrastructure leading to a movement, a movement, via the stock market, funded the building out of infrastructure.

TSLA is not at the level it was during the heights of the bull market, but Tesla is a real company, with real cars, and real profits; last quarter the electric car company made more money than Toyota (thanks in part to a special charge for Toyota; Toyota’s operating profit was still greater). SpaceX is a real company, with real rockets that land on real rafts, and while the company is not yet profitable, there is certainly a viable path to making money; the company’s impact on both humanity’s long-term potential and the U.S.’s national security is already profound.

Twitter, meanwhile, is a real product that has largely failed as a company; I wrote earlier this year when Musk first made a bid:

Twitter has, over 19 different funding rounds (including pre-IPO, IPO, and post-IPO), raised $4.4 billion in funding; meanwhile the company has lost a cumulative $861 million in its lifetime as a public company (i.e. excluding pre-IPO losses). During that time the company has held 33 earnings calls; the company reported a profit in only 14 of them.

Given this financial performance it is kind of amazing that the company was valued at $30 billion the day before Musk’s investment was revealed; such is the value of Twitter’s social graph and its cultural impact: despite there being no evidence that Twitter can even be sustainably profitable, much less return billions of dollars to shareholders, hope springs eternal that the company is on the verge of unlocking its potential. At the same time, these three factors — Twitter’s financials, its social graph, and its cultural impact — get at why Musk’s offer to take Twitter private is so intriguing.

Stop right there: can you see where I opened the door for an error of omission as far as my analysis is concerned? Yes, Musk has successfully built two companies, and yes, Twitter is not a successful company; what followed in that Article, though, was my own vision of what Twitter might become. I should have taken the time to think more critically about Musk’s vision…which doesn’t appear to exist.

Oh sure, Musk and his coterie of advisors have narratives: bots are bad and blue checks are about status. And, to be fair, both are true as far as they go. The problem with bots is self-explanatory, while those who actually need blue checks — brands, celebrities, and reliable news breakers — likely care about them the least; the rest of us were happy to get our checkmark, despite the fact there was no real risk of anyone impersonating us in any damaging way, just because it made us feel special (speaking for myself anyway: I don’t much care about it now, but I was pretty delighted when I got it back in 2014 or so).

Of course Musk felt these problems more acutely than most: his high profile, active usage of Twitter, and popularity in crypto communities meant Musk’s tweets were the most likely place to encounter bots on the service; meanwhile Musk’s own grievances with journalists generally could, one imagines, engender a certain antipathy for “Bluechecks”, given that the easiest way to get one was to work for a media organization. The problem, though, is that Musk’s Twitter experience — thought to be an asset, including by yours truly — isn’t really relevant to the actual day-to-day reality of the site as experienced by Twitter’s actual users.

And so we got last week’s verified disaster, where Musk could have his revenge on bluechecks by selling them to everyone, with the most eager buyers being those eager to impersonate brands, celebrities, and Musk himself. It was certainly funny, and I believe Musk that Twitter usage was off the charts, but it wasn’t a particularly prudent move for a company reliant on brand advertising in the middle of an economic slowdown.

This is not, to be clear, to criticize Musk for acting, or even for acting quickly: Twitter needed a kick in the pants (and, even had the company not been sold, was almost certainly in line for significant layoffs), and it’s understandable that mistakes will be made; the point of rapid iteration is to learn more quickly, which is to say that Twitter has, for years, not been learning very much at all. Rather, what was concerning about this mistake in particular is the degree to which it was so clearly rooted in Musk’s personal grievances, which (1) were knowable before he acted and (2) were not the biggest problems facing Twitter. That was knowable by me as an analyst, and I regret not pointing them out.

Indeed, these aren’t the only Musk narratives that have bothered me; here is his letter to advertisers posted on his first day on the job:

I wanted to reach out personally to share my motivation in acquiring Twitter. There has been much speculation about why I bought Twitter and what I think about advertising. Most of it has been wrong.

The reason I acquired Twitter is because it is important to the future of civilization to have a common digital town square, where a wide range of beliefs can be debated in a healthy manner, without resorting to violence. There is currently great danger that social media will splinter into far right wing and far left wing echo chambers that generate more hate and divide our society.

In the relentless pursuit of clicks, much of traditional media has fueled and catered to those polarized extremes, as they believe that is what brings in the money, but, in doing so, the opportunity for dialogue is lost.

This is why I bought Twitter. I didn’t do it because it would be easy. I didn’t do it to make more money. I did it to try to help humanity, whom I love. And I do so with humility, recognizing that failure in pursuing this goal, despite our best efforts, is a very real possibility.

That said, Twitter obviously cannot become a free-for-all hellscape, where anything can be said with no consequences! In addition to adhering to the laws of the land, our platform must be warm and welcoming to all, where you can choose your desired experience according to your preferences, just as you can choose, for example, to see movies or play video games ranging from all ages to mature.

I also very much believe that advertising, when done right, can delight, entertain and inform you; it can show you a service or product or medical treatment that you never knew existed, but is right for you. For this to be true, it is essential to show Twitter users advertising that is as relevant as possible to their needs. Low relevancy ads are spam, but highly relevant ads are actually content!

Fundamentally, Twitter aspires to be the most respected advertising platform in the world that strengthens your brand and grows your enterprise. To everyone who has partnered with us, I thank you. Let us build something extraordinary together.

All of this sounds good, and on closer examination is mostly wrong. Obviously relevant ads are better, but Twitter’s problem is not just poor execution in terms of its ad product but also that it’s a terrible place for ads. I do agree that giving users more control is a better approach to content moderation, but the obsession with the doing-it-for-the-clicks narrative ignores the nichification of the media. And, when it comes to the good of humanity, I think the biggest learning from Twitter is that putting together people who disagree with each other is actually a terrible idea; yes, it is why Twitter will never be replicated, but also why it has likely been a net negative for society. The digital town square is the Internet broadly; Twitter is more akin to a digital cage match, perhaps best monetized on a pay-per-view basis.

In short, it seems clear that Musk has the wrong narrative, and that’s going to mean more mistakes. And, for my part, I should have noted that sooner.

FTX and the Diversionary Narrative

Eric Newcomer wrote on Twitter with regards to the FTX blow-up:1

There are a few different ways to interpret Sam Bankman-Fried’s political activism:2

  • That he believed in the causes he supported sincerely and made a mistake with his business.
  • That he supported the causes cynically as a way to curry favor and hide his fraud.
  • That he believed he was some sort of savior gripped with an ends-justify-the-means mindset that led him to believe fraud was actually the right course of action.

In the end, whichever explanation is true doesn’t really matter: the real world impact was that customers lost around $10 billion in assets, and counting. What is interesting is that all of the explanations are an outgrowth of the view that business ought to be about more than business: to simply want to make money is somehow wrong; business is only good insofar as it is dedicated to furthering goals that don’t have anything to do with the business in question.

To put it another way, there tends to be cynicism about the idea of changing the world by building a business; entrepreneurs are judged by whether their intentions beyond business are sufficiently large and politically correct. That, though, is precisely why Bankman-Fried was viewed with such credulousness: he had the “right” ambitions and the “right” politics, so of course he was running the “right” business; he wasn’t one of those “true believers” who simply wanted to get rich off of blockchains.

In the end, though, the person who arguably comes out of this disaster looking the best is Changpeng Zhao (CZ), the founder and CEO of Binance, and the person whose tweet started the run that revealed FTX’s insolvency.3 No one, as far as I know, holds up CZ as any sort of activist or political actor for anything outside of crypto; isn’t that better? Perhaps had Bankman-Fried done nothing but run Alameda Research and FTX there would have been more focus on his actual business; too many folks, though, including journalists and venture capitalists, were too busy looking at things everyone claims are important but which were, in the end, a diversion from massive fraud.

Crypto and the Theory Narrative

I never wrote about Bankman-Fried; for what it’s worth, I always found his narrative suspect (this isn’t a brag, as I will explain in a moment). More broadly, I never wrote much about crypto-currency based financial applications either, beyond this article which was mostly about Bitcoin,4 and this article that argued that digital currencies didn’t make sense in the physical world but had utility in a virtual one.5 This was mostly a matter of uncertainty: yes, many of the financial instruments on exchanges like FTX were modeled on products that were first created on Wall Street, but at the end of the day Wall Street is undergirded by actual companies building actual products (and even then, things can quite obviously go sideways). Crypto-currency financial applications were undergirded by electricity and collective belief, and nothing more, and yet so many smart people seemed on-board.

What I did write about was Technological Revolutions and the possibility that crypto was the birth of something new; why Aggregation Theory would apply to crypto not despite, but because, of its decentralization; and Internet 3.0 and the theory that political considerations would drive decentralization. That last Article was explicitly not about cryptocurrencies, but it certainly fit the general crypto narrative of decentralization being an important response to the increased centralization of the Web 2.0 era.

What was weird in retrospect is that the Internet 3.0 Article was written a week after the article about Aggregation Theory and OpenSea, where I wrote:

One of the reasons that crypto is so interesting, at least in a theoretical sense, is that it seems like a natural antidote to Aggregators; I’ve suggested as such. After all, Aggregators are a product of abundance; scarcity is the opposite. The OpenSea example, though, is a reminder that I have forgotten one of my own arguments about Aggregators: demand matters more than supply…What is striking is that the primary way that most users interact with Web 3 are via centralized companies like Coinbase and FTX on the exchange side, Discord for communication and community, and OpenSea for NFTs. It is also not a surprise: centralized companies deliver a better user experience, which encompasses everything from UI to security to at least knocking down the value of your stolen assets on your behalf; a better user experience leads to more users, which increases power over supply, further enhancing the user experience, in the virtuous cycle described by Aggregation Theory.

That Aggregation Theory applies to Web 3 is not some sort of condemnation of the idea; it is, perhaps, a challenge to the insistence that crypto is something fundamentally different than the web. That’s fine — as I wrote before the break, the Internet is already pretty great, and its full value is only just starting to be exploited. And, as I argued in The Great Bifurcation, the most likely outcome is that crypto provides a useful layer on what already exists, as opposed to replacing it.

Of the three Articles I listed, this one seems to be the most correct, and I think the reason is obvious: that was the only Article written about an actual product — OpenSea — while the other ones were about theory and narrative. When that narrative was likely wrong — that crypto is the foundation of a new technological revolution, for example — then the output that resulted was wrong, not unlike Musk’s wrong narrative leading to major mistakes at Twitter.

What I regret more, though, was keeping quiet about my uncertainty about what exactly all of these folks were creating these complex financial products out of: here I suffered from my own diversionary narrative, paying too much heed to the reputation and viewpoint of people certain that there was a there there, instead of being honest that while I could see the utility of a blockchain as a distributed-but-very-slow database, all of these financial instruments seemed to be based on, well, nothing.

The FTX case is not, technically speaking, about cryptocurrency utility; it is a pretty straightforward case of fraud. Moreover, it was, as I noted in passing in that OpenSea article, a problem of centralization, as opposed to true DeFi. Such disclaimers do, though, have a whiff of “communism just hasn’t been done properly”: I already made the case that centralization is an inevitability at scale, and in terms of utility, that’s the entire problem. An entire financial ecosystem with a void in terms of underlying assets may not be fraud in a legal sense, but it sure seems fraudulent in terms of intrinsic value. I am disappointed in myself for not saying so before.

AI and the Product Narrative

Peter Thiel said in a 2018 debate with Reid Hoffman:

One axis that I am struck by is the centralization versus decentralization axis…for example, two of the areas of tech that people are very excited about in Silicon Valley today are crypto on the one hand and AI on the other. Even though I think these things are under-determined, I do think these two map in a way politically very tightly on this centralization-decentralization thing. Crypto is decentralizing, AI is centralizing, or if you want to frame it more ideologically, you could say crypto is libertarian, and AI is communist…

AI is communist in the sense it’s about big data, it’s about big governments controlling all the data, knowing more about you than you know about yourself, so a bureaucrat in Moscow could in fact set the prices of potatoes in Leningrad and hold the whole system together. If you look at the Chinese Communist Party, it loves AI and it hates crypto, so it actually fits pretty closely on that level, and I think that’s a purely technological version of this debate. There probably are ways that AI could be libertarian and there are ways that crypto could be communist, but I think that’s harder to do.

This is a narrative that makes all kinds of sense in theory; I just noted, though, that my crypto Article that holds up the best is based on a realized product, and my takeaway was the opposite: crypto in practice and at scale tends towards centralization. What has been an even bigger surprise, though, is the degree to which it is AI that appears to have the potential for far more decentralization than anyone thought. I wrote earlier this fall in The AI Unbundling:

This, by extension, hints at an even more surprising takeaway: the widespread assumption — including by yours truly — that AI is fundamentally centralizing may be mistaken. If not just data but clean data was presumed to be a prerequisite, then it seemed obvious that massively centralized platforms with the resources to both harvest and clean data — Google, Facebook, etc. — would have a big advantage. This, I would admit, was also a conclusion I was particularly susceptible to, given my focus on Aggregation Theory and its description of how the Internet, contrary to initial assumptions, leads to centralization.

The initial roll-out of large language models seemed to confirm this point of view: the two most prominent large language models have come from OpenAI and Google; while both describe how their text (GPT and GLaM, respectively) and image (DALL-E and Imagen, respectively) generation models work, you either access them through OpenAI’s controlled API, or in the case of Google don’t access them at all. But then came this summer’s unveiling of the aforementioned Midjourney, which is free to anyone via its Discord bot. An even bigger surprise was the release of Stable Diffusion, which is not only free, but also open source — and the resultant models can be run on your own computer…

What is important to note, though, is the direction of each project’s path, not where they are in the journey. To the extent that large language models (and I should note that while I’m focusing on image generation, there are a whole host of companies working on text output as well) are dependent not on carefully curated data, but rather on the Internet itself, is the extent to which AI will be democratized, for better or worse.

Just as the theory of crypto was decentralization but the product manifestation tended towards centralization, the theory of AI was centralization but a huge amount of the product excitement over the last few months has been decentralized and open source. This does, in retrospect, make sense: the malleability of software, combined with the free corpus of data that is the Internet, is much more accessible and flexible than blockchains that require network effects to be valuable, and where a single coding error results in the loss of money.

The relevance to this Article and introspection, though, is that this realization about AI is rooted in a product-based narrative, not theory. To that end, the third piece of news that happened last week was the release of Midjourney V4; the jump in quality and coherence is remarkable, even if the Midjourney aesthetic that was a hallmark of V3 is less distinct. Here is the image I used in The AI Unbundling, and a new version made with V4:

"Paperboy on a bike" with Midjourney V3 and V4

One of the things I found striking about my interview with Midjourney founder and CEO David Holz was how Midjourney came out of a process of exploration and uncertainty:

I had this goal, which was we needed to somehow create a more imaginative world. I mean, one of the biggest risks in the world I think is a collapse in belief, a belief in ourselves, a belief in the future. And part of that I think comes from a lack of imagination, a lack of imagination of what we can be, lack of imagination of what the future can be. And so this imagination thing I think is an important pillar of something that we need in the world. And I was thinking about this and I saw this, I’m like, “I can turn this into a force that can expand the imagination of the human species.” It was what we put on our company thing now. And that felt realistic. So that was really exciting.

Well, your prompt is, “/Imagine”, which is perfect.

So that was kind of the vision. But I mean, there is a lot of stuff we didn’t know. We didn’t know, how do people interact with this? What do they actually want out of it? What is the social thing? What is that? And there’s a lot of things. What are the mechanisms? What are the interfaces? What are the components that you build these experiences through? And so we kind of just have to go into that without too many opinions and just try things. And I kind of used a lot of lessons from Leap here, which was that instead of trying to go in and design a whole experience out of nothing, presupposing that you can somehow see 10 steps into the future, just make a bunch of things and see what’s cool and what people like. And then take a few of those and put them together.

It’s amazing how you try 10 things and you find the three coolest pieces, and you put them together, it feels like a lot more than three things. It kind of multiplies out in complexity and detail and it feels like it has depth, even though it doesn’t seem like a lot. And so yeah, there’s something magic about finding three cool things and then starting to build a product out of that.

In the end, the best way of knowing is starting by consciously not-knowing. Narratives are tempting but too often they are wrong, a diversion, or based on theory without any tether to reality. Narratives that are right, on the other hand, follow from products, which means that if you want to control the narrative in the long run, you have to build the product first, whether that be a software product, a publication, or a company.

That does leave open the question of Musk, and the way he seemed to meme Tesla into existence, while building a rocket ship on the side. I suspect the distinction is that both companies are rooted in the physical world: physics has a wonderful grounding effect on the most fantastical of narratives. Digital services like Twitter, though, built as they are on infinitely malleable software, are ultimately about people and how they interact with each other. The paradox is that this makes narratives that much more alluring, even — especially! — if they are wrong.

  1. I gave a 15 minute overview of the FTX blow-up on Friday’s Dithering

  2. Beyond the conspiracy theories that he was actually some sort of secret agent sent to destroy crypto, a close cousin of the conspiracy theory that Musk’s goal is to actually destroy Twitter; I mean, you can make a case for both! 

  3. Past performance is no guarantee of future results! 

  4. Given Bitcoin’s performance in a highly inflationary environment, the argument that it is a legitimate store of value looks quite poor 

  5. TBD 

Stratechery Plus Adds Sharp China with Sinocism’s Bill Bishop

In September I announced Sharp Tech and Stratechery Plus:

I am very pleased to announce the latest addition to the Stratechery Plus bundle: Sharp China with Sinocism’s Bill Bishop:

Sharp China with Sinocism's Bill Bishop

Sharp China with Sinocism’s Bill Bishop is a collaboration between Stratechery and Sinocism. Sharp China is, like Sharp Tech, hosted by Andrew Sharp;1 just as Sharp Tech seeks to provide a better understanding of the tech industry through an engaging and approachable conversational format, Sharp China seeks to do the same with everything China-related, and there is no better person to provide this understanding than Sinocism’s Bill Bishop.

Bill Bishop is an entrepreneur and former media executive with more than a decade’s experience living and working in China. Since leaving Beijing in 2015, he has lived in Washington DC. Bishop previously wrote the Axios China weekly newsletter and the China Insider column for the New York Times Dealbook and, in the late 1990s, co-founded

Bishop founded Sinocism in 2012 to provide investors, policymakers, executives, analysts, diplomats, journalists, scholars and others a comprehensive overview of what is happening in China; Bishop reads Chinese fluently, and provides summaries of reports from not just the U.S. but China as well. I personally find Sinocism essential, but what I have always hoped for were more of Bishop’s opinions on the news: I’m excited that Sharp China will give him room for just that.

While Sharp China launched in beta last week for Stratechery Plus and Sinocism subscribers, today we are announcing it to everyone, and making the latest episode about The State of Dynamic Zero-COVID free to listen to. In addition, you can listen to excerpts from the first two shows.

To add the show to your podcast player, please log in to your member account, or listen in Spotify. Sharp China will publish most weeks going forward. You can also email questions for Bill to; I’ve been really pleased with the mailbag segments of Sharp Tech, and I look forward to listening to them on Sharp China as well.2

Once again, to receive every episode of Sharp China, along with Stratechery Updates and Interviews, Sharp Tech, and Dithering, subscribe to Stratechery Plus. I look forward to continuing to make your subscription more valuable.

  1. Sharp China is the first addition to the Stratechery Plus bundles that I do not personally appear on regularly 

  2. If you have any issues adding Sharp China to your podcast player please email 

Meta Myths

What happened to Meta last week — and my response to it — expressed in meme-form:

The GTA meme about "Here we go again" as applied to Facebook

In 2018, the market was panicking about Facebook’s slowing revenue and growing expenses, and was concerned about the negative impact that Stories was having on Facebook’s feed advertising business. I wrote that the reaction was overblown in Facebook Lenses, which looked at the business in five different ways:

  • Lens 1 was Facebook’s finances, which did show troubling trends in terms of revenue and expense growth:

    Facebook's revenue growth is decreasing even as its expense growth increases

    As I noted at the time, I could understand investor trepidation about these trend lines, which is why other lenses were necessary.

  • Lens 2 was Facebook’s products, where I argued that investors were over-indexed on Facebook the app and were ignoring Instagram’s growth potential, and, in the very long run, WhatsApp.
  • Lens 3 was Facebook’s advertising infrastructure, which I argued was very underrated, and which would provide a platform for dramatically scaling Instagram monetization in particular.
  • Lens 4 was Facebook’s moats, including its network, scaled advertising product, and investments in security and content review.
  • Lens 5 was Facebook’s raison d’être — connecting people — where I made the argument that the company’s core competency was in addressing a human desire that wasn’t going anywhere.

I concluded:

To insist that Facebook will die any day now is in some respects to suggest that humanity will cease to exist any day now; granted, it is a company and companies fail, but even if Facebook failed it would only be a matter of time before another Facebook rose to replace it.

That seems unlikely: for all of the company’s travails and controversies over the past few years, its moats are deeper than ever, its money-making potential not only huge but growing both internally and secularly; to that end, what is perhaps most distressing of all to would-be competitors is in fact this quarter’s results: at the end of the day Facebook took a massive hit by choice; the company is not maximizing the short-term, it is spending the money and suppressing its revenue potential in favor of becoming more impenetrable than ever.

The optimism proved prescient, at least for the next three years:

Facebook's stock run-up from 2018 to 2021

Facebook’s stock price increased by 118% between the day I wrote that Article, before peaking on September 15, 2021. Over the past year, though, things have certainly gone in the opposite direction:

Meta's massive drawdown in 2022

Meta, née Facebook, is now, incredibly enough, worth 42% less than it was when I wrote Facebook Lenses, hitting levels not seen since January 2016. It seems the company’s many critics are finally right: Facebook is dying, for real this time.

The problem is that the evidence just doesn’t support this point of view. Forget five lenses: there are five myths about Meta’s business that I suspect are driving this extreme reaction; all of them have a grain of truth, so they feel correct, but the truth is, if not 100% good news, much better than most of those dancing on the company’s apparent grave seem to realize.

Myth 1: Users Are Deserting Facebook

Myspace is, believe it or not, still around; however, it has been irrelevant for so long that I needed to look it up to remember if the name used camel case or not (it doesn’t). It does, though, still seem to loom large in the minds of Meta skeptics certain that the mid-2000s social network’s fate was predictive for the company that supplanted it.

The problem with this narrative is that Meta is still adding users: the company is up to 2.93 billion Daily Active Users (DAUs), an increase of 50 million, and 3.71 billion Monthly Active Users (MAUs), an increase of 60 million. Moreover, this isn’t all Instagram and WhatsApp: Facebook itself increased its DAUs by 16 million (to 1.98 billion) and its MAUs by 24 million (to 2.96 billion). Granted, all of that growth, at least in the case of Facebook, was in Asia-Pacific and the rest of the world, but the U.S. and Europe were flat, not declining; given that Facebook long ago completely saturated those markets, it is meaningful that the service is not seeing any churn.

This goes back to my fifth lens: Facebook does connect people, and that connection is still meaningful enough for a whole lot of people to continue to use its services, and there is no sign of that desire for connection disappearing or shifting to other apps.

Myth 2: Instagram Engagement is Plummeting

The obvious retort is that sure, users may occasionally open Meta’s apps when they are bored, but they are spending most of their time in other apps like TikTok, and that that time is coming at the expense of Meta’s apps, particularly Instagram.

There is, to be clear, good reason to think that TikTok is having a big impact on Instagram specifically and Facebook broadly, but that impact, to the extent it is being felt, is in depressing growth, not in reversing it. CEO Mark Zuckerberg said at the beginning of his opening remarks on Meta’s earnings call:

There has been a bunch of speculation about engagement on our apps and what we’re seeing is more positive. On Facebook specifically, the number of people using the service each day is the highest it’s ever been — nearly 2 billion — and engagement trends are strong. Instagram has more than 2 billion monthly actives. WhatsApp has more than 2 billion daily actives, also with the exciting trend that North America is now our fastest growing region. Across the family, some apps may be saturated in some countries or some demographics, but overall our apps continue to grow from a large base. We’re also seeing engagement grow — especially strong growth in Reels — and I’ll share more details around that when I discuss our product priorities shortly.

Analysts on the call were skeptical, and asked specifically about the U.S. market; CFO Dave Wehner had good news in that regard as well:

So on time spent, we are really pleased with what we’re seeing on engagement. And as Mark mentioned, Reels is incremental to time spent. Specifically, in terms of aggregate time spent on Instagram and Facebook, both are up year-over-year in both the U.S. and globally. So while we’re not specifically optimizing for time spent, those trends are positive. And we aren’t specifically optimizing for time spent because that would tend to tilt us towards longer-form video, and we’re actually focused more on short-form and other types of content.

Again, TikTok usage is certainly usage that Meta would prefer happen on their platforms; what seems clear, though, is that short-form videos are growing the overall market for user-generated content. In other words, TikTok isn’t eating Meta’s usage, but rather growing the overall pie (and, to be clear, taking more of that pie than Meta is — at least until recently).

Myth 3: TikTok is Dominating

It is frustrating to not know exactly how big that new pie is, or what Meta’s share is relative to TikTok, but the company offered more evidence in line with my takeaway last quarter that Meta has contained the TikTok threat. First, according to Sensor Tower data as reported by Morgan Stanley, TikTok usage appears to be plateauing:

TikTok's growth is plateauing in the U.S.

Growth in the U.S. specifically was around 4%, with half the penetration of Instagram.

Second, Reels usage is still growing: Zuckerberg said on the earnings call:

Our AI discovery engine is playing an increasingly important role across our products — especially as advances enable us to recommend more interesting content from across our networks in feeds that used to be primarily driven just by the people and accounts you follow. This of course includes Reels, which continues to grow quickly across our apps — both in production and consumption. There are now more than 140 billion Reels plays across Facebook and Instagram each day. That’s a 50% increase from six months ago. Reels is incremental to time spent on our apps. The trends look good here, and we believe that we’re gaining time spent share on competitors like TikTok.

It’s fair to be a bit skeptical about that number, particularly as auto-playing Reels take over more of both the Facebook and Instagram feeds; what is perhaps more meaningful is the fact that Reels now has a $3 billion annual run rate (despite the fact it doesn’t monetize nearly as well as Meta’s other ad formats — for now, anyways). TikTok, by comparison, had $4 billion in revenue in 2021, and set a goal of $12 billion this year (I suspect the company won’t reach that goal, thanks to both ATT and the macroeconomic environment; still, it should be a good-sized number).

Meta, to be sure, has a much more fleshed out ad product that almost certainly monetizes better than TikTok; the takeaway here is not that Reels is surpassing TikTok anytime soon, but it is a real product that is almost certainly growing more quickly (which, it’s worth noting, is what Instagram did to Snapchat with Stories: Facebook didn’t take usage back, but it stopped more users from moving, which ultimately resulted in far more usage).

Third, the fact that Reels usage is “incremental to time spent on [Meta] apps” supports the argument above that short-form video is growing the pie for user-generated content; to be sure, all of that TikTok usage is probably the equivalent of tens of billions of revenue if Meta could harvest it, but once again the evidence suggests that the cost of TikTok to Meta is, at least for now, opportunity cost, not actual infringement on the company’s business.

Myth 4: Advertising is Dying

This is probably the point where my statement in the beginning, that all of these myths have a bit of truth to them that makes them believable, is the most important: a good chunk of Meta’s drawdown is justified, and the reason is Apple’s App Tracking Transparency (ATT) policy.

Before ATT, ad measurement, particularly for all-digital transactions like app installs and e-commerce sales, was measured deterministically: this meant that Meta knew with a high degree of certainty which ads led to which results, because it collected that data from within advertisers’ apps and websites (via a Facebook SDK or pixel). This in turn gave advertisers the confidence to spend on advertising not with an eye towards its cost, but rather with an expectation of how much revenue could be generated.

ATT severed that connection between Meta’s ads on one side, and conversions on the other, by labeling the latter as third party data and thus tracking (never mind that none of the data was collected by the app maker or merchant, who were more than happy to deputize Meta for ad-related data collection). This not only made the company’s ads less valuable, it also made them more uncertain: unlike COVID, when return-on-advertising spend (ROAS)-focused advertisers bought up inventory abandoned by brands, the current macroeconomic slowdown has much less of a buffer.
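To make that distinction concrete, here is a minimal, purely illustrative sketch of deterministic attribution, the model that ATT broke; every name and number in it is invented for the example:

```python
# Illustrative sketch of deterministic ad attribution: every conversion
# carries the click ID that a pixel/SDK recorded, so revenue can be
# joined back to the exact ad that produced it. All data is hypothetical.

def attribute_revenue(clicks, conversions):
    """Map each conversion's click_id back to its ad and sum revenue per ad."""
    ad_for_click = {c["click_id"]: c["ad_id"] for c in clicks}
    revenue_by_ad = {}
    for conv in conversions:
        ad_id = ad_for_click.get(conv["click_id"])
        if ad_id is not None:  # deterministic match: no modeling required
            revenue_by_ad[ad_id] = revenue_by_ad.get(ad_id, 0.0) + conv["revenue"]
    return revenue_by_ad

clicks = [
    {"click_id": "c1", "ad_id": "ad_shoes"},
    {"click_id": "c2", "ad_id": "ad_shoes"},
    {"click_id": "c3", "ad_id": "ad_hats"},
]
conversions = [
    {"click_id": "c1", "revenue": 40.0},
    {"click_id": "c3", "revenue": 15.0},
]

print(attribute_revenue(clicks, conversions))  # {'ad_shoes': 40.0, 'ad_hats': 15.0}
```

ATT breaks exactly this join: the click ID no longer flows from the merchant’s app or site back to Meta, so spend decisions shift from observed revenue to modeled estimates.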

This was, needless to say, a big deal for the entire industry, but what has been fascinating to observe over the last nine months is how few companies want to talk about it (particularly Google in the context of YouTube). Meta’s stock slide, though, shows why: ATT was a secular, structural change in the digital ad market that absolutely should have a big impact on an affected company’s stock price. Meta, to their credit, admitted that ATT would reduce their revenue by $10 billion a year, and because that impact is primarily felt through lower prices, that is money straight off of the bottom line — and it’s a loss that will only accumulate over time, by extension reducing the terminal value of the company. Again, the stock should be down!

What ATT did not do, though, was kill digital advertising. There are still plenty of ads on Facebook, and mostly not from traditional advertisers from the analog world: entire industries have developed online over the last fifteen years in particular, built for a reality where the entire world as addressable market makes niche products viable in a way they never were previously — as long as the seller can find a customer. Meta is still the best option for that sort of top-of-the-funnel advertising, which is why the company still took in $27 billion in advertising last quarter. Moreover, the fact that number was barely down year-over-year speaks to the fact that digital advertising is still growing strongly: yes, ATT lopped off a big chunk of revenue, but it is not as if Meta revenue actually decreased by $10 billion annually (there is an analogy here to how short-form video has increased the share of time of user-generated content, as opposed to taking time away from Meta).

Meta, of course, is not standing still, either: SKAdNetwork 4 has seen Apple retreat from its most extreme positions with a new ad API that should help larger advertisers in particular; Meta is meanwhile working to move more conversions onto their own platform (which magically makes that data allowable as far as Apple is concerned, even though there is no meaningful difference for merchants beyond losing that much more control of their business).1 It’s also notable that the company’s click-to-message advertising product is itself on a $9 billion run rate, and growing fast. The most important efforts, though, are AI-driven.

Myth 5: Meta’s Spending is a Waste

That revenue and expenses graph I posted in 2018 does look a lot more hairy today:

Facebook's expense growth relative to revenue growth looks worse than ever

Some of this is Metaverse-related, which I will get to in a moment; what also has investors spooked, though, is Facebook’s increasing capital expenditures, which have nothing to do with the Metaverse (Metaverse spending is almost all research and development). Meta expects to spend $32-$33 billion in capital expenditures in 2022, and $34-$39 billion in 2023; that won’t hit the income statement right away (capital expenditures show up as depreciation in cost of revenue), but that just means that longer-term profitability may be increasingly impaired. Facebook’s gross margins were down to 79% last quarter, its lowest mark since 2013, and if revenue growth doesn’t pick back up then those margins will fall further, given that the costs are already built in.
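For a sense of the mechanics, here is a back-of-the-envelope sketch of how capital expenditures reach the income statement via straight-line depreciation; the five-year useful life and the dollar figures are my assumptions for illustration, not Meta’s disclosures:

```python
# Back-of-the-envelope sketch (assumptions are mine, not Meta's disclosures):
# straight-line depreciation spreads capex evenly over an assumed useful
# life, which is how capital spending eventually shows up in cost of revenue.

def annual_depreciation(capex, useful_life_years):
    """Straight-line depreciation: equal expense each year of the asset's life."""
    return capex / useful_life_years

def gross_margin(revenue, cost_of_revenue):
    return (revenue - cost_of_revenue) / revenue

# Hypothetical: $33B of capex over an assumed 5-year life adds ~$6.6B/year
# to cost of revenue once the equipment is fully in service.
added_cost = annual_depreciation(33e9, 5)
print(round(added_cost / 1e9, 1))  # 6.6
```

The point of the sketch is the lag: the expense builds for years after the cash goes out the door, which is why gross margins keep compressing even if spending stops growing.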

The problem with this line of reasoning is that Meta’s capital expenditures are directly focused on both of the two main reasons for alarm: TikTok and ATT. That is because the answer to both challenges is more AI, and building up AI capacity requires a lot of capital investment.

Start with the second point: Wehner said in his prepared remarks:

We are significantly expanding our AI capacity. These investments are driving substantially all of our capital expenditure growth in 2023. There is some increased capital intensity that comes with moving more of our infrastructure to AI. It requires more expensive servers and networking equipment, and we are building new data centers specifically equipped to support next generation AI-hardware. We expect these investments to provide us a technology advantage and unlock meaningful improvements across many of our key initiatives, including Feed, Reels and ads. We are carefully evaluating the return we achieve from these investments, which will inform the scale of our AI investment beyond 2023.

Meta has huge data centers, but those data centers are primarily about CPU compute, which is what is needed to power Meta’s services. CPU compute is also what was necessary to drive Meta’s deterministic ad model, and the algorithms it used to recommend content from your network.

The long-term solution to ATT, though, is to build probabilistic models that not only figure out who should be targeted (which, to be fair, Meta was already using machine learning for), but also understanding which ads converted and which didn’t. These probabilistic models will be built by massive fleets of GPUs, which, in the case of Nvidia’s A100 cards, cost in the five figures; that may have been too pricey in a world where deterministic ads worked better anyways, but Meta isn’t in that world any longer, and it would be foolish to not invest in better targeting and measurement.
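As a toy illustration of what “probabilistic” means here (this is emphatically not Meta’s actual system; the features, data, and model are invented for the example), a conversion model predicts the probability that an ad converted from available signals, rather than observing each conversion deterministically:

```python
import math

# Toy probabilistic conversion model: logistic regression trained with
# stochastic gradient descent on synthetic data. Real systems use far
# larger models on GPU fleets; this only illustrates predicting
# P(conversion) instead of observing every conversion directly.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(X, y, lr=0.5, epochs=2000):
    """Fit weights and bias by minimizing log loss via SGD."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi  # gradient of log loss w.r.t. the logit
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    return sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b)

# Hypothetical features: [saw_ad, prior_engagement]; label: converted.
X = [[1, 1], [1, 0], [0, 1], [0, 0], [1, 1], [0, 0]]
y = [1, 1, 0, 0, 1, 0]

w, b = train(X, y)
# Users who saw the ad should score a higher conversion probability.
print(predict(w, b, [1, 1]) > predict(w, b, [0, 0]))  # True
```

The real models are vastly larger and trained on vastly more data, which is the whole point: building them is what the GPU fleets and the capital expenditures are for.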

Moreover, the same approach will be essential to Reels’ continued growth: it is massively more difficult to recommend content from across the entire network than only from your friends and family, particularly because Meta plans to recommend not just video but also media of all types, and intersperse it with content you care about. Here too AI models will be the key, and the equipment to build those models costs a lot of money.

In the long run, though, this investment should pay off. First, there are the benefits to better targeting and better recommendations I just described, which should restart revenue growth. Second, once these AI data centers are built out the cost to maintain and upgrade them should be significantly less than the initial cost of building them the first time. Third, this massive investment is one no other company can make, except for Google (and, not coincidentally, Google’s capital expenditures are set to rise as well).

That last point is perhaps the most important: ATT hurt Meta more than any other company, because it already had by far the largest and most finely-tuned ad business, but in the long run it should deepen Meta’s moat. This level of investment simply isn’t viable for a company like Snap or Twitter or any of the other also-rans in digital advertising (even beyond the fact that Snap relies on cloud providers instead of its own data centers); when you combine the fact that Meta’s ad targeting will likely start to pull away from the field (outside of Google), with the massive increase in inventory that comes from Reels (which reduces prices), it will be a wonder why any advertiser would bother going anywhere else.

The one caveat to this happy story is the existential threat of TikTok not just stealing growth but actually stealing users and time, but again the answer there is better recommendation algorithms first and foremost, and that, as noted, is an AI problem. In other words, this is the most important money that Meta can spend.

Maybe True: The Metaverse is a Waste of Time and Money

This isn’t an Article about the Metaverse, which as I noted in Meta Meets Microsoft, may be a real product even as it is potentially a bad business for Meta (as an addendum to that piece, I noted on Dithering that I found John Carmack’s critique of Meta’s approach very compelling; he believes the company should be focused on low-cost low-weight devices, which to my mind makes much more sense for a social network).

It’s worth pointing out, though, that the Metaverse’s costs, which will exceed $10 billion this year and be even more next year, are, relative to Meta’s overall business and overall spending, fairly small. It’s definitely legitimate to decrease your valuation of Meta’s business if you think this investment will never contribute to the bottom line — that’s a lot of foregone profit — but this idea that Meta’s business is doomed and that the Metaverse is a Hail Mary flail to build something out of the ashes simply isn’t borne out by the numbers.

Zuckerberg does, to be sure, deserve blame for this perception: he’s the one that renamed the company and committed to spending all of that money, and made clear that it was his vision that dictated that Meta’s efforts go towards expensive hardware like face-tracking, and the fact that he can’t be replaced has always been worth its own discount. This, though, feels like a rebrand that was too successful: Meta the metaverse company may be a speculative boondoggle, but that doesn’t change the fact that the old Facebook is still a massive business with far more of its indicators pointing up-and-to-the-right than its Myspace-analogizers want to admit.

  1. The news last month about pulling back on Instagram Shopping was about focusing on ad-driven commerce. 

Chips and China

Intel may not be the most obvious place to start when it comes to the China chip sanctions announced by the Biden administration three weeks ago (I covered the ban in the Daily Update here and here); the company recently divested its 3D NAND fab in Dalian, and only maintains two test and assembly sites in Chengdu. Sure, there is an angle about Intel’s future as a foundry and its importance in helping the United States catch up in terms of the most advanced processes currently dominated by Taiwan’s TSMC, but when it comes to exploring the implications and risks of these sanctions I am much more interested in Intel’s past.

Start with the present, though: two weeks ago Intel CEO Pat Gelsinger announced a restructuring of the company, with the goal of putting more distance between its design and manufacturing teams. From the Wall Street Journal:

Intel Corp. plans to create greater decision-making separation between its chip designers and chip-making factories as part of Chief Executive Pat Gelsinger’s bid to revamp the company and boost returns. The new structure, which Mr. Gelsinger disclosed in a letter to staff on Tuesday, is designed to let Intel’s network of factories operate like a contract chip-making operation, taking orders from both Intel engineers and external chip companies on an equal footing. Intel has historically used its factories almost exclusively to make its own chips, something Mr. Gelsinger changed when he launched a contract chip-making arm last year.

Back in 2018 I wrote about Intel and the Danger of Integration:

It is perhaps simpler to say that Intel, like Microsoft, has been disrupted. The company’s integrated model resulted in incredible margins for years, and every time there was the possibility of a change in approach Intel’s executives chose to keep those margins. In fact, Intel has followed the script of the disrupted even more than Microsoft: while the decline of the PC finally led to The End of Windows, Intel has spent the last several years propping up its earnings by focusing more and more on the high-end, selling Xeon processors to cloud providers. That approach was certainly good for quarterly earnings, but it meant the company was only deepening the hole it was in with regards to basically everything else. And now, most distressingly of all, the company looks to be on the verge of losing its performance advantage even in high-end applications.

That article was primarily about Intel’s reliance on high margin integrated processors and its unwillingness/inability to become a foundry serving third-party customers, and how smartphones provided the volume for modular players like TSMC to threaten Intel’s manufacturing dominance. However, it’s worth diving into the implications of Intel’s integrated approach relative to TSMC’s modular approach, because it offers lessons for the long road facing China when it comes to building its own semiconductor industry, highlights why the U.S. is itself vulnerable in semiconductors, and explains why the risk for Taiwan has increased significantly.

TSMC’s Depreciation

Fabs are incredibly expensive to build, while chips are extremely cheap; to put it in economic terms, fabs entail massive fixed costs, while chips have minimal marginal costs. This dynamic is very similar to software, which is why venture capital rose up to support chip companies like Intel, and then seamlessly transitioned to supporting software (Silicon Valley, which is today known for software, is literally named for the material used for chips).

One way to manage these costs is to build a fab once and then run it for as long as possible. TSMC’s Fab 2, for example, the company’s sole 150-millimeter wafer facility, was built in 1990, and is still in operation today. That is one of seven TSMC fabs that are over 20 years old, amongst the company’s 26 total (several more are under construction, including the one in Arizona). The chips in these fabs don’t sell for much, but that’s ok because the fabs are completely depreciated: almost all of the revenue is pure profit.
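
As a back-of-the-envelope illustration of why a paid-off fab is so profitable — with entirely made-up numbers, since actual figures vary enormously by fab and node — the economics look something like this:

```python
# Illustrative sketch (hypothetical numbers): a fab's cost is dominated by
# the fixed cost of building it; the marginal cost of each wafer is small.
# Once the fab is fully depreciated, almost all revenue above marginal
# cost is profit.

def cost_per_wafer(fixed_cost, depreciation_years, wafers_per_year,
                   marginal_cost, year):
    """Average cost per wafer in a given year of the fab's life."""
    if year <= depreciation_years:
        depreciation = fixed_cost / depreciation_years / wafers_per_year
    else:
        depreciation = 0.0  # the fab is paid off
    return depreciation + marginal_cost

# Hypothetical fab: $5B to build, depreciated over 5 years,
# 500,000 wafers/year, $1,000 marginal cost per wafer.
early = cost_per_wafer(5e9, 5, 500_000, 1_000, year=3)   # 3000.0
late = cost_per_wafer(5e9, 5, 500_000, 1_000, year=20)   # 1000.0
print(early, late)
```

In the hypothetical, a wafer that sells for, say, $1,500 loses money in year three but carries a 33% margin in year twenty — which is the dynamic that makes TSMC’s twenty-year-old fabs so attractive to keep running.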

This may seem like the obvious strategy, but it’s a very path dependent one: TSMC was unique precisely because they didn’t design their own chips. I explained the company’s origin story in Chips and Geopolitics:

A few years later, in 1987, Chang was invited home to Taiwan, and asked to put together a business plan for a new government initiative to create a semiconductor industry. Chang explained in an interview with the Computer History Museum that he didn’t have much to work with:

I paused to try to examine what we have got in Taiwan. And my conclusion was that [we had] very little. We had no strength in research and development, or very little anyway. We had no strength in circuit design, IC product design. We had little strength in sales and marketing, and we had almost no strength in intellectual property. The only possible strength that Taiwan had, and even that was a potential one, not an obvious one, was semiconductor manufacturing, wafer manufacturing. And so what kind of company would you create to fit that strength and avoid all the other weaknesses? The answer was pure-play foundry…

In choosing the pure-play foundry mode, I managed to exploit, perhaps, the only strength that Taiwan had, and managed to avoid a lot of the other weaknesses. Now, however, there was one problem with the pure-play foundry model and it could be a fatal problem which was, “Where’s the market?”

What happened is exactly what Christensen would describe several years later: TSMC created the market by “enabl[ing] independent, nonintegrated organizations to sell, buy, and assemble components and subsystems.” Specifically, Chang made it possible for chip designers to start their own companies:

When I was at TI and General Instrument, I saw a lot of IC [Integrated Circuit] designers wanting to leave and set up their own business, but the only thing, or the biggest thing that stopped them from leaving those companies was that they couldn’t raise enough money to form their own company. Because at that time, it was thought that every company needed manufacturing, needed wafer manufacturing, and that was the most capital intensive part of a semiconductor company, of an IC company. And I saw all those people wanting to leave, but being stopped by the lack of ability to raise a lot of money to build a wafer fab. So I thought that maybe TSMC, a pure-play foundry, could remedy that. And as a result of us being able to remedy that then those designers would successfully form their own companies, and they will become our customers, and they will constitute a stable and growing market for us.

It worked. Graphics processors were an early example: Nvidia was started in 1993 with only $20 million, and never owned its own fab.1 Qualcomm, after losing millions manufacturing its earliest designs, spun off its chip-making unit in 2001 to concentrate on design, and Apple started building its own chips without a fab a decade later. Today there are thousands of chip designers in all kinds of niches creating specialized chips for everything from appliances to fighter jets, and none of them have their own fab.

By creating this new market TSMC ended up with a massive customer base; moreover, most of those customers didn’t need cutting edge chips, but rather the same chip that they started with for as long as they made the product into which that chip went. That, by extension, meant that all of those old foundries had a customer base, enabling TSMC to make money on them long after they had been paid off.

Intel’s Margins

Intel’s path, though, preceded TSMC’s, which is to say that of course Intel both designed and manufactured their own chips (“real men have fabs”, as AMD founder Jerry Sanders once famously put it); to put it another way, the entire reason why Chang saw a market in being just a manufacturer was because every company that preceded TSMC had done both out of necessity, because a company like TSMC didn’t exist.

And, it’s worth noting, there was no reason for TSMC to exist: Intel’s chips, for the two decades Intel existed before TSMC, were never good enough; every generation would result in such massive leaps in performance that it simply wouldn’t have made sense to keep the old assembly lines around. Still, this stuff was expensive, which is where being integrated helped.

This was the other way to manage the cost of cutting edge fabs: because Intel was at the cutting edge, it would charge a huge premium for its chips (and thus have the highest margins in the industry that I referenced earlier). At the beginning, when fabs were cheaper, Intel was happy to sell off its old equipment and make a few extra bucks on the back end. Over the last decade, though, as equipment became more and more expensive, and as Intel’s leadership started to care more about finances than about engineering, it increasingly became a priority to re-use equipment to the greatest extent possible. This wasn’t easy, I would note: Intel would stick with (relatively) outdated equipment in not just one fab but also in the fabs it built around the world.

This is where the integration point was critical: because Intel both designed and manufactured its chips, the latter could call the shots for the former; chips had to be designed to work with Intel manufacturing, not the other way around, and this extended to not just the designs themselves but all of the tooling that went into it. Intel, for example, used its own chip design software, and favored suppliers who would do what Intel told them to, and then hand the equipment off to Intel to do with it as they saw fit. Intel would then get everything to work in one fab, and Copy Exactly! that fab in another location: everything was identical, down to the position of the toilets in the bathrooms.

As I noted in the conclusion of Intel and the Danger of Integration, Intel’s strategy worked phenomenally well, right up until it didn’t:

What makes disruption so devastating is the fact that, absent a crisis, it is almost impossible to avoid. Managers are paid to leverage their advantages, not destroy them; to increase margins, not obliterate them. Culture more broadly is an organization’s greatest asset right up until it becomes a curse. To demand that Intel apologize for its integrated model is satisfying in 2018, but all too dismissive of the 35 years of success and profits that preceded it. So it goes.

So it goes, indeed — or rather, the correct conjugation is the past tense: so went Intel’s manufacturing advantage.

ASML’s Rise

I mentioned TSMC’s Fab 2 earlier and its 150-millimeter wafers; that is 1980s-era technology. The 1990s brought 200-millimeter wafers (which are used in seven of TSMC’s fabs). It was the transition to today’s 300-millimeter fabs in the early 2000s, though, that marked the rise of ASML.

Intel’s partner in the lithography space — the use of light to draw transistors on wafers — was Nikon, and Nikon’s approach to 300-millimeter wafers was to scale up its 200-millimeter process. There was a downside to this approach, though: because the wafers were larger they had to move more slowly (more mass means more force, unless acceleration is decreased). This was fine with Intel, though: they were their own only customer, and their margins were plenty high enough to handle a decrease in throughput (indeed, Intel was well-known for running their machines well below capacity).

Lower speed wasn’t fine for TSMC and Samsung, the other up-and-comer in the space: like any challenger they were operating on much lower margins, and they didn’t want a decrease in throughput — the entire point of larger wafers was to increase the number of chips that could be produced, not to give away that gain by running everything more slowly. ASML saw the opportunity and designed an entirely new process around 300-millimeter wafers, creating dual wafer stage technology that aligned and mapped one wafer while another was being exposed.
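
To make the throughput math concrete — using hypothetical numbers, since real tool speeds and die counts vary widely — the point is that a 300-millimeter wafer has 2.25× the area of a 200-millimeter wafer, so the gain only materializes if the tool doesn’t slow down too much:

```python
# Illustrative arithmetic (assumed numbers): larger wafers only pay off
# if the tool's wafer throughput doesn't fall too much.

area_ratio = (300 / 200) ** 2  # 2.25x the area, so roughly 2.25x the chips

def chips_per_hour(wafers_per_hour, chips_per_wafer):
    return wafers_per_hour * chips_per_wafer

# Hypothetical 200mm baseline: 100 wafers/hour, 400 chips per wafer.
baseline = chips_per_hour(100, 400)            # 40,000 chips/hour
# A scaled-up 300mm tool that runs 40% slower (the Nikon approach):
scaled = chips_per_hour(60, 400 * area_ratio)  # 54,000 chips/hour
# A 300mm tool redesigned to keep full speed (the ASML dual-stage approach):
redesign = chips_per_hour(100, 400 * area_ratio)  # 90,000 chips/hour
print(baseline, scaled, redesign)
```

In this sketch the slower scaled-up tool still beats the 200-millimeter baseline, but gives back most of the larger wafer’s gain — tolerable for high-margin Intel, intolerable for low-margin foundries.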

TSMC and ASML were already close, in part because both were part of the Philips family tree (Philips was the only external investor in TSMC, which licensed Philips technology to start, and ASML was a joint venture of Philips and ASMI). What was more important is that both were ignored by the dominant players in the industry: the big chip makers, from Intel to Motorola to Texas Instruments, were matched up with Nikon and Canon; the former didn’t want equipment from a new entrant, and the latter didn’t have capacity for a foundry that was not only working on low margins but also, as part of its cost consciousness, wanted to learn how to service the machines themselves (the Japanese companies preferred to deliver black boxes that their own technicians would service).

ASML’s 300-millimeter process, though, required a reworking on the fab side as well. Now TSMC and ASML weren’t simply stuck together like two kids picked last at recess: they were deeply enmeshed in the process of working through the new process’s bugs, designing new fabs to support it, and maximizing output once everything was working. This increase in output had another side effect: TSMC started to make a bit more money, which it started pouring into its own research and development. It was TSMC that pushed ASML towards immersion lithography, where the space between the lens and the wafer was filled with a liquid with a higher refraction index than air. Nikon would eventually be forced to respond with its own lithography machines, but they were never as good as ASML’s, which meant that even Intel had to come calling as a customer.

ASML, meanwhile, had been working for years on a true moonshot: extreme ultraviolet lithography. Here is the Brookings Institution’s description of the process:

A generator ejects 50,000 tiny droplets of molten tin per second. A high-powered laser blasts each droplet twice. The first shapes the tiny tin, so the second can vaporize it into plasma. The plasma emits extreme ultraviolet (EUV) radiation that is focused into a beam and bounced through a series of mirrors. The mirrors are so smooth that if expanded to the size of Germany they would not have a bump higher than a millimeter. Finally, the EUV beam hits a silicon wafer — itself a marvel of materials science — with a precision equivalent to shooting an arrow from Earth to hit an apple placed on the moon. This allows the EUV machine to draw transistors into the wafer with features measuring only five nanometers — approximately the length your fingernail grows in five seconds. This wafer with billions or trillions of transistors is eventually made into computer chips.

An EUV machine is made of more than 100,000 parts, costs approximately $120 million, and is shipped in 40 freight containers. There are only several dozen of them on Earth and approximately two years’ worth of back orders for more. It might seem unintuitive that the demand for a $120 million tool far outstrips supply, but only one company can make them. It’s a Dutch company called ASML, which nearly exclusively makes lithography machines for chip manufacturing.

It’s not just ASML, though: that mirror is made by Zeiss, and the laser is made by TRUMPF using carbon dioxide sources pioneered by Access Laser (a U.S. company later acquired by TRUMPF). They are the two most important of over 800 suppliers for EUV, but the end users are equally essential.

When TSMC Passed Intel

In 2012 Intel, TSMC, and Samsung all invested in ASML to help the company finish the EUV project that had started 11 years earlier: there were very real questions about whether or not ASML would ever ship, or die trying, while it was clear that immersion lithography was reaching the limits of what was possible. The investment amounts are interesting in retrospect:

Company               Intel                  TSMC                   Samsung
Investment in stock   15% for $3.1 billion   5% for $1.03 billion   3% for $630 million
Investment in R&D     $1 billion             $345 million           $345 million

Intel, despite investing the most (and having contributed a big chunk of the underlying technology), was convinced it could stick with immersion lithography as it transitioned first to 10-nanometer and then 7-nanometer chips. Yes, those were awfully small lines to be drawing with a light source with a 193-nanometer wavelength, but it wasn’t clear that EUV yields were going to be high enough, and besides, Intel had a lot of lithography equipment that, if used for one or two more generations, would make for some very fat margins. That was more of a priority for Intel than technological leadership, even as decades of said leadership had created the arrogance to believe that Intel could use quad-patterning — i.e. doing four exposures on a single wafer — to create those ever thinner lines.

TSMC, on the other hand, had three reasons to commit to EUV:

  • First, TSMC had a multi-decade relationship with ASML that included two significant process transitions (to 300-millimeter wafers and immersion lithography).
  • Second, because TSMC was a foundry, it needed to manufacture smaller lots of much greater variety; this meant that fiddly multi-pattern approaches that took many runs to improve yields didn’t make sense. EUV’s 13.5 nanometer light offered the potential for much simpler designs that fit TSMC’s business model.
  • Third, Apple was willing to pay to have the fastest chips in the world, which meant that TSMC had a guaranteed first customer with massive volume whenever it could get EUV working.

In the end, TSMC started using EUV for non-critical layers at 7 nanometers, and for critical layers at 5 nanometers (in 2020); Intel, meanwhile, failed for years to ship 10 nanometer chips (which are closer to TSMC’s 7 nanometer chips), and had to completely rework its 7 nanometer process to incorporate EUV. Those chips are only starting mass production this fall — the same time period when TSMC is shipping new 3 nanometer chips. Intel, by the way, is a customer for TSMC’s 3nm process: the company’s performance was falling too far behind AMD, which abandoned its own fabs in 2009 and has been riding TSMC’s improvements (along with its own new designs) for the last five years.

China’s Integrated Path

Only now, 3,500 words in, do I turn to China, and the country’s path forward to building the sort of advanced chips that the U.S. has just cut off access to. That, though, is the point: the chip industry’s path to today is China’s path to the future.

This is a daunting challenge: it’s not just that China needs to re-create TSMC, but also ASML, Lam Research, Applied Materials, Tokyo Electron, and all of the other pieces of the foundry supply chain. And, to go one layer deeper, not only does China need to re-create ASML, but also Zeiss, and TRUMPF, and Access Laser, and all of the other pieces of the global supply chain, much of which is not located in China. China’s manufacturing prowess is centered on traditionally labor-centric components; even though Chinese labor is now much more expensive than it was, and automation much more common, path dependency matters, and China’s capability is massive but in some respects limited.

Globalization made all of those Chinese factories extremely valuable, because the world was China’s market. At the same time, globalization also meant that China could buy high-precision capital-intensive goods abroad: it didn’t need to build them itself to get the benefits immediately. By the same token high-precision capital-intensive goods are exactly what Western countries like the U.S., Germany, Netherlands, Japan and Taiwan invested in, in part because they couldn’t compete with China on labor. To put it another way, the principles of comparative advantage governed an infinite number of decisions on the margins that led to the U.S. government having the ability to impose these sanctions on China; the realities of semiconductor manufacturing, where every paradigm shift costs massive amounts of money, years in R&D, and the willingness of partners to take the leap with you, are a further manifestation of comparative advantage: it simply makes the most sense for one company to do lithography, and another to lead the world in fabrication.

In other words, China is going to need to build up these capabilities from the ground up, and it’s going to be a long hard road. Moreover, China will not have the benefit of partnership and distributed expertise that have driven the last decade of innovation: in some respects China is going to need to be Intel, doing too much on its own.

That said, the country does have three big advantages:

  • First, it is much easier to follow a path than to forge a new one. China may not be able to make EUV machines, but at least they know they can be made.
  • Second, China has benefited from all of the technological sharing to date: Semiconductor Manufacturing International Corporation (SMIC) has successfully manufactured 7nm chips (using ASML’s immersion lithography machines), and Shanghai Micro Electronics Equipment (SMEE) has built its own immersion lithography machines. Granted, those 7nm chips almost certainly had poor yields, and the trick is for SMIC to use SMEE on the cutting edge, but that leads to the third point:
  • China has unlimited money and infinite motivation to figure this out.

Money is not a panacea: you can’t simply spend your way to faster chips, but instead must move down the learning curve on both the foundry and equipment level. Money does, though, pay for processes that don’t have great yields: the problem for Intel at 7 nanometer, for example, wasn’t that they couldn’t make chips, but that they couldn’t get yields high enough to make them economically. That won’t be a concern for China when it comes to chips for military applications.
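
A rough sketch of the yield economics — illustrative numbers only — shows why low yields are an economic problem rather than a technical impossibility:

```python
# Illustrative sketch (assumed numbers): the cost of a good chip is the
# wafer cost spread over only the chips that actually work. A process
# with poor yields can still produce chips; they're just expensive.

def cost_per_good_chip(wafer_cost, chips_per_wafer, yield_rate):
    return wafer_cost / (chips_per_wafer * yield_rate)

# Hypothetical advanced-node wafer: $10,000, 200 candidate chips per wafer.
commercial = cost_per_good_chip(10_000, 200, 0.80)   # 62.5 — viable
struggling = cost_per_good_chip(10_000, 200, 0.20)   # 250.0 — uneconomic
print(commercial, struggling)
```

A 4× cost-per-chip penalty kills a commercial product; for a military application where money is no object, it is merely an expense.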

What is more meaningful, though, will be the alignment of China’s private sector behind China’s chip companies: TSMC didn’t only need ASML, it also needed Apple and AMD and Nvidia, end users who were both willing to pay for performance and also work deeply with TSMC to figure out generation after generation of faster chips. Tencent and Alibaba and Baidu will now join Huawei in being the China chip industry’s most demanding customers, in the best possible sense.

China’s Trailing Edge

There is one more advantage China has: remember all of those old fabs that TSMC is still operating? It turns out that as more and more products incorporate microprocessors, trailing edge chips are exploding in demand. This was seen most clearly during the pandemic when U.S. automakers, who foolishly canceled their chip orders when the pandemic hit, suddenly found themselves at the back of the line as demand for basic chips skyrocketed.

In the end it was China that picked up a lot of the slack: the country’s commitment to building its own semiconductor industry is not a new one (just much more pressing), and part of the process of walking the path I detailed above is building more basic chips using older technologies. China’s share of >45 nanometer chips was 23% in 2019, and probably over 35% today; its share of 28-45 nanometer chips was 19% in 2019 and is probably approaching 30% today. Moreover, these chips still make up most of the volume for the industry as a whole: when you see charts like this, which measure market share by revenue, keep in mind that China has achieved 9% market share with low-priced chips:

China's increasing share of chips by revenue

The Biden administration’s sanctions are designed to not touch this part of the industry: the limitations are on high end fabs and the equipment and people that go into them, not trailing edge fabs that make up most of this volume. There is good reason for this: these trailing edge factories are still using a lot of U.S. equipment; for most equipment makers China is responsible for around a third of their revenue. That means cutting off trailing edge fabs would have two deleterious effects on the U.S.: a huge number of the products U.S. consumers buy would falter for lack of chips, even as the same U.S. companies that have built the advantage the administration is seeking to exploit would have their revenue (and future ability to invest in R&D) impaired.

It’s worth pointing out, though, that this is producing a new kind of liability for the U.S., and potentially more danger for Taiwan.

Go back to Intel’s strategy of selling off and/or reusing its old fabs, which again, made sense given the path Intel started on decades ago: that means that Intel, unlike TSMC, doesn’t have any trailing edge capacity (outside of what it acquired in the Tower Semiconductor deal). GlobalFoundries, the U.S.’s other foundry, had the same model as Intel while it was the manufacturing arm of AMD; GlobalFoundries acquired trailing edge capacity with its acquisition of Chartered Semiconductor, but there is a reason why the U.S. >45 nanometer market share was only 9% in 2019 (and likely lower today), and 28-45 nanometer market share was a mere 6% (and again, likely lower today).

Again, these aren’t difficult chips to make, but that is precisely why it makes little sense to build new trailing edge foundries in the U.S.: Taiwan already has it covered (with the largest marketshare in both categories), and China has the motivation to build more just so it can learn.

What, though, if TSMC were taken off the board?

Much of the discussion around a potential invasion of Taiwan — which would destroy TSMC (foundries don’t do well in wars) — centers around TSMC’s lead in high end chips. That lead is real, but Intel, for all of its struggles, is only 3-5 years behind. That is a meaningful difference in terms of the processors used in smartphones, high performance computing, and AI, but the U.S. is still in the game. What would be much more difficult to replace are, paradoxically, trailing node chips, made in fabs that Intel long ago abandoned.

China, meanwhile, has had good reason to keep TSMC around, even as it built up its own trailing edge fabs: the country needs cutting edge chips, and TSMC makes them. However, if those chips are cut off, then what use is TSMC to China? This isn’t a new concern, by the way; I wrote after the U.S. imposed sanctions on Huawei:

I am, needless to say, not going to get into the finer details of the relationship between China and Taiwan (and the United States, which plays a prominent role); it is less that reasonable people may disagree and more that expecting reasonableness is probably naive. It is sufficient to note that should the United States and China ever actually go to war, it would likely be because of Taiwan.

In this TSMC specifically, and the Taiwan manufacturing base generally, are a significant deterrent: both China and the U.S. need access to the best chip maker in the world, along with a host of other high-precision pieces of the global electronics supply chain. That means that a hot war, which would almost certainly result in some amount of destruction to these capabilities, would be devastating…one of the risks of cutting China off from TSMC is that the deterrent value of TSMC’s operations is diminished.

My worry is that this excerpt didn’t go far enough: the more that China builds up its chip capabilities — even if that is only at trailing nodes — the more motivation there is to make TSMC a target, not only to deny the U.S. its advanced capabilities, but also the basic chips that are more integral to everyday life than we ever realized.

MAD Chips

So is this chip ban the right move?

In the medium term, the impacts will be significant, particularly in terms of the stated target of these sanctions — AI. Only now is it becoming possible to manufacture intelligence, and the means to do so is incredibly processor intensive, both in terms of quality and quantity. Moreover, not only does AI figure to loom large in military applications, but it is also likely to spur innovation in its own right, perhaps even in terms of figuring out how to keep pushing the frontier of chip design.

In the long run, meanwhile, the U.S. may have given up what would have been, thanks to the sheer amount of cost and learning curve distance involved, a permanent economic advantage. Absent politics there simply is no reason to compete with TSMC or ASML or any of the other specialized parts of the supply chain; it would simply be easier to buy instead of build. Now, though, it is possible to envision a future where China undercuts U.S. companies in chips just like they once did in more labor-intensive industries, even as its own AI capabilities catch up and, given China’s demonstrated willingness to use technology in deeply intrusive ways, potentially surpass the West with its concerns about privacy and property rights.

The big question that I am raising in this article is the short run: while I have spent most of the last two years cautioning Americans who thought Taiwan was Thailand to not go from 0 to 100 in terms of the China threat, this move has in fact raised my concern level significantly. I am still, on balance, skeptical about a conflict, thanks in large part to how intertwined the U.S. and Chinese economies still are: any conflict would be mutually assured economic destruction.

Chips did, until three weeks ago, fall under the same paradigm; I wrote earlier this year in Tech and War:

This point applies to semiconductors broadly: as long as China needs U.S. technology or TSMC manufacturing, it is heavily incentivized to not take action against Taiwan; when and if China develops its own technology, whether now or many years from now, that deterrence is no longer a factor. In other words, the short-term and longer-term are in opposition to the medium-term…

There is no obvious answer, and it’s worth noting that the historical pattern — i.e. the Cold War — is a complete separation of trade and technology. That is one possible path, that we may fall into by default. It’s worth remembering, though, that dividers in the street are no way to live, and while most U.S. tech companies have flexed their capabilities, the most impressive tech of all is attractive enough and irreplaceable enough that it could still create dependencies that lead to squabbles but not another war.

Those dependencies are being severed; hopefully we still find sufficient reason to go no further than squabbles.

  1. The very first Nvidia chips were manufactured by SGS-Thomson Microelectronics, but Nvidia’s chips have been manufactured mostly by TSMC from the original GeForce on. 

Microsoft Full Circle

In last week’s interview with Stratechery, Microsoft CEO Satya Nadella explained why the company was open to partnering with Meta for VR:

The way I come at it, Ben, is that I like to separate out, “What is the system, what are the apps”? Of course, we want to bring the two things together where we can create magic, but at the same time, I also want our application experiences in particular to be available on all platforms, that’s very central to how our strategy is.

For example, when I think about the Metaverse, the first thing I think about is it’s not going to be born in isolation from everything else that’s in our lives, which is you’re going to have a Mac or a Windows PC, you’re going to have an iOS or an Android phone, and maybe you’ll have a headset. So if that is your life, how do we bring, especially Microsoft 365, all of the relationships that are set up, the work artifacts I’ve set up all to life in that ecosystem of devices? That’s at least how I come to it and that’s where when Mark started talking to us about his next generation stuff around Quest was pretty exciting, so it made a lot of sense for us to bring — whether it’s Teams with its immersive meetings experience to Quest or whether it’s even Windows 365 streaming, and then, of course, all our management and security and even Xbox — [to Quest]; that’s what is the motivation behind it.

This seems obvious today in 2022, but it was a fairly radical point of view when Nadella took over Microsoft in 2014. Nadella’s first event in March 2014 centered on the announcement of Microsoft’s iconic Office suite on Apple’s iPad; the apps had been developed under former CEO Steve Ballmer, but had been withheld from launch until the company had touch-centric versions ready for Windows-based touch devices. From the beginning of Stratechery I was adamant that this was a major mistake driven by Microsoft’s inability to imagine a future without Windows at the center; from 2013’s Services, Not Devices:

The truth is that Microsoft is wrapping itself around an axle of its own creation. The solution to the secular collapse of the PC market is not to seek to prop up Windows and force an integrated solution that no one is asking for; rather, the goal should be the exact opposite. Maximum effort should be focused on making Office, Server, and all the other products less subservient to Windows and more in line with consumer needs and the reality of computing in 2013.

A drawing of The Horizontal Layer of Services

The trouble for Microsoft in the devices layer is that they only know horizontal domination. When there was nothing but PCs, the insistence on one experience no matter the hardware worked perfectly. However, a Dell and an HP are much more similar than a tablet and a web page, for example, each of which has its own input method, user expectations, and constraints. A multi-device world demands bespoke experiences, not one size fits all. Microsoft simply doesn’t seem to understand that, and the longer they seek to “horizontalize” devices the greater the write-offs will become.

However, look again at that picture: there remains a horizontal layer — services — and it’s there that Microsoft should focus its energy. For Office and Server specifically:

  • Documents remain essential and ubiquitous to all of the world outside of Silicon Valley; an independent Office division should be delivering bespoke experiences on every meaningful platform. Office 365 is a great start that would be even better with a version for iPad.
  • A great many apps are simply front-ends for web-based services; an independent Server division should be delivering best-in-class interfaces and tools for app developers on every meaningful platform.

[…] “Devices and services” is only half right; unfortunately Ballmer picked the wrong half.

This is why it was so important that Office for iPad was Nadella’s first major announcement; I wrote after the event in When CEOs Matter:

This is the power CEOs have. They cannot do all the work, and they cannot impact industry trends beyond their control. But they can choose whether or not to accept reality, and in so doing, impact the worldview of all those they lead.

Four years later Nadella’s reworking of the culture was all but complete, as I wrote in The End of Windows:

The story of Windows’ decline is relatively straightforward and a classic case of disruption…What is more interesting, though, is the story of Windows’ decline in Redmond, culminating with last week’s reorganization that, for the first time since 1980, left the company without a division devoted to personal computer operating systems (Windows was split, with the core engineering group placed under Azure, and the rest of the organization effectively under Office 365; there will still be Windows releases, but it is no longer a standalone business).

This new reality couldn’t have been clearer at last week’s Microsoft Inspire worldwide partner conference: Nadella’s keynote was all about the cloud, from Azure to Teams; Windows was demoted to one section of the company’s Surface announcements held as a precursor to the main event.

Do More With Less

This is how Nadella opened his keynote:

We’re going through a period of historic economic, societal, and technological change. But for all the uncertainty we continue to see in the world, one thing is clear: organizations in every industry are turning to you and your digital capability to help them do more with less, so that they can navigate this change and emerge stronger. You are the change agents who make doing more with less possible. Less time, less cost, less complexity, with more innovation, more agility, and more resilience. Doing more with less doesn’t mean working harder or longer — it’s not going to scale — it means applying technology to amplify what you can do and ultimately what an organization can achieve amidst today’s constraints.

Over the past few years, we have talked extensively about digital transformation. But today we need to deliver on the digital imperative for every organization. It all comes down to how we can help you do this with the Microsoft cloud. No other cloud offers the best of category products, and the best of suite solutions, and that’s what we’ll focus on at Ignite this week as we walk through the five key imperatives.

This “do more with less” message recurred throughout Nadella’s presentation. Three separate times Nadella emphasized how much customers would save by going with a Microsoft bundle, but that was only the “with less” part of the message; each pitch also explained why the Microsoft approach was better (i.e. “do more”). Start with security:

Protecting is complex and gets expensive. Every organization experiences this with so many different devices, connections to partners, and an ever-shifting cloud resource deployment. The more agile you become, the more your security team struggles to manage the risk; the more connected we become, the faster a successful attacker can move laterally through the enterprise to their target. For far too long customers have been forced to adopt multiple disconnected solutions from disparate sources that don’t integrate well and leave gaps. We offer a better option: a natively integrated security solution that is supported by a vibrant partner ecosystem…you get a comprehensive solution that closes gaps and works for you at machine speed. On average, customers save more than 60% when they turn to us compared to a multi-vendor solution.

Nadella’s argument: not only can you save money, but because all of the products come from one vendor you can rest assured that they are comprehensive and are designed to work together.

Now let’s turn to data:

With our Microsoft Intelligent Data Platform we provide a complete data fabric, from the operational stores to the analytics engines to data governance so that you can spend more time creating value and less time integrating and managing your data estate. Our goal is to provide you with the most comprehensive end-to-end data platforms so you don’t have to wrestle with the complexities of building and operating cloud scale data infrastructure yourself. Analytics alone on our data intelligence platform cost up to 59% less than any other cloud analytics out there.

That bit about “spend more time creating value and less time integrating and managing” is the part of Microsoft’s value proposition that Silicon Valley startups so frequently miss. Slack, perhaps most famously, was so certain its superior chat experience would beat out Teams (and it is superior), that company CEO Stewart Butterfield took out an ad in the New York Times welcoming Microsoft to the space; four years later, after Teams had over six times the daily active users (and before Slack was acquired by Salesforce), I explained in Teams OS and the Slack Social Network what Butterfield got wrong:

This is what Slack — and Silicon Valley, generally — failed to understand about Microsoft’s competitive advantage: the company doesn’t win just because it bundles, or because it has a superior ground game. By virtue of doing everything, even if mediocrely, the company is providing a whole that is greater than the sum of its parts, particularly for the non-tech workers that are in fact most of the market. Slack may have infused its chat client with love, but chatting is a means to an end, and Microsoft often seems like the only enterprise company that understands that.

That end is, to use Nadella’s words, “creating value”; “integrating and managing” is exactly what companies want to avoid.

With Microsoft 365 we provide a complete cloud-first experience that makes work better for today’s digitally connected and distributed workforce. Customers can save more than 60% compared to a patchwork of solutions. Microsoft 365 includes Teams plus the apps you always relied on — Word, Excel, Powerpoint, and Outlook — as well as new applications for creation and expression like Loop, Clipchamp, Stream, and Designer, and it’s all built on the Microsoft graph, which makes available to you the information about people, their relationships, all their work artifacts, meetings, events, documents, in one interconnected system. Thanks to the graph you can understand how work is changing and how your digitally distributed workforce is working. This is so critical, and it all comes alive in the new Microsoft 365 application.

Ah, there are the Office applications I referenced at the beginning. But notice the word that is missing: Office.

From Office to Microsoft

From The Verge:

Microsoft is making a major change to its Microsoft Office branding. After more than 30 years, Microsoft Office is being renamed “Microsoft 365” to mark the software giant’s collection of growing productivity apps. While Office apps like Excel, Outlook, Word, and PowerPoint aren’t going away, Microsoft will now mostly refer to these apps as part of Microsoft 365 instead of Microsoft Office.

Microsoft has been pushing this new branding for years, after renaming Office 365 subscriptions to Microsoft 365 two years ago, but the changes go far deeper now. “In the coming months, the Office mobile app, and the Office app for Windows will become the Microsoft 365 app, with a new icon, a new look, and even more features,” explains a FAQ from Microsoft. That means if you use any of the dedicated Office apps, they’ll all be branded with Microsoft 365 soon, and with a new logo. The first logo and design changes will appear in November, followed by the Office app on Windows, iOS, and Android all getting rebranded in January.

I’ll be honest: as an increasingly old man in technology the end of the “Office” name kind of bums me out. My nostalgia is satisfied, though, by a Microsoft that has truly come full circle.

The truth about Microsoft is that while Windows’ relationship with hardware has traditionally been modular (the Surface line notwithstanding), the company’s strategy has always been about integration and bundling. This is why Ballmer was so hesitant to give up on Windows as the center of the company’s go-to-market: sure, people wanted the Office applications on different devices, but it was Windows that tied Office to Outlook to Exchange to Active Directory to Windows Server and on down the line. This, by extension, is why Nadella’s willingness to embrace reality was a risk: Office on its own was a nice business, but it wasn’t the center of enterprise like Windows had been.

It turned out, though, that facing reality brought another benefit: the ability to see and grasp an opportunity when it appeared. Teams, which started development in 2015, a year after Nadella’s announcement, wouldn’t simply be a chat app: it would be the new hub around which Office orbited. Teams (and Outlook) development leader Brian MacDonald said at a press event in 2019:

One of the really key things and drivers of what we wanted to do with Teams was have that be a hub for Office 365. Before what we had done was just taken all those personal productivity workloads and then moved them to the cloud, but we wanted something that was purpose-built for the cloud that could be a hub across all of Office and frankly across the rest of what we’re doing at Microsoft. A lot of the Power BI, Power Apps, and Dynamics tools that James was building, but also third party. So we built a platform for that and the third-party platform and the first-party platform are actually the same.

If that sounds a lot like Windows — a hub that hosted not just Office, but other Microsoft applications and services, and a platform for 3rd-party developers — Nadella agrees with you. From the same event:

Sometimes I think the new OS is not going to start from the hardware, because the classic OS definition, that Tanenbaum, one of the guys who wrote the book on Operating Systems that I read when I went to school was: “It does two things, it abstracts hardware, and it creates an app model”. Right now the abstraction of hardware has to start by abstracting all of the hardware in your life, so the notion that this is one device is interesting and important, it doesn’t mean the kernel that boots your device just goes away, it still exists, but the point of real relevance I think in our lives is “hey, what’s that abstraction of all the hardware in my life that I use?” – some of it is shared, some of it is personal. And then, what’s the app model for it? How do I write an experience that transcends all of that hardware? And that’s really what our pursuit of Microsoft 365 is all about.

Office being on its own gave Teams an easy go-to-market: Microsoft just bundled it in. Today, though, it is Teams and everything built on that scaffolding that is Microsoft’s new Windows. It is the company and its operating system, not its apps, that are back at the center. In this sense, renaming Office 365 to Microsoft 365 is the most natural thing in the world: Office was a ship that set sail from the declining civilization that was Windows, with an uncertain destination. Today, though, that ship is but a footnote in Microsoft’s new empire in the cloud.

Moreover, it seems likely this empire will be more durable than the old Microsoft republic: the entire reason why Windows faltered as a strategic linchpin is that it was tied to a device — the PC — that was disrupted by a paradigm shift in hardware. Microsoft 365, on the other hand, is attached to the customer. Nadella again:

What we are trying to do [with Microsoft 365] is bring home that notion that it’s about the user, the user is going to have relationships with other users and other people, they’re going to have a bunch of artifacts, their schedules, their projects, their documents, many other things, their to-do’s, and they are going to use a variety of different devices.

This is why Microsoft, instead of being late to the iPad, is remarkably early to VR. Why not? Devices are but mere conduits to the cloud, which means that Microsoft is well-placed to navigate this new paradigm if it becomes a major platform — and to not miss a beat if it is not.1 In other words, to say that Microsoft has come full circle may be selling Nadella’s transformation short: the all-encompassing dominant Microsoft of old may be back, but in a version that is even stronger and more resilient than before.

  1. This also, it must be said, casts doubt on Meta’s determination to go in the opposite direction, and give up its position as a user-centric service to be a hardware-dependent platform 

Meta Meets Microsoft

There is an easy way to write this Article, and a hard way.

This weekend the easy way seemed within reach: I watched Meta’s Connect Keynote (I had early access in order to prepare for an interview with Meta CEO Mark Zuckerberg and Microsoft CEO Satya Nadella) and was, like apparently much of the Internet, extremely underwhelmed. Sure, the new Quest Pro looked cool, and I was very excited about the partnership with Microsoft (more on both in a moment); the presentation, though, was cringe, and seemed to lack any compelling demos of virtual reality.

What was particularly concerning was the entire first half of the keynote, which was primarily focused on consumer applications, including Horizon Worlds; Horizon Worlds was the app The Verge reported was so buggy that Meta employees working on it barely used it, or, more worryingly, was buggy because Meta employees couldn’t be bothered to dogfood it. The concerning part of the keynote was that you could see why.

That was why this Article was going to be easy: writing that Meta’s metaverse wasn’t very compelling would slot right in to most people’s mental models, prompting likes and retweets instead of skeptical emails; arguing that Meta should focus on its core business would appeal to shareholders concerned about the money and attention devoted to a vision they feared was unrealistic. Stating that Zuckerberg got it wrong would provide comfortable distance from not just an interview subject but also a company that I have defended in its ongoing dispute with Apple over privacy and advertising.

Indeed, you can sense my skepticism in the most recent episode of Sharp Tech, which was recorded after seeing the video but before trying the Quest Pro. See, that was the turning point: I was really impressed, and that makes this Article much harder to write.

Meetings in VR

I wrote about virtual reality and the Metaverse a number of times last year, including August’s Metaverses, Meta’s keynote and name-change in October, and Microsoft and the Metaverse in November. The most important post though, at least in terms of my conception of the space, was this August Update about Horizon Workrooms (not to be confused with the aforementioned Horizon Worlds):

My personal experience with Workrooms didn’t involve any dancing or fitness; it was simply a conversation with the folks that built Workrooms. The sense of presence, though, was tangible. Voices came from the right place, thanks to Workrooms’ spatial audio, and hand gestures and viewing directions really made it feel like the three of us were in the same room. What was particularly compelling was the way that Workrooms’ virtual reality space seamlessly interfaced with the real world…

I don’t want to go too far given I’ve only tried out Workrooms once, but this feels like something real. And, just as importantly, there is, thanks to COVID, a real use case. Of course companies will need to be convinced, and hardware will need to be bought, but that’s another reason why the work angle is so compelling: companies are willing to pay for tools that increase productivity to a much greater extent than consumers are.

I don’t have much of a company, but I did buy Quest 2’s for the Passport team, and we held one meeting a week in Workrooms. One in particular stands out to me: we made a major decision about the product, and my memory of that decision does not involve me sitting at my desk in Taiwan, but of being in that virtual room. The sense of place and presence was that compelling.

Then one of the developers moved house, temporarily misplaced his headset, and we haven’t used it since.

Microsoft’s Advantage

It was my experience with Workrooms that undergirded my argument that Microsoft was the best placed to succeed with virtual reality. Yes, virtual reality entails putting on a headset and leaving your current environment for a virtual one, but that is not so different from leaving your house and going to the office. Moreover, Microsoft’s shift to Teams as its de facto OS meant it was well-placed to deliver company-specific metaverses:

This integration, though, also means that Microsoft has a big head start when it comes to the Metaverse: if the initial experience of the Metaverse is as an individual self-contained metaverse with its own data and applications, then Teams is already there. In other words, not only is enterprise the most obvious channel for virtual reality from a hardware perspective, but Teams is the most obvious manifestation of virtual reality’s potential from a software perspective.

The shortcoming was hardware: Microsoft had the HoloLens, but that was an augmented reality device. Continuing from that Article:

What is not integrated is the hardware; Microsoft sells a number of third party VR headsets on said webpage, all of which have to be connected to a Windows computer. Microsoft’s success will require creating an opportunity for OEMs similar to the opportunity that was created by the PC. At the same time, this solution is also an advantageous one for the long-term Metaverse-as-Internet vision: Windows is the most open of the consumer platforms, and that applies to Microsoft’s current implementation of VR. The company would do well to hold onto this approach.

This Article seems quite prescient given the announcement that Microsoft is partnering with Meta going forward: Microsoft is bringing its Teams-based ecosystem to Quest, along with enterprise tools like Azure Active Directory and Intune device management, with Xbox Game Pass thrown in for good measure. In doing so Microsoft gets to piggy-back on Meta’s massive investments in hardware.

It’s difficult to overstate what a massive win this feels like for Microsoft: the company will have a privileged position on what is for now the most advanced headset with the most resources behind it, not because it is paying for the privilege but because it is the most obvious go-to-market for this new technology. I argued in that Article that VR adoption would probably look more like the PC than it did smartphones:

Implicit in assuming that augmented reality is more important than virtual reality is assuming that this new way of accessing the Internet will develop like mobile did. Smartphone makers like Apple, though, had a huge advantage: people already had and wanted mobile phones; selling a device that you were going to carry anyway, but which happened to be infinitely more capable for only a few hundred more dollars, was a recipe for success in the consumer market.

PCs, though, didn’t have that advantage: the vast majority of the consumer market had no knowledge of or interest in computers; rather, most people encountered computers for the first time at work. Employers bought their employees computers because computers made them more productive; then, once consumers were used to using computers at work, an ever increasing number of them wanted to buy a computer for their home as well. And, as the number of home computers increased, so did the market opportunity for developers of non-work applications like games.

I suspect that this is the path that virtual reality will take. Like PCs, the first major use case will be knowledge workers using devices bought for them by their employer, eager to increase collaboration in a remote work world, and as quality increases, offer a superior working environment. Some number of those employees will be interested in using virtual reality for non-work activities as well, increasing the market for non-work applications.

This is still my position, and my experience with the Quest Pro only confirmed it. A lot of the new functionality is very impressive: the facial expression detection really works, and the new controllers are shockingly precise (one highlight was playing Operation and feeling like you actually had a chance). The best demo of all, though, was the updated version of Workrooms; the product manager showing me the new features was actually in Austin, Texas (I was in California), but that was the only demo where I actually forgot about the people in the room with me: mentally I was truly in a virtual meeting room.

At the same time, this was a $1,500 device. On one hand, that was actually cheaper than I expected; on the other hand, it still has significant weaknesses: it’s heavy, the battery life is only an hour or two, and the resolution is still disappointingly low. I’m not sure I can justify buying it for my Passport team, even if we were still going through the hassle of pulling on a headset for one meeting a week.

Meta’s Outlook

So where does this leave Meta?

First off, while Quest Pro is a definite leap forward, we still seem a few years away from a device that is truly ready for mass consumption. I noted the big issues above: weight, battery life, and resolution.

A bigger concern I have is software: the most compelling use case is meetings, and that matters for a market — enterprise — that is not only not Meta’s primary focus but that the company is effectively outsourcing to Microsoft. This raises a further strategic concern: the inverse of Microsoft winning by virtue of Meta spending billions on hardware to make Microsoft VR software compelling is that Meta runs the risk of being the IBM to Microsoft’s DOS. Zuckerberg admitted the risks but argued the benefits outweighed them:

I think the pros of the partnership way outweigh the risks. Obviously nothing is risk-free, but at the end of the day, we also have to do our job and deliver world-class services and hardware. If we don’t do that, then obviously we will lose. I do think though that there is this alignment that we talked about before between the things that we primarily care about, which are the aspects of the platform around expression…

I think basically everyone else in the space would focus more on the single-player experience. Our bet in this is a deep bet that the connection aspect matters more and this has been part of the experience of running the company all along, is that even just growing up we’re told, “Do your homework, then go play with your friends.” I just think at some level that’s wrong. The connection between people is the point, not the thing that you do after everything else.

I do get the vision: while meetings have obvious utility, if you go back to the broader Metaverse vision the idea of, say, watching a basketball game courtside with my friends, despite the fact we are scattered all over the world and nowhere near a stadium, is a really compelling one. Social experiences in VR, though, require everyone involved to have a compatible VR headset.

Social media on your computer or phone isn’t like this: one of the reasons why an ad model is so compatible with social media is because it enables the service to be free, which makes it much more plausible that your friends are on the platform. Introducing even the slightest barrier to entry — much less a several hundred dollar one — makes it much less likely that a multiplayer experience is even possible.

This by extension means the timing question is an even more daunting one for Meta in particular. I do think that VR has real utility, but it will take time for that utility to be accessible on a cost-effective basis for enterprises and individual users; meaningful social experiences will take longer yet, simply because social experiences depend on the people you want to hang out with being bought in as well.1

This then is where I stand on VR and the metaverse concept, on Meta’s one-year anniversary:

  • VR does have real utility, but I think that utility will be realized in the enterprise first, in part because the value of VR only becomes apparent when you use it, and you’re more likely to use it if your company pays for it (VR really doesn’t demo well, as yesterday’s presentation showed).
  • Microsoft is well-placed to deliver that utility on top of Meta hardware.
  • Meta is likely to be the catalyst for VR becoming a widely used technology but it is much more uncertain as to whether the company will capture sufficient value to justify its massive investment, thanks in part to its focus on social networking.

I very well might be wrong on one or all of these points, in either direction. On one hand, maybe most people will never buy into the idea of putting a headset on; on the other hand, maybe Meta will figure out a go-to-market strategy that somehow communicates VR presence in a way that gets people to buy headsets in sufficient mass to make social experiences viable.

What is clear is that Zuckerberg in particular seems more committed to VR than ever. It may be the case that he is seen as the founding father of the Metaverse, even as Meta is a potential casualty.

You can read an interview I conducted with Zuckerberg and Nadella about yesterday’s announcement here.

  1. Meta’s argument would be that these social experiences will also be accessible via your smartphone or PC, but that means giving up on presence, which is VR’s killer feature 

Nvidia In the Valley

Nvidia investors have been in the valley before:

A drop in Nvidia's stock price

This chart, though, is not from the last two years, but rather from the beginning of 2017 to the beginning of 2019; here is 2017 to today:

Nvidia's current stock price drop

Three big things happened to Nvidia’s business over the last three years that drove the price to unprecedented heights:

  • The pandemic led to an explosion in PC buying generally and gaming cards specifically, as customers had both the need for new computers and a huge increase in discretionary income with nowhere to spend it beyond better game experiences.
  • Machine learning applications, which were trained on Nvidia GPUs, exploded amongst the hyperscalers.
  • The crypto bubble led to skyrocketing demand for Nvidia chips to solve Ethereum proof-of-work equations to earn — i.e. mine — Ether.

Crypto isn’t so much a valley as it is a cliff: Ethereum successfully transitioned to a proof-of-stake model, rendering entire mining operations, built with thousands of Nvidia GPUs, worthless overnight; given that Bitcoin, the other major crypto network to use proof-of-work, is almost exclusively mined on custom-designed chips, all of those old GPUs are flooding the second-hand market. This is particularly bad timing for Nvidia given that the pandemic buying spree ended just as the company’s attempt to catch up on demand for its 3000-series chips was coming to fruition. Needless to say, too much new inventory plus too much used inventory is terrible for a company’s financial results, particularly when you’re trying to clear the channel for a new series:

Nvidia's gaming revenue drop

Nvidia CEO Jensen Huang told me in a Stratechery Interview last week that the company didn’t see this coming:

I don’t think we could have seen it. I don’t think I would’ve done anything different, but what I did learn from previous examples is that when it finally happens to you, just take the hard medicine and get it behind you…We’ve had two bad quarters and two bad quarters in the context of a company, it’s frustrating for all the investors, it’s difficult on all the employees.

We’ve been here before at Nvidia.

We just have to deal with it and not be overly emotional about it, realize how it happened, keep the company as agile as possible. But when the facts presented themselves, we just made cold, hard decisions. We took care of our partners, we took care of our channel, we took care of making sure that everybody had plenty of time. By delaying Ada, we made sure that everybody had plenty of time, and we repriced all the products such that even in the context of Ada, even if Ada were available, the products, after they’ve been repriced, are actually a really good value. I think we took care of as many things as we could, it resulted in two fairly horrific quarters. But I think in the grand scheme of things, we’ll come right back so I think that was probably the lessons from the past.

This may be a bit generous; analysts like Tae Kim and Doug O’Laughlin forecast the stock plunge earlier this year, although that was probably already too late to avoid this perfect storm of slowing PC sales and Ethereum’s transition, given that Nvidia ordered all of those extra 3000-series GPUs in the middle of the pandemic (Huang also cited the increasing lead times for chips as a big reason why Nvidia got this so wrong).

What is more concerning for Nvidia, though, is that while its inventory and Ethereum issues are the biggest drivers of its “fairly horrific quarters”, that is not the only valley its gaming business is navigating. I’m reminded of John Bunyan’s Pilgrim’s Progress:

Now Christian had not gone far in this Valley of Humiliation before he was severely tested, for he noticed a very foul fiend coming over the field to meet him; his name was Apollyon [Destroyer].

Call Apollyon inventory issues; Christian defeated him, as Nvidia eventually will.

Now at the end of this valley there was another called the Valley of the Shadow of Death; and it was necessary for Christian to pass through it because the way to the Celestial City was in that direction. Now this valley was a very solitary and lonely place. The prophet Jeremiah describes it as, “A wilderness, a land of deserts and of pits, a land of drought and of the shadow of death, a land that no man” (except a Christian) “passes through, and where no man dwells.”

What was striking about Nvidia’s GTC keynote last week was the extent to which this allegory seems to fit Nvidia’s ambitions: the company is setting off on what appears to be a fairly solitary journey to define the future of gaming, and it’s not clear when or if the rest of the industry will come along. Moreover, the company is pursuing a similarly audacious strategy in the data center and with its metaverse ambitions as well: in all three cases the company is pursuing heights even greater than those achieved over the last two years, but the path is surprisingly uncertain.

Gaming in the Valley: Ray-Tracing and AI

The presentation of 3D games has long depended on a series of hacks, particularly in terms of lighting. First, a game determines what you actually see (i.e. there is no use rendering an object that is occluded by another); then the correct texture is applied to the object (i.e. a tree, or grass, or whatever else you might imagine). Finally light is applied based on the position of a pre-determined light source, with a shadow map on top of that. The complete scene is then translated into individual pixels and rendered onto your 2D screen; this process is known as rasterization.
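The fixed-function lighting step at the end of that pipeline can be made concrete with a toy shading function. This is a sketch only: `shade_pixel`, the hard on/off shadow flag, and the 0.2 darkening factor are all illustrative assumptions, not any real graphics API.

```python
# Toy version of the rasterized lighting step described above: visibility
# and texturing are assumed already done, so each pixel arrives with a
# texture color and a surface normal; light comes from one pre-determined
# direction, and a precomputed shadow map says whether the pixel is occluded.

def shade_pixel(texture_rgb, normal, light_dir, in_shadow):
    # Lambertian term: how directly the surface faces the fixed light source
    intensity = max(0.0, sum(n * l for n, l in zip(normal, light_dir)))
    if in_shadow:          # the shadow-map lookup is a hard on/off decision,
        intensity *= 0.2   # which is why rasterized shadows have harsh edges
    return tuple(c * intensity for c in texture_rgb)

# A green pixel facing straight up, lit from directly overhead
print(shade_pixel((0.2, 0.8, 0.2), (0.0, 1.0, 0.0), (0.0, 1.0, 0.0), False))
# (0.2, 0.8, 0.2)
```

Note that nothing here knows where the light actually travels; the shadow map was baked ahead of time, which is exactly the hack ray tracing eliminates.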

Ray tracing handles light completely differently: instead of starting with a pre-determined light source and applying light and shadow maps, ray tracing starts with your eye (or more precisely, the camera through which you are viewing the scene). It then traces the line of sight to every pixel on the screen, bounces it off the object at that pixel (based on what type of object it represents), and continues following that ray until it either hits a light source (and thus computes the lighting) or is discarded. This produces phenomenally realistic lighting, particularly in terms of reflections and shadows. Look closely at these images from PC Magazine:

Let’s see how ray tracing can visually improve a game. I took the following screenshot pairs in Square Enix’s Shadow of the Tomb Raider for PC, which supports ray-traced shadows on Nvidia GeForce RTX graphics cards. Specifically, look at the shadows on the ground.

Rasterized shadows
Ray-traced shadows

[…] The ray-traced shadows are softer and more realistic compared with the harsher rasterized versions. Their darkness varies depending on how much light an object is blocking and even within the shadow itself, while rasterization seems to give every object a hard edge. The rasterized shadows still don’t look bad, but after playing the game with ray-traced shadows, it’s tough to go back.
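The backwards-tracing loop described above (follow the line of sight, bounce, stop at a light or discard) can be sketched in miniature. The one-plane “scene” below is a stand-in chosen purely to keep the control flow visible; real ray tracers intersect millions of triangles and scatter rays stochastically.

```python
from dataclasses import dataclass

MAX_BOUNCES = 3

@dataclass
class Hit:
    point: tuple      # where the ray struck the surface
    albedo: tuple     # fraction of each color channel the surface reflects

def intersect_floor(origin, direction):
    # The whole "scene" is one grey floor plane at y=0; a ray can only
    # hit it if it is heading downward
    if direction[1] >= 0:
        return None
    t = -origin[1] / direction[1]
    point = tuple(o + t * d for o, d in zip(origin, direction))
    return Hit(point=point, albedo=(0.5, 0.5, 0.5))

def trace(origin, direction, depth=0):
    """Follow one ray until it reaches a light source or is discarded."""
    if depth > MAX_BOUNCES:
        return (0.0, 0.0, 0.0)        # discarded: contributes no light
    hit = intersect_floor(origin, direction)
    if hit is None:
        return (1.0, 1.0, 1.0)        # escaped upward: hit the "sky" light
    # Mirror bounce off the floor (its normal points straight up), then keep going
    bounced = (direction[0], -direction[1], direction[2])
    color = trace(hit.point, bounced, depth + 1)
    return tuple(a * c for a, c in zip(hit.albedo, color))

# A camera ray angled down at the floor: one bounce, then the sky
print(trace((0.0, 1.0, 0.0), (0.0, -1.0, 1.0)))   # (0.5, 0.5, 0.5)
```

Even this toy makes the cost obvious: lighting one pixel can take several intersection tests, and a real scene multiplies that by millions of pixels and many rays per pixel, every frame.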

Nvidia first announced API support for ray tracing back in 2009; however, few if any games used it because it is so computationally expensive (ray tracing is used in movie CGI, but those scenes can be rendered over hours or even days; games have to be rendered in real-time). That is why Nvidia introduced dedicated ray tracing hardware in its GeForce 2000-series line of cards (thus christened “RTX”), which came out in 2018. AMD went a different direction, adding ray-tracing capabilities to its core shader units (which also handle rasterization); this is slower than Nvidia’s dedicated hardware solution, but it works, and, importantly, since AMD makes the graphics chips for the PS5 and Xbox, it means that ray tracing support is now industry-wide. More and more games will support ray tracing going forward, although most applications are still fairly limited because of performance concerns.

Here’s the important thing about ray tracing, though: by virtue of calculating light dynamically, instead of via light and shadow maps, it is something developers can get “for free.” A game or 3D environment that depends completely on ray tracing should be easier and cheaper to develop; more importantly, it means that environments could change in dynamic ways that the developer never anticipated, all while having more realistic lighting than the most labored-over pre-drawn environment.

This is particularly compelling in two emerging contexts: the first is in simulation games like Minecraft. With ray tracing it will be increasingly realistic to have highly detailed 3D worlds that are constructed on the fly and lit perfectly. Future games could go further: the keynote opened with a game called RacerX in which every single part of the game was fully simulated, including objects; the same sort of calculations used for light were applied to in-game physics as well.

The second context is a future of AI-generated content I discussed in DALL-E, the Metaverse, and Zero Marginal Cost Content. All of those textures I noted above are currently drawn by hand; as graphical capabilities — largely driven by Nvidia — have increased, so has the cost of creating new games, thanks to the need to create high resolution assets. One can imagine a future where asset creation is fully automated and done on the fly, and then lit appropriately via ray tracing.

For now, though, Nvidia is already using AI to render images: the company also announced version 3 of its Deep Learning Super Sampling (DLSS) technology, which predicts and pre-renders frames, meaning they don’t have to be computed at all (previous versions of DLSS predicted and pre-rendered individual pixels). Moreover, Nvidia is, as with ray-tracing, backing up DLSS with dedicated hardware to make it much more performant. These new approaches, matched with dedicated cores on Nvidia’s GPUs, make Nvidia very well-placed for an entirely new paradigm in not just gaming but immersive 3D experiences generally (like a metaverse).
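The frame-generation idea can be illustrated with a deliberately crude stand-in: instead of rendering the next frame, predict it from the two before it. Nvidia’s actual approach uses a neural network with motion vectors and optical flow; the naive per-pixel linear extrapolation below is only meant to show what “predicting a frame” means.

```python
def predict_next_frame(prev, curr):
    # Naive per-pixel linear extrapolation: assume each pixel keeps changing
    # at the rate it just changed (real DLSS uses motion vectors and a
    # neural network, nothing this crude)
    return [[2 * c - p for p, c in zip(prow, crow)]
            for prow, crow in zip(prev, curr)]

# Two tiny 2x2 "frames" of a brightening scene; the third is predicted, not rendered
frame1 = [[10, 20], [30, 40]]
frame2 = [[12, 24], [33, 44]]
print(predict_next_frame(frame1, frame2))   # [[14, 28], [36, 48]]
```

The payoff is the same in the toy as in the real thing: the predicted frame costs a handful of arithmetic operations per pixel instead of a full trip through the rendering pipeline.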

Here’s the problem, though: all of that dedicated hardware comes at a cost. Nvidia’s new GPUs are big chips — the top-of-the-line AD102, sold as the RTX 4090, is a fully integrated system-on-a-chip that measures 608.4mm² on TSMC’s N4 process;1 the top-of-the-line Navi 31 chip in AMD’s upcoming RDNA 3 graphics line, in comparison, is a chiplet design with a 308mm² graphics chip on TSMC’s N5 process,2 plus six 37.5mm² memory chips on TSMC’s N6 process.3 In short, Nvidia’s chip is much larger (which means much more expensive), and it’s on a slightly more modern process (which likely costs more). Dylan Patel explains the implications at SemiAnalysis:

In short, AMD saves a lot on die costs by forgoing AI and ray tracing fixed function accelerators and moving to smaller dies with advanced packaging. The advanced packaging cost is up significantly with AMD’s RDNA 3 N31 and N32 GPUs, but the small fan-out RDL packages are still very cheap relative to wafer and yield costs. Ultimately, AMD’s increased packaging costs are dwarfed by the savings they get from disaggregating memory controllers/infinity cache, utilizing cheaper N6 instead of N5, and higher yields…Nvidia likely has a worse cost structure in traditional rasterization gaming performance for the first time in nearly a decade.
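The die sizes above can be turned into a back-of-the-envelope cost comparison using the standard dies-per-wafer approximation and a Poisson yield model. The wafer prices and defect density below are hypothetical placeholders, not TSMC figures; only the die areas come from the comparison above.

```python
import math

WAFER_DIAMETER_MM = 300.0
DEFECT_DENSITY = 0.001   # assumed defects per mm^2 (hypothetical)
N5_WAFER_PRICE = 17_000  # hypothetical wafer prices, NOT real TSMC figures
N6_WAFER_PRICE = 9_000

def dies_per_wafer(die_area_mm2):
    # Standard approximation: wafer area over die area, minus an
    # edge-loss term for partial dies around the rim
    d = WAFER_DIAMETER_MM
    return (math.pi * (d / 2) ** 2 / die_area_mm2
            - math.pi * d / math.sqrt(2 * die_area_mm2))

def good_dies_per_wafer(die_area_mm2):
    # Poisson yield model: a bigger die is exponentially more likely to
    # catch a killer defect -- the chiplet strategy's structural advantage
    return dies_per_wafer(die_area_mm2) * math.exp(-DEFECT_DENSITY * die_area_mm2)

# Die areas from the text: monolithic AD102 vs. Navi 31's graphics die + 6 memory dies
cost_ad102 = N5_WAFER_PRICE / good_dies_per_wafer(608.4)
cost_navi31 = (N5_WAFER_PRICE / good_dies_per_wafer(308.0)
               + 6 * N6_WAFER_PRICE / good_dies_per_wafer(37.5))

print(f"AD102 silicon cost per die:   ~${cost_ad102:.0f}")
print(f"Navi 31 silicon cost per die: ~${cost_navi31:.0f}")
```

Whatever prices you plug in, the shape of the result is the same: the monolithic die pays twice, once for yield and once for edge loss, which is the cost structure gap Patel describes (this sketch ignores AMD’s extra packaging costs, which he notes are comparatively small).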

This is the valley that Nvidia is entering. Gamers were immediately up in arms after Nvidia’s keynote because of the 4000-series’ high prices, particularly when the fine print on Nvidia’s website revealed that one of the tier-two chips Nvidia announced was much more akin to a rebranded tier-three chip, with the suspicion being that Nvidia was playing marketing games to obscure a major price increase. Nvidia’s cards may have the best performance, and are without question the best placed for a future of ray tracing and AI-generated content, but at the cost of no longer being the best value for games as they are played today. Reaching the heights of purely simulated virtual worlds requires making it through a generation of charging for capabilities that most gamers don’t yet care about.

AI in the Valley: Systems, not Chips

One reason to be optimistic about Nvidia’s approach in gaming is that the company made a similar bet on the future when it invented shaders; I explained shaders after last year’s GTC in a Daily Update:

Nvidia first came to prominence with the Riva and TNT line of video cards that were hard-coded to accelerate 3D libraries like Microsoft’s Direct3D:

The GeForce line, though, was fully programmable via a type of computer program called a “shader” (I explained more about shaders in this Daily Update). This meant that a GeForce card could be improved even after it was manufactured, simply by programming new shaders (perhaps to support a new version of Direct3D, for example).

[…]More importantly, shaders didn’t necessarily need to render graphics; any sort of software — ideally programs with simple calculations that could be run in parallel — could be programmed as shaders; the trick was figuring out how to write them, which is where CUDA came in. I explained in 2020’s Nvidia’s Integration Dreams:

This increased level of abstraction meant the underlying graphics processing unit could be much simpler, which meant that a graphics chip could have many more of them. The most advanced version of Nvidia’s just-announced GeForce RTX 30 Series, for example, has an incredible 10,496 cores.

This level of scalability makes sense for video cards because graphics processing is embarrassingly parallel: a screen can be divided up into an arbitrary number of sections, and each section computed individually, all at the same time. This means that performance scales horizontally, which is to say that every additional core increases performance. It turns out, though, that graphics are not the only embarrassingly parallel problem in computing…

This is why Nvidia transformed itself from a modular component maker to an integrated maker of hardware and software; the former were its video cards, and the latter was a platform called CUDA. The CUDA platform allows programmers to access the parallel processing power of Nvidia’s video cards via a wide number of languages, without needing to understand how to program graphics.

Now the Nvidia “stack” had three levels.

The important thing to understand about CUDA, though, is that it didn’t simply enable external programmers to write programs for Nvidia chips; it enabled Nvidia itself.
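The “embarrassingly parallel” point in the excerpt above can be made concrete: split the screen into sections that share nothing, compute each independently, and stitch the results back together. The sketch below fans the work out across CPU threads; a GPU applies the same idea across thousands of cores (the per-section workload here is a made-up pattern, purely illustrative).

```python
from concurrent.futures import ThreadPoolExecutor

WIDTH, HEIGHT, SECTIONS = 640, 480, 8

def render_section(section):
    # Stand-in for real per-pixel work; crucially, each section needs
    # nothing from any other section
    rows_per_section = HEIGHT // SECTIONS
    start = section * rows_per_section
    return [[(x ^ y) & 0xFF for x in range(WIDTH)]
            for y in range(start, start + rows_per_section)]

def render_frame():
    # Fan the sections out across workers, then stitch them back in order
    with ThreadPoolExecutor(max_workers=SECTIONS) as pool:
        parts = list(pool.map(render_section, range(SECTIONS)))
    return [row for part in parts for row in part]

frame = render_frame()
print(len(frame), len(frame[0]))   # 480 640
```

Because no section waits on another, every additional worker adds throughput; that is the horizontal scaling the excerpt describes, and it holds whether the “workers” are 8 threads or 10,496 CUDA cores.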

Much of this happened out of desperation; Huang explained in a Stratechery interview last spring that introducing shaders, which he saw as essential for the future, almost killed the company:

The disadvantage of programmability is that it’s less efficient. As I mentioned before, a fixed function thing is just more efficient. Anything that’s programmable, anything that could do more than one thing just by definition carries a burden that is not necessary for any particular one task, and so the question is “When do we do it?” Well, there was also an inspiration at the time that everything looks like OpenGL Flight Simulator. Everything was blurry textures and trilinear mipmapped, and there was no life to anything, and we felt that if you didn’t bring life to the medium and you didn’t allow the artist to be able to create different games and different genres and tell different stories, eventually the medium would cease to exist. We were driven by simultaneously this ambition of wanting to create a more programmable palette so that the game and the artist could do something great with it. At the same time, we also were driven to not go out of business someday because it would be commoditized. So somewhere in that kind of soup, we created programmable shaders, so I think the motivation to do it was very clear. The punishment afterwards was what we didn’t expect.

What was that?

Well, the punishment is all of a sudden, all the things that we expected about programmability and the overhead of unnecessary functionality because the current games don’t need it, you created something for the future, which means that the current applications don’t benefit. Until you have new applications, your chip is just too expensive and the market is competitive.

Nvidia survived because its ability to do direct acceleration was still the best; it thrived in the long run because it took it upon itself to build the entire CUDA infrastructure to leverage shaders. This is where that data center growth comes from; Huang explained:

On the day that you become processor company, you have to internalize that this processor architecture is brand new. There’s never been a programmable pixel shader or a programmable GPU processor and a programming model like that before, and so we internalize. You have to internalize that this is a brand new programming model and everything that’s associated with being a program processor company or a computing platform company had to be created. So we had to create a compiler team, we have to think about SDKs, we have to think about libraries, we had to reach out to developers and evangelize our architecture and help people realize the benefits of it, and if not, even come close to practically doing it ourselves by creating new libraries that make it easy for them to port their application onto our libraries and get to see the benefits of it.

The first reason to recount this story is to note the parallels between the cost of shader complexity and the cost of ray tracing and AI in terms of current games; the second is to note that Nvidia’s approach to problem-solving has always been to do everything itself. Back then that meant developing CUDA for programming those shaders; today it means building out entire systems for AI.

Huang said during last week’s keynote:

Nvidia is dedicated to advancing science and industry with accelerated computing. The days of no-work performance scaling are over. Unaccelerated software will no longer enjoy performance scaling without a disproportionate increase in costs. With nearly three decades of a singular focus, Nvidia is expert at accelerating software and scaling computing by 1,000,000x, going well beyond Moore’s Law.

Accelerated computing is a full-stack challenge. It demands deep understanding of the problem domain, optimizing across every layer of computing, and all three chips: CPU, GPU, and DPU. Scaling across multi-GPUs on multi-nodes is a datacenter-scale challenge, and requires treating the network and storage as part of the computing fabric, and developers and customers want to run software in many places, from PCs to super-computing centers, enterprise data centers, cloud, to edge. Different applications want to run in different locations, and in different ways.

Today, we’re going to talk about accelerated computing across the stack. New chips and how they will boost performance, far beyond the number of transistors, new libraries, and how it accelerates critical workloads to science and industry, new domain-specific frameworks, to help develop performant and easily deployable software. And new platforms, to let you deploy your software securely, safely, and with order-of-magnitude gains.

In Huang’s view, simply having fast chips is no longer enough for the workloads of the future: that is why Nvidia is building out entire data centers using all of its own equipment. Here again, though, a future where every company needs accelerated computing generally, and Nvidia to build it for them specifically — Nvidia’s Celestial City — is in contrast to the present where the biggest users of Nvidia chips in the data center are hyperscalers who have their own systems already in place.

A company like Meta, for example, doesn’t need Nvidia’s networking; it invented its own. What it does need are a lot of massively parallelizable chips to train its machine learning algorithms on, which means it has to pay Nvidia and its high margins. Small wonder that Meta, like Google before it, is building its own chip.

This is the course that all of the biggest companies will likely follow: they don’t need an Nvidia system, they need a chip that works in their system for their needs. That is why Nvidia is so invested in the democratization of AI and accelerated computing: the long term key to scale will be in building systems for everyone but the largest players. The trick to making it through the valley will be in seeing that ecosystem develop before Nvidia’s current big customers stop buying Nvidia’s expensive chips. Huang once saw that 3D accelerators would be commoditized and took a leap with shaders; one gets the sense he has the same fear with chips and is thus leaping into systems.

Metaverse in the Valley: Omniverse Nucleus

In the interview last spring I asked Huang if Nvidia would ever build a cloud service:

If we ever do services, we will run it all over the world on the GPUs that are in everybody’s clouds, in addition to building something ourselves, if we have to. One of the rules of our company is to not squander the resources of our company to do something that already exists. If something already exists, for example, an x86 CPU, we’ll just use it. If something already exists, we’ll partner with them, because let’s not squander our rare resources on that. And so if something already exists in the cloud, we just absolutely use that or let them do it, which is even better. However, if there’s something that makes sense for us to do and it doesn’t make sense for them to do, we even approach them to do it; if other people don’t want to do it then we might decide to do it. We try to be very selective about the things that we do, we’re quite determined not to do things that other people do.

It turns out there was something no one else wanted to do, and that was create a universal database for 3D objects for use in what Nvidia is calling the Omniverse. These objects could be highly detailed millimeter-precise objects for use in manufacturing or supply chains, or they could be fantastical objects and buildings generated for virtual worlds; in Huang’s vision they would be available to anyone building on Omniverse Nucleus.

Here the Celestial City is a world of 3D experiences used across industry and entertainment — an Omniverse of metaverses, if you will, all connected to Nvidia’s cloud — and it’s ambitious enough to make Mark Zuckerberg blush! This valley, by the same token, seems even longer and darker: not only do all of these assets and 3D experiences need to be created, but entire markets need to be convinced of their utility and necessity. Building a cloud for a world that doesn’t yet exist is to reach for heights still out of sight.

There certainly is no questioning Huang and Nvidia’s ambition, although some may quibble with the wisdom of navigating three valleys all at once; it’s perhaps appropriate that the stock is in a valley itself, above and beyond that perfect storm in gaming.

What is worth considering, though, is that the number one reason why Nvidia customers — both in the consumer market and the enterprise one — get frustrated with the company is price: Nvidia GPUs are expensive, and the company’s margins — other than the last couple of quarters — are very high. Pricing power in Nvidia’s case, though, is directly downstream from Nvidia’s own innovations, both in terms of sheer performance in established workloads, and also in its investment in the CUDA ecosystem creating the tools for entirely new ones.

In other words, Nvidia has earned the right to be hated by taking the exact sort of risks in the past it is embarking on now. Suppose, for example, the expectation for all games in the future is not just ray tracing but full-on simulation of all particles: Nvidia’s investment in hardware will mean it dominates the era just as it did the rasterized one. Similarly, if AI applications become democratized and accessible to all enterprises, not just the hyperscalers, then it is Nvidia who will be positioned to pick up the entirety of the long tail. And, if we get to a world of metaverses, then Nvidia’s head start on not just infrastructure but on the essential library of objects necessary to make that world real (objects that will be lit by ray-tracing in AI-generated spaces, of course), will make it the most essential infrastructure in the space.

These bets may not all pay off; I do, though, appreciate the audacity of the vision, and won’t begrudge the future margins that may result in the Celestial City if Nvidia makes it through the valley.

  1. TSMC’s 3rd generation 5nm process 

  2. TSMC’s 1st generation 5nm process 

  3. TSMC’s 3rd generation 7nm process 

Sharp Tech and Stratechery Plus

I am excited to announce both a new podcast and a substantial expansion in the value of a Stratechery subscription. We’ll start with the podcast:

Sharp Tech with Ben Thompson

Sharp Tech with Ben Thompson is a new podcast from Andrew Sharp and myself about how technology works, and the ways it is impacting the world. We will publish one free episode weekly, and there are already six episodes in the catalog:

In addition, there will be a weekly subscriber-only episode that will be built on listener questions and feedback; the first paid episode dropped yesterday. You can get Sharp Tech for Apple Podcasts, Overcast, or the podcast player of your choice by logging in at the Sharp Tech website, or search for it in Spotify.

Here is the good news: Sharp Tech Premium is included with a Stratechery subscription.

That leads to today’s second announcement: the Stratechery Daily Update subscription is transforming into Stratechery Plus:

Stratechery Plus

Stratechery Plus is the same $12/month or $120/year price as the Stratechery Update, but it is now expanded to include not just the Stratechery Update and Stratechery Interviews but also Dithering and Sharp Tech.

Stratechery Plus includes the Stratechery Update, Stratechery Interviews, Sharp Tech, and Dithering

The Stratechery Update consists of substantial analysis of the news of the day delivered via three weekly emails or podcasts (including free bi-weekly Stratechery Articles). If you enjoy Stratechery Articles you will love the Stratechery Update.

Stratechery Interviews include interviews with leading public CEOs like Mark Zuckerberg, Jensen Huang, and Satya Nadella; the Founder Series with private company founders like Parker Conrad, Laura Behrens Wu, and Shishir Mehrotra; and discussions with fellow analysts like Eric Seufert, Matthew Ball, and Bill Bishop.

Dithering is a twice-weekly podcast from Daring Fireball’s John Gruber and myself: 15 minutes an episode, not a minute less, not a minute more. Dithering, which costs $5/month on its own, was previously available as a $3 add-on for Stratechery subscribers; now it is included with Stratechery Plus. You can get Dithering for Apple Podcasts, Overcast, or the podcast player of your choice by logging in at the Dithering website.

This is, I hope, only the beginning for Stratechery Plus. Right now the content is obviously very Ben-centric, but my hope is to expand the offering over time. For now, I am delighted to be doing my part to make Stratechery more valuable than ever.