Gaming the Smiling Curve

Another week, another gaming acquisition. First Take-Two acquired Zynga, then Microsoft acquired Activision-Blizzard, and now Sony just announced the acquisition of Bungie.1 Each of these acquisitions is interesting in its own right, but taken as a set they paint a picture of industry evolution that extends far beyond gaming.

Take-Two and Mobile Consolidation

The straightforward explanation for Take-Two’s acquisition of Zynga is the fact that mobile captures more than 50% of gaming industry revenue and is growing much faster (7% last year) than PC and console gaming (the industry as a whole grew just 1.4%); that is a problem for Take-Two given that nearly all of the company’s revenue comes from PC and console series like Grand Theft Auto, NBA 2K, Red Dead, Borderlands, and more.

Zynga, meanwhile, was among the least prepared of the major mobile gaming companies for the changes wrought by Apple’s App Tracking Transparency (ATT) policy, which was introduced with iOS 14 and rolled out over the first half of 2021. In the pre-ATT world everyone from e-commerce sellers to app developers could effectively offload the collection and analysis of conversion data, and the subsequent targeting of advertising, to Facebook, to the benefit of everyone involved: individual developers and retailers did not need to bear the risk or expense of collecting and analyzing data, and could instead collectively outsource that job to the Facebook data factory, which made Facebook advertising that much more effective, boosting not only Facebook’s bottom line but also that of everyone who relied on its advertising platform.

ATT, meanwhile, didn’t ban data collection or analysis or targeting or any of the other aspects of advertising that many of its supporters object to; what it targeted was doing so collectively. That means that the policy has been a huge boon for fully integrated advertisers (i.e. advertisers that collect data, target, and show advertisements) like Google and Amazon. In this world the natural response has been consolidation; compare Zynga’s stock price over the last year to a company like AppLovin which was ahead of the curve in buying up multiple parts of the ad stack and combining them with its own titles to maximize the value of first party data:

Zynga versus AppLovin stock performance over the last year

AppLovin is down a bit with the general market drawdown, but it’s not an accident that its stock rose last fall while Zynga’s plummeted (Zynga’s stock is up from its depths only because of the not-yet-closed acquisition): one of the keys to Zynga’s turnaround was buying small independent studios and letting them stay independent; that is no longer a viable approach in a post-ATT world, and Take-Two will have to centralize Zynga, and only then leverage Zynga’s expertise to bring its valuable IP to mobile platforms in a much more complete way than it has previously.

Microsoft and Xbox Game Pass

Take-Two wasn’t the only acquirer taking advantage of a target with a plummeting stock price; Microsoft did the same with Activision Blizzard:

Activision's stock price over the last year

Activision Blizzard does own King Digital, which still provides over $2 billion in revenue from Candy Crush and its various spin-offs, but in this case the stock price decline was primarily due to major issues regarding Activision Blizzard’s internal culture, including a lawsuit by the state of California. Microsoft, though, was well positioned to take advantage of Activision Blizzard’s troubles thanks to Xbox Game Pass.

Whenever a major platform acquires a developer on that platform, the first question users ask is if the platform owner will make the developer’s content exclusive. It’s an obvious question — why else would the platform owner buy it, given that they can still collect revenue from the developer, both directly via platform fees and indirectly via licensing fees? — but the answer is not always straightforward, thanks to the nature of software.

Software, including games, entails a massive investment in upfront development costs. You have to build (or license and adapt) a game engine, write and build out the story, draw and develop the assets, etc. All of this work is expensive, but it only needs to be done once; it follows, then, that it is in the software developer’s economic interest to make the game available as widely as possible: every additional copy of a game has zero marginal cost, which means that every additional copy sold provides nothing but leverage on those fixed costs and, once they are covered, pure profit (that noted, there are significant costs associated with supporting multiple platforms — I have seen estimates of around 25~40% in extra costs, depending on the game — so going exclusive is not an entirely deadweight cost).
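To make that fixed-cost leverage concrete, here is a toy calculation; every number is hypothetical, with the 30% porting overhead taken from the middle of the 25~40% range above:

```typescript
// Illustrative only: all figures are hypothetical, not actual studio economics.
// Development costs are fixed and sunk; each additional copy sold is pure
// margin, so additional platforms mostly provide leverage on those fixed costs.

const fixedDevCost = 100_000_000;  // hypothetical single-platform budget ($)
const portingOverhead = 0.3;       // midpoint of the ~25-40% estimate above
const revenuePerCopy = 42;         // hypothetical net revenue per copy ($)

function profit(copiesSold: number, platforms: 1 | 2): number {
  const cost = fixedDevCost * (platforms === 2 ? 1 + portingOverhead : 1);
  return copiesSold * revenuePerCopy - cost;
}

console.log(profit(5_000_000, 1)); // one platform, 5M copies:  $110M
console.log(profit(8_000_000, 2)); // two platforms, 8M copies: $206M
```

On these numbers the second platform costs 30% more up front but nearly doubles profit; an acquirer that forces exclusivity is forgoing exactly that upside, which is the point of the next paragraph.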

This makes the math around developer acquisitions a bit tricky for platform acquirers: to buy a gaming studio in order to make its games exclusive to the platform entails destroying a significant part of the game studio’s economic value; after all, you are acquiring a property like Call of Duty based on the revenue it grosses from PC, Xbox, and PlayStation — cutting off the latter means you overpaid. This is why it is not a surprise that Microsoft has already committed to keeping Activision Blizzard’s and ZeniMax’s most popular cross-platform games on PlayStation.

What makes Microsoft’s acquisition spree particularly compelling, though, is that Microsoft is trying to create a new business model for gaming: for $15/month you can play all of the games Microsoft owns, and those of any third-party developer who wishes to join up, on any platform that supports them. This includes not only the Xbox console and Windows PCs,2 but also the nascent Xbox streaming service, which makes console-level games available on mobile and PC and, soon enough, smart TVs or a potential Xbox streaming stick. This is a business model that not only makes sense given the evolution of technology towards cloud-centric services, but is also in line with Microsoft’s core competency and fundamental nature.

More important, at least in the context of this Article, is the freedom of movement this gives Microsoft when it comes to acquisitions: all of ZeniMax’s titles, and many of Activision Blizzard’s titles in the future, are still available on PlayStation and Steam as individual purchases; if that is how you want to pay then Microsoft will accept your money, and avoid the loss entailed in cutting you off. All of those games, though, are also available on Xbox Game Pass, because Microsoft is betting that many gamers will realize it’s a pretty good deal, even as it aligns with the company’s corporate goal of creating lifelong customers paying via subscription.

Sony and Exclusives

Major deals like Sony’s acquisition of Bungie, the onetime maker of Halo (while owned by Microsoft) and current developer of Destiny (for which it negotiated its freedom), obviously don’t come together in two weeks. It’s tempting, though, to view the deal as defensive: if Microsoft were to withdraw a property like the afore-mentioned Call of Duty, well, Sony can always take away Destiny.

The truth, though, is that this is simply the most visible in a long line of acquisitions of studios that make console content worth switching for, going back two decades to Sony’s 2001 acquisition of Naughty Dog. Naughty Dog made Crash Bandicoot, Uncharted, and The Last of Us; Incognito made the Twisted Metal series; Guerrilla Games made the Killzone and Horizon series; Sucker Punch made Sly Cooper, Infamous, and Ghost of Tsushima; Insomniac made Ratchet and Clank and Spider-Man; Bluepoint Games made Shadow of the Colossus and Demon’s Souls; all were PlayStation exclusives, widely credited with Sony’s dominance over the last two console generations.

Sony, rather than chasing Microsoft, appears set on a more Nintendo-like trajectory: customers will buy its consoles because they have exclusive games you can’t get anywhere else; unlike Nintendo’s, though, Sony’s consoles are on the cutting edge technologically, which means that the remaining 3rd-party publishers like EA will continue to support them. Sure, Microsoft has a good subscription, but Sony has games you can’t get anywhere else (and, rumors suggest, a new subscription service of its own, although it may not have the best titles available immediately, which makes sense given Sony’s strategy).

That is why I expect Bungie to continue to support Destiny across multiple platforms (PlayStation, Xbox, and PC), while new content beyond the upcoming Witch Queen expansion is probably going to come out on PlayStation first; whatever title lies beyond that, meanwhile, has a good chance of being PlayStation only (if you haven’t spent the money on supporting multiple platforms, it is an easier choice to be an exclusive).

Update: Bungie made clear in an FAQ that future Destiny content would be available on all platforms on day one, which makes sense given that Destiny supports cross-play. Moreover, the same FAQ states that any IP beyond Destiny would also be cross-platform; this doesn’t detract from the general point about Sony and exclusives, but it certainly suggests that the point doesn’t apply in this case. This was an error on my part and I apologize.

Gaming and the Smiling Curve

The Smiling Curve, as I first explained in this 2014 Article, was a concept created by Acer founder Stan Shih to explain where the profits were in technological manufacturing; from Wikipedia:

A smiling curve is an illustration of value-adding potentials of different components of the value chain in an IT-related manufacturing industry…According to Shih’s observation, in the personal computer industry, both ends of the value chain command higher values added to the product than the middle part of the value chain. If this phenomenon is presented in a graph with a Y-axis for value-added and an X-axis for value chain (stage of production), the resulting curve appears like a “smile”.

I argued at the time that this framework was applicable to the publishing industry:

When people follow a link on Facebook (or Google or Twitter or even in an email), the page view that results is not generated because the viewer has any particular affinity for the publication that is hosting the link, and it is uncertain at best whether or not their affinity will increase once they’ve read the article. If anything, the reader is likely to ascribe any positive feelings to the author, perhaps taking a peek at their archives or Twitter feed.

Over time, as this cycle repeats itself and as people grow increasingly accustomed to getting most of their “news” from Facebook (or Google or Twitter), value moves to the ends, just like it did in the IT manufacturing industry or smartphone industry:

The Publishing Smiling Curve

I think this framework is also the best way to think about all of these acquisitions. To generalize the concept, on the top right of the curve are companies that have a direct connection with customers, including the Aggregators; on the top left are highly differentiated content makers:

The Smiling Curve in gaming

When it comes to mobile gaming, the dominant Aggregators on the top right side of the curve are Apple and Google and their respective App Stores; the most cynical interpretation of ATT is that Facebook was superseding both to become the most important way in which people discovered apps, and Apple, thanks to its OS-level control, cut it off at the knees, pushing Facebook down to the middle. The response from content makers, then, has been to consolidate and increase the leverage that comes from differentiated content.

Xbox Game Pass, meanwhile, is an attempt to build a position as an Aggregator; the initiative will be successful to the extent that gamers play games because they are in Game Pass, and increasingly shun games that have to be purchased individually (incentivizing holdouts to join Microsoft’s subscription). Microsoft is kick-starting this effort by buying its own differentiated content and overlaying its long-term incentives on top of any individual game studio’s incentive to maximize short-term revenue by selling a game individually.

Sony is pursuing a similar strategy, but with a different business model: whereas Microsoft is increasingly device-agnostic (of course it helps that it sells both Xbox consoles and Windows), Sony is doubling down on the integration of hardware and software. Its best content is designed not only to make money in its own right but also to persuade customers to buy PlayStation consoles; the more PlayStation consoles there are, the more attractive the platform is to 3rd-party developers.

What is much less viable is anything in the middle. The original PlayStation was almost completely dependent on 3rd-party games, relying on technical superiority to build its user base and attract developers; that approach reached its limit with the relative disappointment that was the PlayStation 3, and Sony has been focused on exclusives ever since. Content developers like Zynga, meanwhile, can’t depend on companies in the middle either: Apple’s rules have ensured that anyone who is not an Aggregator has to figure out how to make money on their own. There is still a market for 3rd-party developers on consoles and PCs — Steam is a major Aggregator on the latter, challenged not just by Microsoft but also by Epic, all of which are competing for the best developers — but it is increasingly important that content be highly differentiated and costs tightly contained.

The Upside of Acquisitions

There are a lot of seemingly scary implications in this analysis, including concepts like exclusives, lock-in, and the sense that the big are getting bigger. I think there is a strong argument, though, that the overall impact on consumers is on balance a positive one. Gaming, to a much greater extent than many other industries, is a zero-sum game: time spent playing one title is time not spent playing another. Moreover, the total cost of ownership for any particular gaming platform, relative to the time spent playing games, is very favorable. With regard to the latter point, the price of having access to everything is not an overwhelming one; with regard to the former, the incentives to make a game that is truly exceptional, and thus truly differentiated, are higher than ever.

Phil Spencer, CEO of Microsoft Gaming, made an argument along these lines when I challenged him in a Stratechery interview about a potential loss of competition entailed in the company’s Activision Blizzard acquisition:

Phil Spencer: Ah. I mean, that’s maybe where we’ll differ in opinion. Some of this just comes down to the teams that become part of our team and our cultural journey with them that starts long ago. This will sound a little bit like a kind of gaming person, but I’ll say the thing that I have found that drives the teams internally to our organization is they want to do things they’ve never been able to do before. They want to reach more players of their creations than they’ve ever been able to create before. And I might argue the opposite, that the churn of “I need another holiday release next year” and then the year after and then the year after can be more stifle on creativity than the freedom that we’re able to give that says, “It’s not about one business model that works for us, it’s actually about multiple business models. It’s not about one screen that people will consume your game on. You pick the screen that’s right for you. And the input, if you want keyboard or mouse, you want touch, you want controller, you pick. You pick the subject matter that you want to work on.”

And frankly, when I look at the portfolio of games that we’ve been shipped over the last two or three years and some of the subject matter that our creators have decided to tackle, not always in the thinnest definition of what’s marketable at that time, I think those innovations and that kind of risk taking comes from having amazing teams that are thinking about what’s possible or even what’s not possible and how our tools and distribution can help them in creating things that they’ve never been able to create before. That’s what I feel.

Today, what I did after we announced this morning is I got to sit down with our studio leaders. The amount of energy they had for learning from other teams, because creators can get isolated because they’re so focused on the thing they’re doing right now. Now we get to sit there at the broadest level and have discussions about what people are thinking about, whether they’re challenging each other with sharing the learning that they have, what they aspire to go do. The energy in the room was awesome, it was a virtual room in a Teams call, but it was still awesome. I would say that that freedom to innovate, to try new things, because it’s not just down to one business model or one screen or even one device that somebody might buy is the thing that I found is most liberating for the teams here.

Of course that’s the answer I would expect.

PS: (laughing)

I think that one of the early questions about this deal is basically — the big company doesn’t think competition is particularly useful or valuable. “We want to give people freedom to explore.”

PS: Well, let me hit on that one. Sorry, I didn’t mean to interrupt, but let me hit on competition really quick, because I see competition as a little bit different. There’s a ton of competition in the games business. If I rewind 30 years ago, the video game business was dictated by who had shelf space at Egghead because there was such a constriction on distribution and funding and marketing of games that the portfolio of games I could choose from was so limited. I love the fact that when I look at the top 10 games that are being played, how many come from traditional places versus how many now are coming from creators that didn’t even exist 10 years ago. I love that there’s that creative turnover or just the diversity in where great games come from.

And then when I think about the platform side, the largest platforms for playing games are mobile devices, and distribution on those devices is controlled by two companies. So for us, it’s how do we go invest in content and community so that we can actually have our distribution through our own content engagement that we have because the competition is out there and it’s so strong? I think the competition you talk about between individual teams and the competition to make the next paycheck, I understand maybe that’s motivating to certain teams, I’ve just found with our teams that they do much better work when our motivation is more about how many customers can we reach.

Forgive the extended excerpt, but I think this is essential, particularly the second part: so much of our thinking about competition is rooted in the analog world, a world of scarcity where there really was limited shelf space or limited telephone lines or limited railroad access; that just isn’t the case on the Internet, where anyone has access to everyone. This has dramatically increased the power of creators, who can not only go direct, but also play Aggregators off against each other — that is the realm of competition that matters. If we must accept a world where platforms like the App Store have total power within their domains, then the answer is to build up alternative Aggregators that have compelling content of their own, waging a proper fight for the only scarce resource there is on the Internet: time.


  1. I will save the Wordle acquisition for another day! 

  2. Each of which has a standalone $10/month plan 

The Intel Split

Intel’s earnings are not until next Wednesday, but whatever it is that CEO Pat Gelsinger plans to discuss, it seems to me the real news about Intel came from the earnings of another company: TSMC.

From the Wall Street Journal:

Taiwan Semiconductor Manufacturing Co., the world’s largest contract chip maker, said it would increase its investment to boost production capacity by up to 47% this year from a year earlier as demand continues to surge amid a global chip crunch. TSMC said Thursday that it has set this year’s capital expenditure budget at $40 billion to $44 billion, a record high, compared with last year’s $30 billion.

Tim Culpan at Bloomberg described the massive capex figure as a “warning” to fellow chipmakers Intel and Samsung:

From a technology perspective, Samsung is the nearest rival. Yet a comparison is skewed by the fact that the South Korean company also makes display screens and puts most of its semiconductor spending toward commodity memory chips that TSMC doesn’t even bother to make.

Then there’s Intel, the U.S. would-be challenger that’s decided to join the foundry fray. In addition to manufacturing chips under its own brand, Intel Chief Executive Officer Pat Gelsinger last year decided he wants to take on TSMC and Samsung — and a handful of others — by offering to make them for external clients.

But Intel trails both of them in technology prowess, forcing the California company into the ironic position of relying on TSMC to produce its best chips. Gelsinger is confident that he can catch up. Maybe he will, but there’s no way the firm will be able to expand capacity and economies of scale to the point of being financially competitive.

It’s worse than that, actually: by becoming TSMC’s customer Intel is not only denying its own factories the scale that comes from manufacturing Intel’s designs, but also giving that scale to TSMC, improving its competitor’s economics in the process.

Gelsinger’s Design Tools

One of my favorite quotes from Michael Malone’s The Intel Trinity is about how “Moore’s Law” — the observation by Intel co-founder and second CEO Gordon Moore that transistor counts for integrated circuits doubled every two years — was not a law, but a choice:

[Moore’s Law] is a social compact, an agreement between the semiconductor industry and the rest of the world that the former will continue to strive to maintain the trajectory of the law as long as possible, and the latter will pay for the fruits of this breakneck pace. Moore’s Law has worked not because it is intrinsic to semiconductor technology. On the contrary, if tomorrow morning the world’s great chip companies were to agree to stop advancing the technology, Moore’s Law would be repealed by tomorrow evening, leaving the next few decades with the task of mopping up all of its implications.

Moore made that observation in 1965, and for the next 50 years that choice fell to Intel to make. One of the chief decision-makers was a young man in his 20s named Patrick Gelsinger. Gelsinger joined Intel straight out of high school, and worked on the team developing the 286 processor while studying electrical engineering at Stanford; he was the 4th lead for the 386 while completing his Master’s. After he graduated, Gelsinger became the lead of the 486 project; he was only 25.

The Intel 486 die

Intel was, at this time, a fully integrated device manufacturer (IDM); while that term today refers to a company that designs and fabricates its own chips (in contrast to a company like Nvidia, which designs its own chips but doesn’t manufacture them, or TSMC, which manufactures chips but doesn’t design them), the level of integration has decreased over time as other companies have come to specialize in different parts of the manufacturing process. Back in the 1980s, though, Intel still had to figure out a lot of things for the first time, including how to actually design ever more microscopic chips. Gelsinger, along with three co-authors, described the problem in a 2012 paper entitled Coping with the Complexity of Microprocessor Design at Intel — a CAD History:

In his original 1965 paper, Gordon Moore expressed a concern that the growth rate he predicted may not be sustainable, because the requirement to define and design products at such a rapidly-growing complexity may not keep up with his predicted growth rate. However, the highly competitive business environment drove to fully exploit technology scaling. The number of available transistors doubled with every generation of process technology, which occurred roughly every two years. As shown in Table I, major architecture changes in microprocessors were occurring with a 4X increase of transistor count, approximately every second process generation. Intel’s microprocessor design teams had to come up with ways to keep pace with the size and scope of every new project.

Processor     Intro Date   Process   Transistors   Frequency
4004          1971         10 um     2,300         108 KHz
8080          1974         6 um      6,000         2 MHz
8086          1978         3 um      29,000        10 MHz
80286         1982         1.5 um    134,000       12 MHz
80386         1985         1.5 um    275,000       16 MHz
Intel486 DX   1989         1 um      1.2 M         33 MHz
Pentium       1993         0.8 um    3.1 M         60 MHz

This incredible growth rate could not be achieved by hiring an exponentially-growing number of design engineers. It was fulfilled by adopting new design methodologies and by introducing innovative design automation software at every processor generation. These methodologies and tools always applied principles of raising design abstraction, becoming increasingly precise in terms of circuit and parasitic modeling while simultaneously using ever-increasing levels of hierarchy, regularity, and automatic synthesis. As a rule, whenever a task became too painful to perform using the old methods, a new method and associated tool were conceived for solving the problem. This way, tools and design practices were evolving, always addressing the most labor-intensive task at hand. Naturally, the evolution of tools occurred bottom-up, from layout tools to circuit, logic, and architecture. Typically, at each abstraction level the verification problem was most painful, hence it was addressed first. The synthesis problem at that level was addressed much later.
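The cadence in Table I is easy to sanity-check; the following sketch (my own illustration, not from the paper) compares the actual transistor counts to a strict two-year doubling from the 4004:

```typescript
// Compare Table I's transistor counts to a strict doubling every two years:
// predicted(year) = 2,300 * 2^((year - 1971) / 2), starting from the 4004.

const chips: [name: string, year: number, transistors: number][] = [
  ["4004", 1971, 2_300],
  ["8080", 1974, 6_000],
  ["8086", 1978, 29_000],
  ["80286", 1982, 134_000],
  ["80386", 1985, 275_000],
  ["Intel486 DX", 1989, 1_200_000],
  ["Pentium", 1993, 3_100_000],
];

for (const [name, year, actual] of chips) {
  const predicted = Math.round(2_300 * 2 ** ((year - 1971) / 2));
  console.log(`${name}: predicted ~${predicted.toLocaleString()}, actual ${actual.toLocaleString()}`);
}
```

The fit is loose chip-by-chip, but over the 22 years from the 4004 to the Pentium actual counts grew ~1,350x against the model’s 2,048x: the same order of magnitude, which underlines Malone’s point that the cadence was a target the industry chose to hit.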

This feedback loop between design and implementation is exactly what is necessary at the cutting edge of innovation. Clayton Christensen explained in The Innovator’s Solution:

When there is a performance gap — when product functionality and reliability are not yet good enough to address the needs of customers in a given tier of the market — companies must compete by making the best possible products. In the race to do this, firms that build their products around proprietary, interdependent architectures enjoy an important competitive advantage against competitors whose product architectures are modular, because the standardization inherent in modularity takes too many degrees of design freedom away from engineers, and they cannot optimize performance.

This is what Gelsinger did with the 486; from the afore-linked paper:

While the 386 design heavily leveraged the logic design of the 286, the 486 was a more radical departure with the move to a fully pipelined design, the integration of a large floating point unit, and the introduction of the first on-chip cache – a whopping 8K byte cache which was a write through cache used for both code and data. Given that substantially less of the design was leveraged from prior designs and with the 4X increase in transistor counts, there was enormous pressure for yet another leap in design productivity. While we could have pursued simple increases in manpower, there were questions of the ability to afford them, find them, train them and then effectively manage a team that would have needed to be much greater than 100 people that eventually made up the 486 design team…For executing this visionary design flow, we needed to put together a CAD system which did not exist yet.

To make a new chip, Intel needed to make new tools, as part of an overall integrated effort that ran from design to manufacturing.

Intel’s Ossification

Fast forward three decades and Intel is no longer on the cutting edge; instead the leading chip manufacturer in the world is TSMC, a company built on the idea that it does not do design; Morris Chang told the Computer History Museum:

When I was at TI and General Instrument, I saw a lot of IC [Integrated Circuit] designers wanting to leave and set up their own business, but the only thing, or the biggest thing that stopped them from leaving those companies was that they couldn’t raise enough money to form their own company. Because at that time, it was thought that every company needed manufacturing, needed wafer manufacturing, and that was the most capital intensive part of a semiconductor company, of an IC company. And I saw all those people wanting to leave, but being stopped by the lack of ability to raise a lot of money to build a wafer fab. So I thought that maybe TSMC, a pure-play foundry, could remedy that. And as a result of us being able to remedy that then those designers would successfully form their own companies, and they will become our customers, and they will constitute a stable and growing market for us.

This was skating to Christensen’s puck; again from The Innovator’s Solution:

Overshooting does not mean that customers will no longer pay for improvements. It just means that the type of improvement for which they will pay a premium price will change. Once their requirements for functionality and reliability have been met, customers begin to redefine what is not good enough. What becomes not good enough is that customers can’t get exactly what they want exactly when they need it, as conveniently as possible. Customers become willing to pay premium prices for improved performance along this new trajectory of innovation in speed, convenience, and customization. When this happens, we say that the basis of competition in a tier of the market has changed.

TSMC was willing to build any chip that these new fabless companies could come up with; what is notable is that it was Gelsinger and Intel who made this modular approach possible, thanks to the tools they built for the 486. Again from that paper:

The combination of all these tools was stitched together into a system called RLS1 which was the first RTL2 to layout system ever employed in a major microprocessor development program…RLS succeeded because it combined the power of three essential ingredients:

  • CMOS (which enabled the use of a cell library)
  • A Hardware Description Language (providing a convenient input mechanism to capture design intent)
  • Synthesis (which provided the automatic conversion from RTL to gates and layout)

This was the “magic and powerful triumvirate”. Each one of these elements alone could not revolutionize design productivity. A combination of all three was necessary! These three elements were later standardized and integrated by the EDA3 industry. This kind of system became the basis for all of the ASIC4 industry, and the common interface for the fabless semiconductor industry.
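To give a feel for what that third ingredient actually does, here is a deliberately tiny sketch of synthesis, mapping an RTL-level boolean expression onto cells from a library; the three-cell “library” and all names are invented for illustration, and real EDA tools are vastly more sophisticated:

```typescript
// Toy RTL-to-gates synthesis: walk an expression tree and emit one library
// cell per operator. The cell names are invented placeholders.

type Expr =
  | { op: "input"; name: string }
  | { op: "and" | "or"; left: Expr; right: Expr }
  | { op: "not"; arg: Expr };

// The "cell library": each operator maps to a pre-characterized standard cell.
const cellLibrary = { and: "AND2_X1", or: "OR2_X1", not: "INV_X1" };

let netCount = 0;
function synthesize(e: Expr, netlist: string[]): string {
  if (e.op === "input") return e.name;
  if (e.op === "not") {
    const a = synthesize(e.arg, netlist);
    const out = `net${netCount++}`;
    netlist.push(`${cellLibrary.not}(${a}) -> ${out}`);
    return out;
  }
  const a = synthesize(e.left, netlist);
  const b = synthesize(e.right, netlist);
  const out = `net${netCount++}`;
  netlist.push(`${cellLibrary[e.op]}(${a}, ${b}) -> ${out}`);
  return out;
}

// RTL intent: out = (a AND b) OR (NOT c)
const rtl: Expr = {
  op: "or",
  left: { op: "and", left: { op: "input", name: "a" }, right: { op: "input", name: "b" } },
  right: { op: "not", arg: { op: "input", name: "c" } },
};
const netlist: string[] = [];
synthesize(rtl, netlist);
console.log(netlist.join("\n"));
// AND2_X1(a, b) -> net0
// INV_X1(c) -> net1
// OR2_X1(net0, net1) -> net2
```

The designer expresses intent at the RTL level; the tool picks the gates. Standardize the expression language and the cell library interface, and the design and the fab no longer need to live in the same company, which is exactly the seam the fabless industry grew along.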

The problem is that Intel, used to inventing its own tools and processes, gradually fell behind the curve on standardization; yes, the company had partnerships with EDA companies like Synopsys and Cadence, but most of the company’s work was done on its own homegrown tools, tuned to its own fabs. This made it very difficult to be an Intel Custom Foundry customer; worse, Intel itself was wasting time building tools that once were differentiators, but now were commodities.

This bit about tools isn’t new; Gelsinger announced Intel’s support for Synopsys and Cadence at last March’s announcement of Intel Foundry Services (IFS), and of course he did: Intel can’t expect to be a full-service foundry if it can’t use industry-standard design tools and IP libraries.

What I have come to appreciate, though, is exactly why Gelsinger announced IFS as only one part of Intel’s broader “IDM 2.0” strategy:

  • Part 1 was Intel’s internal manufacturing of its own designs; this was basically IDM 1.0.
  • Part 2 was Intel’s plan to use 3rd-party manufacturers like TSMC for its cutting edge products.
  • Part 3 was Intel Foundry Services.

I had been calling for something like IFS since the beginning of Stratechery, and had even gone so far as to advocate a split of the company a month before Gelsinger’s presentation. “IDM 2.0” suggested that Intel wasn’t going to go quite that far — and understandably so, given the history of Part 1 — but the more I think about Part 2, and how it connects to the other pieces, the more I wonder if I was closer to the mark than I realized.

Microsoft and Intel

In 2018, when I traced the remarkable turnaround Satya Nadella had led at Microsoft in an article entitled The End of Windows, I noted that Nadella’s first public event was to announce Office on iPad. This effort had begun years earlier under former CEO Steve Ballmer, but hadn’t shipped because the Windows Touch version wasn’t ready. That, though, was precisely why Nadella’s launch was meaningful: he was signaling to the rest of the company that Windows would no longer be the imperative for Office; that same week Nadella renamed Windows Azure to Microsoft Azure, sending the exact same message.

I thought of this timing when Gelsinger spoke about TSMC making chips for Intel; that too was an initiative launched under a predecessor (Bob Swan). The assumption made by nearly everyone, though, was that Intel’s partnership with TSMC would only be a stopgap while the company got its manufacturing house in order, such that it could compete directly; indeed, that is the assumption underlying the opening of this Article.

This, though, is why TSMC’s announcement about its increased capital expenditure was such a big deal: a major driver of that increase appears to be Intel, for whom TSMC is reportedly building a custom fab. From DigiTimes:

TSMC plans to have its new production site in the Baoshan area in Hsinchu, northern Taiwan make 3nm chips for Intel, according to industry sources. TSMC will have part of the site converted to 3nm process manufacturing, the sources said. The facilities of the site, dubbed P8 and P9, were originally designed for an R&D center for sub-3nm process technologies. The P8 and P9 of TSMC’s Baoshan site will be capable of each processing 20,000 wafers monthly, and will be dedicated to fulfilling Intel’s orders, the sources indicated.

TSMC intends to differentiate its chip production for Intel from that for Apple, and has therefore decided to separate its 3nm process fabrication lines dedicated to fulfilling orders from these two major clients, the sources noted. The moves are also to protect the customers’ respective confidential products, the sources said. Intel’s demand could be huge enough to persuade TSMC to modify the pure-play foundry’s manufacturing blueprints, the sources indicated. The pair’s partnership is also likely to be a long-term one, the sources said.

Intel and Microsoft are bound by history, of course, but obviously their businesses are at the opposite ends of the computing spectrum: Intel deals in atoms and Microsoft in bits. What was common to both, though, was an unshakeable belief in the foundations of their business model: for Microsoft, it was the leverage over not just the desktop, but also productivity and enterprise servers, delivered by Windows; for Intel it was the superiority of their manufacturing. Nadella, before he could change tactics or even strategy, had to shake the Windows hangover that had corrupted the culture; Gelsinger needed to do the same to Intel, which meant taking on his own factories.

Think about the EDA issue I explained above: it must have been a slog to cajole Intel’s engineers into abandoning their homegrown solutions in favor of industry standards when the only beneficiaries were potential foundry customers — that is one of the big reasons why Intel Custom Foundry failed previously. However, if Intel were to manufacture its chips with TSMC, then it would have no choice but to use industry standards. Moreover, just as Windows needed to learn to compete on its own merits, instead of expecting Office or Azure to prop it up, Intel’s factories, denied monopoly access to cutting edge x86 chips, will now have to compete with TSMC to earn not just 3rd-party business, but business from Intel’s own design team.

The Intel Split

When I wrote that Intel should be broken up I focused on incentives:

This is why Intel needs to be split in two. Yes, integrating design and manufacturing was the foundation of Intel’s moat for decades, but that integration has become a strait-jacket for both sides of the business. Intel’s designs are held back by the company’s struggles in manufacturing, while its manufacturing has an incentive problem.

The key thing to understand about chips is that design has much higher margins; Nvidia, for example, has gross margins between 60~65%, while TSMC, which makes Nvidia’s chips, has gross margins closer to 50%. Intel has, as I noted above, traditionally had margins closer to Nvidia, thanks to its integration, which is why Intel’s own chips will always be a priority for its manufacturing arm. That will mean worse service for prospective customers, and less willingness to change its manufacturing approach to both accommodate customers and incorporate best-of-breed suppliers (lowering margins even further). There is also the matter of trust: would companies that compete with Intel be willing to share their designs with their competitor, particularly if that competitor is incentivized to prioritize its own business?

The only way to fix this incentive problem is to spin off Intel’s manufacturing business. Yes, it will take time to build out the customer service components necessary to work with third parties, not to mention the huge library of IP building blocks that make working with a company like TSMC (relatively) easy. But a standalone manufacturing business will have the most powerful incentive possible to make this transformation happen: the need to survive.
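To see the margin math concretely, here is a toy calculation using the rough figures cited above; it is illustrative only, and simplifies by treating the designer’s entire cost of goods as the wafer bill:

```typescript
// Toy margin stack for a $100 chip; all numbers illustrative.

const chipPrice = 100;
const designerGrossMargin = 0.62; // Nvidia-like, per the 60~65% cited above
const foundryGrossMargin = 0.5;   // TSMC-like

const waferBill = chipPrice * (1 - designerGrossMargin);   // $38 paid to foundry
const designerProfit = chipPrice * designerGrossMargin;    // $62
const foundryProfit = waferBill * foundryGrossMargin;      // $19

console.log({ designerProfit, foundryProfit }); // { designerProfit: 62, foundryProfit: 19 }
```

An integrated IDM selling the same $100 chip out of its own fab keeps the combined ~$81 of gross profit, which is precisely why an in-house fab will always be tempted to prioritize in-house designs over lower-margin foundry customers.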

Intel is obviously not splitting up, but this TSMC investment sure makes it seem like Gelsinger recognizes the straitjacket Intel was in, and is doing everything possible to get out of it. To that end, it seems increasingly clear that the goal is to de-integrate Intel: Intel the design company is basically going fabless, giving its business to the best foundry in the world, whether or not that foundry is Intel; Intel the manufacturing company, meanwhile, has to earn its way (with exclusive access to x86 IP blocks as a carrot for hyperscalers building their own chips), including with Intel’s own CPUs.

Gelsinger’s Grovian Moment

It’s not clear that this will work, of course; indeed, it is incredibly risky, given just how expensive fabs are to build, and how critical it is that they operate at full capacity. Moreover, Intel is making TSMC stronger (while TSMC benefits from having another tentpole customer to compete with Apple). Given Intel’s performance over the last decade, though, it might have been more risky to stick with the status quo, in which Intel’s floundering fabs take down Intel’s design as well. In that regard it makes sense; in fact, one is reminded of the famous story of Moore and Gelsinger’s mentor, Andy Grove, deciding to get out of memory, which I wrote about in 2016:

Intel was founded as a memory company, and the company made its name by pioneering metal-oxide semiconductor technology in first SRAM and then in the first commercially available DRAM. It was memory that drove all of Intel’s initial revenue and profits, and the best employees and best manufacturing facilities were devoted to memory in adherence to Intel’s belief that memory was their “technology driver”, the product that made everything else — including their fledgling microprocessors — possible. As Grove wrote in Only the Paranoid Survive, “Our priorities were formed by our identity; after all, memories were us.”

The problem is that by the mid-1980s Japanese competitors were producing more reliable memory at lower costs (allegedly) backed by unlimited funding from the Japanese government, and Intel was struggling to compete…

Grove explained what happened next in Only the Paranoid Survive:

I remember a time in the middle of 1985, after this aimless wandering had been going on for almost a year. I was in my office with Intel’s chairman and CEO, Gordon Moore, and we were discussing our quandary. Our mood was downbeat. I looked out the window at the Ferris Wheel of the Great America amusement park revolving in the distance, then I turned back to Gordon and asked, “If we got kicked out and the board brought in a new CEO, what do you think he would do?” Gordon answered without hesitation, “He would get us out of memories.” I stared at him, numb, then said, “Why don’t you and I walk out the door, come back in and do it ourselves?”

Gelsinger was once thought to be next-in-line to be Intel’s CEO; he literally walked out the door in 2009 and for a decade Intel floundered under business types who couldn’t have dreamed of building the 486 or the tools that made it possible; now he has come back home, and is doing what must be done if Intel is to be both a great design company and a great manufacturing company: split them up.

I wrote a follow-up to this Article in this Daily Update.


  1. RTL to Layout Synthesis 

  2. Register-Transfer Level 

  3. Electronic Design Automation 

  4. Application-specific Integrated Circuit 

OpenSea, Web3, and Aggregation Theory

This was originally sent as a subscriber-only Update.

From Eric Newcomer:

The NFT-marketplace OpenSea is in talks to raise at a $13 billion valuation in a deal led by Coatue, sources tell me. Paradigm will also co-lead the $300 million funding round, according to a spokesperson for the firm. Kathryn Haun’s new crypto fund, which is currently operating under Haun’s initials “KRH,” is also participating in the funding round, sources tell me. Dan Rose at Coatue is spearheading the round and may take a board observer seat.

OpenSea confirmed the news on their blog:

In 2021, we saw the world awaken to the idea that NFTs represent the basic building blocks for brand new peer-to-peer economies. They give users greater freedom and ownership over digital goods, and allow developers to build powerful, interoperable applications that provide real economic value and utility to users. OpenSea’s vision is to become the core destination for these new open digital economies to thrive, building the world’s friendliest and most trusted NFT marketplace with the best selection.

This is, of course, a story about NFTs, at least in part, and by extension, a story about the so-called Web 3 née crypto economy that its fiercest advocates say is the future. But there are two other parts of this story that are very much at home on the Internet as it exists in the present: $13 billion, and “core destination.”

OpenSea’s Value

First, two more OpenSea stories from over the break. From Be In Crypto:

NFT marketplace OpenSea has frozen $2.2 million worth of Bored Ape (BAYC) NFTs after they were reported as being stolen. The NFTs on the marketplace now have a warning saying that it is “reported for suspicious activity.” Buying and selling of such items are suspended…

Meanwhile, there is a bit of a squabble happening over OpenSea over the Phunky Ape Yacht Club (PAYC). The NFT platform banned this NFT series because it was based on the Bored Ape Yacht Club NFTs. PAYC is virtually identical to BAYC, except for the fact it is mirrored.

This excerpt isn’t technically complete: buying and selling of the stolen NFTs — or of the PAYC NFTs — is suspended on OpenSea. The fact of the matter is that (references to) NFTs are, famously, stored on the blockchain (Ethereum in this case), and once those BAYC NFTs were transferred, by consent of their previous owner or not, the transaction cannot be undone without the consent of their new owner; said owner can buy or sell the NFTs to someone else, but not on OpenSea. It’s the same thing with the BAYC rip-offs: they exist on the blockchain, whether or not OpenSea lists them for sale.

This, according to crypto advocates, is evidence of the allure of Web 3: because the blockchain is open and accessible by anyone, the stolen BAYC NFTs and the PAYC rip-offs can be sold on another market, or if one cannot be found, in a private transaction (leave aside, for the sake of argument and the brevity of this update, the question as to whether the fact that these transactions are irreversible is a feature or a bug).
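That openness is concrete: anyone with an RPC endpoint can query an NFT contract directly, no marketplace required. A minimal sketch using ethers.js (v6); the endpoint, contract address, and token ID are all placeholders:

```typescript
// Query ERC-721 ownership straight from the chain; no marketplace involved.
// The RPC URL, contract address, and token ID below are placeholders.
import { ethers } from "ethers";

const provider = new ethers.JsonRpcProvider("https://your-rpc-endpoint.example");
const erc721Abi = ["function ownerOf(uint256 tokenId) view returns (address)"];
const collection = new ethers.Contract(
  "0x0000000000000000000000000000000000000000", // placeholder contract address
  erc721Abi,
  provider
);

async function main() {
  const owner = await collection.ownerOf(1234); // placeholder token ID
  console.log(`Token 1234 is owned by ${owner}, whatever any marketplace lists`);
}
main();
```

A delisting changes none of this; what it changes is where demand aggregates, which is the distinction the rest of this Update turns on.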

Here’s the thing, though: this isn’t a new concept. What is the first answer given to anyone who is banned from Twitter, or demonetized on YouTube — two of the go-to examples Web 3 advocates give about the problem of centralized power on the Internet today? Start your own Twitter, or start your own blog, or set up a Substack. These answers are frustrating because they are true: the web is open.

Indeed, if this frustration sounds familiar, it is because it is the frustration of the regulator insisting that Aggregators are monopolies, that Google is somehow forcing users to not use Bing like some sort of railroad baron extorting farmers simply seeking to move grain to market, or that Facebook has a monopoly on social networking, ignoring the fact that we have far more ways to communicate than ever before.

In fact, what gives Aggregators their power is not their control of supply: they are not the only way to find websites, or to post your opinions online; rather, it is their control of demand. People are used to Google, or it is the default, so sites and advertisers don’t want to spend their time and money on alternatives; people want other people to see what they have to say, so they don’t want to risk writing a blog that no one reads, or spending time on a social network that, because it lacks the network, has nothing social about it.

This is why regulations focused on undoing the control of supply are ineffective: the zero marginal cost nature of computing and the zero distribution cost of the Internet made it viable for the first time — and far more profitable — to control demand, not by forcing people to act against their will, but by making it easy for them to accomplish whatever it is they wished to do, whether that be find a website, buy a good, talk to their friends, or give their opinion. And now, to buy or sell NFTs.

This, then, is the reason that OpenSea received its $13 billion valuation: it is by far the dominant market for NFTs; should the market exist in the long run, the most likely entryway for end users will be OpenSea. That is a very profitable position to be in, even if alternatives are only a click away; after all, that fact has hardly reduced the profitability of a Google or a Facebook.

It is also why OpenSea’s bans have real teeth: as I noted, you can still buy and sell these stolen and rip-off NFTs, just as you can still go to a website that is not listed on Google, communicate with a friend kicked off of Facebook, or state your opinions somewhere other than Twitter. The reduced demand, though, lowers the price, whether that price be traffic, convenience, or attention. Or, in the case of NFTs, ETH: not having access to OpenSea means there is less demand for these NFTs, and less demand means lower prices.

In short, OpenSea has power not because it controls the NFTs in question, but because it controls the vast majority of demand.

Crypto’s Aggregators

One of the reasons that crypto is so interesting, at least in a theoretical sense, is that it seems like a natural antidote to Aggregators; I’ve suggested as much. After all, Aggregators are a product of abundance; scarcity is the opposite. The OpenSea example, though, is a reminder that I had forgotten one of my own arguments about Aggregators: demand matters more than supply.

To that end, which side of the equation is impacted by the blockchain? The answer, quite obviously, is supply. Indeed, one need only be tangentially aware of crypto to realize that the primary goal of so many advocates is to convert non-believers, the better to increase demand. This has the inverse effect of OpenSea’s ban: increased demand increases prices for scarce supply, which is to say, in terms familiar to any Web 3 advocate, that the incentives of Web 3’s most ardent evangelists are very much aligned.

The most valuable assets in crypto remain tokens; Bitcoin and Ethereum lead the way with market caps of $874 billion and $451 billion, respectively. What is striking, though, is that the primary way most users interact with Web 3 is via centralized companies like Coinbase and FTX on the exchange side, Discord for communication and community, and OpenSea for NFTs. It is also not a surprise: centralized companies deliver a better user experience, which encompasses everything from UI to security to at least knocking down the value of your stolen assets on your behalf; a better user experience leads to more users, which increases power over supply, further enhancing the user experience, in the virtuous cycle described by Aggregation Theory.

That Aggregation Theory applies to Web 3 is not some sort of condemnation of the idea; it is, perhaps, a challenge to the insistence that crypto is something fundamentally different than the web. That’s fine — as I wrote before the break, the Internet is already pretty great, and its full value is only just starting to be exploited. And, as I argued in The Great Bifurcation, the most likely outcome is that crypto provides a useful layer on what already exists, as opposed to replacing it.

Moreover, as I explained in a follow-up to The Great Bifurcation, this view of crypto’s role relative to the web places it firmly in an ongoing progression away from technical lock-in and towards network effects:

Technical lock-in has decreased while network lock-in has increased

This is a dramatic simplification, to be clear, but I think it is directionally correct; the long-term trend, all of the hysteria around tech notwithstanding, is towards more openness and less lock-in. At the same time, this doesn’t mean that companies are any less dominant; rather, their means of dominance has shifted from the technical to the sociological.

Crypto, still largely valued on nothing more than the collective belief of its users, is the ultimate example to date of the power of a network, in every sense of the word. That that collective belief is a point of leverage for companies that can aggregate believers is the most natural outcome imaginable, even if it means that the lack of technical lock-in will likely prove to be more of an occasionally invoked escape hatch (and a welcome one at that) as opposed to the defining characteristic for the majority of users.

The 2021 Stratechery Year in Review

There is a perspective in which 2021 was an absolute drag: COVID dragged on, vaccines became politicized, new variants emerged, and while tech continued to provide the infrastructure that kept the economy moving, it also provided the infrastructure for all of those things that made this year feel so difficult.

At the same time, 2021 also provided a glimpse of a future beyond our current smartphone and social media-dominated paradigm: there was the Metaverse, and crypto, and my contention they are related. Sure, the old paradigm is increasingly dominated by regulation and politics, a topic that is so soul-sucking that it temporarily made me want to post less, but the brilliance of the Internet, and of business models like the one that undergirds Stratechery, is that the freedom not only to write what you want, but to build what you want and be what you want, is greater than ever.

A drawing of Internet 3.0 and Open Protocols

This year Stratechery published 40 free Weekly Articles and 121 Daily Updates, including 13 interviews. I also launched Passport, the new back-end for Stratechery and Dithering, the twice-a-week for-pay podcast I host with John Gruber; Passport remains under development, so stay tuned for updates in 2022.

Passport is the new back-end for Stratechery

Today, as per tradition, I summarize the most popular and most important posts of the year on Stratechery.

You can find previous years here: 2020 | 2019 | 2018 | 2017 | 2016 | 2015 | 2014 | 2013

On to 2021:

The Five Most-Viewed Articles

The five most-viewed articles on Stratechery according to page views:

  1. Clubhouse’s Inevitability — I was very bullish on Clubhouse; unfortunately, it looks like I got this one completely wrong. I admitted to my error and explained my thinking in this Update.
  2. The Relentless Jeff Bezos — Jeff Bezos is retiring, and will go down as one of the great CEOs in tech history, in part because of how he transformed Amazon into a tech company in every respect.
  3. Intel Problems — Intel is in much more danger than its profits suggest; the problems are a long time in the making, and the solution is to split up the company. See also, Intel Unleashed, Gelsinger on Intel, IDM 2.0 and Intel vs. TSMC, How Samsung and TSMC Won, MAD Chips.
  4. Internet 3.0 and the Beginning of (Tech) History — The actions taken by Big Tech have a resonance that goes beyond the context of domestic U.S. politics. Even if they were right, they will still push the world to Internet 3.0.
  5. Apple’s Mistake — While it’s possible to understand Apple’s motivations behind its decision to enable on-device scanning, the company had a better way to satisfy its societal obligations while preserving user privacy. See also, Apple Versus Governments, Apple’s Legitimate Privacy Claims, Privacy and Paranoia, Apple’s Point-of-View, NFTs and Status, NFTs and Standard Formats, and Facebook Messenger Updates, WhatsApp vs. Apple, Facebook’s CSAM Approach.

The Next Revolution

Several posts throughout the year wrestled with the theory of the next revolution, from memes to metaverses:

  • Internet 3.0 and the Beginning of (Tech) History — The actions taken by Big Tech have a resonance that goes beyond the context of domestic U.S. politics. Even if they were right, they will still push the world to Internet 3.0.
  • Mistakes and Memes — Information on the Internet is conveyed by memes, which can be anything and everything. The real world impacts are only now being understood.
  • The Death and Birth of Technological Revolutions — Carlota Perez documents technological revolutions, and thinks we’re in the middle of the current one; what, though, if we are nearing its maturation? Is crypto next?
  • Sequoia Productive Capital — Sequoia’s transformation of its venture capital model is actually a shift from financial capital to productive capital.
  • The Great Bifurcation — Tracing the evolution of tech’s three eras, and why the fourth era — the Metaverse — is defined by its bifurcation with the physical world.

The Metaverse

Several posts this year worked to define the Metaverse and understand its role in the future:

A drawing of Unity and Weta's Convergence

Politics & Regulation

A lot of time — perhaps too much — was spent on politics and regulation:

A drawing of Clubhouse's Similarity to Twitter, Instagram, and TikTok

The App Store

The regulatory question that received the most attention this year was the App Store, thanks in large part to Apple’s legal tussle with Epic:

Creator Power

A yearly theme on Stratechery is creator power, and the opportunities unlocked by the Internet:

A drawing of the Creator Carrying Audiences to Different Mediums

Company Analysis

While most Stratechery company analysis happens in the Update, there were a number of relevant Articles as well:

A drawing of Cloudflare's Modular Cloud

Stratechery Interviews

This year’s Stratechery interviews included:

A drawing of Authoritarianism Hill, Dime-Store Ditch, and Freedom Mountain

Plus, an interview of me on the Good Time Clubhouse Show.

The Year in Daily Updates

Fifteen of my favorite Daily Updates:

A drawing of The Square/Afterpay Network Effect


I am so grateful to the Stratechery (and Dithering!) subscribers that make it possible for me to do this as a job. I wish all of you a Merry Christmas and Happy New Year, and I’m looking forward to a great 2022!

The Great Bifurcation

For the first several years of Stratechery I would write a year end article about “The State of Consumer Technology”; the last one I wrote, though, was in 2018, because consumer technology, dominated as it was by Apple and Google on the device side, and Google and Facebook on the services side, seemed rather stale and destined to descend into the world of politics and regulation (I was more optimistic about the enterprise, both in terms of the ongoing shift to the public cloud and the opportunity for SaaS companies).

That has largely proven to be the case, but it’s not the first time this has happened to technology; the pattern has played out twice before, and in each case the seeds of the next era were planted — usually by incumbents — while the previous era stagnated. And, in every case, the transition was marked by a reduction in lock-in and the devolution of increasing amounts of autonomy to the individual user.

Tech 1.0: From Invention to IBM

The transistor, the foundation of modern computing, was invented at Bell Labs in 1947 by the solid state physics group led by William Shockley; nine years later Shockley moved to Mountain View, California to be close to his ailing mother in Palo Alto, where he started Shockley Semiconductor Laboratory. Eight of the researchers he hired left the increasingly erratic Shockley a year later to found Fairchild Semiconductor; two of them, Bob Noyce and Gordon Moore, went on to found Intel in 1968 with the support of Arthur Rock, one of the first venture capitalists.

The West Coast, though, was a sideshow compared to New York, where IBM had switched to transistors for the 7000 Series mainframe (as opposed to the 700 Series’ vacuum tubes); the real breakthrough was the modular and expandable System/360, which was the first computer bought by most companies, including the fictional SC&P from Mad Men.

There certainly was a connection to be drawn between IBM and the moon: IBM helped develop and track NASA’s initial exploratory flights and the eventual lunar mission. Here on earth, though, the Justice Department decided in 1969 that the company was in violation of antitrust laws; the case would be dropped 13 years later, but not before IBM voluntarily unbundled its software and services from its hardware, creating the first market for software.

Tech 2.0: King of the Hill

Notice those dates: by the time the Department of Justice sued IBM in 1969, Intel had already been founded; two years later an Intel engineer named Federico Faggin designed the first microprocessor, the Intel 4004, which shrank many of the functions of IBM’s room-sized computers to a single chip. Ten years after that IBM released the IBM PC, powered by Intel’s 8088 microprocessor.

The open nature of the IBM PC platform — at least once Compaq reverse-engineered IBM’s BIOS — commoditized PCs; the real points of leverage in the PC value chain were Intel for chips and Windows for the operating system. The latter was a two-sided market: because so many businesses bought the DOS-powered IBM PC, developers were motivated to make software for DOS; the more software for DOS, and later Windows (which was backwards compatible), the more that businesses sought out DOS/Windows-based computers. Over time more and more people who first used computers at work wanted similar functionality at home, which meant that DOS/Windows dominated the consumer market as well.

Thus was born another Justice Department lawsuit, this time against Microsoft’s alleged monopoly; that case was eventually settled (although it lived on in various forms in the E.U. for years). Once again, though, the next paradigm that rendered the seeming monopolist’s lock-in immaterial was already in place: the Internet could be accessed from any computer, no matter its operating system. Moreover, in an echo of IBM’s voluntary unbundling of hardware and software, which created the conditions for tech’s next evolution, it was Microsoft that introduced the XMLHttpRequest API to Internet Explorer, which undergirded the Ajax web app architecture and Tech 3.0.

Tech 3.0: Software Eats the World

If Tech 1.0 was about hardware, and 2.0 software, 3.0 was about services. On the enterprise side this meant the development of the public cloud and software-as-a-service applications that required nothing more than a browser and a credit card; Marc Andreessen’s famous 2011 essay, Software Is Eating the World, is really about this transformation from software you installed on your computer to software you accessed over the Internet:

Companies in every industry need to assume that a software revolution is coming. This includes even industries that are software-based today. Great incumbent software companies like Oracle and Microsoft are increasingly threatened with irrelevance by new software offerings like Salesforce.com and Android (especially in a world where Google owns a major handset maker).

In some industries, particularly those with a heavy real-world component such as oil and gas, the software revolution is primarily an opportunity for incumbents. But in many industries, new software ideas will result in the rise of new Silicon Valley-style start-ups that invade existing industries with impunity. Over the next 10 years, the battles between incumbents and software-powered insurgents will be epic. Joseph Schumpeter, the economist who coined the term “creative destruction,” would be proud.

One way to think about this is that tech, for the first 50 years of its existence, mostly competed with itself: who could make the best operating system, the best database, the best ERP system. All of these were then adopted by legacy businesses who saw large efficiency gains. The SaaS revolution, though, saw tech turning its sights on the markets those legacy businesses served, bringing to bear entirely new ways of solving problems that started with the malleability and scalability of software as a core principle, not simply as a tool to do the same thing more efficiently.

The consumer space, meanwhile, has arguably been a decade behind the enterprise; e-commerce, for example, was primarily an Amazon story, and social media was all about Facebook. Both took analog concepts and digitized them: Amazon was the Sears, Roebuck catalog with far more products and far faster delivery; Facebook was literally named after a physical artifact, Harvard House face books.

What made Facebook so popular — and why the product retains its stickiness, even today — is that it is first and foremost the online representation of your offline relationships, whether those be family, classmates, friends, or co-workers. This also explains why the most successful Facebook add-ons, like Marketplace and Groups, are themselves often rooted in the physical world. These relationships represent a network effect — the more people you know who are on Facebook, the more valuable it is to you — and once again regulators have come knocking, in this case the FTC.

Note, though, that these cases are, if anything, becoming weaker over time, at least in terms of traditional antitrust concerns: while IBM’s market power was based on top-to-bottom integration that completely foreclosed competitors, Microsoft’s was about a two-sided network that was wide open for developers and users. Facebook, meanwhile, only has its users and their relationships to each other; that is the only thing preventing anyone from using another service.1

Which, of course, they are.

Social Networking 2.0

Last December I wrote what was, in retrospect, an important precursor to this piece (consider it my 2020 State of Technology article): Social Networking 2.0.

[My Bucks DM group is] not my only online community: while the writing of Stratechery is a solo affair, building new features like the Daily Update Podcast or simply dealing with ongoing administrative affairs requires a team that is scattered around the world; we hang out in Slack. Another group of tech enthusiast friends is in another Slack, and a third, primarily folks from Silicon Valley, is in WhatsApp. Meanwhile, I have friends and family centered in Wisconsin (we use iMessage), and, of course, Taiwan (LINE for family, WhatsApp for friends). The end result is something I am proud of:

A drawing of Ben's Communities

The pride arises from a piece of advice I received when I announced I was moving back to Taiwan seven years ago: a mentor was worried about how I would find the support and friendship everyone needs if I were living halfway around the world; he told me that while it wouldn’t be ideal, perhaps I could piece together friendships in different spaces as a way to make do. In fact, not only have I managed to do exactly that, I firmly believe the outcome is a superior one, and reason for optimism in a tech landscape sorely in need of it.

The argument in that piece is that Facebook and Twitter represented Social Networking 1.0, where you were expected to be your whole self online; that expectation, though, was like a legacy company using computers to run their analog business model: it may have been more efficient, but it wasn’t at all an optimal use of technology. The entire magic of software is that it is malleable and scalable, and those qualities extend to users creating completely different personas and experiences based on the particular online community that they have joined.

Facebook, incidentally, like IBM and Microsoft in previous eras, has contributed to this evolution, particularly its acquisition and continued support of WhatsApp: while groups exist on all sorts of platforms, from Twitter to Facebook Groups to iMessage, WhatsApp seems to have a major share of these ad hoc private groups, particularly internationally but increasingly in the U.S. as well. What is notable about WhatsApp is that the key identifier is not your account, but rather your phone number; any sort of technological lock-in has completely disappeared.

Tech 4.0: The Metaverse

One of the biggest topics of 2021 has been the Metaverse, thanks in large part to Facebook’s pivot to Meta (even if Microsoft was first). It has been tricky, though, to define exactly what the Metaverse is: everyone has a different definition.

I think, though, I have settled on mine, and it starts with this comment from Meta CEO Mark Zuckerberg:

I think that the phrase “the real world” is interesting. I think that there’s a physical world and there’s a digital world, and increasingly those are sort of being overlaid and coming together, but I would argue that increasingly the real world is the combination of the digital world and the physical world and that the real world is not just the physical world. That, I think, is an interesting kind of frame to think about this stuff going forward.

I both agree and disagree with Zuckerberg; on one hand, I think he is absolutely correct that using “the real world” to only apply to the physical world is a mistake. Think back to those communities I described above that provide so much meaning to my life: those are almost completely online, but the sense of belonging is very real to me. Or think of this Article you are reading: it is nothing but endlessly replicable bits on the Internet, yet it is my career.

Where I disagree is with the idea that the physical world and the digital world are increasingly “being overlaid and coming together”; in fact, I think the opposite is happening: the physical world and digital world are increasingly bifurcating. Again, to use myself as an example, my physical reality is defined by my life in Taiwan with my family; the majority of my time and energy, though, is online, defined by interactions with friends, co-workers, and customers scattered all over the world.

For a long time I felt somewhat unique in this regard, but COVID has made my longstanding reality the norm for many more people. Their physical world is defined by their family and hometown, which no longer needs to be near their work, which is entirely online; everything from friends to entertainment has followed the same path.

Thus my definition: the Metaverse is the set of experiences that are completely online, and thus defined by their malleability and scalability, which is to say that the Metaverse is already here. Sure, today’s experience is largely dominated by text and 2D, but video is already a major medium, first in the form of entertainment and now a vital tool for work. This is a trajectory that, in my estimation, inexorably leads to virtual reality: if all that matters is digital, why wouldn’t you want the most immersive experience possible?

Crypto’s Role

This also explains why crypto is interesting. Stephen Diehl, in a scathing article entitled Web3 is Bullshit, writes:

At its core web3 is a vapid marketing campaign that attempts to reframe the public’s negative associations of crypto assets into a false narrative about disruption of legacy tech company hegemony. It is a distraction in the pursuit of selling more coins and continuing the gravy train of evading securities regulation. We see this manifest in the circularity in which the crypto and web3 movement talks about itself. It’s not about solving real consumer problems. The only problem to be solved by web3 is how to post-hoc rationalize its own existence.

The first part isn’t entirely unfair; scams and ponzi schemes are everywhere, and it seems clear that we are in the middle of an ever-inflating bubble. It’s also the case that an entire set of seemingly legitimate use cases is in reality regulatory arbitrage; crypto advocates are far too quick to ascribe all of the issues with the current monetary system to greed and corruption, without acknowledging that complex systems arise for very good reasons. And, along those same lines, Web3 evangelists often sound like overbearing regulators ascribing the dominance of the biggest tech companies to illegal lock-in, without acknowledging that Aggregators win because they provide the user experience consumers want (and which crypto applications currently sorely lack).

The second part, though, is less compelling to me, even if it is a restatement of the most common crypto criticism: “None of this digital stuff has any real world value.” The real world, of course, is the physical world, and I get the critique; I am very skeptical that crypto currencies are going to replace fiat currencies, or be otherwise useful in the physical world, or that DAOs are going to replace LLCs or corporations for real world companies.

Remember, though, my definition of the Metaverse as a set of experiences that are completely online. It is here that physical world constraints don’t make any sense. I can’t, for example, have a conversation with multiple, distinct groups of people at the same time, yet I do exactly that every day — I even have an entire monitor on my desktop devoted to nothing but chat clients! In this world having one account that represents my whole self, like that offered by Facebook, is a pain in the rear end; for now I have multiple accounts for each individual service, but as I noted, some of them are already based on my unique phone number. How much better if every account were based on a digital identifier unique to me and owned by no one, and now you understand the case for crypto wallets.

This gets to the other mistake Diehl makes in that article, which, ironically, echoes a similar mistake made by many crypto absolutists: there is no reason why the Metaverse, or any web application for that matter, will be built on the blockchain. Why would you use the world’s slowest database when a centralized one is far more scalable and performant? It is not as if WhatsApp or Signal are built on top of the plain old telephone service; they simply leverage the fact that phone numbers are unique and thus suitable as identifiers. This is the type of role blockchains will fill: provide uniqueness and portability where necessary, in a way that makes it possible to not just live your life entirely online, but as many lives simultaneously as you might wish, locked in nowhere.
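To make that role concrete, here is a minimal sketch of wallet-style identity, using a plain Ed25519 keypair (via Python’s cryptography library) as a stand-in for a blockchain wallet; the service names and challenge flow are hypothetical, purely illustrative.

```python
# Sketch: a portable identity is just a keypair the user controls; any
# service can verify ownership without maintaining its own account system.
# An Ed25519 keypair stands in for a blockchain wallet here.
import os

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

# The "wallet": a private key only the user holds; the public key is the
# user's portable identifier, analogous to a phone number on WhatsApp.
wallet = Ed25519PrivateKey.generate()
identifier = wallet.public_key()

def login(service_name: str, public_key: Ed25519PublicKey) -> bool:
    """A (hypothetical) service checks identity by asking the user to
    sign a random challenge; no password, no account the service owns."""
    challenge = service_name.encode() + os.urandom(32)
    signature = wallet.sign(challenge)  # happens user-side, in the wallet
    try:
        public_key.verify(signature, challenge)
        return True
    except InvalidSignature:
        return False

# The same identifier works everywhere, and no service owns it.
assert login("chat-community-a", identifier)
assert login("game-world-b", identifier)
```

A real blockchain would add global uniqueness and discoverability to those identifiers; the point of the sketch is that the applications themselves never need to live on-chain, just as WhatsApp doesn’t run on the telephone network.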

The Great Bifurcation

I noted above that the physical and digital worlds are bifurcating, and this is happening to tech as well. Yesterday Elon Musk was named TIME’s 2021 Person of the Year, and while he is known for his tweets about Dogecoin and on-and-off-again support of Bitcoin, his biggest contributions to the world — electric cars and reusable rockets — are very much physical. In fact, you could make the case that Tesla and SpaceX aren’t tech companies at all, but rather another example of tech-first companies set on remaking industries that only ever saw computers as a tool, not the foundation.

The Metaverse, in contrast, is not about eating the world; it’s about creating an entirely new one, from entertainment to community to money to identity. If Elon Musk wants to go to the moon, Mark Zuckerberg wants to create entirely new moons in digital space. This is a place where LLCs make no sense, where regulations are an after-thought, easily circumvented even if they exist. This is a place with no need for traditional money, or traditional art; the native solution is obviously superior. To put it another way, “None of this real world stuff has any digital world value” — the critique goes both ways.

In the end, the most important connection between the Metaverse and the physical world will be you: right now you are in the Metaverse, reading this Article; perhaps you will linger on Twitter or get started with your remote work. And then you’ll stand up from your computer, or take off your headset, eat dinner and tuck in your kids, aware that their bifurcated future will be fundamentally different from your unitary past.

I wrote a follow-up to this Article in this Daily Update.


  1. When it comes to e-commerce, the shift is similar: not only are other large e-commerce providers like Walmart a click away, individual merchants, largely powered by Shopify, are growing even faster than Amazon. 

The Amazon Empire Strikes Back

It seems like a Christmas miracle. From CNBC:

For years, Amazon has been quietly chartering private cargo ships, making its own containers, and leasing planes to better control the complicated shipping journey of an online order. Now, as many retailers panic over supply chain chaos, Amazon’s costly early moves are helping it avoid the long wait times for available dock space and workers at the country’s busiest ports of Long Beach and Los Angeles…

By chartering private cargo vessels to carry its goods, Amazon can control where its goods go, avoiding the most congested ports. Still, Amazon has seen a 14% rise in out-of-stock items and an average price increase of 25% since January 2021, according to e-commerce management platform CommerceIQ…

Amazon has been on a spending spree to control as much of the shipping process as possible. It spent more than $61 billion on shipping in 2020, up from just under $38 billion in 2019. Now, Amazon is shipping 72% of its own packages, up from less than 47% in 2019 according to SJ Consulting Group. It’s even taking control at the first step of the shipping journey by making its own 53-foot cargo containers in China. Containers are in short supply, with long wait times and prices surging from less than $2,000 before the pandemic to $20,000 today.

The intended takeaway of this article — which originated as a digital special report — is clear: Amazon is better and smarter and richer than any other retailer, and isn’t suffering the same sort of supply chain challenges that everyone else is this Christmas.

I’ll get to who the intended target for this message is in a moment, but there is an important question that needs to be asked first: is this even true?

Shipping and the Supply Chain

Start with those containers; the vast majority of shipping containers come in two sizes: 20 foot (i.e. 1 TEU — twenty-foot equivalent unit) and 40 foot (2 TEUs); there are a small number of 45 foot containers, but there is one size that does not exist — 53 foot containers. This isn’t simply a matter of will; what made containers so revolutionary was their standardization, which not only extends to ships but also the gantry cranes used to load and unload them, the truck chassis used to move them around ports, and the storage racks that hold them until they can be unloaded and their contents transferred onto semi tractor-trailers.

Guess, by the way, how long those trailers are? 53 feet. In other words, yes, Amazon has been making a whole bunch of containers, but those containers are not and cannot be used for ocean shipping; they are an investment in Amazon’s domestic delivery capability.

This investment is vast: Amazon is also leasing planes, just opened a new Air Hub in Cincinnati (Amazon was previously leasing DHL’s package hub during off hours), built (or is building) over 400 distribution and sortation centers in the United States alone, and has farmed out a huge delivery fleet to independent operators who exist only to serve Amazon, all with the goal of cost efficiently getting orders to customers within two days, and eventually one.

This vertical integration, though, stops at the ocean, for reasons that are obvious once you think through the economics of shipping. While container ships can range as high as 18,000 TEUs, a typical trans-Pacific ship is about 8,000 TEUs (which is around 6,500 fully loaded containers), and costs around $100 million. Right off the bat you can see that shipping has a massive fixed cost component that dictates that the asset in question be utilized as much as possible. More than that, the marginal cost per trip — it takes around seven weeks to do a trans-Pacific round trip in normal times — is significant as well, around $5 million. This means that the ship has to be as fully loaded as possible.

The only way that this works is if a smaller number of shipping companies are serving a larger number of customers, and doing so on a set schedule such that those customers can easily coordinate their logistics (which means that all of those capex and opex numbers have to be multiplied by 7, to guarantee a sailing-per-week). Amazon couldn’t profitably leverage an investment of this magnitude any more than Apple could profitably run its own foundry; in that case TSMC can justify an investment in a $20 billion fab because its broad customer base gives the company confidence it can fully utilize that investment not just in 2023 but for many years into the future. It’s the same thing with shipping, which is to say that Amazon is in the same boat as everyone else — mostly.
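To put rough numbers on why utilization is everything, here is a back-of-the-envelope sketch using the figures above; the 25-year vessel life is my assumption, and everything is illustrative rather than actual Amazon or carrier economics.

```python
# Back-of-the-envelope trans-Pacific shipping economics, using the
# figures cited above; vessel life and full utilization are assumptions.
SHIP_COST = 100_000_000          # ~8,000 TEU trans-Pacific container ship
CONTAINERS_PER_SAILING = 6_500   # fully loaded, per the estimate above
MARGINAL_COST_PER_TRIP = 5_000_000  # fuel, crew, port fees per round trip
ROUND_TRIP_WEEKS = 7
SHIP_LIFE_YEARS = 25             # assumption: typical service life

trips_per_year = 52 / ROUND_TRIP_WEEKS               # ~7.4 round trips
fixed_cost_per_trip = SHIP_COST / (SHIP_LIFE_YEARS * trips_per_year)
cost_per_container = (
    MARGINAL_COST_PER_TRIP + fixed_cost_per_trip
) / CONTAINERS_PER_SAILING

print(f"Fixed cost per sailing: ${fixed_cost_per_trip:,.0f}")  # ~$540,000
print(f"Cost per container:     ${cost_per_container:,.0f}")   # ~$850
```

If those assumptions are even roughly right, a fully loaded ship breaks even well under the sub-$2,000 pre-pandemic rates quoted above, while a half-empty one barely does; keeping ships full on a fixed schedule is the entire business.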

Amazon Freight-Forwarding

That CNBC story follows on a similar story in Bloomberg last month that opens with an anecdote about one of Amazon’s workarounds:

Most cargo ships putting into the port of Everett, Washington, brim with cement and lumber. So when the Olive Bay docked in early November, it was clear this was no ordinary shipment. Below decks was rolled steel bound for Vancouver, British Columbia, and piled on top were 181 containers emblazoned with the Amazon logo. Some were empty and immediately used to shuffle inventory between the company’s warehouses. The rest, according to customs data, were stuffed with laptop sleeves, fire pits, Radio Flyer wagons, Peppa Pig puppets, artificial Christmas trees and dozens of other items shipped in directly from China — products Amazon.com Inc. needs to keep shoppers happy during a holiday season when many retailers are scrambling to keep their shelves full.

This actually isn’t quite as meaningful as it seems; these multi-purpose ships (which are usually used for commodities) carry a fraction of the cargo of those big container ships anchored off the coast (and the costs are extremely high); airplanes that cross the Pacific, which Amazon is also leasing, carry even less (and cost even more).

Still, even if these stories get some specifics wrong, they do get the big picture right: Amazon is one of the best ways to get your goods to customers. The key thing is that this capacity is not due to last minute ship charters or airplane leases, but rather investments and initiatives Amazon undertook years ago.

In the case of the trans-Pacific trade the key move was Amazon’s 2016 establishment of a freight-forwarding business. A freight-forwarder is basically a middleman in the supply chain which buys space on ships, often via multi-year contracts at guaranteed rates and volumes, and then re-sells that space to people who need it. This means that Amazon could, if it wanted to, make massive profits by reselling space it negotiated last year, but instead it appears that the company is offering space at cost to its third-party sellers; this is the most important part of that Bloomberg article:

This logistical prowess hasn’t been lost on the merchants who sell products on Amazon’s sprawling marketplace. For years, they resisted using the company’s global shipping service because doing so means sharing information about pricing and suppliers, data they fear the company could use to compete with them. But container shortages in the leadup to the holiday season persuaded many of them to overcome their qualms and entrust their cargos to the world’s largest online retailer. “Amazon had space on ships and I couldn’t say no to anyone,” says David Knopfler, whose Brooklyn-based Lights.com sells home décor and lighting fixtures. “If Kim Jong Un had a container, I might take it, too. I can’t be idealistic.” Knopfler says Amazon’s prices were “phenomenal,” $4,000 to ship a container from China compared with the $12,000 demanded by other freight forwarders.

This gets at the real challenge facing smaller merchants: getting delayed outside of Los Angeles is preferable to not even getting on a ship in the first place, and while no one, not even Amazon, can unsnarl the ports, the e-commerce giant is ready to take care of everything else (including working around the edges for high margin goods).

This also gets at the question I raised at the beginning: who is the target for these stories, which have all the hallmarks of being planted by Amazon PR? The Bloomberg piece concludes:

“Amazon will stick to its guns and get things to customers,” says David Glick, a former Amazon logistics executive who is now chief technology officer at Seattle logistics startup Flexe. “It’s going to be expensive but, in the long term, builds customer trust.”

That is certainly a big benefit: to the extent that Amazon has goods in stock this Christmas, and other retailers don’t, the more likely it is that customers will start their shopping at Amazon.com. This is critical because aggregating customer demand is the foundation of Amazon’s moat. I think, though, that is secondary; Amazon’s more important target is those third-party merchants benefiting from the company’s investment, who were previously incentivized to go it alone.

The Empire Strikes Back

I wrote about Amazon’s logistics integration in the context of 3rd-party merchants in 2019:

In 2006 Amazon announced Fulfillment by Amazon, wherein 3rd-party merchants could use those fulfillment centers too. Their products would not only be listed on Amazon.com, they would also be held, packaged, and shipped by Amazon. In short, Amazon.com effectively bifurcated itself into a retail unit and a fulfillment unit…

A drawing of Amazon and Aggregation

Despite the fact that Amazon had effectively split itself in two in order to incorporate 3rd-party merchants, this division is barely noticeable to customers. They still go to Amazon.com, they still use the same shopping cart, they still get the boxes with the smile logo. Basically, Amazon has managed to incorporate 3rd-party merchants while still owning the entire experience from an end-user perspective.

That article was about Shopify and the Power of Platforms, and the point was to explain how Shopify was taking a very different approach: by being in the background the Canadian company was enabling merchants to build their own brands and own their own customers; this was the foundation of the Anti-Amazon Alliance, which by 2020, in light of news that Amazon was leveraging 3rd-party merchant data for its own products, seemed more attractive than ever:

3rd-party merchants, particularly those with differentiated products and brands, should seek to leave Amazon’s platform sooner-rather-than-later. It is hard to be in the Anti-Amazon Alliance if you are asking Amazon to find you your customers, stock your inventory, package your products, and deliver your goods; there are alternatives and — now that Google is all-in — the only limitation is a merchant’s ability to acquire and keep customers in a world where their products are as easy to buy as bad PR pitches are easy to find.

These supply chain challenges, though, are enabling the empire to strike back; again from the Bloomberg article:

Amazon also simplifies the process since it oversees the shipment from China to its U.S. warehouses. Other services have lots of intermediaries where cargo swaps hands, presenting opportunities for miscommunication and delays. “It’s a one-stop-shop from Asia to Amazon,” says Walter Gonzalez, CEO of Miami-based GOJA, which sells various products on Amazon including Magic Fiber cleaner for glasses. “It reduces the gray areas where the shipping process might fail.” Gonzalez says his company, which has been using Amazon’s global logistics service, has about 95% of the inventory it needs to meet holiday demand.

It’s more than that: it’s a one-stop shop from Chinese ports to customers’ front doors, thanks to all of those domestic investments in logistics, and as long as the supply chain is a challenge, the Anti-Amazon Alliance will be at a big disadvantage.

This isn’t the only area where Amazon is increasingly attractive to third-party merchants: Apple’s App Tracking Transparency (ATT) changes favor Amazon in a big way as well. What makes a modular ecosystem possible is platforms that allow disparate parts to work together, such that the whole is greater than the sum of its parts. This very much applies to Facebook’s ad ecosystem, where disparate e-commerce sellers effectively pool all of their conversion data in one place — Facebook — such that they all benefit from better targeting. ATT, though, is targeted at this sort of cooperation, while Amazon is immune because of its integration. Sure, the company collects plenty of data about consumers (often to the consternation of those 3rd party merchants), but because Amazon controls both the ad inventory on its site and apps and also the conversion, it isn’t covered by ATT (or other privacy laws like GDPR).

This doesn’t apply to just Amazon: Google has similar advantages on the web, and Apple with apps. The reality is that the more that information (in the case of advertising) and goods (in the case of the supply chain) are able to move freely, the better chance smaller competitors have against integrated giants. The world, though, both online and off, is moving in the opposite direction.

Twitter Has a New CEO; What About a New Business Model?

From CNBC:

Twitter CEO Jack Dorsey is stepping down as chief of the social media company, effective immediately. Parag Agrawal, Twitter’s chief technology officer, will take over the helm, the company said Monday. Shares of Twitter closed down 2.74% on the day.

Dorsey, 45, was serving as both the CEO of Twitter and Square, his digital payments company. Dorsey will remain a member of the board until his term expires at the 2022 meeting of stockholders, the company said. Salesforce President and COO Bret Taylor will become the chairman of the board, succeeding Patrick Pichette, a former Google executive, who will remain on the board as chair of the audit committee.

“I’ve decided to leave Twitter because I believe the company is ready to move on from its founders,” Dorsey said in a statement, though he didn’t provide any additional detail on why he decided to resign.

On one hand, congratulations to Twitter for its first non-messy CEO transition in its history; on the other hand, this one was a bit weird in its own way: CNBC broke the news at 9:23am Eastern, just in time for the markets to open and the stock to shoot up around 10% as feverish speculation broke out about who the successor was; two hours and 25 minutes later Dorsey confirmed the news and announced Agrawal as his successor, and the sell-off commenced.

The missing context in Dorsey’s announcement was Elliott Management, the activist investor that took a stake in Twitter in early 2020 and demanded that Dorsey either focus on Twitter (instead of Square, where he is still CEO) or step down; Twitter gave Elliott and Silver Lake, who was working with Elliott, two seats on the board a month later. That agreement, though, came with the condition that Twitter grow its user base, speed up revenue growth, and gain digital ad market share.

Twitter has made progress: while the company’s monthly active users have been stagnant for years — which is probably why the company stopped reporting them in 2019 — its “monetizable daily active users” have increased from 166 million in Q1 2020 to 211 million last quarter, and its trailing twelve-month revenue has increased from $3.5 billion in Q1 2020 to $4.8 billion in Q3 2021. The rub is digital ad market share: Snap, for example, grew its TTM revenue from $1.9 billion to $4.0 billion over the same period, as the pandemic proved to be a massive boon for many ad-driven platforms.
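The market share point is easier to see as growth rates; a quick sketch of the arithmetic, using the rounded billions reported above:

```python
# TTM revenue growth from Q1 2020 to Q3 2021, per the figures above.
twitter_growth = (4.8 - 3.5) / 3.5  # Twitter: $3.5B -> $4.8B
snap_growth = (4.0 - 1.9) / 1.9     # Snap:    $1.9B -> $4.0B
print(f"Twitter: {twitter_growth:.0%}")  # ~37%
print(f"Snap:    {snap_growth:.0%}")     # ~111%
```

Growing 37% while a direct-response peer grows 111% over the same period is, almost by definition, losing digital ad market share.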

That boon was driven by the surge in e-commerce, which is powered by direct response marketing, where there is a tight link between seeing an ad and making a purchase; Twitter, though, has struggled for years to build a direct response business, leaving it dependent on brand advertising for 85% of its ad revenue. That meant the company was not only not helped by the pandemic, but hurt worse than most (and, on the flip side, was less affected by Apple’s iOS 14 changes). If in fact Dorsey’s job depended on taking digital ad market share, he didn’t stand a chance.

That perhaps explains yesterday’s weird timing; Casey Newton speculated that the board may have leaked the news to ensure that Dorsey didn’t get cold feet. It also, I suspect, explains the market’s cool reaction to the appointment of an insider: Agrawal was there for all of those previously failed attempts to build a direct response marketing business, so it’s not entirely clear what is going to be different going forward.

Twitter’s Advertising Problem

The messiness I alluded to in Twitter’s previous CEO transitions was merely a marker of a general run of mismanagement from the company’s earliest days. I’ve long contended that Twitter’s core problem is that the product was too perfect right off the bat; from 2014’s Twitter’s Marketing Problem:

One of the most common Silicon Valley phrases is “Product-Market Fit.” Back when he blogged on a blog, instead of through numbered tweets, Marc Andreessen wrote:

The only thing that matters is getting to product/market fit…I believe that the life of any startup can be divided into two parts: before product/market fit (call this “BPMF”) and after product/market fit (“APMF”).

When you are BPMF, focus obsessively on getting to product/market fit.

Do whatever is required to get to product/market fit. Including changing out people, rewriting your product, moving into a different market, telling customers no when you don’t want to, telling customers yes when you don’t want to, raising that fourth round of highly dilutive venture capital — whatever is required.

When you get right down to it, you can ignore almost everything else.

I think this actually gets to the problem with Twitter: the initial concept was so good, and so perfectly fit such a large market, that they never needed to go through the process of achieving product market fit. It just happened, and they’ve been riding that match for going on eight years.

The problem, though, was that by skipping over the wrenching process of finding a market, Twitter still has no idea what their market actually is, and how they might expand it. Twitter is the company-equivalent of a lottery winner who never actually learns how to make money, and now they are starting to pay the price.

Seven years on and Twitter has finally started to implement some of the proposals from that article, including leaning heavily into recommendations and topics; in theory the machine learning undergirding those recommendations should translate into more effective advertising as well. That hasn’t really happened, though, and I’m not sure it ever will, for reasons that go beyond the effectiveness of Twitter’s management (or lack thereof).

Think about the contrast between Twitter and Instagram; both are unique amongst social networks in that they follow a broadcast model: tweets on Twitter and photos on Instagram are public by default, and anyone can follow anyone. The default medium, though, is fundamentally different: Twitter has photos and videos, but the heart of the service is text (and links). Instagram, on the other hand, is nothing but photos and video (and link in bio).

The implications of this are vast. Sure, you may follow your friends on both, but on Twitter you will also follow news breakers, analysts, insightful anons, joke tellers, and shit posters. The goal is to mainline information, and Twitter’s speed and information density are unparalleled by anything in the world. On Instagram, though, you might follow brands and influencers, and your chief interaction with your friends is stories about their Turkey Day exploits. It’s about aspiration, not information, and the former makes a lot more sense for effective advertising.

It’s more than just the medium though; it’s about the user’s mental state as well. Instagram is leisurely and an escape, something you do when you’re procrastinating; Twitter is intense and combative, and far more likely to be tied to something happening in the physical world, whether that be watching sports or politics or doing work:

Instagram is a lean-back experience; on Twitter you lean forward

This matters for advertising, particularly advertising that depends on a direct response: when you are leaning back and relaxed why not click through to that Shopify site to buy that knick-knack you didn’t even know you needed, or try out that mobile game? When you are leaning forward, though, you don’t have either the time or the inclination.

Someone is wrong on the Internet

That ties into Twitter’s third big problem: the number of people who actually want to experience the Internet this way is relatively small. There is a reason that Twitter’s userbase is only a fraction of Instagram’s, and it’s not a lack of awareness; the reality is that most people are visual, and Twitter is textual. Which, of course, is exactly why Twitter’s most fervent users can’t really imagine going anywhere else.

Twitter’s Place in Culture

What makes Twitter such a baffling company to analyze is that the company’s cultural impact so dramatically outweighs its financial results; last quarter Twitter’s $1.3 billion in revenue amounted to 4.4% of Facebook’s $29.0 billion, and yet you can make the case — and I believe it — that Twitter’s overall impact on the world is just as big, if not larger than its drastically larger peer. Facebook hollowed out the gatekeeper position of the media, but that void was filled by Twitter, both in terms of news being made, and just as critically, elite opinion and narrative being shaped.

Given that impact, I can see why Elliott Management would look at Twitter and wonder why it is that the company can’t manage to make more money, but the fact that Twitter is the nexus of online information flow reflects the reality of information on the Internet: massively impactful and economically worthless, particularly when ads — which themselves are digital information — can easily be bought elsewhere.

Twitter is more than just news, though: I wrote last year in Social Networking 2.0 about the rise of private networks that supplemented and, for many use cases, replaced Facebook and Twitter.

A drawing of v1 vs v2 Social Networks

Twitter, even more than Facebook, remains crucial to this new ecosystem: what WhatsApp group or Telegram chat isn’t filled with tweets posted for the purpose of discussion or disparagement, or links discovered via Twitter? It is as if these private groups are a fortress on the frontier; Twitter is the wild where you forage for content morsels, and, of course, where you do battle with the enemy.

Don’t underrate that last part: one of the biggest challenges facing would-be Twitter clones is not simply that a complete lack of moderation leads to an overwhelming amount of crap, but also that the sort of person who thrives on Twitter very much wants to know everything that is happening in the world, including amongst those outside of their circle. Being stuck on a text-based social network that only has some of the information to be consumed is lame; having access to anyone and everything, for better or worse, is a value prop that only Twitter can provide.

This, then, is the other thing that often baffles analysts: Twitter has one of the most powerful moats on the Internet. Sure, Facebook has ubiquity, Instagram has influencers, and TikTok has homegrown stars, but I find it easier to imagine any of those fading before Twitter’s grip on information flow disappears (in part, of course, because Twitter has shown that it’s a pretty crappy business).

A Paid Social Network

So let’s review: there is both little evidence that Twitter can monetize via direct response marketing, and reason to believe that the problem is not simply mismanagement. At the same time, Twitter is absolutely essential to a core group of users who are not simply unconcerned with the problems inherent to Twitter’s public broadcast model (including abuse and mob behavior), but actually find the platform indispensable for precisely those reasons: Twitter is where the news is made, shaped, and battled over, and there is very little chance of another platform displacing it, in large part because no one is economically motivated to do so.

Given this, why not charge for access?

This may seem obvious to you, but it’s a huge leap for me; back when Stratechery first started it was fairly popular to argue that social networks should charge users instead of selling ads, which never made sense. I wrote in 2014’s Ello and Consumer-Friendly Business Models:

When it comes to social networks, on the other hand, advertising is clearly the best option: after all, a social network is only as good as the number of friends that are on it, and the best way to get my friends on board is to offer a kick-ass product for free. In other words, the exact opposite of the feature-limited product that Ello is proposing…

If…you care about making a successful social network that users will find useful over the long run, then actually build something that is as good as you can possibly make it and incentivize yourself to earn and keep as many users as possible.

I still stand by that analysis generally, but I increasingly question whether or not it applies to Twitter. Twitter has long since penetrated the awareness of just about everyone on earth; the vast majority gave the platform a try and never came back, content to consume the tweets that show up everywhere from news articles to cable news. The core that remains, meanwhile, simultaneously bemoans that Twitter is terrible even as they can’t rip their eyes away, addicted as they are to that flow of information that is and will for the foreseeable future be unmatched by any other service.

And yet, despite this impact and indispensability and impenetrable moat, Twitter makes an average of $22.75 per monetizable daily active user per year (and given that some of Twitter’s most hard core users use third-party Twitter clients, and thus aren’t monetizable, the revenue per addicted daily active user is even lower). That’s just under $2/month, an absolutely paltry sum.

Actually charging for Twitter would, of course, reduce the userbase to some degree; moreover, there are a lot of users with multiple accounts, and plenty of non-human users on Twitter. And, of course, Apple and Google would take their share. Still, even if you cut the userbase by a third to 141 million daily addicted users — which I think vastly overstates Twitter’s elasticity of demand amongst its core user base — Twitter would only need to charge $4/month (including App Store fees) to exceed the $4.8 billion in revenue it made over the last twelve months.
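As a sanity check on that arithmetic, here is a quick sketch; the one-third haircut comes from the text above, while the 30% store commission is my assumption.

```python
# Sanity-checking the subscription math; the one-third haircut is from
# the text above, the 30% app store commission is an assumption.
mdau = 211_000_000                # monetizable daily active users
paying_users = mdau * 2 / 3       # ~141 million after a one-third cut
price_per_month = 4.00

gross = paying_users * price_per_month * 12   # annual gross revenue
net = gross * 0.70                            # after a 30% store fee
print(f"Gross: ${gross / 1e9:.1f}B")          # ~$6.8B
print(f"Net:   ${net / 1e9:.1f}B")            # ~$4.7B
```

Gross revenue comfortably clears the $4.8 billion bar; even net of a full 30% commission it lands in the same neighborhood, and subscription commissions typically fall to 15% after a user’s first year.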

And, in fact, that overstates the situation for another reason: only $4.2 billion of Twitter’s last twelve months of revenue came from ads; the rest came from data licensing and other revenue. There is an alternate world where data licensing is Twitter’s primary revenue model: just think about how valuable it is to be the primary protocol for real-time information sharing, particularly if you can package and distribute that information in an intelligent way.

Twitter could still do that, and pursue other initiatives like its revitalized API, offering developers the opportunity to build entirely new experiences on Twitter’s information flow (including unmoderated ones). The difference from the first go-around is that Twitter won’t have an advertising business to protect, and thus will have its interests much better aligned with developers who can pay for access. After all, that would be Twitter’s business model.

I also think this makes Twitter’s other subscription offerings, like Super Follows, Revue, etc., more attractive, not less; the biggest challenge in running a subscription business is earning that first dollar, but once a user is paying it’s relatively easy to charge for more.


This could certainly all go horribly wrong; the absolute fastest way to get your users to explore alternatives is to ask them to pay for your service, and there is the matter of acquiring new users, users who can’t afford to pay, etc. Growth matters, fewer users means less vitality, and I’m honestly getting cold feet even proposing this! Certainly existing users would howl and insist they were leaving and never coming back. I think, though, that Twitter is so unique, and its userbase is so locked in, that it is the one social networking service that could potentially pull this off.

Moreover, the fact of the matter is that Twitter has now had one business model and five CEOs (counting Dorsey twice); maybe it’s worth changing the former before the next activist investor demands yet another change to the latter.

Follow-up: Why Subscription Twitter Is a Terrible Idea

Unity, Weta, and Faceless Platforms

At the beginning of a video announcing Unity’s acquisition of Weta Digital, Peter Jackson, filmmaker extraordinaire and the founder of Weta, and Prem Akkaraju, the CEO of Weta Digital, explained why they were excited to hand Weta Digital over (beyond, of course, the $1.63 billion):

Peter Jackson: We knew that digital effects offered so much possibility for us to be able to create the worlds and the creatures that we were imagining.

Prem Akkaraju: Now we’re taking those tools that we have created and are handing them over to Unity to market them for the entire world.

Peter Jackson: Together Unity and Weta Digital can create a pathway for any artist in any industry who will now be able to leverage these incredible creative tools.

The creation, development, and now sale of Weta Digital is a great example of how the ideal approach to innovation in an emerging field changes over time; understanding that journey also explains why Unity is a great home for Weta.

Weta’s Integrated History

Peter Jackson has said that he was inspired to start Weta Digital after seeing Jurassic Park and realizing that CGI was the future of movies; the first movie Weta worked on was Heavenly Creatures, which Jackson says didn’t even need CGI. From an oral history of Weta Digital:

We used the film as an excuse to buy one computer, just to put our toe in the water. And we had a guy, one guy, who could figure out how to work it, and we got a couple of bits of software, and we did the effects in Heavenly Creatures really just for the sake of doing some CGI effects so we’d actually just start to figure it out. And then on The Frighteners, which was the next movie we [did], we went from one computer to 30 computers. It was a pretty big jump.

Weta, as you would expect given how nascent computer graphics were, was integrated from top-to-bottom: Jackson’s team didn’t make software to make graphics for Jackson’s films; Jackson actually made a scene in a film that needed graphics to give his team a reason to get started on making software. That bet paid off a few years later when Weta played a pivotal role in the Lord of the Rings trilogy. The feedback loop between movie-making and software-development was tight in a way that is only possible with a fully integrated approach, resulting in dramatic improvements from film to film. Jackson said of Gollum:

Now the first Lord of the Rings film, Fellowship of the Ring, which has Gollum in about two shots, he’s just a little glimpse, and it’s completely different to the Gollum that’s in the next film, The Two Towers. [That first] version of Gollum was pretty gnarly, pretty crude. It was as good as we could do at that time, and we had to have something in the movie, but we put a very deep shadow, and you know that was on purpose because he didn’t look very good. But, nonetheless, we ran out of time, and that was what we had to have in the movie. We still had another year to get him really good for the Two Towers, where he was half the film, so the Two Towers Gollum was a complete overhaul of the one from the first film. It finally got to the place that we wanted to go.

Over the intervening years, though, Weta gradually branched out, working on films beyond Jackson’s personal projects, including Avatar and Avengers. This too makes sense: developing software is both incredibly capital intensive and benefits from high rates of iteration; that means there is both an economic motive to serve more customers, increasing the return on the initial investment, and a product motive, since every time the software is used there is the potential for improvement. Even so, software development and software application remained integrated, because there was so much more improvement to be had; Jackson again:

Over the years, we’ve written about a hundred pieces of code for a wide variety of different things, and it drives me a little bit crazy because we keep writing code for the same things every time. Like, we do Avatar a few years ago, and we wrote software to create grass, and trees, leaves, that can blow with the wind, that sort of stuff. And then I come along with The Hobbit, and I want some grass or some shrubs or some trees, so I [ask], Why can’t we use the stuff that we wrote [already]? I mean it was fine for the [prior] film, but we wanted to be better. I mean there was some kind of code, unspoken code – and it’s not anything to do with the ownership of the software, because it belongs to us – but the guys themselves, wanna do everything better than they did the last time. So they don’t want the Avatar grass, or the leaves; they want to do grass and leaves better than last time. They set about writing a whole new grass software, whole new leaf software, and it happens over and over again, all the time. It sort of drives me crazy.

What is interesting, though, is that in that same oral history Jackson starts to signal that the plane of improvement was starting to shift; he says at the end:

We live in an age where everything is possible, really, with CGI. There’s nothing that you cannot imagine in your head or read on a script, or whatever, that you can’t actually do now. What you’ve got now is you’ve simply got faster – faster and cheaper. In terms of computers, you know, every year they get cheaper, and they get twice as fast. It is important because when you’re doing visual effects, it’s like any form of filmmaking, you often want a take two, take three, take four. You do the fire shot with a little flame once and you know you’re not necessarily gonna get it looking great on the first time, so it’s good to have a second go, and maybe a third go, maybe a fourth go. So when you’ve actually got computers that are getting quicker and faster, it gives you more goes at doing this stuff. Ultimately you get a better looking result.

Notice the transition here; at the beginning everything was integrated from the movie shot to the development process to the software to the individual computer:

Weta's integration

Over the ensuing 28 years, though, each of these pieces has been broken off and modularized, increasing the leverage that can be gained from the software itself; Unity’s approach of selling tools to the world is the logical endpoint.

Unity’s Modular History

When 3D games first became possible in the 1990s, 3D game engines were developed for a specific game; the most famous developer was id Software, which was founded in 1991, and id’s most famous engine was built for its Quake series of games; id’s rival was Epic, which built the Unreal engine for its Unreal series. These were integrated efforts: the engine was built for the game, and the game was built for the engine; once development was completed the engine was made available to other developers to build their own games, without having to recreate all of the work that id or Epic had already done.

Unity, though, was different: its founders started with the express goal of “democratizing game development”; the company’s only game, GooBall, was intended as a proof of concept. That game, like Unity itself, only ran on the Mac, which wasn’t a great recipe for commercial success in the games business. That all flipped in 2008, though, with the opening of the iPhone App Store: that was one of the greatest gold rushes of all time, and a tool at home on Apple’s technologies was particularly well-placed to profit. Unity was off to the races, quickly becoming the most used engine on iOS specifically and mobile gaming broadly, a position it still holds today: 71% of the top 1000 mobile games on the market run on Unity, and over 3 billion people play games it powers.

Mobile was a perfect opportunity for Unity for reasons that went beyond its Apple roots. Phones, particularly back then, were dramatically underpowered relative to PCs with dedicated graphics cards; that meant that the primary goal for an engine was not to produce cutting edge graphics, but to deliver good enough graphics in a power-constrained environment. Moreover, most of the developers taking advantage of the mobile opportunity were small teams and independents; they couldn’t afford the big PC engines even if those engines could have fit in an iPhone’s power envelope. Unity, though, wasn’t just Apple friendly, but also built from the ground up for independent developers, both in terms of usability and pricing.

Over time, as mobile gaming has become the largest part of the market, and as the power of mobile phones has grown, Unity has grown its capabilities and performance; the company has made increasing in-roads into console and PC gaming (albeit mostly with casual games), and has invested significantly in virtual reality. What remains a constant, though, is Unity’s position as a developer’s partner, not a competitor.

Unity’s Business Model

This has opened up further opportunities for Unity: it turned out that mobile gaming also had a completely different business model than PC or console gaming, which traditionally sold a game for a fixed price. Mobile, though, thanks to its always-connected nature and massive market size, moved towards a combination of advertising and in-app purchases. Unity found itself perfectly placed to support both: first, all mobile game developers had the same problem, which Unity could solve once and make available to all of its customers; second, advertising in particular requires far more scale than any one game developer could build on its own. Unity, though, could build an advertising market across all of the games it supported — which again, was most of them — and its customers could rely on Unity knowing that their interests were aligned: if developers made money, Unity made money.

Today it is actually Unity’s Operate Solutions, including its advertising network and in-app purchase support, plus other services like hosting and multiplayer support, that makes up 65% of Unity’s revenue; Create Solutions, which is primarily monetized via a SaaS-subscription to Unity’s development tools, is only 29%. The latter, though, is an on-ramp to the former; CEO John Riccitiello said on Unity’s most recent earnings call:

We can hook [creators] at the artist level when they’re just building out their first visual representation of what they want to do. We can hook them lots of different ways. And at Unity, it almost always seems the same. They start as some sort of an experimental customer doing not a lot. And then we get inside and they do a lot more and then they do a lot more, then they write bigger contracts, they start moving more of our services. I would hate to say this in front of our customers although they know it, land and expand is like endemic to Unity. It’s exactly what we’re doing.

This has resulted in a net expansion rate of 142%; that means that Unity has negative churn because its existing customers increase their spending with Unity so substantially. I love a good cohort analysis and Unity had a great one in their S-1:

Unity's S-1 cohort analysis

This shows just how important it is that Unity’s tools attract customers: the company has the ability to grow revenue from its existing customers indefinitely; the primary limiting factor to the company’s growth rate is how many new customers it can bring on board.
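To make the arithmetic concrete, here is a minimal sketch of the calculation behind that 142% figure; the dollar amounts are hypothetical, invented purely for illustration (Unity does not publish per-cohort numbers at this granularity):

    # Net expansion rate (net revenue retention): revenue from the same
    # customer cohort, this year versus last year. Figures are hypothetical.
    def net_expansion_rate(prior_year: float, current_year: float) -> float:
        return current_year / prior_year * 100

    # Suppose a cohort of existing customers spent $100M last year; even if
    # some of them churned entirely, the remaining customers expanded their
    # spend enough that the cohort spent $142M this year.
    print(f"{net_expansion_rate(100.0, 142.0):.0f}%")  # prints "142%"

Any figure above 100% means that expansion among surviving customers more than offsets the revenue lost to churn — that is what “negative churn” means — so revenue compounds even before a single new customer signs up.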

Enter Weta.

Unity + Weta

It is striking how the fundamental strengths and weaknesses of Weta and Unity are mirror images of each other: Weta has cutting edge technology, but it’s only available to Weta; Unity’s technology, meanwhile, continues to improve, but its biggest asset is the number of developers on its platform and integration with all of the other components a developer needs to build a business.

What you see in this acquisition, then, is the intersection of these two paths:

  • Weta’s software is increasingly mature and ready to be productized and leveraged by as many customers as wish to use it.
  • Unity’s developer offering is increasingly full-featured and only limited by its ability to acquire new customers.

The logic should be obvious: Weta expands Unity’s market beyond developers to artists, who can be plugged into Unity’s land-and-expand model. Weta, meanwhile, immediately gains leverage on all of the investment it has made in its software tools.

The convergence of Unity and Weta's paths

There is also a third path flowing through this intersection: the convergence and increase of computing power across devices of all sizes. Unity benefited at the beginning from serving an underpowered mobile market; Weta, in contrast, was so limited by underpowered computers that its developers had to be tightly integrated with artists to make things that were never before possible.

Today, though, phones and computers are increasingly comparable in power, to the benefit of Unity; from Weta’s perspective, that power makes it possible to use its tools iteratively, lowering the learning curve and increasing the market of artists who can figure them out on their own.

That’s why Unity leaped at this opportunity; CEO John Riccitiello told me in an interview:

It’s an incredible opportunity for the world’s artists to get to this incredible collection of tools that they’re going to want to use, and they’re going to want to use it in the film industry. But we’ve got thousands of engineers inside of Unity and we’re going to work at making these tools real time. Some of them already are real time and usable in video games; we’re going to make most all of them real time so they can be used in video games, but they can also be used in many vertical industry circumstances, whether it’s the auto industry for car configurators or design, or architecture, engineering, construction, or digital twins.

These tools are pretty darned amazing, I’m proud that we were able to come to an agreement with Peter and the team around him. I’m also excited about the prospect of bringing these tools to a global marketplace…But what I would tell you is this is one of those things where you write down the strategy and then there’s something that accelerates you multiple years, and this is exactly that. We’re thrilled.

It’s going to take time to see all of this come to fruition, and there are obvious challenges in bringing together two companies that are so different; it is precisely those differences, though, that make this acquisition so compelling.

Faceless Platforms

One additional point: to me the most analogous company to Unity is not Epic, its biggest competitor in the game engine business, but TSMC. TSMC, like Unity, was built from the beginning to be a partner to chip designers, not a competitor, in large part because the company had no other route to market. In the process, though, TSMC democratized chip development and then, with the rise of smartphones, gained the volume necessary to overtake integrated competitors like Intel.

By 2019 it was TSMC that was the first to bring chips to market based on Extreme Ultraviolet (EUV) lithography technology; EUV was devilishly difficult and exceptionally expensive, requiring not just years of development by ASML, but also customers like Apple willing to pay for the most cutting-edge chips at massive volume. Now TSMC is the most advanced fab in the industry and the 10th most valuable company in the world, and, absent geopolitical risk, a clear winner in a world increasingly experienced via technology.

The ultimate manifestation of that is the metaverse, and here Unity is particularly well-placed; Nvidia CEO Jensen Huang at last week’s GTC keynote described today’s web as being 2D and the future as being 3D, and now Unity owns the best 3D tools in the world. More importantly, unlike Metaverse aspirants like Facebook or Microsoft, Unity isn’t competing for end users, which means it can partner with everyone building those new experiences — including Facebook and Microsoft and Apple — just as TSMC can build chips for everyone, even Intel.

It is companies like Unity and TSMC — other examples include Stripe or Shopify or the public clouds — that are the most important to the future. The most valuable prize in technology has always been platforms, but in the beginning era of technology the most important platforms were those that interfaced directly with users; in technology’s middle era the most important platforms will be faceless, empowering developers and artists and creators of all types to create completely new experiences on top of the best technology in the world, created and maintained and improved on by companies that aren’t competing with them, but partnering to achieve the scale necessary to accelerate the future.

Microsoft and the Metaverse

It’s certainly the question of the season: what is the Metaverse? I detailed the origin of the term in August; that, though, was about Neal Stephenson’s vision and how it might apply to the future. For the purpose of this Article I am going to focus on my personal definition of the Metaverse, how I think it will come to market, and explain why I think Microsoft is so well placed for this opportunity.

Here is the punchline: the Metaverse already exists, it just happens to be called the Internet. Consider the seven qualities Matthew Ball used to define the Metaverse; the Internet satisfies all of them:1

  • The Internet is persistent
  • The Internet is synchronous and live
  • The Internet has no cap to concurrent users, while also providing each user with an individual sense of “presence”
  • The Internet has a fully functioning economy
  • The Internet is an experience that spans both the digital and physical worlds, private and public networks/experiences, and open and closed platforms
  • The Internet offers unprecedented (although not perfect) interoperability of data, digital items/assets, content, etc.
  • The Internet is populated by “content” and “experiences” created and operated by an incredibly wide range of contributors

I really don’t see anyone creating some sort of grand nirvana that beats what we currently have on any of these metrics. The Internet is as open and interoperable as it is precisely because it was built in a world without commercial imperative or political oversight; all future efforts will be led by companies seeking profits and regulated by governments seeking control, both of which result in centralization and lock-in. Crypto pushes in the opposite direction, but it is a product of the Internet that relies on many of the qualities in Ball’s list, not a replacement.

What makes “The Metaverse” unique, then, is that it is the Internet best experienced in virtual reality. This, though, will take time; I expect that the first virtual reality experiences will be individual metaverses, tied together by the Internet as we experience it today.

Mobile and the Physical World

Forecasts, particularly those that extend multiple years into the future, are always a dangerous enterprise; look no further than January 2020, when I argued in The End of the Beginning that mobile + cloud represented the culmination of the last fifty years in tech history:

A drawing of The Evolution of Computing

The idea behind this article is right there in the title: “The End of the Beginning”. Tech innovation wasn’t over, it was only beginning, but everything in the future would happen on top of the current paradigm.

Then COVID happened, and now I’m not so sure if that’s the entirety of the story.

Implicit in the assumption that mobile + cloud is the endpoint is the preeminence of the physical world. After all, what makes the phone the ultimate expression of a “personal computer” is that it is with us everywhere, from home to work to every moment in-between. That is what allows for continuous computing everywhere.

At the same time, for well over a year a huge portion of people’s lives was primarily digital. The primary way to connect with friends and family was via video calls or social networking; the primary means of entertainment was streaming or gaming; for white collar workers their jobs were online as well. This certainly wasn’t ideal: the first thing people want to do as the world opens up is see their friends and family in person, go to a movie or see a football game as a collective, or take a trip. Work, though, has been a bit slower to come back: even if the office is open, many meetings are still online given that some of the team may be working remote — for many companies, permanently.

This last case presents a scenario where the physical is not preeminent; an Internet connection is. In this online-only world a phone is certainly essential as a means to stay connected while moving around; it is not, though, the best way to be online. Virtual reality could be.

Work in VR

There have been two conventional pieces of wisdom about virtual reality that I used to agree with, but now I think both were off-base.

The first one is that virtual reality’s first and most important market will be gaming. The reasoning is obvious: gamers already buy dedicated equipment, and gaming is an immersive activity that could justify the hassle of putting on and taking off a headset. One problem, though, is that gamers buying dedicated equipment are going to care the most about performance, and VR quality is still behind; the bigger problem, though, is that there simply isn’t a big enough market of people with headsets to justify investment from game makers. This is the chicken-and-egg problem that bedevils all new platforms: if you don’t have users you won’t have developers, but if you don’t have developers you won’t have users.

The second assumption is that augmented reality would be a larger and more compelling market than virtual reality, just like the phone is a larger and more compelling market than games (excluding mobile games, of course). This is because the phone is with you all of the time — an accompaniment to your daily life — whereas more immersive experiences like console games are a destination: because they require your full attention, they have access to less of your time.

However, this is why I discussed the COVID-accelerated demotion of the physical: for decades work was a physical destination; now it is a virtual one. While it used to be that the knowledge worker’s day was broken up between solitary work on a computer and in-person meetings, over the last two years in-person meetings were transformed into a link on a calendar invite that opened a video-conferencing call. This is the demotion of the physical I referred to above:

The demotion of the physical

Meanwhile, new products like Facebook’s Horizon Workrooms and Microsoft’s Mesh for Microsoft Teams make it possible to hold meetings in virtual reality. While I have not used Microsoft’s offering, one of the things I found compelling about Horizon Workrooms was how it managed mixed reality: not only could you bring your computer into the virtual environment (via a daemon running on your computer that projected the screen into your headset), but meeting participants without headsets simply appeared on video screens, no different than workers calling in to an in-person meeting in the physical world:

Horizon Workrooms

I am very impressed — and my opinion is colored — by the experience of Horizon Workrooms. I now understand what CEO Mark Zuckerberg means when he talks about “presence”; there really is a tangible sense of being in the same room as everyone else, and not only in terms of focused discussion: something that is surprisingly familiar is noticing the person next to you not at all paying attention and instead doing email on their computer.

At the same time, it’s not an experience that you would want to use all of the time. For one, the tech isn’t quite good enough; the Quest 2, while a big leap in terms of a standalone device, is still too low resolution and has too limited a battery life to wear for long, and a good number of people still get dizzy after prolonged usage. The bigger problem, though, is that putting on the headset for a call is a bit of a pain; you have to unplug the headset, turn it on, log in, find the Horizon Workrooms app, and join the meeting, and while this only takes a couple of minutes, it’s just so much easier to click that link on your calendar and join a video call.

What, though, if you already had the headset on?

Think again over the last couple of years: most of those people working from home were hunched over a laptop screen; ideally one was able to connect an external monitor, but even that is relatively limited in size and resolution. A future VR headset, though, could contain as many monitors as you could possibly want — or your entire field of view could be one massive monitor. Moreover, the fact that a headset shifts your senses out of your physical environment is actually an advantage if said physical environment has nothing to do with your work.

In this world joining a meeting does not entail shifting your context from a computer to a headset, but simply clicking a button or entering a virtual door; now all of the advantages of virtual reality — the sense of presence in particular — come for free. What will seem anachronistic is using a traditional laptop or desktop computer; those, like a headset, keep you stationary, but without any of the benefits of virtual reality (of course not everyone will necessarily use a fully contained headset like an Oculus; those with high computing needs would use a headset tethered to their computer).

PCs and the Enterprise Market

Here is the most important thing: if virtual reality really is better for work, then that solves the chicken-and-egg problem.

Implicit in assuming that augmented reality is more important than virtual reality is assuming that this new way of accessing the Internet will develop like mobile did. Smartphone makers like Apple, though, had a huge advantage: people already had and wanted mobile phones; selling a device that you were going to carry anyway, but which happened to be infinitely more capable for only a few hundred more dollars, was a recipe for success in the consumer market.

PCs, though, didn’t have that advantage: the vast majority of the consumer market had no knowledge of or interest in computers; rather, most people encountered computers for the first time at work. Employers bought their employees computers because computers made them more productive; then, once consumers were used to using computers at work, an ever increasing number of them wanted to buy a computer for their home as well. And, as the number of home computers increased, so did the market opportunity for developers of non-work applications like games.

I suspect that this is the path that virtual reality will take. As with PCs, the first major use case will be knowledge workers using devices bought for them by employers eager to increase collaboration in a remote-work world and, as quality increases, to offer a superior working environment. Some number of those employees will be interested in using virtual reality for non-work activities as well, increasing the market for non-work applications.

All of these work applications will, to be clear, still be accessible via regular computers, phones, etc. None of them, though, will be dependent on any one of those devices. Rather these applications will be Internet-first, and thus by definition, Metaverse-first.

Microsoft’s Opportunity

This means that the company that is, in my opinion, the best placed to capitalize on the Metaverse opportunity is Microsoft. Satya Nadella brought about The End of Windows as the linchpin of Microsoft’s strategy, but that doesn’t mean that Microsoft abandoned the idea of owning the core application around which a workplace is organized; its online-first, device-agnostic cloud operating system is Teams.

It is a mistake to think of Teams as simply Microsoft’s rip-off of Slack; while any demo from a Microsoft presentation is obviously an idealized scenario, this snippet from last week’s Microsoft Ignite keynote shows how much more ambitious Microsoft’s vision is:

Microsoft can accomplish this vision with Teams in large part because Microsoft makes so many of the component pieces; this gives the company a powerful selling proposition to businesses around the world whose focus is on their actual business, not on being systems integrators. So many Silicon Valley enterprise companies miss this critical point: they obsess over the user experience of their individual application, without considering how that app fits in the context of a company for whom their app is a means to an end.

This integration, though, also means that Microsoft has a big head start when it comes to the Metaverse: if the initial experience of the Metaverse is as an individual self-contained metaverse with its own data and applications, then Teams is already there. In other words, not only is the enterprise the most obvious channel for virtual reality from a hardware perspective, but Teams is the most obvious manifestation of virtual reality’s potential from a software perspective (this promotional video is from the Mesh for Microsoft Teams webpage):

What is not integrated is the hardware; Microsoft sells a number of third-party VR headsets on said webpage, all of which have to be connected to a Windows computer. Microsoft’s success will require creating an opportunity for OEMs similar to the one the PC created. At the same time, this solution is also an advantageous one for the long-term Metaverse-as-Internet vision: Windows is the most open of the consumer platforms, and that applies to Microsoft’s current implementation of VR. The company would do well to hold onto this approach.

Meta’s Challenge

Meta née Facebook is integrated in a different direction: Meta is spending billions of dollars on not just software but also hardware, and while Workrooms is obviously an enterprise application, Meta has to date been very much a consumer company (Workplace notwithstanding). The analogy to the PC era, then, is Apple and the Mac, and that is a reason to be a bit bearish relative to Microsoft.

Meta, however, has a big advantage that the original Mac did not: the Internet already exists. This is where Workrooms’ integration of your computer into virtual reality is particularly clever: when I am using my computer in virtual reality, I have access to all of my applications, data, etc.; perhaps that, along with Workrooms’ meeting capabilities, will be sufficient.

Meta, though, should shoot for something more. First off, if I am right, and the enterprise is the first big market for VR, then some of that billions of dollars should go towards building an enterprise go-to-market team that can compete with Microsoft. Second, there remains a huge opportunity long squandered by Google: to be the enterprise platform that competes with Microsoft’s integrated offering by effectively tying together best-of-breed independent SaaS offerings into a cohesive whole.

This is, to be honest, probably unrealistic; Meta is starting from scratch in the enterprise space, without any of the identity and email capabilities that Google possesses, much less Microsoft. More importantly, it’s just very difficult to see the company having the culture to pull off being an enterprise platform. That, though, places that much more of a burden on Meta making the best hardware, and keeping its integrated operating system truly open. To that end, it is worth noting that Meta is focused on its headsets being standalone, while Microsoft is still tied to Windows; this gives Meta more freedom of movement in terms of working with all of the platforms that already exist.

What is clear, though, is that Facebook needed to change its name: no one wants to use a consumer social network for work. And, as I noted in the context of the name change, Meta is still founder-driven. That may give an execution and vision advantage that other companies can’t match. Again, though, that could mean too much focus on a consumer market that might take longer than Meta hopes to be convinced of why exactly they should buy a VR headset.

Apple and AR

Apple seems like it should be a strong competitor. The company is clearly the most advanced as far as hardware goes, particularly when it comes to powerful-yet-power-efficient chips, which is a big advantage in a power constrained environment. Moreover, Apple can leverage the fact it controls the phone, just as it does with the Apple Watch.

However, I am bearish on Apple’s prospects in this space for three reasons:

  • First, rumors suggest that Apple is focusing on augmented reality, not virtual reality; as I detailed above, though, I think that virtual reality will be the larger market, at least at first.
  • Second, Apple’s iPhone-centricity could be a liability, much as Microsoft’s Windows-centricity was a liability once mobile came along. It is very hard to fully embrace a new paradigm if the biggest part of your businesses is rooted in another; indeed, the fact that Apple is focused on augmented reality reflects an assumption that the world will continue to be one in which the physical has preeminence over the virtual.
  • Third, because both virtual reality and augmented reality will be new-to-the-world interfaces, developers will likely be more important than they were in the case of the phone. People bought iPhones first, and developers followed; Apple may have trouble if the chicken-and-egg problem runs in the opposite direction.

Apple Watch is the counter to all of these objections: it’s a device for the physical world, it benefits from the iPhone, and Apple delivered the core customer propositions — notifications and fitness — on its own. Perhaps a better way to state my position is that Apple is likely well placed for augmented reality, but not virtual reality; I have simply changed my mind about which is more important.

The Field

It’s hard to see any other hardware-based startups emerging as VR platforms; I think the best opportunity for a startup is riding on Microsoft’s coattails and offering an alternative operating system for the hardware that is produced for Windows. Valve is obviously already doing this with Steam, but there may be a place for a more general purpose alternative, probably based on Android (which, I suppose, Google could build, but the company seems awfully content these days).

Snap and Niantic, meanwhile, are focused on augmented reality, but will be handicapped by the inability to effectively offload compute onto the phone in the same way Apple will be able to, and again, the trick will be getting consumers to care.

Roblox, for its part, is arguably the Teams of the consumer space: it is a 2D metaverse that is device-agnostic; the company is working to keep people connected even when they aren’t playing games, including buying Discord competitor Guilded. Discord itself is a bit of a metaverse in its own right, with more connections to external applications; it could be a candidate for the aforementioned company that rides on the Microsoft ecosystem’s coattails.

Again, though, none of this is so different from the world as it exists today, because the Internet already exists (and yes, that includes crypto). That is one of the things I still stand by from The End of the Beginning: technology doesn’t move in step changes, but rather evolves on a spectrum towards more continuous computing. Name changes, whether that be from Facebook to Meta or from Internet to Metaverse, are a marker of that evolution, not a punctuated equilibrium.

I wrote a follow-up to this Article in this Daily Update.


  1. These bullet points are quoted from Ball’s list 

Meta

The obvious analogy to Facebook’s announcement that it is renaming itself Meta and re-organizing its financials to separate “Family of Apps” — Facebook, Messenger, Instagram, and WhatsApp — from “Facebook Reality Labs” is Google’s 2015 reorganization to Alphabet, which separated “Google”, including the search engine, YouTube, and Google Cloud, from “Other Bets.” The headline for investors is just how much Facebook is spending on Reality Labs — $10 billion this year, and that amount is expected to grow — but next quarter’s financials will also emphasize just how good Facebook’s core business is; if it plays out like Alphabet, this could be a boon for the stock.

At the same time, while the mechanics may be similar, it is the differences that suggest the implications of this transformation are much more meaningful. Start with the name: “Alphabet” didn’t really mean anything in particular, and that was the point; Larry Page said in the announcement post:

What is Alphabet? Alphabet is mostly a collection of companies. The largest of which, of course, is Google. This newer Google is a bit slimmed down, with the companies that are pretty far afield of our main Internet products contained in Alphabet instead. What do we mean by far afield? Good examples are our health efforts: Life Sciences (that works on the glucose-sensing contact lens), and Calico (focused on longevity). Fundamentally, we believe this allows us more management scale, as we can run things independently that aren’t very related. Alphabet is about businesses prospering through strong leaders and independence.

“Meta”, on the other hand, is explicit: CEO Mark Zuckerberg said that Facebook is now a metaverse company, and the name reflects that. It is also focused: Alphabet included a host of ventures, many of which had no real connection to Google; Facebook Reality Labs is a collection of efforts, from virtual reality to augmented reality to electromyography systems, all in service to a singular vision where instead of looking at the Internet, we live in it.

The biggest difference, though, is Zuckerberg: while Page and Sergey Brin, as I wrote at the time, “may be abandoning day-to-day responsibilities at Google, [they have] no intention of abandoning Google’s profits” to pursue whatever new initiatives caught their eye, Zuckerberg quite clearly remains fully committed to both the “Family of Apps” and “Reality Labs”; more than that, Meta is, as Zuckerberg said in an interview with Stratechery, a continuation of the same vision undergirding Facebook:

I think that this is going to unlock a lot of the product experiences that I’ve wanted to build since even before I started Facebook. From a business perspective, I think that this is going to unlock a massive amount of digital commerce, and strategically I think we’ll have hopefully an opportunity to shape the development of the next platform in order to make it more amenable to these ways that I think people will naturally want to interact.

Another comparison is Microsoft: the Redmond company never changed its name, but under former CEO Steve Ballmer it might as well have been called the Windows company; that’s how you ended up with names like Windows Azure, a misnomer born of a misguided strategy that sought to leverage Microsoft’s thriving productivity business and burgeoning cloud offering to prop up the product that had made the company rich, famous, and powerful. Zuckerberg made a similar mistake last year, forcing Oculus users to log in with their Facebook account, which not only upset Oculus users but also handcuffed products like Horizon Workrooms, Facebook’s VR solution for business meetings.

Satya Nadella’s great triumph as CEO of Microsoft was breaking Windows’ hold over the company, freeing it to not just emphasize Azure’s general-purpose cloud offerings, but to also build a new OS centered on Teams that was Internet-centric and device-agnostic. Indeed, that is why I don’t scoff at Nadella’s invocation of the enterprise metaverse; sure, Microsoft has the HoloLens, but that is just one way to access a work environment that exists somewhere beyond any one device or any one app.

Meta seems like Zuckerberg’s opportunity to make the same break: Facebook benefited from being just an app (until it didn’t), but until today Facebook was also the company, and as long as that was the case the metaverse vision was going to be fundamentally constrained by what already exists.

There is a third comparison, though, and that is Apple generally, and Steve Jobs specifically. While Jobs’ tenure in retrospect looks like a string of innovative new products, one after another, from the Mac to the iPhone, the latter was in many respects Jobs’ opportunity to truly deliver on the vision he had for the former. The Mac was a computer built for end users, but it launched in an era dominated by enterprises; that is why it was initially a failure from a business perspective, which helped drive Jobs out of the company he had founded. Fast forward 23 years, and the iPhone launched in an era where end users dominated the market; it was enterprise players like Microsoft that scrambled, and failed, to catch up.

The analogy to Facebook is the company’s failure to build a phone; the company’s biggest problem was that it was simply too late — iPhone and Android were already firmly established by the 2013 launch of the HTC First phone and Facebook Home Android launcher — but I also thought that Zuckerberg’s conception of what a phone should be was fundamentally flawed. One of the very first articles on Stratechery was Apps, People, and Jobs to Be Done, where I took issue with Zuckerberg’s argument that people, not apps, should be the organizing principle for mobile; I concluded:

Apps aren’t the center of the world. But neither are people. The reason why smartphones rule the world is because they do more jobs for more people in more places than anything in the history of mankind. Facebook Home makes jobs harder to do, in effect demoting them to the folders on my third screen [in favor of social].

I have long assumed that augmented reality would be a bigger opportunity than virtual reality precisely because augmented reality fits in the same lane as the smartphone; I wrote in The Problem with Facebook and Virtual Reality:

That is the first challenge of virtual reality: it is a destination, both in terms of a place you go virtually, but also, critically, the end result of deliberative actions in the real world. One doesn’t experience virtual reality by accident: it is a choice, and often — like in the case of my PlayStation VR — a rather complicated one.

That is not necessarily a problem: going to see a movie is a choice, as is playing a video game on a console or PC. Both are very legitimate ways to make money: global box office revenue in 2017 was $40.6 billion U.S., and billions more were made on all the other distribution channels in a movie’s typical release window; video games have long since been an even bigger deal, generating $109 billion globally last year.

Still, that is an order of magnitude less than the amount of revenue generated by something like smartphones. Apple, for example, sold $158 billion worth of iPhones over the last year; the entire industry was worth around $478.7 billion in 2017. The disparity should not come as a surprise: unlike movies or video games, smartphones are an accompaniment on your way to a destination, not a destination in and of themselves.

That may seem counterintuitive at first: isn’t it a good thing to be the center of one’s attention? That center, though, can only ever be occupied by one thing, and the addressable market is constrained by time. Assume eight hours for sleep, eight for work, a couple of hours for, you know, actually navigating life, and that leaves at best six hours to fight for. That is why devices intended to augment life, not replace it, have always been more compelling: every moment one is awake is worth addressing.

In other words, the virtual reality market is fundamentally constrained by its very nature: because it is about the temporary exit from real life, not the addition to it, there simply isn’t nearly as much room for virtual reality as there is for any number of other tech products.

Meta’s vision, to be clear, is that one ought to be able to access the metaverse from anywhere, including your phone, computer, AR glasses, and of course a VR headset. What is worth considering, though, are the ways in which the technological revolution I wrote about yesterday is changing society; I think that the term “working from home”, for example, is another misnomer: for some number of people work is virtual, which means it can be done from anywhere — your home is just one of many options. And in that case, an escape from physical reality is actually the goal, not a burden; why wouldn’t the same attraction exist for social interactions generally, particularly as more and more communities exist only on the Internet?

Here Facebook the app was again a limitation: the product digitized offline relationships, which is why it grew so quickly; many of the challenges that have placed Zuckerberg in the hot seat currently stem from grafting on purely digital interactions and relationships to a product that was always more reality-rooted than its competitors. The metaverse, though, by its very definition is rooted in the digital world, and if the primary driver is to interact with people virtually, whether that be for work or for play, then Zuckerberg’s people-centric organizing principle — like Apple’s focus on the end-user — may be a viewpoint that was originally too early, and then right on time.

This is all about as favorable a spin as you can put on Meta, of course; there is a lot of grumbling from investors this week about the effect the effort is having on Facebook’s margins, and my previously-voiced suspicion that Zuckerberg just wants to own a platform very much lines up with Facebook currently feeling the pain from its Apple-dependency in particular. And, needless to say, Facebook is dealing with plenty of other issues in the media and Washington D.C. that not only concern the “Family of Apps”, but also threaten the reception of whatever it is that “Reality Labs” ultimately produces.

Zuckerberg, though, is a founder, which both means he decides, thanks to his super-voting shares, and also that he has the credibility to pull investors along; more than that, though, is the clear founder ethos that Zuckerberg is bringing to Meta. Zuckerberg told me:

One of the things that I’ve found in building the company so far is that you can’t reduce everything to a business case upfront. I think a lot of times the biggest opportunity is you kind of just need to care about them and think that something is going to be awesome and have some conviction and build it. One of the things that I’ve been surprised about a number of times in my career is when something that seemed really obvious to me and that I expected clearly someone else is going to go build this thing, that they just don’t. I think a lot of times things that seem like they’re obvious that they should be invested in by someone, it just doesn’t happen.

I care about this existing, not just virtual and augmented reality existing, but it getting built out in a way that really advances the state of human connection and enables people to be able to interact in a different way. That’s sort of what I’ve dedicated my life’s work to. I’m not sure, I don’t know that if we weren’t investing so much in this, that would happen or that it would happen as quickly, or that it would happen in the same way. I think that we are going to kind of shift the direction of that.

Meta exists because Zuckerberg believes it needs to exist, and he is devoted to making the metaverse a reality; it’s his call, for better or worse. In that it reminds me of an increasingly popular phrase on FinTwit: House of Zuck. It has been adopted by a set of investors that are eternally bullish on Facebook not because of its results per se, but because of their conviction that those results come from Zuckerberg’s leadership. It is a belief that is being tested as never before.

Facebook has always been unique amongst the Big 5 tech companies because it is the one company that does not have a monopoly-like moat in the market in which it competes; today it is also unique in that it is the only one of the five that is still founder-led. I don’t think that is a coincidence.

The fact that Facebook is uniquely held responsible for the societal problems engendered by the Internet does, I suspect, stem from the fact that Zuckerberg is an obvious target. How many people concerned about anti-vax rhetoric, for example, can even name the person in charge of YouTube, a far more potent vector? Page and Brin were wise to step aside once Google was established, to make Google a less tempting target; the same with Jeff Bezos. Facebook doesn’t have that luxury. Kara Swisher made explicit what seems obvious: the way for Facebook to escape its current predicament is for Zuckerberg to hand the reins to someone else. Only a founder, though, can admit, as Zuckerberg did on Facebook’s recent earnings call, that the company is losing ground with young people, and not just pivot the “Family of Apps”, but the entire company towards a future vision that establishes Meta as a dominant platform in its own right.

That’s also why I considered titling this article “House of Zuck”; that, more than ever, is what Meta née Facebook is. Today’s Facebook Connect keynote is entirely about a future that doesn’t yet exist; believing that it will happen rests on the degree to which you believe that Zuckerberg the founder can accomplish more than any mere manager.

This is where I come back to Jobs: it’s hard to remember now, but the Apple founder had some very rough edges; his exile from Apple was terrible for the company, but good for Jobs’ maturation into an executive with a founder’s vision and drive that could bring Apple to greater heights than ever before. Zuckerberg doesn’t have the luxury of a decade in the wilderness, but he has certainly undergone a trial by fire; Meta’s ultimate success, or lack thereof, will answer the question if that is enough.

You can read an interview I conducted with Zuckerberg about Facebook’s plan for the metaverse here.