Regretful Accelerationism

Ready Player One, the book that was issued to every new Oculus employee once upon a time, describes its world in a way that was perhaps edgy in 2011 but seems rather cliché today:

“You’re probably wondering what happened before you got here. An awful lot of stuff, actually. Once we evolved into humans, things got pretty interesting. We figured out how to grow food and domesticate animals so we didn’t have to spend all of our time hunting. Our tribes got much bigger, and we spread across the entire planet like an unstoppable virus. Then, after fighting a bunch of wars with each other over land, resources, and our made-up gods, we eventually got all of our tribes organized into a ‘global civilization.’ But, honestly, it wasn’t all that organized, or civilized, and we continued to fight a lot of wars with each other. But we also figured out how to do science, which helped us develop technology. For a bunch of hairless apes, we’ve actually managed to invent some pretty incredible things. Computers. Medicine. Lasers. Microwave ovens. Artificial hearts. Atomic bombs. We even sent a few guys to the moon and brought them back. We also created a global communications network that lets us all talk to each other, all around the world, all the time. Pretty impressive, right?

“But that’s where the bad news comes in. Our global civilization came at a huge cost. We needed a whole bunch of energy to build it, and we got that energy by burning fossil fuels, which came from dead plants and animals buried deep in the ground. We used up most of this fuel before you got here, and now it’s pretty much all gone. This means that we no longer have enough energy to keep our civilization running like it was before. So we’ve had to cut back. Big-time. We call this the Global Energy Crisis, and it’s been going on for a while now.”

What is striking about this depiction is not the concept of a global energy crisis, or the lack of imagination about alternative energy and the tremendous progress that has been made over the last decade in separating emissions from energy production. Rather, it’s the disconnect between the global communications network and any sort of negative externalities; the former just happened to come about at the same time the real world fell apart.

This is a theme throughout the book, and many other depictions of virtual reality in science fiction; the physical world is a hellscape, while the online world is this oasis (pun intended) of vitality and adventure, and, crucially, one that is programmed and consistent. The central conceit of Ready Player One is that the creator of OASIS (I told you it was a pun!), the virtual world in which most of the story happens, left an easter egg in said world, the discovery of which would mean ownership of the company that made OASIS available to anyone on earth.

I’ve expressed my skepticism of a unitary shared environment previously, in 2021’s Metaverses; in that case I questioned a similar conceit in Snow Crash, the other origin text in terms of the Metaverse.

In this way the Metaverse is actually a unifying force for Stephenson’s dystopia: there is only one virtual world sitting beyond a real world that is fractured between independent entities. There are connections in the real world — roads and helicopters and airplanes exist — but those connections are subject to tolls and gatekeepers, in contrast to the interoperability and freedom of the Metaverse.

In other words, I think that Stephenson got the future exactly backwards: in our world the benevolent monopolist is the reality of atoms. Sure, we can construct borders and private clubs, just as the Metaverse has private property, but interoperability and a shared economy are inescapable in the real world; physical constraints are community. It is on the Internet, where anything is possible, that walled gardens flourish. Facebook has total control of Facebook, Apple of iOS, Google of Android, and so on down the stack. Yes, HTTP and SMTP and other protocols still exist, but it’s not an accident those were developed before anyone thought there was money to be made online; today’s APIs have commercial intent built-in from first principles.

I think this was directionally correct: the real world is one place, and the online world many, but what I didn’t appreciate even as recently as two years ago was that the online world as I knew it then was subject to more constraints than I realized; it is only as those constraints disappear that the idea of the Internet as a place of refuge seems ever more dubious.

Why Web Pages Suck

In 2016 I set out to answer a simple question: Why Web Pages Suck.

From the publishers’ perspective, the fixed cost of a printing press not only provided a moat from competition, it also meant that publishers displayed ads on their terms. To use the Conservation of Attractive Profits model that I discussed last week, publishers were exceptionally profitable for having integrated content and ads in this way:

[Diagram: the print value chain, with content and ads integrated by publishers]

As the description of programmatic advertising should make clear, though, that is no longer the case. Ad spots are effectively black boxes from the publisher perspective, and direct windows to the user from the ad network’s perspective. This has both modularized content and moved ad networks closer to users:

[Diagram: the Internet value chain, with content modularized and ad networks closer to users]

Here’s the simple truth: if you’re competing in a modular market, as today’s publishers are, profits are slim at best, and you generally take what you can get from a revenue perspective. To put it another way, publishers today have about as much bargaining power as do Uber drivers, and we’ve seen how that has gone.

The very next week I would write Aggregation Theory:

The last several articles on Stratechery have formed an unintentional series:

  • Airbnb and the Internet Revolution described how Airbnb and the sharing economy have commoditized trust, enabling a new business model based on aggregating resources and managing the customer relationship
  • Netflix and the Conservation of Attractive Profits placed this commodification/aggregation concept into Clay Christensen’s Conservation of Attractive Profits framework, which states that profits are earned by the integrated provider in a value chain, and that profits shift when another company successfully modularizes the incumbent and integrates another part of the value chain
  • Why Web Pages Suck was primarily about the effect of programmatic advertising on web page performance, but in the conclusion I noted that the way in which ad networks were commoditizing publishers also fit the “Conservation of Attractive Profits” framework

In retrospect, there is a clear thread. In fact, I believe this thread runs through nearly every post on Stratechery, not just the last three. I am calling that thread Aggregation Theory.

In a world of abundance like the web, economic power came from marshaling demand, and that demand was marshaled by being better at discovery, not distribution (after all, distribution was free; that’s why there was so much abundance in the first place!). Entities that controlled demand, then, had power in the value chain, which meant they were best placed to integrate into advertising in particular, leaving everyone else in the value chain as modularized pieces without any meaningful pricing power.

In this world Google and Facebook were the biggest winners — I called them Super Aggregators — but they were different when it came to the suppliers of their respective value chains. Facebook’s content was user-generated, and exclusive to Facebook. What was so compelling about this economically is that user-generated content is free, which meant that Facebook was more fully integrated than Google, which relied on the rest of the web to provide the content that made its search engine useful.

Free AI

The web does, of course, include lots of free content, much of which accrues to Google’s benefit: Wikipedia, Reddit, blogs, etc., are themselves user-generated content but on the open web. Lots of other free content, though, was monetized by ads, produced by publications employing professional writers. This was an inherently difficult business, though, thanks to that free distribution: that meant there was infinite competition, which meant the only route to profitability was continuing to cut costs.

What, then, should we have expected to happen once the world gained the means of generating human-level content at zero marginal cost? From Futurism:

There was nothing in Drew Ortiz’s author biography at Sports Illustrated to suggest that he was anything other than human. “Drew has spent much of his life outdoors, and is excited to guide you through his never-ending list of the best products to keep you from falling to the perils of nature,” it read. “Nowadays, there is rarely a weekend that goes by where Drew isn’t out camping, hiking, or just back on his parents’ farm.”

The only problem? Outside of Sports Illustrated, Drew Ortiz doesn’t seem to exist. He has no social media presence and no publishing history. And even more strangely, his profile photo on Sports Illustrated is for sale on a website that sells AI-generated headshots, where he’s described as “neutral white young-adult male with short brown hair and blue eyes.”…

According to a second person involved in the creation of the Sports Illustrated content who also asked to be kept anonymous, that’s because it’s not just the authors’ headshots that are AI-generated. At least some of the articles themselves, they said, were churned out using AI as well.

What makes this article particularly poignant is the property involved: Sports Illustrated was an icon of the print era; it transitioned to the web somewhat, in a partnership with CNN, but over the last several years it has laid off staff and passed from hand to increasingly unethical hand. Unethical, that is, if you prioritize journalistic integrity over making money: it’s hard to escape the sense that the two are irreconcilable. Journalism costs money, which means an uncompetitive cost structure, and Sports Illustrated isn’t the only one. Continuing from Futurism:

As powerful generative AI tools have debuted over the past few years, many publishers have quickly attempted to use the tech to churn out monetizable content…We caught CNET and Bankrate, both owned by Red Ventures, publishing barely-disclosed AI content that was filled with factual mistakes and even plagiarism; in the ensuing storm of criticism, CNET issued corrections to more than half its AI-generated articles. G/O Media also published AI-generated material on its portfolio of sites, resulting in embarrassing bungles at Gizmodo and The A.V. Club. We caught BuzzFeed publishing slapdash AI-generated travel guides. And USA Today and other Gannett newspapers were busted publishing hilariously garbled AI-generated sports roundups that one of the company’s own sports journalists described as “embarrassing,” saying they “shouldn’t ever” have been published.

These, of course, are the companies that were caught; in time, the AI will become good enough that no one will know what is real and what is not.

Google’s Missing Constraints

This wasn’t the only AI-generated content story of the week, though; this thread on X went viral as well:

That Google faces a challenge with SEO spam is obvious to anyone who uses the search engine. What is notable about this fight is that it is, from a certain perspective, simply too much of a good thing. Google is so important that every site on the Internet works to optimize itself for Google search; in other words, Google’s suppliers are incentivized to work for Google.

That was all fine and good in the early 2000s when Google came to prominence: content on the Internet was, yes, freely distributed, but it still required significant marginal costs to produce (in time if not in money). What changed is that advertising became sufficiently lucrative that it was worth incurring that marginal cost in a systematic way to get more traffic; thus began the cat-and-mouse game of SEO and Google algorithm updates (which, I should note, have already demoted the site featured in that thread).

AI-generated content, though, will likely push the situation past the breaking point: yes, the amount of money that can be made from advertising by any individual page is continually decreasing, but if pages can be produced for no marginal cost then the number of pages that will be produced is effectively infinite.

This is one update to my thinking: when I wrote AI and the Big Five at the beginning of this year, I expressed the most concern about Google not because I doubted their AI chops, but rather because a chatbot approach seemed to threaten their advertising model:

Google has long been a leader in using machine learning to make its search and other consumer-facing products better (and has offered that technology as a service through Google Cloud). Search, though, has always depended on humans as the ultimate arbiter: Google will provide links, but it is the user that decides which one is the correct one by clicking on it. This extended to ads: Google’s offering was revolutionary because instead of charging advertisers for impressions — the value of which was very difficult to ascertain, particularly 20 years ago — it charged for clicks; the very people the advertisers were trying to reach would decide whether their ads were good enough.

If there aren’t links to click — because you simply got the answer — then the ads won’t be worth as much; what is even worse is if the links are all unreliable. In this view generative AI answers are actually a way out for Google in the long run: if it can no longer trust the web for supply, it will need to integrate backwards into supply of its own.

Social Media Inhumanity

That, then, is the first constraint on the online world that is slipping away: the marginal cost of creating content. The second has been happening longer, and is represented by TikTok and its assault on Meta’s seemingly impregnable dominance of social media: user-generated content used to be constrained by who you knew, but TikTok (and YouTube) simply surfaced the most compelling content across the entire network. I’ve already written about the potential intersection of these two trends: custom content generated specifically for every user.

There are already examples of AI influencers and Meta itself is experimenting with AI celebrities; one of the fastest growing AI startups, meanwhile, is reportedly character.ai, which lets you interact with your own AI friend. Just last week Roblox CEO David Baszucki spoke favorably to me about the possibility of interactive NPCs helping elevate Roblox from just a gaming platform to a “3D communications platform.”

Still, as Baszucki made clear, the goal remains actual social networking: surely that will always be better than interacting with an AI! Or will it? It seems to me that perhaps the most important constraint on the web — actually interacting with people as if they are, well, people — disappeared a long time ago. It doesn’t take much time or prominence on X or any other social networking platform to realize that it is nothing like real life, and is only tolerable if you view the entire enterprise as something to be laughed at and, still, occasionally, learned from.

I do strongly believe that an essential quality for success, both on the Internet and off, is to not take social media too seriously. Humans simply weren’t meant to get feedback from thousands or sometimes millions of anonymous strangers all at once; the most successful creators I know are the most wary of getting sucked into the online maelstrom. One wonders — hopes — that we can someday reach a similar conclusion collectively, and start treating X in particular more like the comments section and less like an assignment editor.

The Current Thing

In this the demise of the ad-supported Internet may be a blessing: the most sustainable model for media to date is subscriptions, and subscriptions mean answering to your subscribers, not social media generally. This isn’t perfect — we end up with never-ending niches that demand a particular point of view from their publications of choice — but it is at least a point of view that is something other than the amorphous rage and current thing-ism that dominates the web. I wrote in an Article about The Current Thing in 2022:

This dynamic is exactly what the meme highlights: sure, the Internet makes possible a wide range of viewpoints — you can absolutely find critics of Black Lives Matter, COVID policies, or pro-Ukraine policies — but the Internet, thanks to its lack of friction and instant feedback loops, also makes nearly every position but the dominant one untenable. If everyone believes one thing, the costs of believing something else increase dramatically, making the consensus opinion the only viable option; this is the same dynamic in which publishers become dependent on Google or Facebook, or retailers on Amazon, just because that is where money can be made.

Again, to be very clear, that does not mean the opinion is wrong; as I noted, I think the resonance of this meme is orthogonal to the rightness of the position it is critiquing, and is instead concerned with the sense that there is something unique about the depth of sentiment surrounding issues that don’t necessarily apply in any real-life way to the people feeling said sentiment.

There was a “current thing” in Ready Player One: the easter egg, and the protagonist’s progress in finding it stirred up worldwide interest. Again, though, this portrayal doesn’t match reality: we don’t have a unitary online world designed by a master architect driving offline interest; we have a churning mass of users absent their humanity coalescing around Schelling points that are in many respects incidental to the mass hysteria they produce. The result is out of anyone’s control.

To put it more bluntly, despite the fact my personal and professional life are centered on — and blessed by — the Internet, I’m increasingly skeptical that it can be, as it was in Ready Player One, portrayed as a distinct development from a world increasingly in turmoil. Correlation may not be causation, but sometimes it absolutely is.

In this I do, with reluctance, adopt an accelerationist view of progress; call it r/acc: regretful accelerationism. I suspect we humans do better with constraints; the Internet stripped away the constraint of physical distribution, and now AI is removing the constraint of needing to actually produce content. That this is spoiling the Internet is perhaps the best hope for finding our way back to what is real. Let the virtual world be one of customized content for every individual, with the assumption it is all made-up; some may lose themselves to the algorithm and AI friends, but perhaps more will realize that the only way to survive online is to pay it increasingly little heed.

OpenAI’s Misalignment and Microsoft’s Gain

I have, as you might expect, authored several versions of this Article, both in my head and on the page, as the most extraordinary weekend of my career has unfolded. To briefly summarize:

  • On Friday, then-CEO Sam Altman was fired from OpenAI by the board that governs the non-profit; then-President Greg Brockman was removed from the board and subsequently resigned.
  • Over the weekend rumors surged that Altman was negotiating his return, only for OpenAI to hire former Twitch CEO Emmett Shear as CEO.
  • Finally, late Sunday night, Satya Nadella announced via tweet that Altman and Brockman, “together with colleagues”, would be joining Microsoft.

This is, quite obviously, a phenomenal outcome for Microsoft. The company already has a perpetual license to all OpenAI IP (short of artificial general intelligence), including source code and model weights; the question was whether it would have the talent to exploit that IP if OpenAI suffered the sort of talent drain that was threatened upon Altman and Brockman’s removal. Indeed it will, as a good portion of that talent seems likely to flow to Microsoft; you can make the case that Microsoft just acquired OpenAI for $0 and zero risk of an antitrust lawsuit.1

Microsoft’s gain, meanwhile, is OpenAI’s loss: OpenAI is dependent on the Redmond-based company for both money and compute, and the work its employees will do on AI will either be Microsoft’s by virtue of that perpetual license, or Microsoft’s directly because said employees joined Altman’s team. OpenAI’s trump card is ChatGPT, which is well on its way to achieving the holy grail of tech — an at-scale consumer platform — but if the reporting this weekend is to be believed, OpenAI’s board may have already had second thoughts about the incentives ChatGPT placed on the company (more on this below).

The biggest loss of all, though, is a necessary one: the myth that anything but a for-profit corporation is the right way to organize a company.

OpenAI’s Non-Profit Model

OpenAI was founded in 2015 as a “non-profit artificial intelligence research company.” From the initial blog post:

OpenAI is a non-profit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. Since our research is free from financial obligations, we can better focus on a positive human impact. We believe AI should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible. The outcome of this venture is uncertain and the work is difficult, but we believe the goal and the structure are right. We hope this is what matters most to the best in the field.

I was pretty cynical about the motivations of OpenAI’s founders, at least Altman and Elon Musk; I wrote in a Daily Update:

Elon Musk and Sam Altman, who head organizations (Tesla and YCombinator, respectively) that look a lot like the two examples I just described of companies threatened by Google and Facebook’s data advantage, have done exactly that with OpenAI, with the added incentive of making the entire thing a non-profit; I say “incentive” because being a non-profit is almost certainly a lot less about being altruistic and a lot more about the line I highlighted at the beginning: “We hope this is what matters most to the best in the field.” In other words, OpenAI may not have the best data, but at least it has a mission structure that may help idealist researchers sleep better at night. That OpenAI may help balance the playing field for Tesla and YCombinator is, I guess we’re supposed to believe, a happy coincidence.

Whatever Altman and Musk’s motivations, the decision to make OpenAI a non-profit wasn’t just talk: the company is a 501(c)(3); you can view their annual IRS filings here. The first question on Form 990 asks the organization to “Briefly describe the organization’s mission or most significant activities”; the first filing in 2016 stated:

OpenAIs goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. We think that artificial intelligence technology will help shape the 21st century, and we want to help the world build safe AI technology and ensure that AI’s benefits are as widely and evenly distributed as possible. Were trying to build AI as part of a larger community, and we want to openly share our plans and capabilities along the way.

Two years later, and the commitment to “openly share our plans and capabilities along the way” was gone; three years after that and the goal of “advanc[ing] digital intelligence” was replaced by “build[ing] general-purpose artificial intelligence”.

In 2018 Musk, according to a Semafor report earlier this year, attempted to take over the company, but was rebuffed; he left the board and, more critically, stopped paying for OpenAI’s operations. That led to the second critical piece of background: faced with the need to pay for massive amounts of compute power, Altman, now firmly in charge of OpenAI, created OpenAI Global, LLC, a capped profit company with Microsoft as minority owner. This image of OpenAI’s current structure is from their website:

OpenAI's corporate structure

OpenAI Global could raise money and, critically to its investors, make it, but it still operated under the auspices of the non-profit and its mission; OpenAI Global’s operating agreement states:

The Company exists to advance OpenAI, Inc.’s mission of ensuring that safe artificial general intelligence is developed and benefits all of humanity. The Company’s duty to this mission and the principles advanced in the OpenAI, Inc. Charter take precedence over any obligation to generate a profit. The Company may never make a profit, and the Company is under no obligation to do so. The Company is free to re-invest any or all of the Company’s cash flow into research and development activities and/or related expenses without any obligation to the Members.

Microsoft, despite this constraint on OpenAI Global, was not only an investor, but also a customer, incorporating OpenAI into all of its products.

ChatGPT Tribes

The third critical piece of background is the most well-known, and what has driven OpenAI’s ambitions to new heights: ChatGPT was released at the end of November 2022, and it has taken the world by storm. Today ChatGPT has over 100 million weekly users and over $1 billion in revenue; it has also fundamentally altered the conversation about AI for nearly every major company and government.

What was most compelling to me, though, was the possibility I noted above, in which ChatGPT becomes the foundation of a new major consumer tech company, the most valuable and most difficult kind of company to build. I wrote earlier this year in The Accidental Consumer Tech Company:

When it comes to meaningful consumer tech companies, the product is actually the most important. The key to consumer products is efficient customer acquisition, which means word-of-mouth and/or network effects; ChatGPT doesn’t really have the latter (yes, it gets feedback), but it has an astronomical amount of the former. Indeed, the product that ChatGPT’s emergence most reminds me of is Google: it simply was better than anything else on the market, which meant it didn’t matter that it came from a couple of university students (the origin stories are not dissimilar!). Moreover, just like Google — and in opposition to Zuckerberg’s obsession with hardware — ChatGPT is so good people find a way to use it. There isn’t even an app! And yet there is now, a mere four months in, a platform.

The platform I was referring to was ChatGPT plugins; it’s a compelling concept with a UI that didn’t quite work, and it was only eight months later at OpenAI’s first developer day that the company announced GPTs, their second take at being a platform. Meanwhile, Altman was reportedly exploring new companies outside of the OpenAI purview to build chips and hardware, apparently without the board’s knowledge. Some combination of these factors, or perhaps something else not yet reported, was the final straw for the board, which, led by Chief Scientist Ilya Sutskever, deposed Altman over the weekend. The Atlantic reported:

Altman’s dismissal by OpenAI’s board on Friday was the culmination of a power struggle between the company’s two ideological extremes—one group born from Silicon Valley techno optimism, energized by rapid commercialization; the other steeped in fears that AI represents an existential risk to humanity and must be controlled with extreme caution. For years, the two sides managed to coexist, with some bumps along the way.

This tenuous equilibrium broke one year ago almost to the day, according to current and former employees, thanks to the release of the very thing that brought OpenAI to global prominence: ChatGPT. From the outside, ChatGPT looked like one of the most successful product launches of all time. It grew faster than any other consumer app in history, and it seemed to single-handedly redefine how millions of people understood the threat — and promise — of automation. But it sent OpenAI in polar-opposite directions, widening and worsening the already present ideological rifts. ChatGPT supercharged the race to create products for profit as it simultaneously heaped unprecedented pressure on the company’s infrastructure and on the employees focused on assessing and mitigating the technology’s risks. This strained the already tense relationship between OpenAI’s factions — which Altman referred to, in a 2019 staff email, as “tribes.”

Altman’s tribe — the one that was making OpenAI into much more of a traditional tech company — is certainly the one that is more familiar to people in tech, including myself. I even had a paragraph in my Article about the developer day keynote that remarked on OpenAI’s transition, which I unfortunately edited out. Here is what I wrote:

It was around this time that I started to, once again, bemoan OpenAI’s bizarre corporate structure. As a long-time Silicon Valley observer it is enjoyable watching OpenAI follow the traditional startup path: the company is clearly in the rapid expansion stage where product managers are suddenly considered useful, as they occupy that sweet spot of finding and delivering low-hanging fruit for an entity that doesn’t yet have the time or moat to tolerate kingdom building and feature creep.

What gives me pause is that the goal is not an IPO, retiring to a yacht, and giving money to causes that do a better job of soothing the guilt of being fabulously rich than actually making the world a better place. There is something about making money and answering to shareholders that holds the more messianic impulses in check; when I hear that Altman doesn’t own any equity in OpenAI that makes me more nervous than relieved. Or maybe I’m just biased because I won’t have S-1s or 10-Ks to analyze.

Obviously I regret the edit, but then again, I didn’t realize how prescient my underlying nervousness about OpenAI’s structure would prove to be, largely because I clearly wasn’t worried enough.

Microsoft vs. the Board

Much of the discussion on tech Twitter over the weekend has been shock that a board would incinerate so much value: first, Altman is one of the Valley’s most-connected executives and a prolific fundraiser and dealmaker; second, several OpenAI employees have already resigned, and more are expected to follow in the coming days. OpenAI may have had two tribes previously; it’s reasonable to assume that going forward it will only have one, led by a new CEO in Shear, who puts the probability of AI doom at between 5 and 50 percent and has advocated a significant slowdown in development.

Here’s the reality of the matter, though: whether or not you agree with the Sutskever/Shear tribe, the board’s charter and responsibility is not to make money. This is not a for-profit corporation with a fiduciary duty to its shareholders; indeed, as I laid out above, OpenAI’s charter specifically states that it is “unconstrained by a need to generate financial return”. From that perspective the board is in fact doing its job, as counterintuitive as that may seem: to the extent the board believes that Altman and his tribe were not “build[ing] general-purpose artificial intelligence that benefits humanity” it is empowered to fire him; they do, and so they did.

This gets at the irony in my concern about the company’s non-profit status: I was worried about Altman being unconstrained by the need to make money or the danger of having someone in charge without a financial stake in the outcome, when in fact it was those same factors that cost him his job. More broadly, my criticism was insufficiently expansive because philosophical concerns about unconstrained power pale — at least in the case of business analysis, Stratechery’s core competency — in the face of how much this structure made OpenAI a fundamentally unstable entity to make deals with. This refers, of course, to Microsoft, and as someone who has been a proponent of Satya Nadella’s leadership, I have to admit that my analysis of the company’s partnership with OpenAI was lacking.

Microsoft had, to its tremendous short-term benefit, bet a substantial portion of its future on its OpenAI partnership. This goes beyond money, which Microsoft has plenty of, and much of which it hasn’t yet paid out (or granted in terms of Azure credits); OpenAI’s technology is built into a whole host of Microsoft’s products, from Windows to Office to ones most people have never heard of (I see you Dynamics CRM nerds!). Microsoft is also investing massively in infrastructure that is custom-built for OpenAI — Nadella has been touting the financial advantages of specialization — and has just released a custom chip that was tuned for running OpenAI models. That this level of commitment was made to an entity not motivated by profit, and thus unbeholden to Microsoft’s status as an investor and revenue driver, now seems absurd.

Or, rather, it did, until Nadella tweeted the following at 11:53pm Pacific:

Satya Nadella's tweet announcing the hiring of Sam Altman

The counter to the argument I just put forth about Microsoft’s poor decision to partner with a non-profit is the reality of AI development, specifically the need for massive amounts of compute. It was the need for this compute that led OpenAI, which had barred itself from making a traditional venture capital deal, to surrender its IP to Microsoft in exchange for Azure credits. In other words, while the board may have had the charter of a non-profit, and an admirable willingness to act on and stick to their convictions, they ultimately had no leverage because they weren’t a for-profit company with the capital to be truly independent.

The end result is that an entity committed by charter to the safe development of AI has basically handed off all of its work and, probably soon enough, a sizable portion of its talent, to one of the largest for-profit entities on earth. Or, in an AI-relevant framing, the structure of OpenAI was ultimately misaligned with fulfilling its stated mission. Trying to organize incentives by fiat simply doesn’t account for all of the possible scenarios and variables at play in a dynamic situation; harnessing self-interest has, for good reason, long been the best way to align individuals and companies.

Altman Questions

There is one other angle of the board’s actions that ought to be acknowledged: it very well could have been for cause. I endorse Eric Newcomer’s thoughtful column on his eponymous Substack:

In its statement, the board said it had concluded Altman, “was not consistently candid in his communications with the board.” We shouldn’t let poor public messaging blind us from the fact that Altman has lost confidence of the board that was supposed to legitimize OpenAI’s integrity…

My understanding is that some members of the board genuinely felt Altman was dishonest and unreliable in his communications with them, sources tell me. Some members of the board believe that they couldn’t oversee the company because they couldn’t believe what Altman was saying. And yet, the existence of a nonprofit board was a key justification for OpenAI’s supposed trustworthiness.

I don’t think any of us really knows enough right now to urge the board to make a hasty decision. I want you to consider a couple things here:

Newcomer notes the board’s charter that I referenced above, the fact that Anthropic’s founders felt it necessary to leave OpenAI in the first place, Musk’s antipathy towards Altman, and Altman’s still somewhat murky and unexplained exit from YCombinator. Newcomer concludes:

I’m sure that writing this cautionary letter will not make me popular in many corners of Silicon Valley. But I think we should just slow down and get more facts. If OpenAI leads us to artificial general intelligence or anywhere close, we will want to have taken the time to think for more than a weekend about who we want to take us there…

Altman had been given a lot of power, the cloak of a nonprofit, and a glowing public profile that exceeds his more mixed private reputation. He lost the trust of his board. We should take that seriously.

Perhaps I am feeling a bit humbled by the aforementioned miss in my Microsoft analysis — much less my shock at the late night reversal in fortunes — but I will note that I have staked my claim in opposition to AI doomers and the call for regulation; to that end, I am wary of a narrative that confirms my priors about what drove the events of this weekend. And, I would note, I remain concerned about the philosophical question of executives who seek to control incredible capabilities without skin in the game.

To that end, a startup ecosystem fixture like Altman going to work for Microsoft is certainly surprising: that Microsoft is the one place that retains access to OpenAI’s IP, and can combine that with effectively unlimited funding and GPU access, adds credence to the narrative that power over AI is Altman’s primary motivation.

The Altered Landscape

What is clear is that Altman and Microsoft are in the driver’s seat of AI. Microsoft has the IP and will soon have the team to combine with its cash and infrastructure, while shedding the coordination problems inherent in its previous partnership with OpenAI (and, of course, they are still partners with OpenAI!).

I’ve also argued for a while that it made more sense for external companies to build on Azure’s API rather than OpenAI’s; Microsoft is a development platform by nature, whereas OpenAI is fun and exciting but likely to clone your functionality or deprecate old APIs. Now the choice is even more obvious. And, from the Microsoft side, this removes a major reason for enterprise customers, already accustomed to evaluating long-term risks, to avoid Azure because of the OpenAI dependency; Microsoft now owns the full stack.

Google, meanwhile, might need to make some significant changes; the company’s latest model, Gemini, has been delayed, and its Cloud business has been slowing as spending shifts to AI, the exact opposite outcome the company had hoped for. How long will the company’s founders and shareholders tolerate the perception that the company is moving too slow, particularly in comparison to the nimbleness and willingness to take risks demonstrated by Microsoft?

That leaves Anthropic, which looked like a big winner 12 hours ago, and now feels increasingly tenuous as a standalone entity. The company has struck partnership deals with both Google and Amazon, but it is now facing a competitor in Microsoft with effectively unlimited funds and GPU access; it’s hard to escape the sense that it belongs as part of AWS (and yes, B corps can be acquired, with considerably more ease than a non-profit).

Ultimately, though, one could make the argument that not much has changed at all: it has been apparent for a while that AI was, at least in the short to medium-term, a sustaining innovation, not a disruptive one, which is to say it would primarily benefit and be deployed by the biggest companies. The costs are so high that it’s hard for anyone else to get the money, and that’s even before you consider questions around channel and customer acquisition. If there were a company poised to join the ranks of the Big Five it was OpenAI, thanks to ChatGPT, but that seems less likely now (but not impossible). This, in the end, was Nadella’s insight: the key to winning if you are big is not to invent like a startup, but to leverage your size to acquire or fast-follow them; all the better if you can do it for the low price of $0.


  1. Microsoft’s original agreement with OpenAI also barred Microsoft from pursuing AGI based on OpenAI tech on its own; my understanding is that this clause was removed in the most recent agreement 

The OpenAI Keynote

In 2013, when I started Stratechery, there was no bigger event than the launch of the new iPhone; its only rival was Google I/O, which is when the newest version of Android was unveiled (hardware always breaks the tie, including with Apple’s iOS introductions at WWDC). It wasn’t just that smartphones were relatively new and still adding critical features, but that the strategic decisions and ultimate fates of the platforms were still an open question. More than that, the entire future of the tech industry was clearly tied up in said platforms and their corresponding operating systems and devices; how could keynotes not be a big deal?

Fast forward a decade and the tech keynote has diminished in importance and, in the case of Apple, disappeared completely, replaced by a pre-recorded marketing video. I want to be mad about it, but it makes sense: an iPhone introduction has been diminished not by Apple’s presentation, but rather Apple’s presentations reflect the reality that the most important questions around an iPhone are about marketing tactics. How do you segment the iPhone line? How do you price? What sort of brand affinity are you seeking to build? There, I just summarized the iPhone 15 introduction, and the reality that the smartphone era — The End of the Beginning — is over as far as strategic considerations are concerned. iOS and Android are a given, but what is next, and as yet unknown?

The answer is, clearly, AI, but even there, the energy seems muted: Apple hasn’t talked about generative AI other than to assure investors on earnings calls that they are working on it; Google I/O was of course about AI, but mostly in the context of Google’s own products — few of which have actually shipped — and my Article at the time was quickly side-tracked into philosophical discussions about both the nature of AI innovation (sustaining versus disruptive), the question of tech revolution versus alignment, and a preview of the coming battles of regulation that arrived with last week’s Executive Order on AI.

Meta’s Connect keynote was much more interesting: not only were AI characters being added to Meta’s social networks, but next year you will be able to take AI with you via Smart Glasses (I told you hardware was interesting!). Nothing, though, seemed to match the energy around yesterday’s OpenAI developer conference, their first ever: there is nothing more interesting in tech than a consumer product with product-market fit. And that, for me, is enough to bring back an old Stratechery standby: the keynote day-after.

Keynote Metaphysics and GPT-4 Turbo

This was, first and foremost, a really good keynote, in the keynote-as-artifact sense. CEO Sam Altman, in a humorous exchange with Microsoft CEO Satya Nadella, promised, “I won’t take too much of your time”; never mind that Nadella was presumably in San Francisco just for this event: in this case he stood in for the audience who witnessed a presentation that was tight, with content that was interesting, leaving them with a desire to learn more.

Altman himself had a good stage presence, with the sort of nervous energy that is only present in a live keynote; the fact he never seemed to know which side of the stage a fellow presenter was coming from was humanizing. Meanwhile, the live demos not only went off without a hitch, but leveraged the fact that they were live: in one instance a presenter instructed a GPT she created to text Altman; he held up his phone to show he got the message. In another a GPT randomly selected five members of the audience to receive $500 in OpenAI API credits, only to then extend it to everyone.

New products and features, meanwhile, were available “today”, not weeks or months in the future, as is increasingly the case for events like I/O or WWDC; everything combined to give a palpable sense of progress and excitement, which, when it comes to AI, is mostly true.

GPT-4 Turbo is an excellent example of what I mean by “mostly”. The API consists of six new features:

  • Increased context length
  • More control, specifically in terms of model inputs and outputs
  • Better knowledge, which both means updating the cut-off date for knowledge about the world to April 2023 and providing the ability for developers to easily add their own knowledge base
  • New modalities, as DALL-E 3, Vision, and TTS (text-to-speech) will all be included in the API, with a new version of Whisper speech recognition coming
  • Customization, including fine-tuning, and custom models (which, Altman warned, won’t be cheap)
  • Higher rate limits

This is, to be clear, still the same foundational model (GPT-4); these features just make the API more usable, in terms of both capabilities and performance. It also speaks to how OpenAI is becoming more of a product company, with iterative enhancements of its core functionality. Yes, the mission still remains AGI (artificial general intelligence), and the core scientific team is almost certainly working on GPT-5, but Altman and team aren’t just tossing models over the wall for the rest of the industry to figure out.
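To make that list concrete, here is a minimal sketch of what a call to the updated API might look like using OpenAI’s Python SDK; the model name and the “more control” parameters (JSON mode, a seed for more reproducible outputs) reflect what was announced at the keynote, but treat the specifics as assumptions that may have since changed:

```python
# A minimal sketch of calling GPT-4 Turbo as announced at the keynote.
# The model name and parameters are assumptions and may differ from the current API.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4-1106-preview",               # GPT-4 Turbo preview
    seed=42,                                  # "more control": best-effort reproducibility
    response_format={"type": "json_object"},  # "more control": JSON mode
    messages=[
        {"role": "system", "content": "You are a helpful assistant. Reply in JSON."},
        {"role": "user", "content": "List three uses for a longer context window."},
    ],
)

print(response.choices[0].message.content)
```

The increased context length works the same way: you simply pass more (or longer) messages, up to the model’s limit.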

Price and Microsoft

The next “feature” was tied into the GPT-4 Turbo introduction: the API is getting cheaper (3x cheaper for input tokens, and 2x cheaper for output tokens). Unsurprisingly this announcement elicited cheers from the developers in attendance; what I cheered as an analyst was Altman’s clear articulation of the company’s priorities: lower price first, speed later. You can certainly debate whether that is the right set of priorities (I think it is, because the biggest need now is for increased experimentation, not optimization), but what I appreciated was the clarity.
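To put rough numbers on what those multiples mean for a developer, here is a back-of-the-envelope comparison; the per-1K-token prices below are illustrative assumptions chosen to be consistent with the stated 3x and 2x reductions, not a current price sheet:

```python
# Back-of-the-envelope cost comparison; prices are illustrative assumptions
# consistent with the announced 3x (input) and 2x (output) reductions.
OLD_INPUT, OLD_OUTPUT = 0.03, 0.06   # assumed GPT-4 $ per 1K tokens
NEW_INPUT, NEW_OUTPUT = 0.01, 0.03   # assumed GPT-4 Turbo $ per 1K tokens

# Hypothetical monthly workload: 50M input tokens, 10M output tokens
input_tokens, output_tokens = 50_000_000, 10_000_000

def monthly_cost(input_price, output_price):
    return (input_tokens / 1000) * input_price + (output_tokens / 1000) * output_price

old = monthly_cost(OLD_INPUT, OLD_OUTPUT)   # $2,100
new = monthly_cost(NEW_INPUT, NEW_OUTPUT)   # $800
print(f"GPT-4: ${old:,.0f}/month; GPT-4 Turbo: ${new:,.0f}/month ({old / new:.1f}x cheaper)")
```

The blended savings depend on each application’s mix of input and output tokens, which is why prompt-heavy use cases — long documents, lots of retrieved context — benefit the most.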

It’s also appropriate that the segment after that was the brief “interview” with Nadella: OpenAI’s pricing is ultimately a function of Microsoft’s ability to build the infrastructure to support that pricing. Nadella actually explained how Microsoft is accomplishing that on the company’s most recent earnings call:

It is true that the approach we have taken is a full stack approach all the way from whether it’s ChatGPT or Bing Chat or all our Copilots, all share the same model. So in some sense, one of the things that we do have is very, very high leverage of the one model that we used, which we trained, and then the one model that we are doing inferencing at scale. And that advantage sort of trickles down all the way to both utilization internally, utilization of third parties, and also over time, you can see the sort of stack optimization all the way to the silicon, because the abstraction layer to which the developers are riding is much higher up than low-level kernels, if you will.

So, therefore, I think there is a fundamental approach we took, which was a technical approach of saying we’ll have Copilots and Copilot stack all available. That doesn’t mean we don’t have people doing training for open source models or proprietary models. We also have a bunch of open source models. We have a bunch of fine-tuning happening, a bunch of RLHF happening. So there’s all kinds of ways people use it. But the thing is, we have scale leverage of one large model that was trained and one large model that’s being used for inference across all our first-party SaaS apps, as well as our API in our Azure AI service…

The lesson learned from the cloud side is — we’re not running a conglomerate of different businesses, it’s all one tech stack up and down Microsoft’s portfolio, and that, I think, is going to be very important because that discipline, given what the spend like — it will look like for this AI transition any business that’s not disciplined about their capital spend accruing across all their businesses could run into trouble.

The fact that Microsoft is benefiting from OpenAI is obvious; what this makes clear is that OpenAI uniquely benefits from Microsoft as well, in a way it would not from another cloud provider: because Microsoft is also a product company investing in the infrastructure to run OpenAI’s models for said products, it can afford to optimize and invest ahead of usage in a way that OpenAI alone, even with the support of another cloud provider, could not. In this case that is paying off in developers needing to pay less, or, ideally, having more latitude to discover use cases that result in them paying far more because usage is exploding.

GPTs and Computers

I mentioned GPTs before; you were probably confused, because this is a name that is either brilliant or a total disaster. Of course you could have said the same about ChatGPT: massive consumer uptake has a way of making arguably poor choices great ones in retrospect, and I can see why OpenAI is seeking to basically brand “GPT” — generative pre-trained transformer — as an OpenAI chatbot.

Regardless, this is how Altman explained GPTs:

GPTs are tailored versions of ChatGPT for a specific purpose. You can build a GPT — a customized version of ChatGPT — for almost anything, with instructions, expanded knowledge, and actions, and then you can publish it for others to use. And because they combine instructions, expanded knowledge, and actions, they can be more helpful to you. They can work better in many contexts, and they can give you better control. They’ll make it easier for you to accomplish all sorts of tasks or just have more fun, and you’ll be able to use them right within ChatGPT. You can, in effect, program a GPT, with language, just by talking to it. It’s easy to customize the behavior so that it fits what you want. This makes building them very accessible, and it gives agency to everyone.

We’re going to show you what GPTs are, how to use them, how to build them, and then we’re going to talk about how they’ll be distributed and discovered. And then after that, for developers, we’re going to show you how to build these agent-like experiences into your own apps.

Altman’s examples included a lesson-planning GPT from Code.org and a natural language vision design GPT from Canva. As Altman noted, the second example might have seemed familiar: Canva had a plugin for ChatGPT, and Altman explained that “we’ve evolved our plugins to be custom actions for GPTs.”

I found the plugin concept fascinating and a useful way to understand both the capabilities and limits of large language models; I wrote in ChatGPT Gets a Computer:

The implication of this approach is that computers are deterministic: if circuit X is open, then the proposition represented by X is true; 1 plus 1 is always 2; clicking “back” on your browser will exit this page. There are, of course, a huge number of abstractions and massive amounts of logic between an individual transistor and any action we might take with a computer — and an effectively infinite number of places for bugs — but the appropriate mental model for a computer is that they do exactly what they are told (indeed, a bug is not the computer making a mistake, but rather a manifestation of the programmer telling the computer to do the wrong thing)…

Large language models, though, with their probabilistic approach, are in many domains shockingly intuitive, and yet can hallucinate and are downright terrible at math; that is why the most compelling plug-in OpenAI launched was from Wolfram|Alpha. Stephen Wolfram explained:

For decades there’s been a dichotomy in thinking about AI between “statistical approaches” of the kind ChatGPT uses, and “symbolic approaches” that are in effect the starting point for Wolfram|Alpha. But now—thanks to the success of ChatGPT—as well as all the work we’ve done in making Wolfram|Alpha understand natural language—there’s finally the opportunity to combine these to make something much stronger than either could ever achieve on their own.

That is the exact combination that happened, which led to the title of that Article:

The fact this works so well is itself a testament to what Assistant AI’s are, and are not: they are not computing as we have previously understood it; they are shockingly human in their way of “thinking” and communicating. And frankly, I would have had a hard time solving those three questions as well — that’s what computers are for! And now ChatGPT has a computer of its own.

I still think the concept was incredibly elegant, but there was just one problem: the user interface was terrible. You had to get a plugin from the “marketplace”, then pre-select it before you began a conversation, and only then would you get workable results after a too-long process where ChatGPT negotiated with the plugin provider in question on the answer.

This new model somewhat alleviates the problem: now, instead of having to select the correct plug-in (and thus restart your chat), you simply go directly to the GPT in question. In other words, if I want to create a poster, I don’t enable the Canva plugin in ChatGPT, I go to Canva GPT in the sidebar. Notice that this doesn’t actually solve the problem of needing to have selected the right tool; what it does do is make the choice more apparent to the user at a more appropriate stage in the process, and that’s no small thing. I also suspect that GPTs will be much faster than plug-ins, given they are integrated from the get-go. Finally, standalone GPTs are a much better fit with the store model that OpenAI is trying to develop.

Still, there is a better way: Altman demoed it.

ChatGPT and the Universal Interface

Before Altman introduced the aforementioned GPTs he talked about improvements to ChatGPT:

Even though this is a developer conference, we can’t help resist making some improvements to ChatGPT. A small one, ChatGPT now uses GPT-4 Turbo, with all of the latest improvements, including the latest cut-off, which we’ll continue to update — that’s all live today. It can now browse the web when it needs to, write and run code, analyze data, generate images, and much more, and we heard your feedback that that model picker was extremely annoying: that is gone, starting today. You will not have to click around a drop-down menu. All of this will just work together. ChatGPT will just know what to use and when you need it. But that’s not the main thing.

You may wonder why I put this section after GPTs, given they were, according to Altman, the main thing: it’s because I think this feature enhancement is actually much more important. As I just noted, GPTs are a somewhat better UI on an elegant plugin concept, in which a probabilistic large language model gets access to a deterministic computer. The best UI, though, is no UI at all, or rather, just one UI, by which I mean “Universal Interface”.

In this case “browsing” or “image generation” are basically plug-ins: they are specialized capabilities that, before today, you had to explicitly invoke; going forward they will just work. ChatGPT will seamlessly switch between text generation, image generation, and web browsing, without the user needing to change context. What is necessary for the plug-in/GPT idea to ultimately take root is for the same capabilities to be extended broadly: if my conversation involved math, ChatGPT should know to use Wolfram|Alpha on its own, without me adding the plug-in or going to a specialized GPT.

I can understand why this capability doesn’t yet exist: the obvious technical challenges of properly exposing capabilities and training the model to know when to invoke those capabilities are a textbook example of Professor Clayton Christensen’s theory of integration and modularity, wherein integration works better when a product isn’t good enough; it is only when a product exceeds expectations that there is room for standardization and modularity. To that end, ChatGPT is only now getting the capability to generate an image without the mode being selected for it: I expect giving it the ability to seek out less obvious tools will be fairly difficult.
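For developers working against the API (as opposed to users inside ChatGPT), the building block for this kind of tool selection is function calling: you describe the tools you have, and the model decides whether and when to invoke them. A minimal sketch, where the `wolfram_query` tool is a hypothetical stand-in for the kind of deterministic computer discussed above:

```python
# A minimal sketch of letting the model choose a tool via function calling.
# "wolfram_query" is a hypothetical backend; the point is that the model,
# not the user, decides whether the question calls for a deterministic tool.
import json
from openai import OpenAI

client = OpenAI()

tools = [{
    "type": "function",
    "function": {
        "name": "wolfram_query",  # hypothetical symbolic-math backend
        "description": "Evaluate a precise math or data query and return an exact answer.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4-1106-preview",
    messages=[{"role": "user", "content": "What is the exact value of 2^100?"}],
    tools=tools,
    tool_choice="auto",  # the model decides whether a tool is needed
)

message = response.choices[0].message
if message.tool_calls:
    call = message.tool_calls[0]
    print("Model chose a tool:", call.function.name, json.loads(call.function.arguments))
else:
    print("Model answered directly:", message.content)
```

The hard part, as the paragraph above suggests, is not this mechanical routing among a handful of tools the developer enumerated up front, but knowing when some less obvious, unenumerated tool would produce a better answer.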

In fact, it’s possible that the entire plug-in/GPT approach ends up being a dead-end; towards the end of the keynote Romain Huet, the head of developer experience at OpenAI, explicitly demonstrated ChatGPT programming a computer. The scenario was splitting the tab for an Airbnb in Paris:

Code Interpreter is now available today in the API as well. That gives the AI the ability to write and execute code on the fly, or even to generate files. So let’s see that in action. If I say here, “Hey, we’ll be 4 friends staying at this Airbnb, what’s my share of it plus my flights?”

Now here what’s happening is that Code Interpreter noticed that it should write some code to answer this query so now it’s computing the number of days in Paris, the number of friends, it’s also doing some exchange rate calculation behind the scenes to get this answer for us. Not the most complex math, but you get the picture: imagine you’re building a very complex finance app that’s counting countless numbers, plotting charts, really any tasks you might tackle with code, then Code Interpreter will work great.

Uhm, what tasks do you not tackle with code? To be fair, Huet is referring to fairly simple math-oriented tasks, not the wholesale recreation of every app on the Internet, but it is interesting to consider for which problems ChatGPT will gain the wisdom to choose the right tool, and for which it will simply brute force a new solution; the history of computing would actually give the latter a higher probability: there are a lot of problems that were solved less with clever algorithms and more with the application of Moore’s Law.
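For what it’s worth, the throwaway script Code Interpreter would write for a query like Huet’s is genuinely trivial; something along these lines, with every figure invented for illustration:

```python
# Hypothetical version of the script Code Interpreter might generate for
# "we'll be 4 friends staying at this Airbnb, what's my share of it plus my flights?"
# All numbers below are invented for illustration.
nightly_rate_eur = 320      # listing price per night (hypothetical)
nights = 5                  # length of the Paris stay (hypothetical)
friends = 4
eur_to_usd = 1.07           # exchange rate assumption
my_flights_usd = 850        # my flights (hypothetical)

airbnb_total_usd = nightly_rate_eur * nights * eur_to_usd
my_share_usd = airbnb_total_usd / friends + my_flights_usd
print(f"My share: ${my_share_usd:,.2f}")
```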

Consumers and Hardware

Speaking of the first year of Stratechery, that is when I first wrote about integration and modularization, in What Clayton Christensen Got Wrong; as the title suggests I didn’t think the theory was universal:

Christensen himself laid out his theory’s primary flaw in the first quote excerpted above (from 2006):

You also see it in aircrafts and software, and medical devices, and over and over.

That is the problem: Consumers don’t buy aircraft, software, or medical devices. Businesses do.

Christensen’s theory is based on examples drawn from buying decisions made by businesses, not consumers. The reason this matters is that the theory of low-end disruption presumes:

  • Buyers are rational
  • Every attribute that matters can be documented and measured
  • Modular providers can become “good enough” on all the attributes that matter to the buyers

All three of the assumptions fail in the consumer market, and this, ultimately, is why Christensen’s theory fails as well. Let me take each one in turn:

To summarize the argument, consumers care about things in ways that are inconsistent with whatever price you might attach to their utility, they prioritize ease-of-use, and they care about the quality of the user experience and are thus especially bothered by the seams inherent in a modular solution. This means that integrated solutions win because nothing is ever “good enough”; as I noted in the context of Amazon, Divine Discontent is Disruption’s Antidote:

Bezos’s letter, though, reveals another advantage of focusing on customers: it makes it impossible to overshoot. When I wrote that piece five years ago, I was thinking of the opportunity provided by a focus on the user experience as if it were an asymptote: one could get ever closer to the ultimate user experience, but never achieve it:

A drawing of The Asymptote Version of the User Experience

In fact, though, consumer expectations are not static: they are, as Bezos memorably states, “divinely discontent”. What is amazing today is table stakes tomorrow, and, perhaps surprisingly, that makes for a tremendous business opportunity: if your company is predicated on delivering the best possible experience for consumers, then your company will never achieve its goal.

A drawing of The Ever-Changing Version of the User Experience

In the case of Amazon, that this unattainable and ever-changing objective is embedded in the company’s culture is, in conjunction with the company’s demonstrated ability to spin up new businesses on the profits of established ones, a sort of perpetual motion machine.

I see no reason why both Articles wouldn’t apply to ChatGPT: while I might make the argument that hallucination is, in a certain light, a feature not a bug, the fact of the matter is that a lot of people use ChatGPT for information despite the fact it has a well-documented flaw when it comes to the truth; that flaw is acceptable, because to the customer ease-of-use is worth the loss of accuracy. Or look at plug-ins: the concept as originally implemented has already been abandoned, because the complexity in the user interface was more detrimental than whatever utility might have been possible. It seems likely this pattern will continue: of course customers will say that they want accuracy and 3rd-party tools; their actions will continue to demonstrate that convenience and ease-of-use matter most.

This has two implications. First, while this may have been OpenAI’s first developer conference, I remain unconvinced that OpenAI is going to ever be a true developer-focused company. I think that was Altman’s plan, but reality in the form of ChatGPT intervened: ChatGPT is the most important consumer-facing product since the iPhone, making OpenAI The Accidental Consumer Tech Company. That, by extension, means that integration will continue to matter more than modularization, which is great for Microsoft’s compute stack and maybe less exciting for developers.

Second, there remains one massive patch of friction in using ChatGPT; from AI, Hardware, and Virtual Reality:

AI is truly something new and revolutionary and capable of being something more than just a homework aid, but I don’t think the existing interfaces are the right ones. Talking to ChatGPT is better than typing, but I still have to launch the app and set the mode; vision is an amazing capability, but it requires even more intent and friction to invoke. I could see a scenario where Meta’s AI is inferior technically to OpenAI, but more useful simply because it comes in a better form factor.

After highlighting some news stories about OpenAI potentially partnering with Jony Ive to build hardware, I concluded:

There are obviously many steps before a potential hardware product, including actually agreeing to build one. And there is, of course, the fact that Apple and Google already make devices everyone carries, with the latter in particular investing heavily in its own AI capabilities; betting on the hardware in market winning the hardware opportunity in AI is the safest bet. That may not be a reason for either OpenAI or Meta to abandon their efforts, though: waging a hardware battle against Google and Apple would be difficult, but it might be even worse to be “just an app” if the full realization of AI’s capabilities depend on fully removing human friction from the process.

This is the implication of a Universal Interface, which ChatGPT is striving to be: it also requires universal access, and that will always be a challenge for any company that is “just an app.” Yes, as I noted, the odds seem long, thanks to Apple and Google’s dominance, but I think there is an outside chance that the paradigm-shifting keynote is only just beginning its comeback.

Attenuating Innovation (AI)

In 2019, a very animated Bill Gates explained to Andrew Ross Sorkin why Microsoft lost mobile:

There’s no doubt that the antitrust lawsuit was bad for Microsoft. We would have been more focused on creating the phone operating system so that instead of using Android today, you would be using Windows Mobile. If it hadn’t been for the antitrust case, Microsoft would have…

You’re convinced?

Oh we were so close. I was just too distracted. I screwed that up because of the distraction. We were just three months too late with a release that Motorola would have used on a phone, so yes, it’s a winner-take-all game, that is for sure. Now nobody here has ever heard of Windows Mobile, but oh well. That’s a few hundred billion here or there.

This opinion is, to use a technical term favored by analysts, bullshit. Windows Mobile wasn’t three months late relative to Android; Windows Mobile launched as the Pocket PC 2000 operating system in, you guessed it, 2000, a full eight years before the first Android device hit the market.

The issue with Windows Mobile was, first and foremost, Gates himself: in his view of the world the Windows-based PC was the center of a user’s computing life, and the phone a satellite; small wonder that Windows Mobile looked and operated like a shrunken-down version of Windows: there was a Start button, and Windows Mobile 2003, the first version to have the “Windows Mobile” name, even had the same Sonoma Valley wallpaper as Windows XP:

A screenshot of Windows Mobile in 2003

If anything, the problem with Windows Mobile is that it was too early: Android, which originally looked like a Blackberry, had the benefit of copying the iPhone; the iPhone, in stark contrast to Windows Mobile, looked nothing like the Mac, despite sharing the same internals. Instead, Steve Jobs and company started with a new interface paradigm — multi-touch — and developed a user interface that was actually suited to a handheld device. Jobs — appropriately! — called it revolutionary.

Fast forward four months from the iPhone introduction, and Jobs and Gates were together on stage for the D5 Conference, and Gates still didn’t get it; when Walt Mossberg asked him about what devices we would be using in five years, Gates still had a Windows device at the center:

I don’t think you’ll have one device. I think you’ll have a full-screen device that you can carry around and you’ll do dramatically more reading off of that. I believe in the tablet form factor. I think you’ll have voice, I think you’ll have ink, I think you’ll have some way of having a hardware keyboard and some settings for that. And then you’ll have the device that fits in your pocket which the whole notion of how much function should you combine in there, there’s navigation computers, there’s media, there’s phone, technology is letting us put more things in there but then again, you really want to tune it so people what they expect. So there’s quite a bit of experimentation in that pocket-sized device. But I think those are natural form factors. We’ll have the evolution of the portable machine, and the evolution of the phone, will both be extremely high volume, complementary, that is if you own one you’re more likely to own the other.

In fact, in five years worldwide smartphone sales would total 700 million units, more than doubling the 348.7 million PCs that shipped that same year; yes, a lot of those smartphone sales went to people who already had PCs, but it was already apparent that for huge swathes of people — including in developed countries — the phone was the only device that you needed.

What is even more fascinating about this conversation, though, is the way in which it illustrated how Jobs and Apple were able to invent the future, while Microsoft utterly missed it.

Mossberg asked:

The core functions of the device form factor formerly known as the cellphone, whatever we want to call it — the pocket device — what would you say the core functions are five years out?

Gates’ answer was redolent of so many experts trying to predict the future: he had some ideas and some inside knowledge of new technology, but no real vision of what might come next:

How quickly all these things that have been somewhat specialized — the navigation device, the digital wallet, the phone, the camera, the video camera — how quickly those all come together, that’s hard to chart out, but eventually you’ll be able to make something that has the capability to do every one of those things. And yet given the small size, you still won’t want to edit your homework or edit a movie on a screen of that size, and so you’ll have something else that lets you do the reading and editing and those things. Now if we could ever get a screen that would just roll out like a scroll, then you might be able to have the device that did everything.

After a back-and-forth about e-ink and projection screens, Mossberg asked Jobs the same question, and his answer was profound:

I don’t know.

The reason I don’t know is because I wouldn’t have thought that there would have been maps on it five years ago. But something comes along, gets really popular, people love it, get used to it, you want it on there. People are inventing things constantly and I think the art of it is balancing what’s on there and what’s not — it’s the editing function.

That right there is the recipe for genuine innovation:

  • Embrace uncertainty and the fact one doesn’t know the future.
  • Understand that people are inventing things — and not just technologies, but also use cases — constantly.
  • Remember that the art comes in editing after the invention, not before.

To be like Gates and Microsoft is to do the opposite: to think that you know the future; to assume you know what technologies and applications are coming; to proscribe what people will do or not do ahead of time. It is a mindset that does not accelerate innovation, but rather attenuates it.

A Cynical Read on AI Alarm

Last week in a Stratechery Interview with Gregory Allen about the chip ban we discussed why Washington D.C. suddenly had so much urgency about AI. The first reason was of course ChatGPT; it was the second, though, that set off alarm bells in my head. Here’s Allen:

The other thing that’s happened that I do think is important just for folks to understand is, that Center for AI Safety letter that came out, that was signed by Sam Altman, that was signed by a bunch of other folks that said, “The risks of AI, including the risks of human extinction, should be viewed in the same light as nuclear weapons and pandemics.” The list of signatories to that letter was quite illustrious and quite long, and it’s really difficult to overstate the impact that that letter had on Washington, D. C. When you have the CEO of all these companies…when you get that kind of roster saying, “When you think of my technology, think of nuclear weapons,” you definitely get Washington’s attention.

It turns out you get more than that: on Monday the Biden administration released an Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. This Executive Order goes far beyond setting up a commission or study about AI, a field that is obviously still under rapid development; instead it goes straight to proscription.

Before I get to the executive order, though, I want to go back to Gates: that video at the top, where he blamed the Department of Justice for Microsoft having missed mobile, was the first thing I thought of during my interview with Allen. The fact of the matter is that Gates is the single most unreliable narrator about why Microsoft missed mobile, precisely because he was so intimately involved in the effort.

By the time that interview happened in 2019, it was obvious to everyone that Microsoft had utterly failed in mobile, and that it cost the company billions of dollars along the way. It is exceptionally difficult, particularly for someone as intelligent and successful as Gates, to admit the obvious truth: Microsoft missed mobile because Microsoft approached the space with the entirely wrong paradigm in mind. Or, to be more blunt, Gates got it wrong. It is much easier to blame someone else than to face that failure, particularly when the federal government is sitting right there!

In short, it is always necessary to carefully examine the motivations of a self-interested actor, and that certainly applies to the letter Allen referenced.


To rewind just a bit, last January I wrote AI and the Big Five, which posited that the initial wave of generative AI would largely benefit the dominant tech companies. Apple’s strategy was unclear, but it controlled the devices via which AI would be accessed, and had the potential to benefit even more if AI could be run locally. Amazon had AWS, which held much of the data over which companies might wish to apply AI, but also lacked its own foundational models. Google likely had the greatest capabilities, but also the greatest business model challenges. Meta controlled the apps through which consumers might be most likely to encounter AI generated content. Microsoft, meanwhile, thanks to its partnership with OpenAI, was the best placed to ride the initial wave generated by ChatGPT.

Nine months later and the Article holds up well: Apple is releasing ever more powerful devices, but still lacks a clear strategy; Amazon spent its last earnings call trying to convince investors that AI applications would come to their data, and talking up its partnership with Anthropic, OpenAI’s biggest competitor; Google has demonstrated great technology but has been slow to ship; Meta is pushing ahead with generative AI in its apps; and Microsoft is actually registering meaningful financial impact from its OpenAI partnership.

With this as context, it’s interesting to consider who signed that letter Allen referred to, which stated:

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

There are 30 signatories from OpenAI, including the aforementioned CEO Sam Altman. There are 15 signatories from Anthropic, including CEO Dario Amodei. There are seven signatories from Microsoft, including CTO Kevin Scott. There are 81 signatories from Google, including Google DeepMind CEO Demis Hassabis. There are none from Apple or Amazon, and two low-level employees from Meta.

What is striking about this tally is the extent to which the totals and prominence align with the respective companies’ current positions in the market. OpenAI has the lead, at least in terms of consumer and developer mindshare, and the company is deriving real revenue from ChatGPT; Anthropic is second, and has signed deals with both Google and Amazon. Google has great products and an internal paralysis around shipping them for business model reasons; urging caution is very much in their interest. Microsoft is in the middle: it is making money from AI, but it doesn’t control its own models; Apple and Amazon are both waiting for the market to come to them.

In this ultra-cynical analysis the biggest surprise is probably Meta: the company has its own models, but no one of prominence has signed. These models, though, have been gradually open-sourced: Meta is betting on distributed innovation to generate value that will best be captured via the consumer touchpoints the company controls.

The point is this: if you accept the premise that regulation locks in incumbents, then it sure is notable that the early AI winners seem the most invested in generating alarm in Washington, D.C. about AI. This despite the fact that their concern is apparently not sufficiently high to, you know, stop their work. No, they are the responsible ones, the ones who care enough to call for regulation; all the better if concerns about imagined harms kneecap inevitable competitors.

An Executive Order on Attenuating Innovation

There is another quote I thought of this week. It was delivered by Senator Amy Klobuchar in a tweet:

A tweet from Senator Klobuchar celebrating tech regulation

I wrote at the time in an Update:

In 1991 — assuming that the “dawn of the Internet” was the launch of the World Wide Web — the following were the biggest companies by market cap:

  1. $88 billion — General Electric
  2. $80 billion — Exxon Mobil
  3. $62 billion — Walmart
  4. $54 billion — Coca-Cola
  5. $42 billion — Merck

The only tech company in the top 10 was IBM, with a $31 billion market cap. Imagine proposing a bill then targeting companies with greater than $550 billion market caps, knowing that it is nothing but tech companies!

What doesn’t occur to Senator Klobuchar is the possibility that the relationship between the massive increase in wealth, and even greater gain in consumer welfare, produced by tech companies since the “dawn of the Internet” may in fact be related to the fact that there hasn’t been any major regulation (the most important piece of regulation, Section 230, protected the Internet from lawsuits; this legislation invites them). I’m not saying that the lack of regulation is causal, but I am exceptionally skeptical that we would have had more growth with more regulation.

More broadly, tech sure seems like the only area where innovation and building is happening anywhere in the West. This isn’t to deny that the big tech companies are sometimes bad actors, and that platforms in particular do, at least in theory, need regulation. But given the sclerosis present everywhere but tech it sure seems like it would be prudent to be exceptionally skeptical about the prospect of new regulation; I definitely wouldn’t be celebrating it as if it were some sort of overdue accomplishment.

Unfortunately this week’s Executive Order takes the exact opposite approach to AI that we took to technology previously. As Steven Sinofsky explains in this excellent article:

This document is the work of aggregating policy inputs from an extended committee of interested constituencies while also navigating the law — literally what is it that can be done to throttle artificial intelligence legally without passing any new laws that might throttle artificial intelligence. There is no clear owner of this document. There is no leading science consensus or direction that we can discern. It is impossible to separate out the document from the process and approach used to “govern” AI innovation. Govern is quoted because it is the word used in the EO. This is so much less a document of what should be done with the potential of technology than it is a document pushing the limits of what can be done legally to slow innovation.

Much attention has been focused on the Executive Order’s ultra-specific limits on model sizes and attributes (you can exceed those limits if you are registered and approved, a game best played by large established companies like the list I just detailed); unfortunately that is only the beginning of the issues with this Executive Order, but again, I urge you to read Sinofsky’s post.

What is so disappointing to me is how utterly opposed this executive order is to how innovation actually happens:

  • The Biden administration is not embracing uncertainty: it is operating from an assumption that AI is dangerous, despite the fact that many of the listed harms, like learning how to build a bomb or synthesize dangerous chemicals or conduct cyber attacks, are already trivially accomplished on today’s Internet. What is completely lacking is anything other than the briefest of hand waves at AI’s potential upside. The government is Bill Gates, imagining what might be possible, when it ought to be Steve Jobs, humble enough to know it cannot predict the future.
  • The Biden administration is operating with a fundamental lack of trust in the capability of humans to invent new things, not just technologies, but also use cases, many of which will create new jobs. It can envision how the spreadsheet might imperil bookkeepers, but it can’t imagine how that same tool might unlock entire new industries.
  • The Biden administration is arrogantly insisting that it ought to have a role in dictating the outcomes of an innovation that few if any of its members understand, and almost certainly could not invent. There is, to be sure, a role for oversight and regulation, but that is a blunt instrument best applied after the invention, like an editor.

In short, this Executive Order is a lot like Gates’ approach to mobile: rooted in the past, yet arrogant about an unknowable future; proscriptive instead of adaptive; and, worst of all, trivially influenced by motivated reasoning best understood as some of the most cynical attempts at regulatory capture the tech industry has ever seen.

The Sclerotic Shoggoth

I fully endorse Sinofsky’s conclusion:

This approach to regulation is not about innovation despite all the verbiage proclaiming it to be. This Order is about stifling innovation and turning the next platform over to incumbents in the US and far more likely new companies in other countries that did not see it as a priority to halt innovation before it even happens.

I am by no means certain if AI is the next technology platform the likes of which will make the smartphone revolution that has literally benefitted every human on earth look small. I don’t know sitting here today if the AI products just in market less than a year are the next biggest thing ever. They may turn out to be a way stop on the trajectory of innovation. They may turn out to be ingredients that everyone incorporates into existing products. There are so many things that we do not yet know.

What we do know is that we are at the very earliest stages. We simply have no in-market products, and that means no in-market problems, upon which to base such concerns of fear and need to “govern” regulation. Alarmists or “existentialists” say they have enough evidence. If that’s the case then so be it, but then the only way to truly make that case is to embark on the legislative process and use democracy to validate those concerns. I just know that we have plenty of past evidence that every technology has come with its alarmists and concerns and somehow optimism prevailed. Why should the pessimists prevail now?

They should not. We should accelerate innovation, not attenuate it. Innovation — technology, broadly speaking — is the only way to grow the pie, and to solve the problems we face that actually exist in any sort of knowable way, from climate change to China, from pandemics to poverty, and from diseases to demographics. To attack the solution is denialism at best, outright sabotage at worst. Indeed, the shoggoth to fear is our societal sclerosis seeking to drag the most exciting new technology in years into an innovation anti-pattern.

Photo generated by Dall-E 3, with the following prompt: “Photo of a radiant, downscaled city teetering on the brink of an expansive abyss, with a dark, murky quagmire below containing decayed structures reminiscent of historic landmarks. The city is a beacon of the future, with flying cars, green buildings, and residents in futuristic attire. The influence of AI is subtly interwoven, with robots helping citizens and digital screens integrated into the environment. Below, the haunting silhouette of a shoggoth, with its eerie tendrils, endeavors to pull the city into the depths, illustrating the clash between forward-moving evolution and outdated forces.”

China Chips and Moore’s Law


The complexity for minimum component costs has increased at a rate of roughly a factor of two per year (see graph on next page). Certainly over the short term this rate can be expected to continue, if not to increase. Over the longer term, the rate of increase is a bit more uncertain, although there is no reason to believe it will not remain nearly constant for at least 10 years.
Gordon Moore, Cramming More Components Onto Integrated Circuits

Gordon Moore's illustration of Moore's Law


Moore’s law is dead.
Jensen Huang

On Tuesday the Biden administration tightened export controls for advanced AI chips being sold to China; the primary target was Nvidia’s H800 and A800 chips, which were specifically designed to skirt controls put in place last year. The primary difference between the H800/A800 and H100/A100 is the bandwidth of their interconnects: the A100 had 600 GB/s interconnects (the H100 has 900 GB/s), which just so happened to be the limit prescribed by last year’s export controls; the A800 and H800 were limited to 400 GB/s interconnects.

The reason why interconnect speed matters is tied up with Nvidia CEO Jensen Huang’s thesis that Moore’s Law is dead. Moore’s Law, as originally formulated in 1965, held that the number of transistors in an integrated circuit would double every year. Moore revised his prediction 10 years later to a doubling every two years, which held until the last decade or so, when it slowed to a doubling roughly every three years.
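The gap between those doubling periods compounds dramatically; a back-of-the-envelope sketch, assuming nothing more than exponential growth from a fixed starting point:

```python
# Back-of-the-envelope only: how much the doubling period matters over a decade,
# assuming growth of the form N = N0 * 2 ** (years / doubling_period).
for period_years in (1, 2, 3):
    growth = 2 ** (10 / period_years)
    print(f"Doubling every {period_years} year(s): ~{growth:,.0f}x more transistors after 10 years")
# Every 1 year: ~1,024x; every 2 years: ~32x; every 3 years: ~10x
```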

In practice, though, Moore’s Law has become something more akin to a fundamental precept underlying the tech industry: computing power will both increase and get cheaper over time. This precept — which I will call Moore’s Precept, for clarity — is connected to Moore’s technical prediction: smaller transistors can switch faster, and use less energy in the switching, even as more of them fit on a single wafer; this means that you can either get more chips per wafer or larger chips, either decreasing price or increasing power for the same price. In practice we got both.

What is critical is that the rest of the tech industry didn’t need to understand the technical or economic details of Moore’s Law: for 60 years it has been safe to simply assume that computers would get faster, which meant the optimal approach was always to build for the cutting edge or just beyond, and trust that processor speed would catch up to your use case. From an analyst perspective, it is Moore’s Precept that enables me to write speculative articles like AI, Hardware, and Virtual Reality: it is enough to see that a use case is possible, if not yet optimal; Moore’s Precept will provide the optimization.

The End of Moore’s Precept?

This distinction between Moore’s Law and Moore’s Precept is the key to understanding Nvidia CEO Jensen Huang’s repeated declarations that Moore’s Law is dead. From a technical perspective, it has certainly slowed, but density continues to increase; here is TSMC’s transistor density by node size, using the first (i.e. worse) iteration of each node size:1

| TSMC Node | Transistor Density (MTr/mm²) | Year Introduced |
|---|---|---|
| 90 nm | 3.4 | 2004 |
| 65 nm | 5.6 | 2006 |
| 40 nm | 9.8 | 2008 |
| 28 nm | 16.6 | 2011 |
| 20 nm | 20.9 | 2014 |
| 16 nm | 28.9 | 2015 |
| 10 nm | 52.5 | 2017 |
| 7 nm | 91.2 | 2019 |
| 5 nm | 138.2 | 2020 |
Remember, though, that cost matters; here is the same table with TSMC’s introductory price/wafer, and what that translates to in terms of price/billion transistors:

| TSMC Node | Density (MTr/mm²) | Year Introduced | Price/Wafer | Price/BTr |
|---|---|---|---|---|
| 90 nm | 3.4 | 2004 | $1,650 | $6.87 |
| 65 nm | 5.6 | 2006 | $1,937 | $4.89 |
| 40 nm | 9.8 | 2008 | $2,274 | $3.28 |
| 28 nm | 16.6 | 2011 | $2,891 | $2.46 |
| 20 nm | 20.9 | 2014 | $3,677 | $2.49 |
| 16 nm | 28.9 | 2015 | $3,984 | $1.95 |
| 10 nm | 52.5 | 2017 | $5,992 | $1.61 |
| 7 nm | 91.2 | 2019 | $9,346 | $1.45 |
| 5 nm | 138.2 | 2020 | $16,988 | $1.74 |

Notice that number on the bottom right: with TSMC’s 5 nm process the price per transistor increased — and it increased a lot (20%). The reason was obvious: 5 nm was the first process that required ASML’s extreme ultraviolet (EUV) lithography, and EUV machines were hugely expensive — around $150 million each.2 In other words, it appeared that while the technical definition of Moore’s Law would continue, the precept that chips would always get both faster and cheaper would not.
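For what it’s worth, the Price/BTr column follows directly from the density and wafer price columns; a quick sketch, assuming a standard 300 mm wafer and ignoring edge loss and yield (which is why the figures are approximate):

```python
# Sanity check on the table above: price per billion transistors derived from
# density and wafer price, assuming a standard 300 mm wafer and ignoring edge
# loss and yield.
import math

WAFER_AREA_MM2 = math.pi * (300 / 2) ** 2   # ~70,686 mm^2 for a 300 mm wafer

nodes = [  # (node, density in MTr/mm^2, introductory price per wafer in USD)
    ("90 nm", 3.4, 1_650),
    ("7 nm", 91.2, 9_346),
    ("5 nm", 138.2, 16_988),
]
for name, density_mtr_mm2, wafer_price in nodes:
    billions_per_wafer = density_mtr_mm2 * WAFER_AREA_MM2 / 1_000
    print(f"{name}: ${wafer_price / billions_per_wafer:.2f} per billion transistors")
# 90 nm: $6.87, 7 nm: $1.45, 5 nm: $1.74 -- matching the table
```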

GPUs and Embarrassing Parallelism

Huang’s argument, to be clear, does not simply rest on the cost of 5 nm chips; remember Moore’s Precept is about speed as well as cost, and the truth is that a lot of those density gains have primarily gone towards power efficiency as energy became a constraint in everything from mobile to PCs to data centers. Huang’s thesis for several years now has been that Nvidia has the solution to making computing faster: use GPUs.

GPUs are much less complex than CPUs; that means they can execute instructions much more quickly, but those instructions have to be much simpler. At the same time, you can run a lot of them at the same time to achieve outsized results. Graphics is, unsurprisingly, the most obvious example: every “shader” — the primary processing component of a GPU — calculates what will be displayed on a single portion of the screen; the size of the portion is a function of how many shaders you have available. If you have 1,024 shaders, each shader draws 1/1,024 of the screen. Ergo, if you have 2,048 shaders, you can draw the screen twice as fast. Graphics performance is “embarrassingly parallel”, which is to say it scales with the number of processors you apply to the problem.
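Here is a toy illustration of that scaling property; the per-shader pixel throughput is an invented number, and the model assumes perfect scaling with zero overhead:

```python
# Minimal illustration of embarrassing parallelism: each "shader" handles its own
# slice of the screen, so doubling the shader count halves the time per frame,
# with no coordination needed between slices. Throughput figure is invented.
WIDTH, HEIGHT = 1920, 1080
TOTAL_PIXELS = WIDTH * HEIGHT

def frame_time_ms(shaders: int, pixels_per_shader_per_ms: int = 1_000) -> float:
    """Idealized time to draw one frame, assuming perfect scaling and no overhead."""
    pixels_each = TOTAL_PIXELS / shaders
    return pixels_each / pixels_per_shader_per_ms

for shaders in (1_024, 2_048, 4_096):
    print(f"{shaders:,} shaders -> {frame_time_ms(shaders):.1f} ms per frame")
# 1,024 shaders -> ~2.0 ms; 2,048 -> ~1.0 ms; 4,096 -> ~0.5 ms
```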

This “embarrassing parallelism” is the key to GPUs’ outsized performance relative to CPUs, but the challenge is that not all software problems are easily parallelizable; Nvidia’s CUDA ecosystem is predicated on providing the tools to build software applications that can leverage GPU parallelism, and is one of the major moats undergirding Nvidia’s dominance, but most software applications still need the complexity of CPUs to run.

AI, though, is not most software. It turns out that AI, both in terms of training models and in leveraging them (i.e. inference), is an embarrassingly parallel application. Moreover, the scale at which that parallelism pays off goes far beyond a computer monitor displaying graphics; this is why Nvidia AI chips feature the high-speed interconnects referenced by the chip ban: AI applications run across multiple AI chips at the same time, but the key to keeping those GPUs busy is feeding them with data, and that requires those high-speed interconnects.
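A crude sketch of why that bandwidth matters, using an invented model size and ignoring all-reduce algorithms, compute overlap, and network topology; the point is only the ratio between the two bandwidth caps:

```python
# Crude illustration of why interconnect bandwidth matters for AI training: in a
# data-parallel step each GPU must exchange gradients with its peers, and the time
# spent doing so scales inversely with interconnect speed. This ignores all-reduce
# algorithms, overlap with compute, and topology; only the ratio is the point.
PARAMS = 70e9                # hypothetical model size (parameters)
BYTES_PER_PARAM = 2          # fp16 gradients
gradient_bytes = PARAMS * BYTES_PER_PARAM

for name, gb_per_s in (("600 GB/s interconnect", 600), ("400 GB/s interconnect", 400)):
    seconds = gradient_bytes / (gb_per_s * 1e9)
    print(f"{name}: ~{seconds:.2f} s per naive gradient exchange")
# The 400 GB/s cap makes every exchange ~50% slower: slower and costlier, not halted.
```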

That noted, I’m skeptical about the wholesale shift of traditional data center applications to GPUs; from Nvidia On the Mountaintop:

Humans — and companies — are lazy, and not only are CPU-based applications easier to develop, they are also mostly already built. I have a hard time seeing what companies are going to go through the time and effort to port things that already run on CPUs to GPUs; at the end of the day, the applications that run in a cloud are determined by customers who provide the demand for cloud resources, not cloud providers looking to optimize FLOP/rack.

There’s another reason to think that traditional CPUs still have some life in them as well: it turns out that Moore’s Precept may be back on track.

EUV and Moore’s Precept

The table I posted above only ran through 5 nm; the iPhone 15 Pro, though, has an N3 chip, and check out the price/transistor:

| TSMC Node | Density (MTr/mm²) | Year Introduced | Price/Wafer | Price/BTr |
|---|---|---|---|---|
| 90 nm | 3.4 | 2004 | $1,650 | $6.87 |
| 65 nm | 5.6 | 2006 | $1,937 | $4.89 |
| 40 nm | 9.8 | 2008 | $2,274 | $3.28 |
| 28 nm | 16.6 | 2011 | $2,891 | $2.46 |
| 20 nm | 20.9 | 2014 | $3,677 | $2.49 |
| 16 nm | 28.9 | 2015 | $3,984 | $1.95 |
| 10 nm | 52.5 | 2017 | $5,992 | $1.61 |
| 7 nm | 91.2 | 2019 | $9,346 | $1.45 |
| 5 nm | 138.2 | 2020 | $16,988 | $1.74 |
| 3 nm (N3B) | 197.0 | 2023 | $20,000 | $1.44 |
| 3 nm (N3E) | 215.6 | 2023 | $20,000 | $1.31 |

While I only included the first version of each node previously, the N3B process, which is used for the iPhone’s A17 Pro chip, is a dead-end; TSMC changed its approach with the N3E, which will be the basis of the N3 family going forward. It also makes the N3 leap even more impressive in terms of price/transistor: while N3B undid the 5 nm backslide, N3E is a marked improvement over 7 nm.

Moreover, the gains are actually what you would expect: yes, those EUV machines cost a lot, but the price decreases embedded in Moore’s Precept are not a function of equipment getting cheaper — notice that the price/wafer has been increasing continuously. Rather, ever declining prices/transistor are a function of Moore’s Law, which is to say that new equipment, like EUV, lets us “Cram[] More Components Onto Integrated Circuits”.

What happened at 5 nm was similar to what happened at 20 nm, the last time the price/transistor increased: that was the node where TSMC started to use double-patterning, which means they had to do every lithography step twice; that both doubled the utilization of lithography equipment per wafer and also decreased yield. For that node, at least, the gains from making smaller transistors were outweighed by the costs. A year later, though, TSMC launched the 16 nm node that reunited Moore’s Law with Moore’s Precept. That is exactly what seems to have happened with 3 nm — the gains of EUV are now significantly outweighing the costs — and early rumors about 2 nm density and price points suggest the gains should continue for another node.
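To make the 20 nm dynamic concrete, here is some illustrative arithmetic; the yield figures are invented, but they show how a density gain can still leave the cost per good transistor higher than the prior node once wafer cost rises and yield falls:

```python
# Illustrative arithmetic (yield figures invented) for the 20 nm problem:
# double-patterning raises wafer cost and hurts yield, so a density gain can
# still leave the cost per good transistor higher than the prior node.
def cost_per_btr(wafer_cost, density_mtr_mm2, yield_fraction, wafer_area_mm2=70_686):
    good_btr = density_mtr_mm2 * wafer_area_mm2 / 1_000 * yield_fraction
    return wafer_cost / good_btr

old_node = cost_per_btr(wafer_cost=2_891, density_mtr_mm2=16.6, yield_fraction=0.90)
new_node = cost_per_btr(wafer_cost=3_677, density_mtr_mm2=20.9, yield_fraction=0.70)
print(f"28 nm-like: ${old_node:.2f}/BTr vs 20 nm-like: ${new_node:.2f}/BTr")
# A ~26% density gain is swamped by the higher wafer cost and lower yield.
```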

Chip Ban Angst

All of this is interesting in its own right, but it’s particularly pertinent in light of the recent angst in Washington, D.C. over Huawei’s new smartphone with a 7 nm chip, seemingly in defiance of those export controls. I already explained why that angst was misguided in this September Update. To summarize my argument:

  • TSMC had already shown that 7 nm chips could be made using deep ultraviolet (DUV)-based immersion lithography, and China had plenty of DUV lithography machines, given that DUV has been the standard for multiple generations of chips.
  • China’s Semiconductor Manufacturing International Corp. (SMIC) had already made a 7 nm chip in 2022; sure it was simpler than the one launched in that Huawei phone, but that is the exact sort of progression you should expect from a competent foundry.
  • SMIC is almost certainly not producing that 7 nm chip economically; Intel, for example, could make a 7 nm chip using DUV, but it couldn’t do so economically, which is why it ultimately switched to EUV.

In short, the problem with the chip ban was drawing the line at 10 nm: that line was arbitrary given that the equipment needed to make 10 nm chips had already been shown to be capable of producing 7 nm chips; that SMIC managed to do just that isn’t a surprise, and, crucially, is not evidence that the chip ban was a failure.

The line that actually matters is 5 nm, which is another way to say that the export control that will actually limit China’s long-term development is EUV. Fortunately the Trump administration had already persuaded the Netherlands not to allow the export of EUV machines, a restriction the Biden administration locked down further with its chip ban and additional coordination with the Dutch government. The reality is that a lot of chip-making equipment is “multi-nodal”; much of the machinery can be used at multiple nodes, but you must have EUV machines to realize Moore’s Precept, because EUV is the key piece of technology driving Moore’s Law.

By the same token, the A800/H800 loophole was a real one: the H800 is made on TSMC’s third-generation 5 nm process (confusingly called N4), which is to say it is made with EUV; the interconnect limits were meaningful, and would make AI development slower and more costly (because those GPUs would be starved of data more of the time), but they didn’t halt it. This matters because AI is the military application the U.S. should be the most concerned with: a lot of military applications run perfectly fine on existing chips (or even, in the case of guided weaponry, chips that were made decades ago); wars of the future, though, will almost certainly be undergirded by AI, a field that is only just now getting started.

This leads to a further point: the payoff from this chip ban will not come immediately. The only way the entire idea makes sense is if Moore’s Law continues to exist, because that means the chips that will be available in five or ten years will be that much faster and cheaper than the ones that exist today, increasing the gap. And, at the same time, the idea also depends on taking Huang’s argument seriously, because AI needs not just power but scale. Fortunately movement on both fronts is headed in the right direction.

There remain good arguments against the entire concept of the chip ban, including the obvious fact that China is heavily incentivized to build up replacements from scratch (and could gain leverage over the U.S. on the trailing edge): perhaps in 20 years the U.S. will not only have lost its most potent point of leverage but will also see its most cutting edge companies undercut by Chinese competition. That die, though, has long since been cast; the results that matter are not a smartphone in 2023, but the capabilities of 2030 and beyond.


  1. I am not certain I have the exact right numbers for older nodes, but I have confirmed that the numbers are in the right ballpark 

  2. TSMC first used EUV with later iterations of its 7 nm process, but that was primarily to move down the learning curve; EUV was not strictly necessary, and the original 7 nm process used immersion DUV lithography exclusively

AI, Hardware, and Virtual Reality

In a recent interview I did with Craig Moffett we discussed why there is a “TMT” sector when it comes to industry classifications. TMT stands for technology, media, and telecoms, and what unifies them is that all deal in a world of massive up-front investment — i.e. huge fixed costs — and then near perfect scalability once deployed — zero marginal costs.
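To illustrate that cost structure with invented numbers (a hypothetical up-front build-out and a near-zero per-user cost), the average cost per user collapses as scale grows:

```python
# Simple illustration of TMT economics: with huge fixed costs and near-zero
# marginal costs, average cost per user collapses with scale (figures invented).
FIXED_COST = 500_000_000     # hypothetical up-front investment
MARGINAL_COST = 0.01         # hypothetical cost to serve one more user

for users in (1_000_000, 10_000_000, 100_000_000):
    avg = (FIXED_COST + MARGINAL_COST * users) / users
    print(f"{users:>11,} users -> average cost ${avg:,.2f} per user")
# 1M users -> $500.01; 10M -> $50.01; 100M -> $5.01
```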

Each of these three categories, though, is distinct in the experience they provide:

  • Media is a recording or publication that enables a shift in time between production and consumption.
  • Telecoms enables a shift in place when it comes to communication.
  • Technology, which generally means software, enables interactivity at scale.

Another way to think about these categories is that if reality is the time and place in which one currently exists, each provides a form of virtual reality:

  • Media consumption entails consuming content that was created at a different time.
  • Communication entails talking to someone who is in a different place.
  • Software entails manipulating bits on a computer in a manner that doesn’t actually change anything about your physical space, just the virtual one.

The constraint on each of these is the same: human time and attention. Media needs to be created, software needs to be manipulated, and communication depends on there being someone to communicate with. That human constraint, by extension, is perhaps why we don’t actually call media, communication, or software “virtual reality”, despite the defiance of reality I noted above. No matter how profound the changes wrought by digitization, the human component remains.

AI removes the human constraint: media and interactive experiences can be created continuously; the costs may be substantial, particularly compared to general compute, but are practically zero relative to the cost of humans. The most compelling use case to date, though, is communication: there is always someone to talk to.

ChatGPT Talks and Sees

The first AI announcement of the week was literally AI that can talk: OpenAI announced that you can now converse with ChatGPT, and I found the experience profound.

You have obviously been able to chat with ChatGPT via text for many months now; what I only truly appreciated after talking with ChatGPT, though, was just how much work it was to type out questions and read answers. There was, in other words, a human constraint in our conversations that made it feel like I was using a tool; small wonder that the vast majority of my interaction with ChatGPT has been to do some sort of research, or try to remember something on the edge of my memory, too fuzzy to type a clear search term into Google.

Simply talking, though, removed that barrier: I quickly found myself having philosophical discussions including, for example, the nature of virtual reality. It was the discussion itself that provided a clue: virtual reality feels real, but something can only feel real if human constraints are no longer apparent. In the case of conversation, there is no effort required to talk to another human in person, or on the phone; to talk to them via chat is certainly convenient, but there is a much more tangible separation. So it is with ChatGPT.1

The second AI announcement was that ChatGPT now has vision: you can take a picture of an object or a math problem, and ask ChatGPT about it. It’s a very powerful capability, particularly because it seems that GPT-4 is truly multi-modal: there isn’t some sort of translation layer in-between. The limitation, though, was effort: I had to open up the camera, take a picture, and then ask some sort of question. To put it another way, the impressiveness of the vision capability was, at least for me, somewhat diminished by the fact said capability was released at the same time as voice chat, which impressed precisely because it was easy.

What is interesting is that I had the opposite reaction during a demo last week: when I watched someone from OpenAI demonstrate vision, it seemed like the more impressive feature by far. The context in which I observed that demo, though, was a Zoom call, which meant I was engaging with the feature on a distant and more intellectual level — a level not dissimilar from how I might have interacted with ChatGPT when I had to type my questions and read the answers. To simply talk, meanwhile, wasn’t very impressive to observe, but was much more impressive when I was the one interacting.

Meta AI and Emu

The next set of AI announcements happened yesterday at Meta Connect. Meta is releasing its own chatbot, called Meta AI, and a fleet of celebrity-based AI characters, with the promise of more to come, and a developer platform to boot. I haven’t used any of these products, which are, for now, limited to text interactions. What the releases point to, though, is the removal of another kind of human constraint: in the very near future literally billions of people can talk to Tom Brady or Snoop Dogg, all at the same time.

I doubt, for the record, that celebrity chat bots will ever be much more than a novelty and cool demo, but that is only because they will be superseded by bots that are actually tuned much more explicitly to every individual;2 each individual bot, though, will have the same absence of constraint inherent in conversations with real people: the bot will always be available, no matter what.

Meta also announced that you will be able to use Emu, its image generation model, to create custom stickers in chat, and to edit photos in Instagram. Both seem immediately useful, not because Emu is particularly good — that remains to be seen — but because its capabilities are being applied to obvious use cases in pre-existing channels. The existence of these channels, whether they be Meta’s messaging apps or its social networks, is why Meta was always destined to be a force in AI: it is one thing to build a product that people choose to use, and another, significantly easier thing, to augment a product people already use every day. Less friction is key!

Meta Smart Glasses

The most compelling announcement for me, though, was a hardware product, specifically the updated Meta Smart Glasses. Here is the key part of the introduction:

The most interesting thing about this isn’t any of those specs. It’s that these are the first smart glasses that are built and shipping with Meta AI in them. Starting in the US you’re going to get a state-of-the-art AI that you can interact with hands-free wherever you go…

This is just the beginning, because this is just audio. It’s basically just text. Starting next year, we’re going to be issuing a free software update to the glasses that makes them multi-modal. So the glasses are going to be able to understand what you’re looking at when you ask them questions. So if you want to know what the building is that you’re standing in front of, or if you want to translate a sign that’s in front of you to know what it’s saying, or if you need help fixing this sad leaky faucet, you can just talk to Meta AI and look at it and it will walk you through it step-by-step how to do it.

I think that smart glasses are going to be an important platform for the future, not only because they’re the natural way to put holograms in the world, so we can put digital objects in our physical space, but also — if you think about it, smart glasses are the ideal form factor for you to let an AI assistant see what you’re seeing and hear what you’re hearing.

I wonder what my reaction would have been to this announcement had I not experienced the new OpenAI features above, because I basically just made the case for smart glasses: there is a step-change in usability when the human constraint is removed, which is to say that ChatGPT’s vision capabilities seem less useful to me because it takes effort to invoke and interact with it, which is to further say I agree with Zuckerberg that smart glasses are an ideal form factor for this sort of capability.

The Hardware Key

What was most remarkable about this announcement, though, is the admission that followed:

Before this last year’s AI breakthroughs, I kind of thought that smart glasses were only really going to become ubiquitous once we dialed in the holograms and the displays and all that stuff, which we’re making progress on, but is somewhat longer. But now, I think that the AI part of this is going to be just as important in smart glasses being widely adopted as any of the other augmented reality features.

It was just 11 months ago that Meta’s stock was plummeting thanks to investor angst about its business, exacerbated by the perception that Meta had shifted to the Metaverse in a desperate attempt to find new growth. This was an incorrect perception, of course, which I explained in Meta Myths: users were not deserting Facebook, Instagram engagement was not plummeting, TikTok’s growth had been arrested, advertising was not dying, and Meta’s spending, particularly on AI, was not a waste. At the end, though, I said that one thing was maybe true: the Metaverse might be a waste of time and money.

However, it seems possible that AI — to Zuckerberg’s surprise — may save the day. This smart glasses announcement is — more than the Quest 3 — evidence that Meta’s bet on hardware might pay off. AI is truly something new and revolutionary and capable of being something more than just a homework aid, but I don’t think the existing interfaces are the right ones. Talking to ChatGPT is better than typing, but I still have to launch the app and set the mode; vision is an amazing capability, but it requires even more intent and friction to invoke. I could see a scenario where Meta’s AI is inferior technically to OpenAI, but more useful simply because it comes in a better form factor.

This is why I wasn’t surprised by this week’s final piece of AI news, first reported by The Information:

Jony Ive, the renowned designer of the iPhone, and OpenAI CEO Sam Altman have been discussing building a new AI hardware device, according to two people familiar with the conversations. SoftBank CEO and investor Masayoshi Son has talked to both about the idea, according to one of these people, but it is unclear if he will remain involved.

The Financial Times added more details:

Sam Altman, OpenAI’s chief, has tapped Ive’s company LoveFrom, which the designer founded when he left Apple in 2019, to develop the ChatGPT creator’s first consumer device, according to three people familiar with the plan. Altman and Ive have held brainstorming sessions at the designer’s San Francisco studio about what a new consumer product centred on OpenAI’s technology would look like, the people said. They hope to create a more natural and intuitive user experience for interacting with AI, in the way that the iPhone’s innovations in touchscreen computing unleashed the mass-market potential of the mobile internet. The process of identifying a design or device remains at an early stage with many different ideas on the table, they said.

Son, SoftBank’s founder and chief executive, has also been involved in some of the discussions, pitching a central role for Arm — the chip designer in which the Japanese conglomerate holds a 90 per cent stake — as well as offering financial backing. Son, Altman and Ive have discussed creating a company that would draw on talent and technology from their three groups, the people said, with SoftBank investing more than $1bn in the venture.

There are obviously many steps before a potential hardware product, including actually agreeing to build one. And there is, of course, the fact that Apple and Google already make devices everyone carries, with the latter in particular investing heavily in its own AI capabilities; betting on the hardware in market winning the hardware opportunity in AI is the safest bet.

That may not be a reason for either OpenAI or Meta to abandon their efforts, though: waging a hardware battle against Google and Apple would be difficult, but it might be even worse to be “just an app” if the full realization of AI’s capabilities depend on fully removing human friction from the process.

Virtual Reality

I should, I suppose, mention the Quest 3, which was formally announced at Meta’s event, given that I opened this Article with allusions to “Virtual Reality.” I have used a prototype Quest 3 device, but not a release version, and so can’t fully comment on its capabilities or the experience; what I will note is that the mixed reality gaming experiences were incredibly fun, particularly a Zombie shooter that is set in the room you are located in.

That’s the thing, though: Quest 3 still strikes me mostly as a console, while the Apple Vision strikes me mostly as a productivity device. Both are interesting niches, but niches nonetheless. What seems essential for both to fully realize the vision of virtual reality is to lose their current sense of boundedness and friction, which is to say that both need AI generation.

In fact, I would argue that defining “virtual reality” to mean an immersive headset is to miss the point: virtual reality is a digital experience that has fully broken the bounds of human constraints, and in that experience the hardware is a means, not an end. Moreover, a virtual reality experience need not involve vision at all: talking with ChatGPT, for example, is an aural experience that feels more like virtual reality than the majority of experiences I’ve had in a headset.

True virtual reality shifts time like media, place like communications, and, crucially, does so with perfect availability and infinite capacity. In this view, virtual reality is AI, and AI is virtual reality. Hardware does matter — that has been the focus of this Article — but it matters as a means to an end, to enable an interactive experience without the constraints of human capacity or the friction of actual reality.

I wrote a follow-up to this Article in this Daily Update.


  1. The experience wasn’t perfect, both from a usability standpoint, and from the overly verbose answers, delivered in ChatGPT’s sterile style; you can see the seeds of something very compelling though 

  2. And startups like Character.AI are reportedly doing extremely well 

FTC Sues Amazon

From the Wall Street Journal:

The Federal Trade Commission and 17 states on Tuesday sued Amazon, alleging the online retailer illegally wields monopoly power that keeps prices artificially high, locks sellers into its platform and harms its rivals. The FTC’s lawsuit, filed in Seattle federal court, marks a milestone in the Biden administration’s aggressive approach to enforcing antitrust laws and has been anticipated for months. The agency’s chair, Lina Khan, is a longtime critic of Amazon who wrote in the Yale Law Journal in 2017 that earlier generations of competition cops and courts abandoned the law’s concerns over conglomerates such as Amazon. Khan has had trouble convincing courts of her antitrust views, however. Having earlier lost cases against both Microsoft and Meta Platforms, she and her agency now face a crucial test in taking on Amazon.

The federal agency and the states alleged that Amazon violated antitrust laws by using anti-discounting measures that punished merchants for offering lower prices elsewhere. The government also said sellers on Amazon were compelled to use its logistics service if they want their goods to appear in Amazon Prime, the subscription program whose perks include faster shipping times. Such “tying,” the complaint says, illegally “restricts sellers’ choices” and “reduces product selection available to Amazon’s rivals.”

The FTC also said sellers feel they must use Amazon’s services such as advertising to be successful on the platform. Between being paid for its logistics program, advertising and other services, “Amazon now takes one of every $2 that a seller makes,” Khan said at a briefing with the media Tuesday.

This is the key paragraph of the FTC’s (heavily redacted) complaint:

This case is about the illegal course of exclusionary conduct Amazon deploys to block competition, stunt rivals’ growth, and cement its dominance. The elements of this strategy are mutually reinforcing. Amazon uses a set of anti-discounting tactics to prevent rivals from growing by offering lower prices, and it uses coercive tactics involving its order fulfillment service to prevent rivals from gaining the scale they need to meaningfully compete. Amazon deploys this interconnected strategy to block off every major avenue of competition — including price, product selection, quality, and innovation — in the relevant markets for online superstores and online marketplace services.

I will, for the sake of space, focus on these two complaints; I will note, though, that the extreme suspicion with which things like a subscription-based loyalty program (Prime), bundling (Prime), store-branded goods (Amazon Basics et al.), and advertising are presented hardly does the FTC’s case any good. Characterizing practices that have been common tactics in retail for literally decades as some sort of nefarious plot makes me question this paragraph from the press release:

The complaint alleges that Amazon violates the law not because it is big, but because it engages in a course of exclusionary conduct that prevents current competitors from growing and new competitors from emerging. By stifling competition on price, product selection, quality, and by preventing its current or future rivals from attracting a critical mass of shoppers and sellers, Amazon ensures that no current or future rival can threaten its dominance. Amazon’s far-reaching schemes impact hundreds of billions of dollars in retail sales every year, touch hundreds of thousands of products sold by businesses big and small and affect over a hundred million shoppers.

That first sentence in particular made me think of this meme:

A meme about the FTC claiming it isn't suing Amazon for being big

Set that aside for now, though; I actually think at least one of the complaints is compelling, if not convincing.

FBA and Prime

This complaint is not the compelling one; from the complaint:

Amazon deploys yet another tactic as part of its monopolistic course of conduct. Amazon conditions sellers’ ability to be “Prime eligible” on their use of Amazon’s order fulfillment service. As with Amazon’s anti-discounting tactics, this coercive conduct forecloses Amazon’s rivals from drawing a critical mass of sellers or shoppers – thereby depriving them of the scale needed to compete effectively online.

Amazon makes Prime eligibility critical for sellers to fully reach Amazon’s enormous base of shoppers. In 2021, more than ██% of all units sold on Amazon in the United States were Prime eligible. Prime eligibility is critical for sellers in part because of the enormous reach of Amazon’s Prime subscription program. According to public reports, Mr. Bezos told Amazon executives that Prime was created in 2005 to “draw a moat around [Amazon’s] best customers.” Prime now blankets more than ██% of all U.S. households, with its reach extending as far as ██% in some zip codes.

Amazon requires sellers who want their products to be Prime eligible to use Amazon’s fulfillment service, Fulfillment by Amazon (“FBA”), even though many sellers would rather use an alternative fulfillment method to store and package customer orders.

I find this charge ridiculous on its face. The core offering of Prime — the feature that it launched with 18 years ago — was a shipping guarantee. From the February 2005 press release:

Today the Company also introduced “Amazon Prime,” Amazon.com’s first ever membership program. For a flat membership fee of $79 per year, members get unlimited, express two-day shipping for free, with no minimum purchase requirement. Members also get one-day, overnight shipping for only $3.99 per item — order as late as 6:30PM ET.

“Amazon Prime is ‘all-you-can-eat’ express shipping,” said Jeff Bezos, founder and CEO of Amazon.com. “Though expensive for the Company in the short-term, it’s a significant benefit and more convenient for customers. With Amazon Prime, there’s no minimum purchase to think about, and no consolidating orders — two-day shipping becomes an everyday experience rather than an occasional indulgence.”

It seems eminently reasonable to me that Amazon predicate inclusion in a program defined by a shipping guarantee on letting Amazon deliver your products. Prime was a massive risk at the time, dwarfed only by the many billions of dollars that Amazon has spent since then building out its logistics network. I see no basis on which a government regulator ought to demand that Amazon give out access to the Prime label and bear the reputation risk for 3rd-party delivery services that did not take those risks or make those investments. It’s absurd.

The FTC’s argument seems to be mostly based on the existence of an Amazon program called “Seller-Fulfilled Prime”, which launched in 2015; enrollment was closed in 2019 and the program suspended in 2020, before Amazon announced in 2023 that it was coming back (perhaps because of this case). Seller-Fulfilled Prime let sellers participate in the Prime program, as long as they delivered the goods themselves (i.e. didn’t use a 3rd-party fulfillment service) and passed Amazon’s stringent requirements. The FTC, based on internal emails (which are redacted), claims that Amazon killed the program because it reduced the company’s hold on merchants. A few points on this:

  • First, Prime is Amazon’s brand and program; just because Amazon opened it up once doesn’t mean it ought to be compelled to keep it open.
  • Second, this charge definitely feels downstream from a fishing expedition; I imagine those redacted emails are pretty spicy, because it’s hard to see any justification for this charge otherwise.
  • Third, look carefully at those dates: in 2019 Amazon announced that Prime would shift to a one-day guarantee, and in 2020 Amazon was in the middle of saving the country during lockdowns. Given the reputational risk attached to Prime, those seem like relevant reasons to suspend the program.

Ultimately, though, these arguments pale in comparison to the sheer audacity of the FTC’s insistence it ought to be able to simply take what Amazon has built and distribute it to whoever wants it.

Pricing Punishment

This charge is more compelling; from the complaint:

One set of tactics stifles the ability of rivals to attract shoppers by offering lower prices. Amazon deploys a sophisticated surveillance network of web crawlers that constantly monitor the internet, searching for discounts that might threaten Amazon’s empire. When Amazon detects elsewhere online a product that is cheaper than a seller’s offer for the same product on Amazon, Amazon punishes that seller. It does so to prevent rivals from gaining business by offering shoppers or sellers lower prices…

The sanctions Amazon levies on sellers vary. For example, Amazon knocks these sellers out of the all-important “Buy Box,” the display from which a shopper can “Add to Cart” or “Buy Now” an Amazon-selected offer for a product. Nearly ██% of Amazon sales are made through the Buy Box and, as Amazon internally recognizes, eliminating a seller from the Buy Box causes that seller’s sales to “tank.” Another form of punishment is to bury discounting sellers so far down in Amazon’s search results that they become effectively invisible…

Moreover, Amazon’s one-two punch of seller punishments and high seller fees often forces sellers to use their inflated Amazon prices as a price floor everywhere else. As a result, Amazon’s conduct causes online shoppers to face artificially higher prices even when shopping somewhere other than Amazon. Amazon’s punitive regime distorts basic market signals: one of the ways sellers respond to Amazon’s fee hikes is by increasing their own prices off Amazon. An executive from another online retailer sums up this perverse dynamic: Amazon’s anti-discounting conduct █████████████████████████████████. Amazon’s illegal tactics mean that when Amazon raises its fees, others — competitors, sellers, and shoppers – suffer the harms.

Amazon’s tactics suppress rival online superstores’ ability to compete for shoppers by offering lower prices, thereby depriving American households of more affordable options. Amazon’s conduct also suppresses rival online marketplace service providers’ ability to compete for sellers by offering lower fees because sellers cannot pass along those savings to shoppers in the form of lower product prices.

This all sounds bad and, at first glance, anti-competitive. Consider, though, what the FTC is implicitly asking:

  • First, Amazon is replacing the offending merchants in the Buy Box with other merchants who offer lower prices (including, potentially, Amazon itself). It’s difficult to understand how this is bad for consumers.
  • Second, insisting that Amazon promote merchants who offer higher prices on Amazon and lower prices elsewhere is, once again, insisting that Amazon offer the fruits of its investments, both in terms of customer acquisition and in delivery speed, to merchants who are actively seeking to develop an Amazon competitor.
  • Third, most-favored-nation clauses, which this is in practice if not in specifics (in fact, according to the FTC, these practices replaced MFN clauses), have consistently been found to be legal.

Most importantly, though, this alleged illegality rests on Amazon being a monopoly, which means, as happens with all antitrust cases, we have a question of market definition. In this case the FTC has defined the relevant markets as “the online superstore market” and “the market for online marketplace services.” These definitions, conveniently enough, exclude all brick-and-mortar retailers (the word “omnichannel” doesn’t appear in the complaint), and all independent retailers, such as those hosted by Shopify. The FTC says that this narrow definition makes sense because of the convenience and selection that is exclusive to “online superstores”, thanks to the ability to ship things together; never mind that other websites are only a click away, and that the entire reason you can ship things together is because most items on Amazon are Fulfilled by Amazon (see the previous complaint).

This definition is obviously going to be critical to this case: Benedict Evans ran the numbers and, if you consider all of retail, then Amazon only has single digits’ worth of market share; if you consider all of e-commerce, Amazon has about a 35% share. What is clear is that just about everything on Amazon is available elsewhere: the power the company has with regard to pricing is a function of the demand it delivers to merchants — demand that is not compelled of customers, but willingly given precisely because customers find Amazon’s service to be valuable.
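To see just how much the choice of denominator matters, here is a minimal sketch using the “about 35%” e-commerce figure above and a single-digit retail share picked purely for illustration (the 7% is my assumption, not a cited number):

```python
# Illustrative math on market definition, using the figures cited above:
# Amazon has "about 35%" of e-commerce but only single digits of all retail.
amazon_share_of_ecommerce = 0.35   # "about 35% share" of e-commerce (from the text)
amazon_share_of_retail = 0.07      # "single digits" of all retail -- illustrative assumption

# If Amazon's sales are the same in both calculations, then:
# amazon_sales = share_of_ecommerce * ecommerce = share_of_retail * retail
ecommerce_share_of_retail = amazon_share_of_retail / amazon_share_of_ecommerce

print(f"Implied e-commerce share of total retail: {ecommerce_share_of_retail:.0%}")
# -> roughly 20%, which shows how much the chosen denominator drives the monopoly question
```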

Amazon’s Market-Making

That’s not to say that there aren’t merchants wholly beholden to Amazon; in 2019 an Amazon reseller named Molson Hart wrote an essay on Medium complaining about Amazon’s fulfillment prices, and included this chart:

We sell plush and construction toys on Amazon. Well, technically, we sell toys on our website, on eBay, on Walmart.com, to brick-and-mortar stores, and we sell on Amazon. But, really, we only sell on Amazon. In 2018, we had about $4,000,000 in sales but Amazon.com accounted for over 98% of that.

How Amazon dominates the sale of one merchant

Harvard Business School would call this “vendor/customer concentration”. In the e-commerce world, we call it being Amazon’s bitch.

While Amazon received $1.95 million from us last year, they are not afraid of losing our business for a couple of reasons. First, there are thousands of companies out there eager to take our place. Second, Amazon had $277 billion in gross merchandise revenue in 2018. Our $3.9 million in sales on Amazon accounted for .0014% of that. Finally, we have nowhere else to go and Amazon knows it.

Hart shared his entrepreneurship story on a podcast, and I think it provides important context:

In 2014-2015, you could sell literally anything you wanted on Amazon and it was profitable. You didn’t really need to do any data analysis. If you were buying in China and you didn’t do an absolutely abominable job sending it to Amazon, you were making money. When you’re selling into retail it’s different. On Amazon you could just sell commodities in 2014-2015 — literally a towel, you didn’t have to have a brand or anything — but to sell into retail, which is a developed market, I can’t call into Walmart and be like, “Hey, I’m this twenty-six year old kid and I’m going to sell you towels.” They’re like “No, we’re going to buy from branded manufacturers of towels and factories for towels and people who have an established business, some credibility in this industry.” So if we wanted to sell into retail we had to bring innovative products to the table, otherwise there was no incentive for them to take the risk on a young company with a young founder, etc.

So I figured out at one point, “Maybe instead of coming up with all of these innovative products, because innovation was really hard, maybe we could find stuff that is already popular in China or Japan or Korea and just bring it to America and re-brand it.” So then what I did is I just went onto Taobao — China’s Amazon — and went through tens of thousands of products. I looked at the top sellers in each category, toys, games, bikes, sporting goods, all that stuff, and anytime I saw a product that was selling really well in China that I had never seen before I put it into a bucket and said, “OK, we’re going to launch this in America.” So that’s what we did and by-and-large was a pretty effective strategy. That’s actually how Brain Flakes was born.

Brain Flakes is Hart’s biggest product, and the biggest driver of that Amazon-dominated sales chart up above. The question I have with regard to that chart, though, and Hart’s griping about Amazon’s fees, is whether Amazon hasn’t earned the right to charge Hart whatever it deems appropriate. Hart himself admits that his entire business was predicated on Amazon’s marketplace model, a model that enabled individual entrepreneurs with smarts and hustle to build big businesses without a reputation or a brand. To put it another way, Hart’s business is dominated by Amazon because Amazon made his entire business possible (and if these complaints sound familiar, they echo complaints about Facebook from companies built on Facebook, Yelp’s complaints that they have to acquire customers instead of relying on SEO, or publishers the world over blaming their commodity status on the same companies that made the market for them in the first place).

I am by no means here to pick on Hart or any of the millions of other 3rd-party merchants on Amazon: I salute their entrepreneurial grit. I fail, though, to see what exactly is anticompetitive in this story. What I see, much like the Prime program above, is massive investment by Amazon to create an entire category that dramatically increased the amount of commerce, and it’s unclear to me why they can’t conduct normal business activity to ensure they have competitive prices in that market.

That noted, the reason I find this part of the complaint compelling is that I do have unease about the use of enforced price matching and other non-organic means of limiting competition; that, along with acquisitions and digital advertising, was one of the three areas of concern I highlighted in a 2019 Article that explained what antitrust crusaders fail to understand about Aggregators who gain market power not through controlling supply but rather by harnessing demand. The issue, though, is that these concerns ought to be addressed through new laws; trying to apply antitrust regulations that were created for the analog world in a digital context simply doesn’t make sense, and very likely will, like so many other recent FTC actions, fail in court.

Charter-Disney Winners and Losers

From the Wall Street Journal:

Disney and Charter Communications have reached an agreement that will restore popular channels, including ESPN and ABC, to the cable operator’s nearly 15 million subscribers, ending a blackout that lasted for more than a week. The agreement comes just hours before ESPN’s coverage of the first “Monday Night Football” game of the season — a highly anticipated matchup between the New York Jets and Buffalo Bills. It also marks a seminal moment in the oft-fraught relationship between pay-TV providers and entertainment companies, which have been at loggerheads in recent years as the continued rise of streaming upended their respective businesses.

Disney and Charter released a joint press release that included the details:

Among the key deal points:

  • In the coming months, the Disney+ Basic ad-supported offering will be provided to customers who purchase the Spectrum TV Select package, as part of a wholesale arrangement;
  • The ESPN flagship direct-to-consumer service will be made available to Spectrum TV Select subscribers upon launch and;
  • Charter will maintain flexibility to offer a range of video packages at varying price points based upon different customer’s viewing preferences.

Charter also will use its significant distribution capabilities to offer Disney’s direct-to-consumer services to all of its customers – in particular its large broadband-only customer base – for purchase at retail rates. These include Disney+, Hulu and ESPN+, as well as The Disney Bundle.

Effective immediately, Spectrum TV will provide its customers widespread access to a more curated lineup of 19 networks from The Walt Disney Company. Spectrum will continue to carry the ABC Owned Television Stations, Disney Channel, FX and the Nat Geo Channel, in addition to the full ESPN network suite. Networks that will no longer be included in Spectrum TV video packages are Baby TV, Disney Junior, Disney XD, Freeform, FXM, FXX, Nat Geo Wild, and Nat Geo Mundo.

To preserve all these valuable business models, the parties also have renewed their commitment to lead the industry in mitigating the effects of unauthorized password sharing.

The biggest question here is the third bullet point: Charter would like to offer bundles that don’t include ESPN, but it’s notable that the press release says “maintain flexibility”, as opposed to “gain flexibility”; that lines up with something ESPN chairman Jimmy Pitaro told the Hollywood Reporter:

“The first [priority] was protecting the traditional business model, one that’s been very, very good to us and continues to be good to us,” Pitaro adds. “And we were able to do that, we secured commitments that were very strong in terms of rates and minimum penetration.”

I’m going to assume that this means that Spectrum will continue to be contractually compelled to have ESPN in 70%~80% of its TV packages (that’s why it’s hard to even find information about the company’s TV Basic and TV Essentials packages on its marketing pages); that leaves the question of who won, and the answer depends on your perspective: are you asking about this specific stand-off, the overall future of TV, or the entire arc of video?

UPDATE: The minimum penetration is 85%.
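To make the stakes of those commitments concrete, here is a minimal sketch of the affiliate-fee math, assuming a hypothetical per-subscriber rate (the ~15 million subscriber figure and the 85% minimum penetration come from the text above; the rate below is made up for illustration):

```python
# Illustrative affiliate-fee math; the per-subscriber rate is a placeholder,
# NOT Disney's actual rate. Subscriber count (~15M) and the 85% minimum
# penetration figure come from the surrounding text.

def monthly_affiliate_revenue(subscribers: int, penetration: float, rate: float) -> float:
    # Revenue = subscribers in ESPN-inclusive packages x monthly per-subscriber rate
    return subscribers * penetration * rate

spectrum_subs = 15_000_000       # Charter/Spectrum's ~15 million video subscribers
minimum_penetration = 0.85       # minimum penetration commitment (per the update above)
hypothetical_rate = 9.00         # $/subscriber/month -- illustrative assumption only

revenue = monthly_affiliate_revenue(spectrum_subs, minimum_penetration, hypothetical_rate)
print(f"Illustrative ESPN affiliate revenue from Spectrum: ${revenue / 1e6:.0f}M per month")
```

The point is simply that affiliate revenue scales with both the rate and the penetration floor, which is why Pitaro highlighted both in the quote above.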

Current Standoff — Winner: Charter

Charter is the unequivocal winner of this standoff. Indeed, the agreement details above completely validate my argument last week that ESPN no longer has the same leverage it enjoyed for decades. Remember, Charter was willing to meet Disney’s demand for higher ESPN affiliate fees; what Charter wanted was all of Disney’s non-sports content too. That used to exist on Disney’s cable channels, but Disney — along with the rest of Hollywood — had broken the bundle by putting all of its best content on its streaming services. Now the ad-supported Disney+ will be a part of the cable bundle as well, along with the future ESPN streaming service.

Disney did, of course, get its rate increase and minimum penetration guarantees, and will charge for Disney+; that charge, though, is balanced out by the elimination of the channels that Disney cannibalized from the bundle.

More important is the fact that Disney has been forced to give up its attempt at double-dipping: no longer can the company get paid by Charter for channels and charge subscribers directly for what is broadly the same general entertainment content. That was what Charter wanted, and Disney, lacking leverage and facing the reality of massive sports rights fees that presumed the presence of Charter’s millions of TV subscribers, gave in.

Future of TV — Winner: Disney

Here is perhaps the biggest surprise in this deal: I actually think it is Disney that is the bigger winner when it comes to the future of TV. Note this paragraph in the press release:

Charter will also use its significant distribution capabilities to offer Disney’s direct-to-consumer services to all its customers – in particular its large broadband-only customer base – for purchase at retail rates. These include Disney+, Hulu and ESPN+, as well as The Disney Bundle.

I wrote extensively about the go-to-market capabilities of cable companies and why they were well-positioned to bundle streaming services last year in Cable’s Last Laugh; I will refrain from quoting half of the piece, and briefly summarize:

  • First, Disney, along with every other streaming service, needs help improving their go-to-market efficiency; in this there is no better asset than the cable companies’ existing go-to-market machines.
  • Second, Disney, along with every other streaming service, needs help lowering churn. When you are a standalone streaming service the only way to stop churn is by continually producing new must-see content, which is extremely expensive. It is much easier if you are part of a bundle and sharing the burden of generating new content with other companies.
  • Third, Disney, along with every other streaming service, has come to realize that their greatest growth opportunity is in advertising. A profitable advertising business, though, depends on scale; the fact that Disney has just quadrupled its ad-supported Disney+ base is a big deal!

It’s obvious, of course, that a stronger bundle is better for Disney’s existing cable channels, particularly ESPN; what should also be clear is that a stronger bundle is better for Disney’s streaming services as well, and now Disney is committed to building exactly that alongside Charter and, inevitably over the next several years, every other pay-TV provider.

This is why Disney is the long-term winner: the obstacle to the company doing the right thing for the long-term health of their business was not Charter, it was Disney’s own management, and Charter did the company the tremendous favor of forcing it to give up an unsustainable double-dipping strategy and take a step into a future of re-bundling.

Charter, meanwhile, knows better than anyone the value of bundles: the more services it can tie into a single billing statement, the stickier its offering is for end users. Yes, the company may have been willing to give up video, but it is stronger for having it.

The Arc of Video — Winner: Consumers

All that noted, both Charter and Disney emerge from the last decade weaker than they were before. Disney, along with the rest of Hollywood, killed the golden goose that was 90% of households subscribed to cable. No matter how successful Disney+ or an over-the-top ESPN streaming service becomes it will never be as profitable as effectively charging a tax on every household in America.

Charter is worse off as well: yes, the company had leverage over Disney in this negotiation, but that was a function of no longer caring whether or not it carried ESPN, not because the alternative was better. Indeed, Charter’s strategy of directing unhappy customers to Fubo was a necessary negotiating ploy that carried long-term risks: once customers are accustomed to getting their sports and news from an app it quickly becomes apparent that that app can be accessed over any broadband provider, including fiber and fixed wireless. As I noted above, Charter knows the value of bundles better than anyone, and this new bundle is much weaker than the old one.

The big winners, though, are consumers, on multiple levels:

  • First, consumers will soon have the option to get nearly all of their entertainment on an a la carte basis, particularly once the ESPN streaming service launches, even as they have access to a bundle that includes nearly all of their entertainment for a lower price than if they subscribed individually.
  • Second, consumers will be able to access general entertainment on-demand, and a far greater range of sports thanks to the effectively infinite number of channels enabled by streaming.
  • Third, consumers will be able to get their general entertainment ad-free if they are willing to pay (this is a win for the entertainment companies as well, who will gain the opportunity to segment their customer base based on their willingness to pay for an ad-free experience).

The biggest win of all, though, comes at Charter’s expense specifically: as noted above the loosening of the TV part of the bundle will make it easier to change broadband providers. That means that Charter will have to compete based on the quality of its broadband, which has fallen behind fiber over the last several years. Charter has announced plans to rectify that, pledging to spend $5.5 billion over the next three years to move its entire network to DOCSIS 4.0; other cable carriers are plotting similar upgrades. Meanwhile, Charter has been very aggressive in pushing its mobile service, significantly undercutting the big phone carriers in price, particularly as part of a bundle.

This is great news for consumers, and redolent of what happened with the iPhone. When the consumer point of contact changed from a carrier-controlled interface to an Apple-controlled one, the only alternative for phone carriers was to compete on their network quality; that was bad for profitability but great for consumers, both in terms of price and quality. I would expect a similar effect as the consumer point of contact for TV continues to change from a cable box to apps: infrastructure providers like Charter will have to compete by building infrastructure, and that’s a good thing.

Other Winners and Losers

Tech is another big winner of this fight, which shouldn’t be a surprise: big tech is so dominant in part because it provides so much consumer surplus in its markets; a market where consumers are winning is probably one where tech is as well. In this case video is becoming an app game, and while Charter and the other pay-TV providers have useful go-to-market channels, tech is the king of distribution and customer acquisition.

To that end, what cable can do for streamers is already being done by Amazon, Apple, Roku, etc.; the latest entrant is YouTube, which is using NFL Sunday Ticket to launch YouTube Channels, a streaming marketplace designed to sell services like Disney+ (for an ongoing commission, of course). YouTube, though, can pair that offering with YouTube TV, which means it has everything that Spectrum has; in this regard the fact that YouTube has already significantly increased the value of Sunday Ticket through better technology should make competitors nervous.

The second big winner is Fox. Fox sold off its 21st Century division to Disney to focus on news and sports; Fox News charges the second highest affiliate fees amongst cable channels, and Fox has invested heavily in sports rights that run across the Fox broadcast network (which gets large retransmission fees from cable companies), FS1, and regional networks like the Big Ten network. The cable bundle sticking together is existential for Fox, and it looks like that is going to happen — indeed, Fox’s live offerings are now going to be re-bundled with 21st Century content streamed by Disney.

Fox’s fate, meanwhile, gets to why sports leagues are big winners as well. Had the bundle fallen apart, the NBA, which is in the midst of negotiating a new rights deal, would have been in big trouble; now that it has a future, ESPN can more confidently bid. At the same time, now that everything is becoming an app, including traditional TV, the motivation for tech companies to bid in order to secure their marketplaces as the ultimate winners is higher as well.


One final comment about the significance of this deal.

There is a certain flavor of detached cynicism that is often the default response to news; examples abounded yesterday after this deal was announced, including this one:

A cynical response to the Charter-Disney deal that is wrong

Most of the time this response works well: the status quo is a powerful thing, and if your goal is being right more often than not, then it is always safer to be skeptical that things are different this time.

In this case, though, I think Sherman has it wrong: cable TV as we know it ended several years ago with The Great Unbundling. The significance of the just-announced deal between Disney and Charter is that The Great Re-bundling has begun.

The Rise and Fall of ESPN’s Leverage

On December 12, 1975, RCA Corporation launched its Satcom I communications satellite; the primary purpose was to provide long-distance telephone service between Alaska and the continental U.S. RCA had hopes, though, that there might be new uses for its capacity; to that end the company had listed for sale a 24-hour transponder that covered the entire United States, only to discontinue the offering after failing to find a single buyer.

Three years later Bill Rasmussen, the communications manager for the Hartford Whalers, was let go from his job; he had the idea of doing the same coverage he did for the team, but independently, along with other Connecticut sports, leveraging the then-expanding cable access TV facilities in Connecticut. These facilities existed to capture broadcast signals from New York and Boston using large antennas and deliver them to people’s houses; the cables, though, had capacity to carry more channels at basically zero cost, including Rasmussen’s proposed Connecticut sports network.

It was in the course of canvassing Connecticut cable providers that Rasmussen was introduced to the concept of satellite communications, and to Al Parinello, a manager at RCA. At first Rasmussen pitched his Connecticut sports network idea, and Parinello was confused: satellites covered the entire country, so why was Rasmussen only talking about a single state? Parinello told James Andrew Miller in Those Guys Have All The Fun:

I can still remember the conversation. Bill said, “Let me get this straight. You mean to tell me, for no extra money — for no extra money! — we could take this signal and beam it anywhere in the country?” And I said, “That’s right.” And then he asked again, “Anywhere in the country?” And I said, “Anywhere.” I remember we went back and forth like this a couple times. Bill and Scott were looking at each other, and they might have been getting sexually excited, I’m not sure. But I can tell that they were very, very excited.

It was in the course of that conversation that Parinello mentioned the unbought 24-hour transponders, which would let Rasmussen send a signal around the entire United States for less than it would cost him to buy access on those Connecticut cable companies; he bought one the next day, and only then set out to create what would become ESPN.

In other words, the very idea for ESPN sprang from:

  1. The fact that RCA had made a massive capital investment in the Satcom I satellite, and thus:
  2. Was selling access to that satellite at a relatively low price, given that said access had zero marginal costs, which meant:
  3. Rasmussen could leverage that access to reach every home in America, or at least every cable operator, for an even lower price than it cost to reach only the state of Connecticut.

Massive fixed costs resulting in zero distribution costs and massive scalability on a platform that is inherently indifferent to the data it is distributing might sound familiar: these are the same economic forces undergirding the Internet, and it speaks to those forces’ power that while they may have made ESPN in the first place, they threaten to destroy it in the long run.

ESPN and the Advent of Affiliate Fees

The first ESPN broadcast was a year later, on September 7, 1979; in the intervening time Rasmussen had made a deal with the NCAA, which oversaw a host of untelevised sports, to televise the early rounds of the men’s basketball tournament along with several other less popular sports. The other important deal was with Anheuser-Busch, which signed an advertising contract for $1.3 million. The idea was to convince cable distributors around the country to pick up the free ESPN signal, and to make up the cost with advertising; 1.4 million homes had access to that first broadcast.

In another foreshadowing of the Internet, ESPN soon realized that providing ongoing content monetized with nothing but advertising was good for growth but bad for actually making money; a year later the company reached 6 million homes and had a new deal with Anheuser-Busch that didn’t come close to covering its costs. Rasmussen was also out, as new management sought to rework its deal with the NCAA and, most importantly, the cable operators.

Then, in 1982, CBS Cable failed, leading Wall Street to question the whole business model; this didn’t affect ESPN, which was still mostly owned by Getty Oil, with ABC as a new investor and partner, but it did affect the cable companies, who saw their stocks plummet. The last thing they needed was for ESPN to go out of business too; Roger Werner, ESPN’s then CEO, told Miller:

We went to the market with this sort of survival pitch essentially as follows: If you come in voluntarily and do a new deal with us, we’ll start your rate at four cents in 1983 or ’84 and then we’ll go to six cents the next year, then eight cents. Either rip up the old contract and have some protection for whatever the term of your new affiliation agreement is going to be, or pay the prevailing rate when your old deal expires. There was the specter that if we were still around—and we intended to be around—we’d be a much more expensive service…

Essentially we were saying, guys, if you’re not interested in paying a fee and you’re really not interested in stepping up to the plate in the near term, tell us now and we’ll pull the plug. Nobody really wanted to deal with the idea that they were going to be paying for a product that had been free, but actually my recollection of this is that it was very stress-filled, it was very contentious.

It worked. Suddenly ESPN had two business models: advertising and a per-subscriber fee, paid whether or not subscribers watched ESPN. Andy Brilliant, the then-general counsel, told Miller:

At the end of the day, they blinked and agreed to pay us a dime per household. We breathed a massive sigh of relief. It was the first time we actually received validation that our service was worth something to the cable operators. I think that really put us on the map for good.

It also changed how ESPN thought about programming. Then-President of ESPN Bill Grimes told Miller:

This was, like, ’83; at that time we had boxing one night and skiing, tennis, and a whole bunch of other stuff on the schedule. We were talking one day about the fact that there was a lot of college basketball becoming available. I said, “You know, we could get basketball six nights a week. Our weekly ratings in prime would really go up.” But Roger [Werner] said, “That’s true, we could probably get a better rating. But they’re only numbers. We’re now in the business of subscriber fees. So what we want is as diverse programming as possible. Even if a program like skiing or auto racing gets a lower rating, there are people who will never watch a basketball game. So we should now think a little bit differently.” This was totally contrary to what I had grown up with in the business — rating, rating, rating. Get the highest ratings we can get. But Roger was right. We didn’t want all our ratings from one thing, because it’s only those hundred people who watch the skiing event that’ll yell like hell if the cable operators ever do decide to drop ESPN. His belief that sacrificing a little bit of ratings to have greater variety was going to create more rabid fans of ESPN was absolutely right.

Werner was right: ESPN could raise its affiliate fees, and cable operators that tried to drop the channel in protest were overrun with complaints, quickly adding it back. By 1986 ESPN was charging around 27 cents per subscriber, and then it signed a deal with the NFL, adding a 9-cent surcharge to its fees; cable operators could choose to not show the games (and avoid the surcharge), but within weeks nearly every cable operator realized their customers would not tolerate not having access to the NFL. George Bodenheimer, who would later become President of ESPN, told Miller this anecdote about the surcharge:

We set a deadline and we told everybody there was a benefit to committing to us then, but those who didn’t sign by midnight of the deadline date would pay a higher price. I remember pleading with one particular cable operator who was my account who said he wasn’t going to agree to sign on. His name was Leonard Tow.

Tow was in the process of building what is now known as Frontier Communications; Grimes picked up the story:

Leonard comes in, and you know what the first thing he says about the deal was? “We can’t afford to do this.” I said, “People not seeing the games aren’t going to like it.” Leonard said, “I know football’s popular, but we’re already paying you guys a subscriber fee. We’ll just put on some other local programming the night of the game.” I reminded him that if he changed his mind after tomorrow, he would have to pay a 20 percent incremental fee, a premium, but he just kept saying nope. On the way out I said, “Leonard, look, we’re really successful now and we’re going to be more successful in the future. It would be awful not having you a part of this, but I really believe you’re going to wind up changing your mind. Just wait until people find out you won’t have the games.” He disagreed and we said good-bye. One week later, he called and signed on. And, oh yeah, he paid the extra 20 percent.

ESPN paid $153 million over three years for those NFL rights; the first broadcast reached 45 million homes, earning the network an incremental $4.05 million/month, just about enough to cover the NFL rights. What was more important was that the NFL attracted new subscribers who paid ESPN’s full fees, which amounted to over $12 million a month. Moreover, ESPN also got rights from the NFL for unlimited access to highlights: that fueled studio shows like NFL Primetime and SportsCenter that cost very little to produce, yet both attracted large audiences (for advertising) and made the NFL and other sports even more popular. The flywheel was fully engaged.
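For what it’s worth, the arithmetic here checks out; this is a minimal back-of-the-envelope sketch using only the figures cited above (all approximate):

```python
# Back-of-the-envelope math on ESPN's first NFL deal, using the figures cited above.

rights_fee_total = 153_000_000                         # $153 million over three years
rights_cost_per_month = rights_fee_total / (3 * 12)    # ~$4.25M/month

homes = 45_000_000                    # homes reached by the first broadcast
nfl_surcharge = 0.09                  # 9-cent NFL surcharge per subscriber per month
base_affiliate_fee = 0.27             # ~27 cents per subscriber per month by 1986

surcharge_revenue = homes * nfl_surcharge       # ~$4.05M/month, roughly covering the rights
full_fee_revenue = homes * base_affiliate_fee   # ~$12.15M/month across the whole base

print(f"NFL rights cost:        ${rights_cost_per_month / 1e6:.2f}M/month")
print(f"NFL surcharge revenue:  ${surcharge_revenue / 1e6:.2f}M/month")
print(f"Base affiliate revenue: ${full_fee_revenue / 1e6:.2f}M/month")
```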

Charter vs. Disney

Over the last decade the story of ESPN specifically, Disney more broadly, and cable as a whole has been the slow but steady disintegration of that flywheel, culminating in the current standoff between Charter and Disney. From the Wall Street Journal:

Charter Communications subscribers are caught in the middle of a philosophical fight between the cable giant and Disney, parent company of ESPN, ABC and several other networks. Disney-owned networks on Thursday went dark for customers of Charter’s Spectrum cable systems, which has nearly 15 million video subscribers across the country including the New York and Los Angeles markets. As a result, sports fans who are Charter subscribers are losing access to college football and the U.S. Open. And the National Football League season is about to begin: ESPN’s “Monday Night Football” starts Sept. 11. Other channels no longer available to Charter include ABC-owned TV stations and cable networks FX, Disney Channel, Freeform and National Geographic.

Channels going dark in the midst of an affiliate fee dispute isn’t new: indeed, that is how ESPN managed to extract per-subscriber fees in the first place. And, for 40 years, ESPN usually won, including a standoff with YouTube TV in late 2021; I wrote at the time in an Update:

It appears that Disney decisively won its stand-off with Google; YouTube TV dropped Disney channels for about two days, only to come to an agreement that Disney characterized as “fair terms that are consistent with the market”; this strongly suggests that Google saw sufficient cancellations in that two-day window that it caved on its demands to get a lower rate. This further reaffirms just how powerful the ESPN bundle is (and Disney’s bundle generally).

When Disney went dark on Charter last week, I initially assumed a similar outcome; then came the Charter investor call the next morning, and this slide:

Charter's investor slide about video

The most important sentence is in the light blue box on the far right: “The video product is no longer a key driver of financial performance.” This is the culmination of a 25-year shift in business model for the cable companies: those initial investments in wires in the ground to provide small communities access to big city TV broadcasts turned out to be very well suited to providing broadband Internet access. Remember the lesson of RCA and ESPN’s founding: the digital transmission of information is inherently indifferent to the data being distributed. In the case of cable the initial use case was digital TV signals, but the exact same cable could also carry packets running the TCP/IP protocol.1

Of course for a long time it was very profitable to carry both, along with voice: cable companies offered “triple play” bundles that included TV, Internet, and telephony. Over time the telephony part dropped off, as people used mobile phones exclusively; cable carriers have since moved into the mobile carrier space as well, fueled by profits from TV and broadband Internet. What made the Internet part the most valuable, though, is that the cable companies didn’t need to pay for content: everything was just a packet.

That, though, was also the problem: some of those packets reformed themselves as Netflix video streams, which ate into time spent watching TV. Worse, Netflix’s stock was rising and rising as it acquired ever more customers, much to the chagrin of Hollywood, which felt entitled to those multiples given they were the ones producing the most compelling content. That resulted in the fateful decision to start their own streaming services, impoverishing the TV bundle; Charter’s investor presentation included “The Impoverishment Cycle” created by MoffettNathanson:

"The Impoverishment Cycle" from MoffettNathanson, via Charter

What Disney and all of the rest forgot was the lesson first imparted by Werner at the dawn of affiliate fees: retaining customers means offering content for everyone; in the case of the cable bundle, that meant having compelling programming above and beyond sports.

The second lesson Disney forgot was why that NFL deal made sense for ESPN at the time, even though the surcharge ESPN charged cable providers was only projected to barely cover the deal: high-end sports deals drove customer demand, but the real money was made on (1) everyone who didn’t care about football and (2) cheap content like SportsCenter. The latter, though, has also been impoverished by the Internet; I noted last year when the Big Ten signed a TV deal that excluded ESPN:

The Big Ten’s exclusion of ESPN really highlights the degree to which social media has supplanted ESPN’s previous tentpole shows like SportsCenter; ESPN used to get discounts on rights deals because to be excluded from SportsCenter meant publicity death. That’s no longer the case.

The former, meanwhile, is a reminder that while ESPN has generally made money from rights deals, particularly for smaller sports that filled the schedule and inspired niche fans to badger the cable companies, the biggest properties — particularly the NFL — have always been cognizant of their worth and willing to extract their full value. Disney, in turn, can only maintain ESPN profitability by passing on those rights fees to cable distributors, who must in turn pass them on to their customers.

The third lesson Disney has forgotten is the most counter-intuitive takeaway of this battle: the worst thing that has happened to the company’s negotiating position is that ESPN is already available on the Internet.

The Phases of Cable TV

The cable TV industry has gone through four distinct phases in terms of competition:

Phase 1: Non-Consumption

The first phase was the time in ESPN’s history I detailed above: burgeoning cable TV services were running cables to every home in America and trying to convince customers to sign up. In this case their competition was non-consumption: a lot of people didn’t have cable, and the cable companies wanted them to sign up for service. ESPN was a particularly unique asset in this regard thanks to its provision of sports content that wasn’t available elsewhere — indeed, until the NFL deal, most of the content had never been available at all. This certainly led to some bruising fights between ESPN and the cable companies over affiliate fees, but it’s always easier to come to an agreement when the pie is growing.

Phase 2: Satellite

The second phase was the 1990s emergence of DirecTV and Dish Network, which offered the same channels as cable TV but via a small satellite dish you could mount on your roof or porch. This was a more involved installation process, but ultimately cheaper thanks to the fact that DirecTV and Dish didn’t need to put an actual cable in the ground. This was also good news for ESPN because now there was an alternative to traditional cable TV: if a cable provider didn’t want to accept higher affiliate fees then ESPN could withhold service, trusting its viewers would punish the cable provider by moving to satellite (which meant they would probably be gone forever).

Phase 3: IPTV

By the 2000s the satellite threat to cable was fading because satellite was TV only: if a customer had both Internet and TV via their cable provider, then it was much harder to switch. Remember, though, that it’s all data in the end; thus the 2000s saw the rise of IPTV offerings from traditional telecom providers like AT&T and Verizon. They too saw salvation for their own fading telephony business in providing broadband Internet, but providing a competitive offering to cable meant offering TV as well. And, thanks to the Internet, they could simply provide said TV using the TCP/IP protocol.

The decade that followed was probably the time of maximum ESPN leverage: it was easier for customers to switch from the cable bundle to the telecom bundle than it was to install an extra satellite dish; it’s no surprise that this was the decade when ESPN became more aggressive in both acquiring sports rights and raising affiliate fees; it was also the peak of ESPN’s relative share of Disney profits.

Phase 4: vMVPDs

The virtual multichannel video programming distributor (vMVPD) era kicked off in 2015 with the launch of Sling TV. This took the IPTV trend in Phase 3 to its logical endpoint: instead of needing a box to display IPTV signals, you could simply use an app. vMVPDs have had a big impact on the landscape in two ways: first, they significantly diminished the cord-cutting trend for years as they captured both cord-cutters and non-consumers, and second, they decimated regional sports networks that had long increased affiliate fees even more aggressively than ESPN. I wrote in What the NBA Can Learn From Formula 1:

There just aren’t that many SuperFans of a single team, yet regional networks cost more than anything outside of ESPN — more in some markets. This worked in a world where everyone got cable by default, but remember that cable is losing far more customers than pay-TV as a whole, thanks to the rise of the aforementioned virtual pay-TV providers. Virtual pay-TV providers don’t have a customer base to defend, or infrastructure costs to leverage: they distribute via the Internet that people already pay for. To that end, they don’t have to carry everything, and regional sports networks were the most obvious thing to drop: this lets virtual pay-TV providers have a lower price than cable by virtue of excluding content that most people don’t want.

Still, this didn’t seem to affect ESPN, as exemplified by the fact that it appeared to win its negotiation with YouTube TV in 2021. In fact, though, this dispute with Charter is showing why ESPN may be a loser as well. Go back to the issue of cable customer churn in response to ESPN’s lack of availability; here’s how it manifested in each phase:

  • In Phase 1, a churned customer meant less leverage on expensive buildouts, and pressure from Wall Street.
  • In Phase 2, a churned customer went to the effort of getting satellite and probably never came back.
  • In Phase 3, a churned customer would not just change their TV provider, but also their broadband provider, and remember that broadband was becoming the cable companies’ biggest business.

In Phase 4, though, a churned TV customer is still a broadband customer, because the Internet is a precondition for watching the vMVPD! Sure, a customer could be so incensed that they also change their Internet provider, but that is completely unnecessary and, given the inconvenience involved, highly unlikely.

That means that ESPN, for the first time in its history, has no leverage over the cable companies. Indeed, MoffettNathanson reported that Charter is actively helping customers move to vMVPDs:

For Charter, the uncomfortable truth is that it just doesn’t matter all that much. Yes, they probably do still make some money on video. But not much, and they recognize that linear video is going to be a rapidly declining line of service under even the most optimistic scenarios, so the issue is arguably nothing more than when, not if, video goes away. Charter has already established a referral capability for customers to switch them to YouTube TV or FuboTV (predictably, they haven’t mentioned referring customers to Sling TV or DirecTV Now, and they presumably wouldn’t steer anyone to Hulu Live if the trigger was a dispute with Disney).

Notably, the first NFL Monday Night Football game (ESPN) features two Spectrum-market teams; the New York Jets and the Buffalo Bills. To handle a potential rush of customers anxious about missing the game, Charter is preparing a one-touch QR code that would not only create a new YouTube TV or Fubo subscription, but would also downgrade from a Spectrum video bundle with a single click…Disney may learn the hard way that it’s tough to win a negotiation with a counterparty that has nothing to lose.

This truth may be uncomfortable for Charter; it ought to be sobering for Disney, particularly since the company, along with the rest of Hollywood, was the one responsible for destroying the value of TV to companies like Charter who were built on it.

The Case for Re-Bundling

Once-and-current Disney CEO Bob Iger has been talking a lot recently about ESPN’s inevitable shift to going over-the-top, including stating that he has a particular date in mind; this showdown with Charter and the revelation of ESPN’s dramatic diminishment in its negotiating position is a reminder that declining businesses often don’t have the luxury of dictating their future.

So what does that future look like?

First, it’s very possible — perhaps even likely — that Charter and Disney come to an agreement. As the MoffettNathanson note observes, Charter probably still is making some money on video, and it is also both a customer acquisition tool and a churn mitigation factor for its broadband business, and a part of the modern triple play bundle (with mobile). Disney, meanwhile, along with all of the other Hollywood studios, still needs the substantial amount of cash it receives from cable TV providers (this is particularly pressing for Disney given that it still has to pay for sports rights). Yes, they also earn money from vMVPDs like YouTube TV, but not every customer will seamlessly transition.

To that end, the company that ought to give here is Disney: according to that Wall Street Journal article Charter is willing to accept the reported $1.50 increase in affiliate fees Disney is demanding if it receives the right to bundle the ad-supported version of Disney+. Charter argues that it is only right that Disney re-add its most valuable entertainment content to the pay-TV bundle, and frankly, I think it has a point.

More important for Disney, though, is that cable TV providers like Charter remain potent go-to-market entities — decades of servicing customers in their homes have meant a massive build-up in everything from stores to local sales to customer support — and that could be very helpful as Disney seeks to acquire more marginal customers. Most important of all, though, I think it is in the long-term interest of the streaming services to be part of a bundle. I wrote last year in Cable’s Last Laugh:

The cable companies are better suited than almost anyone else to rebundle for real. Imagine a “streaming bundle” that includes Netflix, HBO Max, Disney+, Paramount+, Peacock, etc., available for a price that is less than the sum of its parts…Owning the customer may be less important than simply having more customers, particularly if those customers are much less likely to churn. After all, that’s one of the advantages of a bundle: instead of your streaming service needing to produce compelling content every single month, you can work as a team to keep customers on board with the bundle.

What Charter is proposing is a bit different — it wants to bundle traditional TV with streaming services — and I get why Disney is resistant: there are a lot of people paying for both traditional TV and Disney+ (and Hulu and ESPN+); giving Charter bundling rights would cannibalize some amount of revenue. Moreover, it would also mean the end of whatever grand plans Disney might have about offering its own bundle, or cutting out the cable companies’ margin once and for all. At some point, though, Disney and everyone else in Hollywood has to wake up to reality; I wrote in Hollywood on Strike:

The broader issue is that the video industry finally seems to be facing what happened to the print and music industry before them: the Internet comes bearing gifts like infinite capacity and free distribution, but those gifts are a poisoned chalice for industries predicated on scarcity. When anyone could publish text, most text-based businesses went from massive profitability to terminal decline; when anyone could distribute music the music industry could only be saved by tech companies like Spotify helping them sell convenience in place of plastic discs.

For the video industry the first step to survival must be to retreat to what they are good at — producing content that isn’t available anywhere else — and getting away from what they are not, i.e. running undifferentiated streaming services with massive direct costs and even larger opportunity ones. Talent, meanwhile, has to realize that they and the studios are not divided by this new paradigm, but jointly threatened: the Internet is bad news for content producers with outsized costs, and long-term sustainability will be that much harder to achieve if the focus is on increasing them.

Re-bundling is better for everyone; it’s Disney’s fault that the entities best-placed to pull that off no longer need it.

Second, for all of the talk about ESPN, it’s worth noting that its content is still valuable — that’s the entire reason this dispute is a big deal. Will anyone care if Charter stops carrying channels from anyone else in Hollywood? And yet, all of those studios are just as dependent on cable TV cashflow, even as many of them have “cheated” to a much greater extent than Disney: Peacock, for example, carries most of NBC’s sports programming, including football, and even put some of the most attractive Olympics programming exclusively on the streaming service. Why on earth should Charter or any other cable provider pay for NBCUniversal channels? Or, more pertinently, if ESPN isn’t available, why would any of the dwindling number of subscribers stay?

The biggest long-term question, though, has to be around sports itself. Sports leagues could extract ever higher rights fees from ESPN because ESPN could extract ever higher affiliate fees from cable TV providers; if the latter is broken, then the former is as well. Yes, vMVPDs like YouTube TV will still exist — and be big winners — and Disney still plans an ESPN streaming service. All of those options, though, entail dramatically increased customer choice; leagues like the NBA have shrugged off declining ratings with the certainty that they would, via cable TV subscribers, get paid regardless, but now the choice isn’t just whether to click the remote, but whether to simply click cancel and watch something else. Better to re-bundle sooner rather than later!


  1. Yes, I know I just said “protocol” twice 

Nvidia On the Mountaintop

It was only 11 months ago that I wrote an Article entitled Nvidia In the Valley; the occasion was yet another plummet in its stock price:

Nvidia's current stock price drop

To say that the company has turned things around would be an understatement:

Nvidia's latest stock rise

That big jump in May came with Nvidia’s last earnings, when the company shocked investors with an incredibly ambitious forecast; this past week Nvidia vastly exceeded those expectations and forecast even bigger growth going forward. From the Wall Street Journal:

Chip maker Nvidia said revenue in its recently completed quarter more than doubled from a year ago, setting a new company record, and projected that surging interest in artificial intelligence is propelling its business faster than expected. Nvidia is at the heart of the boom in artificial intelligence that made it a $1 trillion company this year, and it is forecasting growth that outpaces even the most bullish analyst projections.

Nvidia’s stock, already the top performer in the S&P 500 this year, rose 7.5% following the results, which would be about $87 billion in market value. The company said revenue more than doubled in its fiscal second quarter to about $13.5 billion, far ahead of Wall Street forecasts in a FactSet survey. Even more strikingly, it said revenue in its current quarter would be around $16 billion, besting expectations by about $3.5 billion. Net profit for the company’s second quarter was $6.19 billion, also surpassing forecasts.

The results show a wave of investment in artificial intelligence that began late last year with the arrival of OpenAI’s ChatGPT language-generation tool is gaining steam as companies and governments seek to harness its power in business and everyday life. Many companies see AI as indispensable to their future growth and are making large investments in computing infrastructure to support it.
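Those figures hang together; the sketch below is nothing more than the arithmetic implied by the excerpt above, with no independent estimates added:

```python
# Back-of-envelope arithmetic on the Wall Street Journal figures quoted above.
# Every input comes from the excerpt; nothing here is an independent estimate.

pop = 0.075                      # stock rose 7.5% following the results
value_added = 87e9               # "about $87 billion in market value"

# 87B / 0.075 implies a pre-move market cap of roughly $1.16 trillion,
# consistent with the "$1 trillion company" framing earlier in the quote.
print(f"Implied pre-move market cap: ${value_added / pop / 1e12:.2f} trillion")

q2_revenue = 13.5e9              # fiscal Q2 revenue
q3_guidance = 16e9               # current-quarter revenue guidance
guidance_beat = 3.5e9            # amount by which guidance topped expectations

# Wall Street was therefore expecting roughly $12.5 billion for the current
# quarter, and Nvidia guided to ~19% sequential growth on top of a quarter
# that had already doubled year-over-year.
print(f"Implied Street expectation: ${(q3_guidance - guidance_beat) / 1e9:.1f} billion")
print(f"Guided sequential growth: {q3_guidance / q2_revenue - 1:.0%}")
```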

Now the big question on everyone’s mind is whether Nvidia is the new Cisco:

Is Nvidia Cisco?

I don’t think so, at least not in the near term: there are some fundamental differences between Nvidia and Cisco that are worth teasing out. The bigger question is the long term, and here the comparison might be more apt.

Nvidia and Cisco

The first difference between Nvidia and Cisco is in the above charts: Nvidia already went through a crash, thanks to the double whammy of Ethereum moving to proof-of-stake and the COVID cliff in PC sales; both left Nvidia with huge amounts of inventory it had to write off over the second half of last year. The bright spot for Nvidia was the steady growth of data center revenue, thanks to the increase in machine learning workloads; I included this chart in that Article last fall:

Nvidia's gaming revenue drop

What has happened over the last two quarters is that data center revenue is devouring the rest of the company; here is an updated version of that same chart:

Nvidia's sky-rocketing AI revenue

Here is Nvidia’s revenue mix:

Nvidia's revenue mix

This dramatic shift in Nvidia’s business provides some interesting contrasts to Cisco’s dot-com run-up. First, here was Cisco’s revenue, gross profit, net profit, and stock price in the ten years starting from its 1993 IPO:

Cisco's revenue, profit, and stock price in the 90s

Here is Nvidia’s last ten years:

Nvidia's revenue, profit, and stock price

The first thing to note is the extent to which Nvidia’s crash last year looks similar to Cisco’s dot-com crash: in both cases steady but steep revenue increases initially outpaced the stock price, which eventually overshot just a few quarters before big inventory write-downs led to big decreases in profitability (score one for crypto optimists hopeful that the current doldrums are simply their own dot-com hangover).

Cisco, though, never had a second act like Nvidia’s data center explosion. What is notable is the extent to which Nvidia’s revenue increase is matching the slope of the stock price increase (obviously this is inexact given the different axes); it seems likely that the stock will overshoot revenue growth soon enough, but it hasn’t really happened yet. It’s also worth noting how much more disciplined Nvidia appears to be in terms of below-the-line costs: its net profit is moving in concert with its revenue, unlike Cisco in the 90s; I suspect this is a function of Nvidia being a much larger and more mature company.
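One way to make that slope comparison more concrete is to index both series to 100 at a common base quarter. The sketch below uses hypothetical placeholder values, not Nvidia’s reported figures, purely to show the method:

```python
# Indexing revenue and the stock price to a common base quarter makes the
# "different axes" comparison direct. These values are hypothetical placeholders
# chosen for illustration; they are not Nvidia's reported figures.

revenue_usd_b = [10, 11, 13, 21, 26]        # hypothetical quarterly revenue, $B
stock_price = [200, 230, 270, 440, 540]     # hypothetical quarter-end share price

def index_to_base(series, base=100.0):
    """Rebase a series so its first value equals `base`."""
    return [base * x / series[0] for x in series]

rev_idx = index_to_base(revenue_usd_b)
px_idx = index_to_base(stock_price)

for quarter, (r, p) in enumerate(zip(rev_idx, px_idx)):
    print(f"Q{quarter}: revenue index {r:6.1f} | stock index {p:6.1f} | gap {p - r:+6.1f}")

# A persistently widening gap would be the signal that the stock has started to
# overshoot revenue growth; per the charts above, that gap has not opened up yet.
```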

Another difference is the nature of Nvidia’s customers: over 50% of the company’s Q2 revenue came from the large cloud service providers, followed by large consumer Internet companies (i.e. Meta). The cloud category does, of course, encompass the startups that once might have purchased Cisco routers and Sun servers directly, and now rent capacity (if they can get it); cloud providers, though, monetize their hardware immediately, which is good for Nvidia.

Still, there is an important difference from other cloud workloads: previously a new company or line of business only ramped their cloud utilization with usage, which ought to correlate to customer acquisition, if not revenue. Model training, though, is an up-front cost, not dissimilar to the cost needed to buy those Sun servers and Cisco routers in the dot-com era; that is cloud revenue that has a much higher likelihood of disappearing if the company in question doesn’t find a market.
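A toy model makes that contrast explicit; the training cost, per-user inference cost, and user counts below are illustrative assumptions, not figures from any actual company:

```python
# Toy model of the difference described above: training spend is front-loaded and
# stops if the product fails, while inference spend scales with ongoing usage.
# All inputs are illustrative assumptions.

def cumulative_gpu_spend(months, monthly_users, training_cost=2_000_000,
                         inference_cost_per_user=0.50, shutdown_month=None):
    """Cumulative cloud GPU spend for a hypothetical AI startup."""
    spend = training_cost          # up-front, like buying Sun servers and Cisco routers
    for month in range(months):
        if shutdown_month is not None and month >= shutdown_month:
            break                  # product failed; inference spend disappears
        spend += monthly_users * inference_cost_per_user
    return spend

# A startup that finds a market keeps generating inference spend indefinitely...
print(cumulative_gpu_spend(24, monthly_users=500_000))                   # $8.0M over two years
# ...one that doesn't stops at its training bill plus a few months of inference,
# which is the cloud revenue that has a much higher likelihood of disappearing.
print(cumulative_gpu_spend(24, monthly_users=50_000, shutdown_month=6))  # $2.15M
```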

This point is relevant to Nvidia given that training is the part of AI where the company is the most dominant, thanks to both its software ecosystem and the ability to operate a huge fleet of Nvidia chips as a single GPU; inference is where Nvidia will first see challenges, and that is also the area of AI that is correlated with usage, and thus more durable from a cloud provider perspective.

Those points about a software ecosystem and hardware scalability are also the biggest reason why Nvidia is different from Cisco. Nvidia has a moat in both, along with a substantial manufacturing advantage thanks to its upfront payments to TSMC over the last several years to secure its own 4nm line (and having the good fortune of asking for more scale at a time when TSMC’s other sources of high performance computing revenue are in a slump). There is certainly a massive incentive for both the cloud providers and large Internet companies to bridge Nvidia’s moats — see AWS’s investments in its own chips, for example, or Meta’s development of and support for PyTorch — but right now Nvidia has a big lead, and the frenzy inspired by ChatGPT is only deepening its installed base, with all of the positive ecosystem effects that entails.

GPU Demand

The biggest challenge facing Nvidia is the one that is ultimately out of their control: what does the final market look like?

Go back to the dot-com era, and the era that preceded it. The advent of computing, first in the form of mainframes and then the PC, digitized information, making it endlessly duplicable. Then came the Internet, which made the marginal cost of distributing that content go to zero (with the caveat that most people had very low bandwidth). This was an obvious business opportunity that plenty of startups jumped all over, even as telecom companies took on the bandwidth problem; Cisco was the beneficiary of both.

The missing element, though, was demand: consistent consumer demand for Internet applications only started to arrive with the advent of broadband connections in the 2000s (thanks in part to a buildout that bankrupted said telecom companies), and then exploded with smartphones a decade later, which made the Internet accessible anytime, anywhere. It was demand that made the router business as big as dot-com investors thought it might be, although by then Cisco had a host of competitors, including large cloud providers who built (and open-sourced) their own.

There are lots of potential starting points to choose for AI: machine learning has obviously been a thing for a while, or you might point to the 2017 invention of the transformer; the release of GPT-3 in 2020 was perhaps akin to the release of the Mosaic web browser, which would make ChatGPT the Netscape IPO. One way to categorize this emergence is to characterize training as being akin to digitization in the previous era, and creation — i.e. inference — as akin to distribution. Once again there are obvious business opportunities that arise from combining the two, and once again startups are jumping all over them, along with the big incumbents.

However you want to make the analogy, what is important to note is that the missing element is the same: demand. ChatGPT took the world by storm, and the use of AI for writing code is both proliferating widely and extremely high leverage. Every SaaS company in tech, meanwhile, is hard at work on an AI strategy, for the benefit of their sales team if nothing else. That is no small thing, and the exploration and implementation of those strategies will use up a lot of Nvidia GPUs over the next few years. The ultimate question, though, is how much of this AI stuff is actually used, and that is ultimately out of Nvidia’s control.

My best guess is that the next several years will be occupied building out the most obvious use cases, particularly in the enterprise; the analogy here is to the 2000s build-out of the web. The question, though, is what will be the analogy to mobile (and the cloud), which exploded demand and led to one of the most profitable decades tech has ever seen? The answer may be an already discarded fad: the metaverse.

A GPU Overhang and the Metaverse

In April 2022, when DALL-E 2 came out, I wrote DALL-E, the Metaverse, and Zero Marginal Content, and highlighted three trends:

  • First, the gaming industry was increasingly about a few AAA games, small indie titles, and the huge sea of mobile; the limiting factor in further development was the astronomical cost of developing high quality assets.
  • Second, social media succeeded by virtue of making content creation free, because users created the content of their own volition.
  • Third, TikTok pointed to a future where every individual not only had their own feed, but also where the provenance of that content didn’t matter.

AI is how those three trends might intersect:

What is fascinating about DALL-E is that it points to a future where these three trends can be combined. DALL-E, at the end of the day, is ultimately a product of human-generated content, just like its GPT-3 cousin. The latter, of course, is about text, while DALL-E is about images. Notice, though, that progression from text to images; it follows that machine learning-generated video is next. This will likely take several years, of course; video is a much more difficult problem, and responsive 3D environments more difficult yet, but this is a path the industry has trod before:

  • Game developers pushed the limits on text, then images, then video, then 3D
  • Social media drives content creation costs to zero first on text, then images, then video
  • Machine learning models can now create text and images for zero marginal cost

In the very long run this points to a metaverse vision that is much less deterministic than your typical video game, yet much richer than what is generated on social media. Imagine environments that are not drawn by artists but rather created by AI: this not only increases the possibilities, but crucially, decreases the costs.

I wrote in the conclusion:

Machine learning generated content is just the next step beyond TikTok: instead of pulling content from anywhere on the network, GPT and DALL-E and other similar models generate new content from content, at zero marginal cost. This is how the economics of the metaverse will ultimately make sense: virtual worlds need virtual content created at virtually zero cost, fully customizable to the individual.

Zero marginal cost is, I should note, aspirational at this point: inference is expensive, both in terms of power and in terms of the need to pay off all of the spending that is showing up on Nvidia’s earnings. It’s possible to imagine a scenario a few years down the line, though, where Nvidia has deployed countless ever more powerful GPUs and inspired massive competition, such that the world’s supply of GPU power far exceeds demand, driving marginal costs down to the cost of energy (which hopefully will have become cheaper as well); suddenly the idea of making virtual environments on demand won’t seem so far-fetched, opening up entirely new end-user experiences that explode demand in the way that mobile once did.
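For a sense of what “down to the cost of energy” could mean, here is a back-of-envelope sketch; the GPU power draw, generation time, electricity price, and hardware price below are all assumptions chosen for illustration:

```python
# Back-of-envelope for the marginal cost of generating one virtual asset on demand.
# Every figure below is an illustrative assumption, not a measured number.

gpu_power_kw = 0.7               # assumed power draw of one data-center GPU
electricity_usd_per_kwh = 0.08   # assumed industrial electricity price
gpu_seconds_per_asset = 2.0      # assumed GPU time to generate one asset

# In an overhang scenario, amortized capital cost per generation approaches zero,
# leaving energy as the floor on marginal cost.
energy_cost = gpu_power_kw * (gpu_seconds_per_asset / 3600) * electricity_usd_per_kwh
print(f"Energy cost per asset: ${energy_cost:.6f}")              # ≈ $0.00003

# Today, by contrast, the capital cost of the GPU itself dominates, even assuming
# full utilization over a three-year useful life.
gpu_price_usd = 25_000
useful_life_seconds = 3 * 365 * 24 * 3600
capital_cost = gpu_price_usd / useful_life_seconds * gpu_seconds_per_asset
print(f"Amortized capital cost per asset: ${capital_cost:.6f}")  # ≈ $0.0005
```

Under these assumptions the capital line is more than an order of magnitude larger than the energy line, which is the sense in which today’s marginal costs are mostly about paying off the GPUs rather than powering them.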

The GPU Age

The challenge for Nvidia is that this future isn’t particularly investable; indeed, the idea assumes a capacity overhang at some point, which is not great for the stock price! That, though, is how technology advances, and even if a cliff eventually comes, there is a lot of money to be made in the meantime.

That noted, the biggest short-term question I have is around Nvidia CEO Jensen Huang’s insistence that the current wave of demand is in fact the dawn of what he calls accelerated computing; from the Nvidia earnings call:

I’m reluctant to guess about the future and so I’ll answer the question from the first principle of computer science perspective. It is recognized for some time now that…using general purpose computing at scale is no longer the best way to go forward. It’s too energy costly, it’s too expensive, and the performance of the applications are too slow. And finally, the world has a new way of doing it. It’s called accelerated computing and what kicked it into turbocharge is generative AI. But accelerated computing could be used for all kinds of different applications that’s already in the data center. And by using it, you offload the CPUs. You save a ton of money in order of magnitude, in cost and order of magnitude and energy and the throughput is higher and that’s what the industry is really responding to.

Going forward, the best way to invest in the data center is to divert the capital investment from general purpose computing and focus it on generative AI and accelerated computing. Generative AI provides a new way of generating productivity, a new way of generating new services to offer to your customers, and accelerated computing helps you save money and save power. And the number of applications is, well, tons. Lots of developers, lots of applications, lots of libraries. It’s ready to be deployed.

And so I think the data centers around the world recognize this, that this is the best way to deploy resources, deploy capital going forward for data centers. This is true for the world’s clouds and you’re seeing a whole crop of new GPU-specialized cloud service providers. One of the famous ones is CoreWeave and they’re doing incredibly well. But you’re seeing the regional GPU specialist service providers all over the world now. And it’s because they all recognize the same thing, that the best way to invest their capital going forward is to put it into accelerated computing and generative AI.

My interpretation of Huang’s outlook is that all of these GPUs will be used for a lot of the same activities that are currently run on CPUs; that is certainly a bullish view for Nvidia, because it means the capacity overhang that may come from pursuing generative AI will be back-filled by current cloud computing workloads. And, to be fair, Huang has a point about the power and space limitations of current architectures.

That noted, I’m skeptical: humans — and companies — are lazy, and not only are CPU-based applications easier to develop, they are also mostly already built. I have a hard time seeing companies go through the time and effort to port things that already run on CPUs to GPUs; at the end of the day, the applications that run in a cloud are determined by customers who provide the demand for cloud resources, not cloud providers looking to optimize FLOP/rack.
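One way to frame that skepticism is as a payback calculation; the savings ratio, cloud bills, and porting costs below are hypothetical, but they illustrate why only the largest CPU workloads clear the bar:

```python
# Rough payback framing for porting an existing CPU workload to accelerated computing.
# The savings ratio, cloud bills, and engineering costs are illustrative assumptions.

def payback_years(annual_cpu_bill, gpu_savings_ratio, porting_cost):
    """Years for GPU savings to repay the one-time cost of porting a CPU workload."""
    annual_savings = annual_cpu_bill * gpu_savings_ratio
    return porting_cost / annual_savings

# A huge workload with a big cloud bill pays back the porting effort quickly...
print(payback_years(annual_cpu_bill=10_000_000, gpu_savings_ratio=0.9,
                    porting_cost=3_000_000))      # ≈ 0.3 years
# ...but the long tail of already-built CPU applications never comes close.
print(payback_years(annual_cpu_bill=200_000, gpu_savings_ratio=0.9,
                    porting_cost=1_500_000))      # ≈ 8.3 years
```

Under those assumptions, only workloads with very large existing compute bills justify the effort, which is why net-new GPU-native demand matters more than back-filling.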

If GPUs are going to be as big a market as Nvidia’s investors hope, it will be because applications that are only possible with GPUs generate the demand to make it so. I’m confident that time will come; what neither I, nor Huang, nor anyone else can be sure of is when that time will arrive.