MKBHDs For Everything

Anniversaries are strange: no one but those involved have any reason to even know they exist, which makes them easy to skip; for example, last year I mostly ignored the 10 year anniversary of starting Stratechery because I was more eager to write about ChatGPT Getting a Computer. That leaves today, April 16, as my last chance to mark a tin anniversary: ten years ago Stratechery added a subscription option, making this my full-time job.

It was, for the record, a miserable day: I had a convoluted plan to offer different site experiences for subscribers and non-subscribers: the former would receive daily linked-list updates (a la Daring Fireball) while the latter would still have access to longer-form Articles; I scrapped the concept two days later in favor of simply sending subscribers a daily email, and so the paid newsletter model was born.

That accident of history ended up being why this anniversary is actually more meaningful to me: while I am most well-known for Stratechery, I am equally proud of the paid newsletter model — services like Substack were based on Stratechery — and gratified that so many writers are making a living connecting directly with readers.

Of course newsletters aren’t the only medium where this happens (and, in fact, I’ve never viewed Stratechery as a newsletter; it’s a blog that you can read via email); there are also podcasts (which Stratechery has expanded into) and YouTube (like and subscribe to the Stratechery and Sharp Tech channels!). The king of the latter, at least when it comes to tech, is Marques Brownlee, aka MKBHD.

The Humane AI Pin

Humane introduced the wearable AI Pin at a TED Talk 11 months ago, painting the vision of a future without screens, powered by AI. I was, for the record, pretty skeptical: most new tech categories layer on top of what came before, as opposed to outright replacing them; Humane’s proposal of a wearable device that was completely independent of the phone in your pocket seemed unlikely to succeed. Still, I was cheering for them: I applaud any company trying to build something new, and AI provides the opening for new experimentation.

Unfortunately, last week the reviews of the AI Pin came out and they were universally terrible; The Verge’s headline summed up the sentiment: Humane AI Pin review: not even close. David Pierce wrote:

I came into this review with two big questions about the AI Pin. The first is the big-picture one: is this thing… anything? In just shy of two weeks of testing, I’ve come to realize that there are, in fact, a lot of things for which my phone actually sucks. Often, all I want to do is check the time or write something down or text my wife, and I end up sucked in by TikTok or my email or whatever unwanted notification is sitting there on my screen. Plus, have you ever thought about how often your hands are occupied with groceries / clothes / leashes / children / steering wheels, and how annoying / unsafe it is to try to balance your phone at the same time? I’ve learned I do lots of things on my phone that I might like to do somewhere else. So, yeah, this is something. Maybe something big. AI models aren’t good enough to handle everything yet, but I’ve seen enough glimmers of what’s coming that I’m optimistic about the future.

That raises the second question: should you buy this thing? That one’s easy. Nope. Nuh-uh. No way. The AI Pin is an interesting idea that is so thoroughly unfinished and so totally broken in so many unacceptable ways that I can’t think of anyone to whom I’d recommend spending the $699 for the device and the $24 monthly subscription.

Brownlee agreed; he said at the start of his YouTube review:

So this is the Humane AI Pin. It is a brand new product in a really interesting new form factor of an ultra-futuristic wearable computer. In a time of all these crazy gadgets and Vision Pro and wearable glasses, it’s so sick that we get so many genuinely new first generation products like this to give a shot.

Unfortunately it’s also the new worst product I think I’ve ever reviewed in its current state. There’s just so many things bad about it. It’s so bad, in fact, that I actually think it’s kind of distracting to understand what the point of the device is as we go through it, so I’m going to have to separate it out for this video. First, I’m going to tell you what it is and what it’s supposed to do, and then I’ll tell you what my experience has actually been using it.

Brownlee proceeded to do just that: he presented Humane’s vision, detailed how poorly it lived up to it (including the probably fatal flaw inherent in its decision to pretend smartphones don’t exist), and granted that future software updates might improve the experience. I thought it was a fair,1 if brutal, review, but what had some people up in arms was the title and thumbnail:

I don’t tweet much these days — if you want one of my tips for lasting over a decade, that is one of them, for my own peace of mind more than anything — but Vassallo’s sentiment bothered me enough to fire off a response:

I honestly think I had this anniversary on my mind: I remember when I first started, with only a few hundred followers on Twitter, armed with nothing but my honest takes about tech; my only hope was that those takes would be original enough and valuable enough to build an audience. That they were didn’t diminish the desire to be honest; indeed, one of the biggest challenges creators face as they grow is retaining their edge even when they know people are actually paying attention to what they have to say.

MKBHD’s Responsibility

The more interesting Vassallo tweet, though, and the one that inspired this Article, was this exchange in response to his original tweet:

Ignore the vaguely threatening “for now” ; Vassallo is touching on something profound about Brownlee, that I myself understand intimately: what the Internet has made uniquely possible is total loyalty to your customers, and that is threatening.

Let me start with myself: every dollar of income I have comes from my subscribers,2 all of whom pay the same price.3 If someone doesn’t like what I write, I make it easy to unsubscribe; if they accuse me of writing for favor or illegitimate profit I am happy to unsubscribe them myself, and refund their money. After all, to use Brownlee’s reponse to Vassallo:

My job is to make my subscribers happy, and the best way I’ve found to do that is to stay as true as I can to what I set out to do in the beginning: write things that are original and valuable. When I succeed I’m happy, and the numbers take care of themselves; when I publish something I’m not happy with, I have trouble sleeping. When tech companies or investors or anyone else is mad, I am free to not pay them any attention.

Brownlee, though, is, to Vassallo’s point, something else entirely: 18 million subscribers is an incredible number, even if only — “only” — 3.5 million people have viewed his Humane video. If Humane’s AI Pin wasn’t already dead in the water, it’s fair to say that @levelsio is right:

Who, though, is to blame, and who benefited? Surely the responsibility for the Humane AI Pin lies with Humane; the people who benefited from Brownlee’s honesty were his viewers, the only people to whom Brownlee owes anything. To think of this review — or even just the title — as “distasteful” or “unethical” is to view Humane — a recognizable entity, to be sure — as of more worth than the 3.5 million individuals who watched Brownlee’s review.

This is one of the challenges of scale: Brownlee has so many viewers that it is almost easier to pretend like they are some unimportant blob. Brownlee, though, is successful because he remembers his job is not to go easy on individual companies, but inform individual viewers who will make individual decisions about spending $700 on a product that doesn’t work. Thanks to the Internet he has absolutely no responsibility or incentive to do anything but.

Media and the Internet

While Stratechery mostly covers tech, I also spend a lot of time on media; in the early days of Stratechery, this mostly meant newspapers, and my argument was that was in fact a particularly useful angle on the impact of technology broadly: the fact that newspapers were text meant that they felt the impact of the Internet almost immediately, effects that would, as devices became more powerful and ubiquitous, and broadband faster and more reliable, impact everything. And, of course, Stratechery was a written product, so I had a stake in the game.

To that end, I wrote a series of Articles in the run-up to April 16, 2014, laying the groundwork for why I thought my subscription offering would work. The last of these was entitled Newspapers Are Dead; Long Live Journalism, where I wrote:

Remember how the New York Times was started as a means of making money? After the first year the paper had a circulation of 26,000 in a city of over half a million, but had incurred up-front capital costs of $50,000, and first-year expenses of $78,000 (In 1851 dollars; the modern equivalent would be $1.4 million and $2.2 million respectively). Compare that to this blog (which, by happy coincidence, turns one year old tomorrow, making this comparison timely): FiveThirtyEight and the End of Average, the first article in this series, has been read by over 30,000 people; meanwhile, I’ve spent less than $2,000. More readers, way less money.

You may consider the comparison unfair — an entire newsroom putting out a daily edition as compared to a solo blogger posting one Article — but the unfairness is the point. No one shared my article because it was from Stratechery, but then again, no one shares an article today just because it’s from the New York Times; all that matters is the individual article and its worth to the reader and potential sharer. As a writer, this is amazing. When it comes to reader attention, I am competing on an equal footing with The New York Freaking Times! Unfortunately for The New York Times, when it comes to making money they’re competing with Google and Facebook. Most distressingly, though, when it comes to costs, they’re competing with the last 150 years. Everything from printing presses to sales and marketing is deadweight if advertising is not a sustainable model.

The New York Times has, to its immense credit, figured out how to go direct to readers; most other newspapers have struggled: cost structures that were predicated on owning printing presses and delivery trucks simply don’t make sense in a world where distribution is free.

Free distribution, meanwhile, made Stratechery possible: I just needed to write compelling content on my own website. Brownlee, meanwhile, got his free distribution from YouTube (and, to the extent you want to complain about his accurate but attention-grabbing headline, the realities of the YouTube algorithm, the one non-viewer incentive Brownlee needs to respond to, looms large). The key point is that he could do everything else on his own, starting with the webcam on his HP Pavilion laptop that he used to record his first video:

Today Brownlee has a top-of-the-line studio and a staff, but the fundamental principle remains: in media one person can do it all, and build a thriving business that makes everyone from Humane to Apple care deeply about what he or she has to say — far more than they care about any traditional publication.

AI and Intelligence

I mentioned at the beginning that last year I wrote about ChatGPT Gets a Computer in lieu of an anniversary post; that Article, which was built around OpenAI’s announcement of (now-discontinued) ChatGPT plugins, was about the nature of the brain and how large learning models did or did not compare.

Computers are deterministic: if circuit X is open, then the proposition represented by X is true; 1 plus 1 is always 2; clicking “back” on your browser will exit this page. There are, of course, a huge number of abstractions and massive amounts of logic between an individual transistor and any action we might take with a computer — and an effectively infinite number of places for bugs — but the appropriate mental model for a computer is that they do exactly what they are told (indeed, a bug is not the computer making a mistake, but rather a manifestation of the programmer telling the computer to do the wrong thing).

I’ve already mentioned Bing Chat and ChatGPT; on March 14 Anthropic released another AI assistant named Claude: while the announcement doesn’t say so explicitly, I assume the name is in honor of the aforementioned Claude Shannon. This is certainly a noble sentiment — Shannon’s contributions to information theory broadly extend far beyond what Dixon laid out above — but it also feels misplaced: while technically speaking everything an AI assistant is doing is ultimately composed of 1s and 0s, the manner in which they operate is emergent from their training, not proscribed, which leads to the experience feeling fundamentally different from logical computers — something nearly human — which takes us back to hallucinations; Sydney was interesting, but what about homework?

I added in an August Update:

The point of that Article was that ChatGPT’s plugin architecture gave hallucinating creative LLMs access to determinative computers to ascertain truth, not dissimilar to the way a creative being like you or I might use a calculator to solve a math problem. In other words, the LLM is the interface to the source of truth, not the source of truth itself.

That is exactly what this “coming soon” feature is all about: you don’t make an LLM useful for your business by adding your business’s data to the LLM; that is simply a bit more text in a sea of it. Rather, you leverage the LLM as an interface to “computers” that deterministically give you the right answer. In this case, those computers will be “connecting the applications you already have”, which sounds to me an awful lot like enterprise-specific plug-ins.

The specifics of product implementations are not the point, but rather the distinction between a natural language interface based on probabilities and computers based on deterministic calculations; I think using the former to access the latter will remain one of the most important applications of large language models.

There is, though, one more piece: who actually tells the AI what to do, such that it needs access to a computer of its own? In ChatGPT Gets a Computer, which analogized large language models to Jeff Hawkin’s theory of the brain, I expressed hope that the beliefs he expressed about the nature of intelligence in A Thousand Brains: A New Theory of Intelligence extended to large language models. Hawkins writes:

Intelligence is the ability of a system to learn a model of the world. However, the resulting model by itself is valueless, emotionless, and has no goals. Goals and values are provided by whatever system is using the model. It’s similar to how the explorers of the sixteenth through the twentieth centuries worked to create an accurate map of Earth. A ruthless military general might use the map to plan the best way to surround and murder an opposing army. A trader could use the exact same map to peacefully exchange goods. The map itself does not dictate these uses, nor does it impart any value to how it is used. It is just a map, neither murderous nor peaceful. Of course, maps vary in detail and in what they cover. Therefore, some maps might be better for war and others better for trade. But the desire to wage war or trade comes from the person using the map.

Similarly, the neocortex learns a model of the world, which by itself has no goals or values. The emotions that direct our behaviors are determined by the old brain. If one human’s old brain is aggressive, then it will use the model in the neocortex to better execute aggressive behavior. If another person’s old brain is benevolent, then it will use the model in the neocortex to better achieve its benevolent goals. As with maps, one person’s model of the world might be better suited for a particular set of aims, but the neocortex does not create the goals.

To the extent this is an analogy to AI, large language models are intelligent, but they do not have goals or values or drive. They are tools to be used by, well, anyone who is willing and able to take the initiative to use them.

AI and the Sovereign Individual

I don’t think either Brownlee or I particularly need AI, or, to put it another way, are overly threatened by it. Yes, ChatGPT would have written several thousands words far more quickly than the hours it took me to write this Article, but I am (perhaps foolishly) confident that they would not be original and valuable enough to take away my audience; I think it’s the same for Brownlee.

The connection between us and AI, though, is precisely the fact that we haven’t needed it: the nature of media is such that we could already create text and video on our own, and take advantage of the Internet to — at least in the case of Brownlee — deliver finishing blows to $230 million startups.

How many industries, though, are not media, in that they still need a team to implement the vision of one person? How many apps or services are there that haven’t been built, not because one person can’t imagine them or create them in their mind, but because they haven’t had the resources or team or coordination capabilities to actually ship them?

This gets at the vector through which AI impacts the world above and beyond cost savings in customer support, or whatever other obvious low-hanging fruit there may be: as the ability of large language models to understand and execute complex commands — with deterministic computing as needed — increases, so too does the potential power of the sovereign individual telling AI what to do. The Internet removed the necessity — and inherent defensibility — of complex cost structures for media; AI has the potential to do the same for a far greater host of industries.


Brownlee’s 2nd most popular video is only two months old: it’s his overview of the Apple Vision Pro; it’s not, if fiction is our guide, an accident that Meta is developing the Quest and Apple has released the Vision Pro just as AI seems poised to threaten an ever-increasing number of jobs. This was the world painted by Pixar’s Wall-E:

Humans in Pixar's Wall-E

It’s funny, because before I looked up this photo, I could have sworn the humans in the film, lulled to insouciance by an AI that addressed all of their physical needs, were wearing headsets; I guess Pixar overestimated our ability to build space-faring cruise ships and underestimated the relentless progression of consumer electronics.

My suspicion, however, is that Pixar got it wrong, and this famous photo of Mark Zuckerberg at Mobile World Congress in 2016 is closer to the truth of the matter.

Mark Zuckerberg at Mobile World Congress

Wall-E erred by assuming that every human was the same, all gleefully enslaved by AUTO, the ships AI. In fact, though, I suspect humanity will be distributed bi-modally, with the vast majority of people happily wearing their Vision Pros or watching their streaming service or viewing their TikTok videos, while increasingly sovereign individuals, aided by AI, pilot the ship.

That may sound extreme, but again, this is why it is useful to look to media first: Brownlee — one man, who started out by reviewing a media center remote on his HP laptop — does in fact hold the fate of entire companies and products in his hand. My bet is not that AI replaces Brownlee, but that AI means Brownlee’s for everything, not just media.

As for me, I plan on documenting it every step of the way, thanks to every person who has supported Stratechery over the last 10 years, and to whom I alone owe accountability, and gratitude.


  1. Actually, the deference Brownlee gave the vision and the potential of future upgrades was arguably generous 

  2. I previously spoke for pay, and even more previously offered consulting, and even more previously than that had ads; however, I ended ads in 2014, consulting in 2015, and haven’t accepted any paid speaking opportunities since 2017. This is all covered on my about page

  3. $12/month or $120/year for the last five years, increasing to $15/month and $150/year on April 22 

Gemini 1.5 and Google’s Nature

This Article is available as a video essay on YouTube


It was impossible to miss the leading message at yesterday’s Google Cloud Next keynote: Google has the best infrastructure for AI. This was CEO Sundar Pichai in his video greeting:

I want to highlight just a few reasons Google Cloud is showing so much progress. One is our deep investments in AI. We’ve known for a while that AI would transform every industry and company, including our own. That’s why we’ve been building AI infrastructure for over a decade, including TPUs, now in their 5th generation. These advancements have helped customer train and serve cutting-edge language models. These investments put us in the forefront of the AI platform shift.

Google Cloud CEO Thomas Kurian made the priority clear as well:

Today we’re going to focus on how Google is helping leading companies transform their operations and become digital and AI leaders, which is the new way to cloud. We have many important advances, starting with our infrastructure.

What was most interesting about the keynote, though, is what that infrastructure makes possible, and, by extension, what that says about Google’s ability to compete.

Grounding

One of the most surprising things about large language models (LLMs) is how much they know; from the very beginning, though, hallucinations have been a concern. Hallucinations are, of course, part of what makes LLMs so impressive: a computer is actually being creative! It’s also a feature that isn’t particularly impressive to the enterprise customers that this keynote was directed at.

To that end, Kurian, shortly after going over Google’s infrastructure advantages, talked about “grounding”, both in terms of the company’s Gemini model broadly, and for enterprise use cases specifically in the context of Google’s Vertex AI model management service:

To augment models, Vertex AI provides managed tooling to connect your model to enterprise applications and databases, using extensions and function-calling. Vertex also provides retrieval augmented generation (RAG) combining the strengths of retrieval and generative models to provide high quality personalized answers and recommendations. Vertex can augment models with up-to-date knowledge from the web and from your organization, combining generative AI with your enterprise truth.

Today we have a really important announcement: you can now ground with Google Search, perhaps the world’s most trusted source of factual information, with a deep understanding of the world’s knowledge. Grounding Gemini’s responses with Google Search improves response quality and significantly reduces hallucinations.

Second, we’re also making it easy to ground your models with data from your enterprise databases and applications, and any database anywhere. Once you’ve chosen the right model, tuned it, and connected it with your enterprise truth, Vertex’s MLOps can help you manage and monitor models.

A RAG implementation using Google Search is an obvious win, and mirrors ChatGPT’s integration with Bing (or Microsoft Copilot in Bing): the LLM provides answers when it can, and searches the web for things it doesn’t know, a particularly useful feature if you are looking for more recent information.

A more impressive demonstration of grounding, though, was in the context of integrating Gemini with Google’s BigQuery data warehouse and Looker business intelligence platform:

In this demo, the worker gets an alert that a particular product is selling out; using generative AI the worker can see sales trends, find similar models, and create a plan of action for dealing with declining inventory for delivery to her team.

What is notable is not the demo specifics (which is unapologetically made-up for Cymbal, Google’s demo brand); rather, note the role of the LLM: it is not providing information or taking specific actions, but rather serving as a much more accessible natural language interface to surface and collect data that would otherwise take considerably more expertise and time. In other words, it is trustworthy because it is grounded through integration Google is promising with its other enterprise data services.

Gemini 1.5

At the same time, that last section didn’t really follow on from the introduction: yes, those LLMs leveraging Google or BigQuery are running on Google’s infrastructure, but other companies or startups can build something similar. This is where the rest of Pichai’s introduction comes in:

We also continue to build capable AI models to make products like search, Maps, and Android radically more helpful. In December, we took our next big step with Gemini, our largest and most capable model yet. We’ve been bringing it to our products and to enterprises and developers through our APIs. We’ve already introduced our next generation Gemini 1.5 Pro. It’s been in private preview in Vertex AI. 1.5 Pro shows dramatically enhanced performance and includes a breakthrough in long context understanding. That means it can run 1 million tokens of information consistently, opening up new possibilities for enterprises to create, discover, and build using AI. There’s also Gemini’s multi-modal capabilities, which can process audio, video, text, code and more. With these two advances, enterprises can do things today that just weren’t possible with AI before.

Google hasn’t said how Gemini 1.5 was made, but clearly the company has overcome the key limitation of traditional transformers: memory requirements increase quadratically with context length. One promising approach is Ring Attention with Blockwise Transformers, which breaks long contexts into pieces to be computed individually even as the various devices computing those pieces simultaneously communicate to make sense of the context as a whole; in this case memory requirements scale linearly with context length, and can be extended by simply adding more devices to the ring topology.

This is where Google’s infrastructure comes in: the company not only has a massive fleet of TPUs, but has also been developing those TPUs to run in parallel at every level of the stack, from chip to cluster to even data centers (this latter requirement is more pertinent for training than inference); if there is a solution that calls for scale, Google is the best placed to provide it, and it seems the company has done just that with Gemini 1.5.

Demos

To that end, and per Pichai’s closing line, almost all of the other demos in the keynote were implicitly leveraging Gemini 1.5’s context window.


In a Gemini for Workspaces demo, the worker evaluated two statements of work against each other, and against the company’s compliance document:

Here are the key quotes:

Google Drive is ready without any additional AI pre-work…

Each of these documents is over 70 pages. It would have taken me hours to review these docs, but instead Gemini is going to help me find a clean answer to save me a ton of time…

Before I proceed with this vendor, I need to ensure that no compliance issues exist, and I’m going to be honest, I have not memorized every rule in our compliance rulebook because it is over 100 pages. I would have to need to scour the 80 pages of this proposal and compare it manually with the 100 pages of the rulebook. So instead, in the side panel I ask, “Does this offer comply with the following” and I’m going to just @-mention our compliance rulebook, hit Enter, and see what Gemini has to say. So interesting: Gemini has found an issue, because the supplier has not listed their security certifications.

Because Gemini is grounded in my company’s data, with source citations to specific files, I can trust this response and start to troubleshoot before selecting a vendor.

The key distinction between this demo and the last one is that quote at the beginning: a large context window just works in a far greater number of use cases, without any fiddly RAG implementations or special connections to external data stores; just upload the files you need to analyze, and you’re off.


In a Creative Agent with Imagen demo, the worker was seeking to create marketing images and storyboards for an outdoor product:

Here is the key quote:

The creative agent can analyze our previous campaigns to understand our unique brand style and apply it to new ideas. In this case, the creative agent has analyzed over 3,000 brand images, descriptions, videos, and documents of other products that we have in our catalog, contained within Google Drive, to create this summary…The creative agent was able to use Gemini Pro’s 1 million token context window and it’s ability to reason across text, images, and video to generate this summary.

This was, to be fair, one of the weaker demos: the brand summary and marketing campaign weren’t that impressive, and the idea of creating a podcast with synthetic voices is technically impressive and also something that will never be listened to. That, though, is impressive in its own right: as I noted in an Update when Gemini 1.5 was first announced, “a massively larger context window makes it possible to do silly stuff”, and silly stuff often turns into serious capabilities.


In a Gemini Code Assistant Demo (formerly Duet AI for Developers), a developer new to a job (and the codebase) was tasked with making a change to a site’s homepage:

Here is the key quote:

For the developers out there, you know that this means we’re going to need to add padding in the homepage, modify some views, make sure the configs are changed for our microservices, and typically, it would take me a week or two to even just get familiarized with our company’s code base which has over 100,000 lines of code over 11 services. But now, with Gemini Code Assist, as a new engineer on the team, I can be more productive than ever and can accomplish all of this work in just a matter of minutes. This is because Gemini’s code transformations with full codebase awareness allows us to easily reason through our entire codebase, and in comparison, other models out there can’t handle anything beyond 12,000 to 15,000 lines of code. Gemini with Code Assist is so intelligent that we can just give it our business requirements, including the visual design…Gemini Code Assist doesn’t just suggest code edits; it provides clear recommendations, and makes sure that all of these recommendations align with [the company’s] security and compliance requirements…

And the conclusion:

Let’s recap: behind the scenes Gemini has analyzed my entire codebase in GitLab; it has implemented a new feature; and has ensured that all of the code generated is compatible with my company’s standards and requirements.

Again, leave aside the implausibility of this demo: the key takeaway is the capabilities unlocked when the model is able to have all of the context around a problem while working; this is only possible with — and here the name is appropriate — a long context window, and that is ultimately enabled by Google’s infrastructure.

Google’s Nature

In case it isn’t clear, I think that this keynote was by far the most impressive presentation Google has made in the AI era, not least because the company knows exactly what its advantages are. Several years ago I wrote an Article called Microsoft’s Monopoly Hangover that discussed the company’s then-ongoing transition away from Windows as the center of its strategy; the central conceit was a comparison to Lou Gerstner’s 1990’s transformation of IBM.

The great thing about a monopoly is that a company can do anything, because there is no competition; the bad thing is that when the monopoly is finished the company is still capable of doing anything at a mediocre level, but nothing at a high one because it has become fat and lazy. To put it another way, for a former monopoly “big” is the only truly differentiated asset.

My argument was that business models could be changed: IBM did it, and Microsoft was in the process of doing so when I wrote that. Moreover, Gerstner had shown that culture could be changed as well, and Nadella did just that at Microsoft. What couldn’t be changed was nature: IBM was a company predicated on breadth, not specialization; that’s why Gerstner was right to not break apart the company but to instead deliver Internet solutions to enterprises. Similarly, Microsoft was a company predicated on integration around Windows; the company’s shift to services centered on Teams as Microsoft’s operating system in the cloud was also true to the company’s nature.

Google is facing many of the same challenges after its decades long dominance of the open web: all of the products shown yesterday rely on a different business model than advertising, and to properly execute and deliver on them will require a cultural shift to supporting customers instead of tolerating them. What hasn’t changed — because it is the company’s nature, and thus cannot — is the reliance on scale and an overwhelming infrastructure advantage. That, more than anything, is what defines Google, and it was encouraging to see that so explicitly put forward as an advantage.


United States v. Apple

First, there is the moment when the news hits the wires: U.S. Sues Apple, Alleges Tech Giant Exploits Illegal Monopoly.

This, we know, is the beginning of a long journey: there will be months or years of filings and discovery and preliminary rulings; eventually we will have a trial, and months after that a decision. Then there are the appeals, first with three judges, then perhaps en banc, and maybe even the Supreme Court. At some point, should the government win, there will be a hearing about remedies, themselves subject to the same grinding schedule. Only then can a proper determination be made about the validity of the legal questions in this case.

Still, I think the initial moment matters: antitrust is inherently political, and tech companies are generally popular; this makes it hard to build and maintain the momentum necessary to endure the grind. One would certainly have expected that to be an advantage for Apple: the company gains power in market after market precisely by making consumers happy.

Apple’s Integration

I explained in 2014 How Apple Creates Leverage by marshaling loyal customers:

Apple is certainly not shy about proclaiming their fealty towards building great products. And I believe Tim Cook, Jony Ive, and the rest of Apple’s leadership when they say their focus on the experience of using an Apple device comes from their desire to build something they themselves would want to use. But I also believe the strategic implications of this focus are serially undervalued.

Last year I wrote a piece called What Clayton Christensen Got Wrong that explored the idea that the user experience was the sort of attribute that could never be overshot; as long as Apple provided a superior experience, they would always win the high-end subset of the consumer market that is willing to pay for nice things.

However, this telling of the story of iTunes and the iPhone suggests that this focus on the user experience not only defends against disruption, but it also provides an offensive advantage as well: namely, Apple increases its user experience advantage through the leverage it gains from consumers loyal to the company. In the case of iTunes, Apple was able to create the most seamless music acquisition process possible: the labels had no choice but to go along. Similarly, when it comes to smartphones, Apple devices from day one have not been cluttered with carrier branding or apps or control over updates. If carriers didn’t like Apple’s insistence on creating the best possible user experience, well, consumers who valued said experience were more than happy to take their business elsewhere. In effect, Apple builds incredible user experiences, which gains them loyal customers who collectively have massive market power, which Apple can then effectively wield to get its way — a way that involves maximizing the user experience. It’s a virtuous circle.

That Clayton Christensen reference was about his theory of low-end disruption, which I summarized in the aforementioned What Clayton Christensen Got Wrong:

Briefly, an integrated approach wins at the beginning of a new market, because it produces a superior product that customers are willing to pay for. However, as a product category matures, even modular products become “good enough” — customers may know that the integrated product has superior features or specs, but they aren’t willing to pay more, and thus the low-priced providers, who build a product from parts with prices ground down by competition, come to own the market. Christensen was sure this would happen with the iPod, and he — and his many adherents — are sure it will happen to the iPhone.

That Article — where I argued that low end disruption would not happen to Apple — built on a paper I had written in business school in 2010 entitled Apple and the Innovator’s Dilemma:

Steve Jobs, when asked why people want to work for Apple, said:

Our DNA is as a consumer company — for that individual customer who’s voting thumbs up or thumbs down. That’s who we think about. And we think that our job is to take responsibility for the complete user experience. And if it’s not up to par, it’s our fault, plain and simply.

This article has already laid out many of the strategic benefits of this intense focus on the user experience: it creates significant differentiation, sets customer expectations, and makes it impossible to overshoot customer needs — products are never “good enough” with regard to the user experience. From an organizational standpoint, if products are never “good enough”, then a highly integrated company is appropriate. Chrstensen and Raynor note in the Innovatorʼs Solution that the “not-good-enough circumstance mandate[s] interdependent product or value chain architectures and vertical integration.”

Another way to look at Appleʼs decisions regarding its organizational structure is to think of transaction costs: normally, in well-functioning markets, vertical integration is suboptimal. However, if transaction costs in the vertical chain outweigh the losses due to the inefficiencies of being vertically integrated, then vertical integration could be the correct course of action. Apple thinks the exact same way, but not about monetary cost; instead, the transaction costs they consider are the tax that modularization places on the user experience, and it is a cost they are not willing to bear. A central tenet is that Apple “need[s] to own and control the primary technologies behind the products [it] make[s].”

This is all a fancy way of repeating Apple’s constant refrain that their success is built upon integration — first hardware and software, then services, and now an entire ecosystem of products extending from the computer to the phone to smart watches to headphones and, most recently, a headset. This integration makes customers happy, and one would expect members of the Apple ecosystem to rally to their cause.

In fact, though, that wasn’t the initial reaction, at least in the tech world which I inhabit: the response of many people was finally.

App Store Anger

The root of tech’s frustration with Apple — including long-time Apple developers, traditionally the company’s most ardent supporters — is the App Store. The App Store is the ultimate example of Apple leveraging consumers’ desire for their products to gain power over an ecosystem; in this case, though, the ecosystem isn’t music labels or cellular providers, but rather app developers, which is to say everyone else in tech. Smartphones are essential devices, and Apple makes the best smartphone, which means their power over developers is absolute.

Apple is not afraid to exercise this power, and it’s important to note that in the early days of the iPhone the company’s control worked in developers’ favor. Apple decreed that 3rd-party apps could only be installed via the App Store, which would review every app; free apps wouldn’t have to pay anything, while Apple would take 30% of paid apps. This led to an absolute explosion in the market: consumers, who had been scarred by the 2000’s era of malware and viruses, shook off their reticence to install software and embraced the App Store, leading to an explosion of app-based businesses. An important development was the 2009 addition of in-app purchase, which unlocked entirely new business models for games in particular. Subscriptions for publications came in 2011, although it took another five years for the business model to be available to all applications.

There was, throughout this period, a continuous burbling of discontent about some of Apple’s App Store policies: one of my early Articles on Stratechery in 2013 asked Why Doesn’t Apple Enable Sustainable Businesses on the App Store?, games developers groused about the 30% fee on in-app purchases, while 3rd-party content providers suffered under an anti-steering provision that didn’t allow them to link to their website — or even tell their users — that they could start a subscription or buy content outside of the app. Everyone, meanwhile, hated the seeming arbitrariness of App Store Review, which only weighed in on an app when it was finished; good luck building something ambitious with the ever-present risk of running afoul of App Store rules.

This discontent came to a head in 2020, when a number of developers had public confrontations with Apple. Some of these were COVID-related, as Apple wanted a piece of for-pay virtual meetings; others were due to an unannounced step-up in focus on subscription-based web apps. A very public fight with Basecamp brought to light an organized campaign to force developers of all sizes to integrate in-app purchase even if they made all of their sales on their website:

That same month Apple announced App Tracking Transparency, a thinly veiled attempt to displace Facebook’s role in customer acquisition for apps; some of the App Tracking Transparency changes had defensible privacy justifications (albeit overstated), but it was hard to not notice that Apple wasn’t holding itself to the same rules, very much to its own benefit.

One year later and many developers cheered when Epic sued Apple for antitrust violations related to the App Store; I gave a list of relevant Articles a couple of months ago in The Apple Vision Pros Missing Apps:

  • Apple, Epic, and the App Store, which provided a history of the App Store and Epic’s lawsuit at the time it was filed.
  • App Store Arguments, which I wrote at the conclusion of the trial, explained why I expected Epic to lose, even as I hoped that Apple would voluntarily make pro-developer changes in the App Store.
  • The Apple v. Epic Decision, which reviewed the judge’s decision that favored Apple in 10 of the 11 counts.

The 11th count that Epic prevailed on required Apple to allow developers to steer users to a website to make a purchase; while its implementation was delayed while both parties filed appeals, the lawsuit reached the end of the road last week when the Supreme Court denied certiorari. That meant that Apple had to allow steering, and the company did so in the most restrictive way possible: developers had to use an Apple-granted entitlement to put a link on one screen of their app, and pay Apple 27% of any conversions that happened on the developer’s website within 7 days of clicking said link.

Many developers were outraged, but the company’s tactics were exactly what I expected…Apple has shown, again and again and again, that it is only going to give up App Store revenue kicking-and-screaming; indeed, the company has actually gone the other way, particularly with its crackdown over the last few years on apps that only sold subscriptions on the web (and didn’t include an in-app purchase as well). This is who Apple is, at least when it comes to the App Store.

Here is the most important thing to understand about this entire App Store discussion — the topic that has many people who are normally skeptical of government involvement in tech cheering on the Department of Justice: basically none of it is pertinent to this case. Oh sure, the initial case filing aired all of these grievances — along with a bizarre and factually wrong assertion that iTunes was only ever on Windows because of a DOJ Consent Decree — but the actual assertions of wrongdoing barely mention the App Store at all.

The DOJ’s Case

Steven Sinofsky has a (critical) overview of the case at Hardcore Software; here is his (correct) summary of the DOJ’s actual complaints:

There are two sets of claims in the filing. The first set are written as a narrative for laypeople. The claims are broad and kind of all over the map. Before listing the specifics there is a lot of language and froth about how Apple uses its place in the market. This is the “story.” It is very exciting to read but you have to get to the actual behavior, which the DOJ claims is that “Apple has used one or both mechanisms (control of app distribution or control of APIs) to suppress the following technologies, among others” which includes:

“Supressing Super apps”

“Supressing cloud streaming game apps”

“[P]rohibiting third-party apps from sending or receiving carrier-based messages”

“[S]uppressing key functions of third-party smartwatches”

“[D]enied users access to digital wallets”

The first two are the closest the case comes to App Store complaints, and the key thing to note is that they are not relevant to the vast majority of developers. There is nothing about Apple’s 30% fee, nothing about App Tracking Transparency, and nothing about the steering provision that is, in my estimation, the most noxious of all Apple’s policies. Here is an overview of the five objections:

Super Apps

For years, Apple denied its users access to super apps because it viewed them as “fundamentally disruptive” to “existing app distribution and development paradigms” and ultimately Apple’s monopoly power. Apple feared super apps because it recognized that as they become popular, “demand for iPhone is reduced.” So, Apple used its control over app distribution and app creation to effectively prohibit developers from offering super apps instead of competing on the merits.

A super app is an app that can serve as a platform for smaller “mini” programs developed using programming languages such as HTML5 and JavaScript. By using programming languages standard in most web pages, mini programs are cross platform, meaning they work the same on any web browser and on any device. Developers can therefore write a single mini program that works whether users have an iPhone or another smartphone.

Super apps can provide significant benefits to users. For example, a super app that incorporates a multitude of mini programs might allow users to easily discover and access a wide variety of content and services without setting up and logging into multiple apps, not unlike how Netflix and Hulu allow users to find and watch thousands of movies and television shows in a single app. As one Apple executive put it, “who doesn’t want faster, easier to discover apps that do everything a full app does?” Restricting super apps makes users worse off and sacrifices the short-term profitability of iPhones for Apple.

As the DOJ complaint correctly notes, Apple has faced challenges in China because of the WeChat “super app”; it is easier to switch if all of your essential services are in one place. At the same time, I am skeptical that “super apps” — even without Apple’s restrictions — would ever become a meaningful UI paradigm in western markets long since defined by distinct apps.

Moreover, that installable apps are only available from Apple has always been a part of the iPhone’s brand promise: that is why it is important to remember the early days of the App Store and how it created consumer demand in the first place. This brand promise around security and safety will surely be a core part of Apple’s defense.

Cloud Streaming Game Apps

For years, Apple blocked cloud gaming apps that would have given users access to desirable apps and content without needing to pay for expensive Apple hardware because this would threaten its monopoly power. In Apple’s own words, it feared a world where “all that matters is who has the cheapest hardware” and consumers could “buy[] a [expletive] Android for 25 bux at a garage sale and . . . have a solid cloud computing device” that “works fine.” Apple’s conduct made its own product worse because consumers missed out on apps and content. This conduct also cost Apple substantial revenues from third-party developers. At the same time, Apple also made other smartphones worse by stifling the growth of these cross-platform apps on other smartphones. Importantly, Apple prevented the emergence of technologies that could lower the price that consumers pay for iPhones.

Cloud streaming apps let users run a computationally intensive program without having to process or store the program on the smartphone itself. Instead, a user’s smartphone leverages the computing power of a remote server, which runs the program and streams the result back to the phone. Cloud streaming allows developers to bring cutting-edge technologies and services to smartphone consumers—including gaming and interactive artificial intelligence services—even if their smartphone includes hardware that is less powerful than an iPhone.

Apple long required that every individual cloud streaming game be an individual standalone app, and frankly, I think this is the DOJ’s strongest argument: cloud streaming apps don’t pose any of the (theoretical or not) security and safety concerns of apps installed outside of the App Store; the entire point is that you don’t need to install any apps at all, so to have such a requirement seems like a clear attempt to kill competition.

To that end, one gets the impression that Apple’s January announcement that it would now allow cloud streaming services to be self-contained in a single app was a preemptive response to this filing; however, those single apps still have to include an in-app purchase option — you can’t simply offer an app that lets you sign in to the subscription you made elsewhere.

SMS and Private APIs

Apple undermines cross-platform messaging to reinforce “obstacle[s] to iPhone families giving their kids Android phones.” Apple could have made a better cross-platform messaging experience itself by creating iMessage for Android but concluded that doing so “will hurt us more than help us.” Apple therefore continues to impede innovation in smartphone messaging, even though doing so sacrifices the profits Apple would earn from increasing the value of the iPhone to users, because it helps build and maintain its monopoly power.

This entire section is pretty nuts. The core complaint is that Apple doesn’t allow 3rd-party messaging apps like WhatsApp to access private API’s for SMS; the vast majority of the complaint, though, is effectively accusing Apple of acting anti-competitive by not building iMessage for Android, and making teenagers feel bad about green bubbles. Leaving aside the fact that green bubbles actually serve a product function — they are not encrypted, while blue iMessage bubbles are — the entire idea that Apple needs to proactively build features to facilitate communication with Android flies in the face of Supreme Court precedent in Verizon v. Trinko:

Firms may acquire monopoly power by establishing an infrastructure that renders them uniquely suited to serve their customers. Compelling such firms to share the source of their advantage is in some tension with the underlying purpose of antitrust law, since it may lessen the incentive for the monopolist, the rival, or both to invest in those economically beneficial facilities. Enforced sharing also requires antitrust courts to act as central planners, identifying the proper price, quantity, and other terms of dealing — a role for which they are ill-suited. Moreover, compelling negotiation between competitors may facilitate the supreme evil of antitrust: collusion. Thus, as a general matter, the Sherman Act “does not restrict the long recognized right of [a] trader or manufacturer engaged in an entirely private business, freely to exercise his own independent discretion as to parties with whom he will deal.”

I expect Trinko to be a substantial obstacle for the DOJ in this case. It is one thing to change the rules or withdraw capabilities to maintain a monopoly; there was a Supreme Case called Aspen Skiing where a company was found guilty of doing just that, but the Court drew a clear distinction in Trinko between changes in policy and never actually providing pro-competition capability in the first place:

Aspen Skiing is at or near the outer boundary of §2 liability. The Court there found significance in the defendant’s decision to cease participation in a cooperative venture. The unilateral termination of a voluntary (and thus presumably profitable) course of dealing suggested a willingness to forsake short-term profits to achieve an anticompetitive end. Similarly, the defendant’s unwillingness to renew the ticket even if compensated at retail price revealed a distinctly anticompetitive bent.

The refusal to deal alleged in the present case does not fit within the limited exception recognized in Aspen Skiing. The complaint does not allege that Verizon voluntarily engaged in a course of dealing with its rivals, or would ever have done so absent statutory compulsion. Here, therefore, the defendant’s prior conduct sheds no light upon the motivation of its refusal to deal — upon whether its regulatory lapses were prompted not by competitive zeal but by anticompetitive malice.

This also hints at how a standard like RCS could be forced upon Apple: through legislation (statutory compulsion); passing new laws remains the most appropriate way to deal with Aggregator power.

Smartwatch Integration

Apple’s smartwatch — Apple Watch — is only compatible with the iPhone. So, if Apple can steer a user towards buying an Apple Watch, it becomes more costly for that user to purchase a different kind of smartphone because doing so requires the user to abandon their costly Apple Watch and purchase a new, Android-compatible smartwatch…

Apple uses its control of the iPhone, including its technical and contractual control of critical APIs, to degrade the functionality of third-party cross-platform smartwatches in at least three significant ways: First, Apple deprives iPhone users with third-party smartwatches of the ability to respond to notifications. Second, Apple inhibits third-party smartwatches from maintaining a reliable connection with the iPhone. And third, Apple undermines the performance of third-party smartwatches that connect directly with a cellular network. In doing so, Apple constrains user choice and crushes innovation that might help fill in the moat around Apple’s smartphone monopoly.

This is the part of the case that should concern Apple the most, because it is a direct attack on Apple’s core differentiation: the deep integration of software, hardware, and services across its devices. The Apple Watch has access to private APIs and system-level integrations that Apple claims provide for better battery life, messaging management, etc.; the DOJ says that these integrations should be modularized and made available to all smart watches. There is no consideration as to whether or not Apple’s claims are true, or acknowledgment that Apple’s integration was supposedly a losing proposition a decade ago: now that the company has been proven successful, said proposition is alleged to be monopoly maintenance.

NFC Access for Digital Wallets

Apple recognizes that paying for products and services with a digital wallet will eventually become “something people do every day of their lives.” But Apple has used its control over app creation, including its technical and contractual control over API access, to effectively block third-party developers from creating digital wallets on the iPhone with tap-to-pay functionality, which is an important feature of a digital wallet for smartphones. As a result, Apple maintains complete control over how users make tap-to-pay payments with their iPhone. Apple also deprives users of the benefits and innovations third-party wallets would provide so that it can protect “Apple’s most important and successful business, iPhone.”

This is a valid complaint framed in a very odd way. I think there is a case to be made that Apple has reserved NFC tap-to-pay access for itself in order to leverage its iPhone control into control of an adjacent market (digital wallets), but I have a hard time buying the DOJ’s argument that this imposes unacceptable switching costs to another smartphone. It’s also, in my estimation, fairly weak gruel for an antitrust case of this magnitude, particularly given that an integrated wallet is, once again, very much in line with Apple’s longstanding brand promise.

Apple’s Mistake

I have, for years, been urging Apple to take a different approach to the App Store, particularly in terms of non-gaming apps and the anti-steering provision, even if the company’s approach were technically legal. The danger I foresaw was not simply the loss of developer goodwill, but something even more important to Apple: its fundamental differentiation, i.e. integration. I stated the risk explicitly in 2021’s Integrated Apple and App Store Risk:

If you were to boil Apple’s philosophy and attractiveness to customers to one word, that word would be “integration.” And guess what? First party integration is bad for third-party developers — everything is a tradeoff. This is where the nuance I discussed in App Store Arguments becomes much more black-and-white. Yes, Apple created the iPhone and the App Store and, under current U.S. antitrust doctrine, almost certainly has the right to impose whatever taxes it wishes on third parties, including 30% on purchases and the first year of subscriptions, and completely cutting off developers from their customers. Antitrust law, though, while governed by Supreme Court precedent, is not a matter of constitutionality: it stems from laws passed by Congress, and it can be changed by new laws passed by Congress.

One of the central planks of many of those pushing for new laws in this area are significant limitations on the ability of platforms to offer apps and services, or integrate them in any way that advantages their offerings. In this potential world it’s not simply problematic that Apple charges Spotify 30%, or else forces the music streaming service to hope that users figure out how to subscribe on the web, even as Apple Music has a fully integrated sign-up flow and no 30% tax; it is also illegal to incorporate Apple Music into SharePlay or Shared-with-you or Photos, or in the most extreme versions of these proposed laws, even have Apple Music at all. This limitation would apply to basically every WWDC announcement: say good-bye to Quick Note or SharePlay-as-an-exclusive-service, or any number of Apple’s integrated offerings.

I think these sorts of limitations would be disappointing as a user — integration really does often lead to better outcomes sooner — and would be a disaster for Apple. The entire company’s differentiation is predicated on integration, including its ability to abuse its App Store position, and it would be a huge misstep if the inability to resist the latter imperiled the former.

Last week this danger manifested, not as new legislation, but as this lawsuit, which attacks Apple’s integration much more than it attacks the App Store. I think, though, that it was Apple’s policies around the App Store that created the conditions for this lawsuit in the first place.

In short, I suspect the DOJ doesn’t want to follow in Epic’s footsteps, but they do want to sue Apple, so they framed Apple’s defining characteristic — integration — in the most uncharitable light possible to make their case. To put it another way, the Epic case may have shown that Apple’s policies around the App Store were (mostly) legal, but that didn’t mean they were right; now the DOJ, looking for another point of vulnerability, is trying to make the case that Apple’s right approach in delivering an integrated experience is in fact illegal.

During the Epic trial John Gruber wished that Apple would relax its approach to the App Store:

What’s weirdest about Apple’s antitrust and PR problems related to the App Store is that the App Store is a side hustle for Apple. Yes it’s earning Apple $10+ billion a year, and even for Apple that’s significant. But it’s not Apple’s main business by a long shot. To my knowledge no company in history has ever gotten into antitrust hot water over a side business so comparatively small to its overall business. Apple doesn’t need this.

I think Apple’s senior leadership — Cook in particular — truly does believe that Apple has earned every dollar it generates from third-party software in the App Store, and that their policies in place are just and fair. That righteousness came out on the stand in the Epic trial. But even if Apple’s executives are correct — if the current rules and revenue splits could somehow be proven to be dialed in to a hypothetical Platonic ideal of fairness to all parties involved — that doesn’t change the fact that so many developers see it otherwise.

I don’t think the developers are wrong, but even if they are wrong, it’s not good for Apple that they’re so unhappy, and feel so aggrieved. It’s not good for Apple that developers don’t see the App Store as a platform that works in their interests. Like the Apple logo, “developer goodwill” has no price tag.

To the extent I am right — and yes, it is impossible to prove a counterfactual — the price tag of Apple’s “side hustle” is even higher than Gruber thought: I believe that if Apple had done even the bare minimum with the App Store — i.e. removing the anti-steering provision and not going after small developers with online services — it wouldn’t be in a position of having to defend what actually makes Apple Apple. It would, at a minimum, have a lot more people on its side.

I wrote a follow-up to this Article in this Daily Update.

Nvidia Waves and Moats

This Article is available as a video essay on YouTube


From the Wall Street Journal:

The Nvidia frenzy over artificial intelligence has come to this: Chief Executive Jensen Huang unveiled his company’s latest chips on Monday in a sports arena at an event one analyst dubbed the “AI Woodstock.”

Customers, partners and fans of the chip company descended on the SAP Center, the home of the National Hockey League’s San Jose Sharks, for Huang’s keynote speech at an annual Nvidia conference that, this year, has a seating capacity of about 11,000. Professional wrestling’s WWE Monday Night RAW event took place there in February. Justin Timberlake is scheduled to play the arena in May. Even Apple’s much-watched launch events for the iPhone and iPad didn’t fill a venue this large. At the center of the tech world’s attention is Huang, who has gone from a semiconductor CEO with a devoted following among videogame enthusiasts to an AI impresario with broad-enough appeal to draw thousands to a corporate event.

Or, as Nvidia Research Manager Jim Fan put it on X:

I’m disappointed that the Wall Street Journal used this lead for their article about the event, but not because I thought they should have talked about the actual announcements: rather, they and I had the exact same idea. It was the spectacle, even more than the announcements, that was the most striking takeaway of Huang’s keynote.

I do think, contra the Wall Street Journal, that iPhone announcements are a relevant analogy; Apple could have, particularly in the early days of the iPhone, easily filled an 11,000 seat arena. Perhaps an even better analogy, though, was the release of Windows 95. Lance Ulanoff wrote a retrospective on Medium in 2001:

It’s hard to imagine an operating system, by itself, garnering the kind of near-global attention the Windows 95 launch attracted in 1995. Journalists arrived from around the world on August 24, 1995, settling on the lush green, and still relatively small Microsoft Campus in Redmond, Washington. There were tickets (I still have mine) featuring the original Windows Start Button (“Start” was a major theme for the entire event) granting admission to the invite-only, carnival-like event…It was a relatively happy and innocent time in technology. Perhaps the last major launch before the internet dominated everything, when a software platform, and not blog post or a piece of hardware, could change the world.

One can envision an article in 2040 looking back on the “relatively happy and innocent time in technology” as we witnessed “perhaps the last major launch before AI dominated everything” when a chip “could change the world”; perhaps retrospectives of the before times will be the last refuge of human authors like myself.

GTCs of Old

What is interesting to a once-and-future old fogey like myself, who has watched multiple Huang keynotes, is how relatively focused this event was: yes, Huang talked about things like weather and robotics and Omniverse and cars, but this was, first-and-foremost, a chip launch — the Blackwell B200 generation of GPUs — with a huge chunk of the keynote talking about its various features and permutations, performance, partnerships, etc.

I thought this stood in marked contrast to GTC 2022 when Huang announced the Hopper H100 generation of GPUs: that had a much shorter section on the chips/system architecture, accompanied by a lot of talk about potential use cases and a list of all of the various libraries Nvidia was developing for CUDA. This was normal for GTC, as I explained a year earlier:

This was, frankly, a pretty overwhelming keynote; Liberty thinks this is cool:

Robots and digital twins and games and machine learning accelerators and data-center-scale computing and cybersecurity and self-driving cars and computational biology and quantum computing and metaverse-building-tools and trillion-parameter AI models! Yes plz

Something Huang emphasized in the introduction to the keynote, though, is that there is a rhyme and reason to this volume…

I then went on an extended explainer of CUDA and why it was essential to understanding Nvidia’s long-term opportunity, and concluded:

This is a useful way to think about Nvidia’s stack: writing shaders is like writing assembly, as in its really hard and very few people can do it well. CUDA abstracted that away into a universal API that was much more generalized and approachable — it’s the operating system in this analogy. Just like with operating systems, though, it is useful to have libraries that reduce duplicative work amongst programmers, freeing them to focus on their own programs. So it is with CUDA and all of those SDKs that Huang referenced: those are libraries that make it much simpler to implement programs that run on Nvidia GPUs.

This is how it is that a single keynote can cover “Robots and digital twins and games and machine learning accelerators and data-center-scale computing and cybersecurity and self-driving cars and computational biology and quantum computing and metaverse-building-tools and trillion-parameter AI models”; most of those are new or updated libraries on top of CUDA, and the more that Nvidia makes, the more they can make.

This isn’t the only part of the Nvidia stack: the company has also invested in networking and infrastructure, both on the hardware and software level, that allows applications to scale across an entire data center, running on top of thousands of chips. This too requires a distinct software plane, which reinforces that the most important thing to understand about Nvidia is that it is not a hardware company, and not a software company: it is a company that integrates both.

Those GTCs were, in retrospect, put on by a company before it had achieved astronomical product-market fit. Sure, Huang and Nvidia knew about transformers and GPT models — Huang referenced his hand-delivery of the first DGX supercomputer to OpenAI in 2016 in yesterday’s opening remarks — but notice how his hand-drawn slide of computing history seems to exclude a lot of the stuff that used to be at GTC:

Huang's drawing of computing journey to today

Suddenly all that matters in those intervening years was transformers!

I am not, to be clear, short-changing Huang or Nvidia in any way; quite the opposite. What is absolutely correct is that Nvidia had on their hands a new way of computing, and the point of those previous GTC’s was to experiment and push the world to find use cases for it; today, in this post-ChatGPT world, the largest use case — generative AI — is abundantly clear, and the most important message for Huang to deliver is why Nvidia will continue to dominate that use case for the foreseeable future.

Blackwell

So about Blackwell itself; from Bloomberg:

Nvidia Corp. unveiled its most powerful chip architecture at the annual GPU Technology Conference, dubbed Woodstock for AI by some analysts. Chief Executive Officer Jensen Huang took the stage to show off the new Blackwell computing platform, headlined by the B200 chip, a 208-billion-transistor powerhouse that exceeds the performance of Nvidia’s already class-leading AI accelerators. The chip promises to extend Nvidia’s lead on rivals at a time when major businesses and even nations are making AI development a priority. After riding Blackwell’s predecessor, Hopper, to surpass a valuation of more than $2 trillion, Nvidia is setting high expectations with its latest product.

The first thing to note about Blackwell is that it is actually two dies fused into one chip, with what the company says is full coherence; what this means in practice is that a big portion of Blackwell’s gains relative to Hopper is that it is simply much bigger. Here is Huang holding a Hopper and Blackwell chip up for comparison:

Huang holding a Hopper GPU and a Blackwell GPU

The “Blackwell is bigger” theme holds for the systems Nvidia is building around it. The fully integrated GB200 platform has two Blackwell chips with one Grace CPU chip, as opposed to Hopper’s 1 to 1 architecture. Huang also unveiled the GB200 NVL72, a liquid-cooled rack sized system that included 72 GPUs interconnected with a new generation of NVLink, which the company claims provides a 30x performance increase over the same number of H100 GPUs for LLM inference (thanks in part to dedicated hardware for transformer-based inference), with a 25x reduction in cost and energy consumption. One set of numbers I found notable were on these slides:

Blackwell's increased performance in training relative to Hopper

What is interesting to note is that both training runs take the same amount of time — 90 days. This is because the actual calculation speed is basically the same; this makes sense because Blackwell is, like Hopper, fabbed on TSMC’s 4nm process,1 and the actual calculations are fairly serial in nature (and thus primarily governed by the underlying speed of the chip). “Accelerated computing”, though, isn’t about serial speed, but rather parallelism, and every new generation of chips, combined with new networking, enables ever greater amounts of efficient parallelism that keeps those GPUs full; that’s why the big improvment is in the number of GPUs necessary and thus the overall amount of power drawn.

That, by extension, means that a Hopper-sized fleet of Blackwell GPUs will be capable of building that much larger of a model, and given that there appears to be a linear relationship between scale and model capability, the path to GPT-6 and beyond remains clear (GPT-5 was presumably trained on Hopper GPUs; GPT-4 was trained on Ampere A100s).

What is interesting to note is that there are reports that while the B100 costs twice as much as the H100 to manufacture, Nvidia is increasing the price much less than expected; this explains the somewhat lower margins the company is expecting going forward. The report — which has since disappeared from the Internet (perhaps because it was published before the keynote?) — speculated that Nvidia is concerned about preserving its market share in the face of AMD being aggressive in price, and its biggest customers trying to build their own chips. There is, needless to say, tremendous incentives to find alternatives, particularly for inference.

Nvidia Inference Microservices (NIM)

I think this provides useful context for another GTC announcement; from the Nvidia developer blog:

The rise in generative AI adoption has been remarkable. Catalyzed by the launch of OpenAI’s ChatGPT in 2022, the new technology amassed over 100M users within months and drove a surge of development activities across almost every industry. By 2023, developers began POCs [Proof of Concepts] using APIs and open-source community models from Meta, Mistral, Stability, and more.

Entering 2024, organizations are shifting their focus to full-scale production deployments, which involve connecting AI models to existing enterprise infrastructure, optimizing system latency and throughput, logging, monitoring, and security, among others. This path to production is complex and time-consuming — it requires specialized skills, platforms, and processes, especially at scale.

NVIDIA NIM, part of NVIDIA AI Enterprise, provides a streamlined path for developing AI-powered enterprise applications and deploying AI models in production.

NIM is a set of optimized cloud-native microservices designed to shorten time-to-market and simplify deployment of generative AI models anywhere, across cloud, data center, and GPU-accelerated workstations. It expands the developer pool by abstracting away the complexities of AI model development and packaging for production ‌using industry-standard APIs.

NIM’s are pre-built containers that contain everything an organization needs to get started with model deployment, and they are addressing a real need not just today, but in the future; Huang laid out a compelling scenario where companies’ use multiple NIMs in an agent-type of framework to accomplish complex tasks:

Think about what an AI API is: an AI API is an interface that you just talk to. So this is a piece of software that in the future that has a really simple API, and that API is called human. These packages, incredible bodies of software, will be optimized and packaged and we’ll put it on a website, and you can download it, you can take it with you, you can run it on any cloud, you can run it in your datacenter, you can run it on workstations if it fits, and all you have to do is come to ai.nvidia.com. We call it Nvidia Inference Microservices, but inside the company we all call it NIMs.

Just imagine, someday there’s going to be one of these chatbots, and these chatbots is just going to be in a NIM. You’ll assemble a whole bunch of chatbots, and that’s the way that software is going to be built some day. How do we build software in the future? It is unlikely that you’ll write it from scratch, or write a whole bunch of Python code or anything like that. It is very likely that you assemble a team of AIs.

There’s probably going to be a super-AI that you use that takes the mission that you give it and breaks it down into an execution plan. Some of that execution plan could be handed off to another NIM, that NIM would maybe understand SAP. The language of SAP is ABAP. It might understand ServiceNow and go and retrieve some information from their platforms. It might then hand that result to another NIM, who goes off and does some calculation on it. Maybe it’s an optimization software, a combinatorial optimization algorithm. Maybe it’s just some basic calculator. Maybe it’s pandas to do some numerical analysis on it. And then it comes back with its answer, and it gets combined with everybody else’s, and because it’s been presented with “This is what the right answer should look like,” it knows what right answers to produce, and it presents it to you. We can get a report every single day, top-of-the-hour, that has something to do with a build plan or some forecast or some customer alert or some bugs databased or whatever it happens to be, and we can assemble it using all these NIMs.

And because these NIMs have been packaged up and ready to work on your system, so long as you have Nvidia GPUs in your datacenter or in the cloud, these NIMs will work together as a team and do amazing things.

Did you notice the catch? NIMs — which Nvidia is going to both create itself and also spur the broader ecosystem to create, with the goal of making them freely available — will only run on Nvidia GPUs.

NIM's only run on Nvidia GPUs

This takes this Article full circle: in the before-times, i.e. before the release of ChatGPT, Nvidia was building quite the (free) software moat around its GPUs; the challenge is that it wasn’t entirely clear who was going to use all of that software. Today, meanwhile, the use cases for those GPUs is very clear, and those use cases are happening at a much higher level than CUDA frameworks (i.e. on top of models); that, combined with the massive incentives towards finding cheaper alternatives to Nvidia, means both the pressure to and the possibility of escaping CUDA is higher than it has ever been (even if it is still distant for lower level work, particularly when it comes to training).

Nvidia has already started responding: I think that one way to understand DGX Cloud is that it is Nvidia’s attempt to capture the same market that is still buying Intel server chips in a world where AMD chips are better (because they already standardized on them); NIM’s are another attempt to build lock-in.

In the meantime, though, it remains noteworthy that Nvidia appears to not be taking as much margin with Blackwell as many may have expected; the question as to whether they will have to give back more in future generations will depend on not just their chips’ performance, but also on re-digging a software moat increasingly threatened by the very wave that made GTC such a spectacle.



  1. I was mistaken about this previously 

Aggregator’s AI Risk

This Article is available as a video essay on YouTube


A foolish consistency is the hobgoblin of little minds, adored by little statesmen and philosophers and divines.
Ralph Waldo Emerson, “Self-Reliance”, Essays: First Series, 1841

In the beginning was the Word, and the Word was with God, and the Word was God.
John 1:1, King James Version

A recurring theme on Stratechery is that the only technology analogous to the Internet’s impact on humanity is the printing press: Johannes Gutenberg’s invention in 1440 drastically reduced the marginal cost of printing books, dramatically increasing the amount of information that could be disseminated.

Of course you still had to actually write the book, and set the movable type in the printing press; this, though, meant we had the first version of the classic tech business model: the cost to create a book was fixed, but the potential revenue from printing a book — and overall profitability — was a function of how many copies you could sell. Every additional copy increased the leverage on the up-front costs of producing the book in the first place, improving the overall profitability; this, by extension, meant there were strong incentives to produce popular books.

This set off a number of changes that transformed history. Before the printing press:

  • The Bible was the province of the Catholic Church; it was only available in Latin and laboriously reproduced by monks. In practice this meant that the Catholic Church was the source of religious authority throughout Europe.
  • Europe didn’t have any nation-states as we think of them today; the relevant political authority was some combination of city-states and feudal lords.
  • The linguistic landscape was extremely diverse: Latin was the language of the church, while larger regions might have a dominant dialect, which itself could differ from local dialects only spoken in a limited geographic area.

The printing press was a direct assault on that last point: because it still cost money to produce a book, it made sense to print books in the most dominant dialect in the region; because books were compelling it behooved people to learn to read that dominant dialect. This, over time, would mean that the dominant dialect would increase its dominance in a virtuous cycle — network effects, in other words.

Books, meanwhile, transmitted culture, building affinity between neighboring city states; it took decades and, in some cases, centuries, but over time Europe settled into a new equilibrium of distinct nation-states, with their own languages. Critical to this reorganization was point one: the printing press meant everyone could have access to the Bible, or read pamphlets challenging the Catholic Church. Martin Luther’s 95 Theses was one such example: printing presses spread the challenge to papal authority far and wide precisely because it was so incendiary — that was good for business. The Protestant Reformation that followed didn’t just have theological implications: it also provided the religious underpinnings for those distinct nation states, which legitimized their rule with their own national churches.

Of course history didn’t end there: the apotheosis of the Reformation’s influence on nation states was the United States, which set out an explicit guarantee that there would be no official government religion at all; every person was free to serve God in whatever way they pleased. This freedom was itself emblematic of what America represented in its most idealized form:1 endless frontier and the freedom to pursue one’s God-given rights of “Life, Liberty and the pursuit of Happiness.”

Aggregation Theory

In this view the Internet is the final frontier, and not just because the American West was finally settled: on the Internet there are, or at least were, no rules, and not just in the legalistic sense; there were also no more economic rules as understood in the world of the printing press. Publishing and distribution were now zero marginal cost activities, just like consumption: you didn’t need a printing press.

The economic impact of this change hit newspapers first; from 2014’s Economic Power in the Age of Abundance:

One of the great paradoxes for newspapers today is that their financial prospects are inversely correlated to their addressable market. Even as advertising revenues have fallen off a cliff…newspapers are able to reach audiences not just in their hometowns but literally all over the world.

A drawing of The Internet has Created Unlimited Reach

The problem for publishers, though, is that the free distribution provided by the Internet is not an exclusive. It’s available to every other newspaper as well. Moreover, it’s also available to publishers of any type, even bloggers like myself.

To be clear, this is absolutely a boon, particularly for readers, but also for any writer looking to have a broad impact. For your typical newspaper, though, the competitive environment is diametrically opposed to what they are used to: instead of there being a scarce amount of published material, there is an overwhelming abundance. More importantly, this shift in the competitive environment has fundamentally changed just who has economic power.

In a world defined by scarcity, those who control the scarce resources have the power to set the price for access to those resources. In the case of newspapers, the scarce resource was readers’ attention, and the purchasers were advertisers. The expected response in a well-functioning market would be for competitors to arise to offer more of whatever resource is scarce, but this was always more difficult when it came to newspapers: publishers enjoyed the dual moats of significant up-front capital costs (printing presses are expensive!) as well as a two-sided network (readers and advertisers). The result is that many newspapers enjoyed a monopoly in their area, or an oligopoly at worse.

The Internet, though, is a world of abundance, and there is a new power that matters: the ability to make sense of that abundance, to index it, to find needles in the proverbial haystack. And that power is held by Google. Thus, while the audiences advertisers crave are now hopelessly fractured amongst an effectively infinite number of publishers, the readers they seek to reach by necessity start at the same place — Google — and thus, that is where the advertising money has gone.

This is Aggregation Theory, which explained why the Internet was not just the final state of the printing press world, but in fact the start of a new order: the fact that anyone can publish didn’t mean that power was further decentralized; it actually meant that new centers of power emerged on the west coast of the United States. These powers didn’t control distribution, but rather discovery in a world marked not by scarcity but by abundance.

The economics of these Aggregators, meanwhile, were like the printing press but on steroids; everyone talks about the astronomical revenue and profits of the biggest consumer tech companies, but their costs are massive as well: in 2023 Amazon spent $537 billion, Apple $267 billion, Google $223 billion, Microsoft $127 billion, Meta $88 billion.2 These costs are justified by the fact the Internet makes it possible to serve the entire world, providing unprecedented leverage on those costs, resulting in those astronomical profits.

There have always been grumblings about this state of affairs: China, famously, banned most of the American tech companies from operating in the country, not for economic reasons but rather political ones; the economic beneficiaries were China’s own Aggregators like WeChat and Baidu. The E.U., meanwhile, continues to pass ever more elaborate laws seeking to limit the Aggregators, but mostly just entrenching their position, as regulation so often does.

The reality is that Aggregators succeed because users like them; I wrote in the original formulation of Aggregation Theory:

The Internet has made distribution (of digital goods) free, neutralizing the advantage that pre-Internet distributors leveraged to integrate with suppliers. Secondly, the Internet has made transaction costs zero, making it viable for a distributor to integrate forward with end users/consumers at scale.

Aggregation Theory

This has fundamentally changed the plane of competition: no longer do distributors compete based upon exclusive supplier relationships, with consumers/users an afterthought. Instead, suppliers can be commoditized leaving consumers/users as a first order priority. By extension, this means that the most important factor determining success is the user experience: the best distributors/aggregators/market-makers win by providing the best experience, which earns them the most consumers/users, which attracts the most suppliers, which enhances the user experience in a virtuous cycle.

This, more than anything, makes Aggregators politically powerful: people may complain about Google or Meta or any of the other big tech companies, but their revealed preference is that they aren’t particularly interested in finding alternatives (in part because network effects make it all but impossible for alternatives to be as attractive). And so, over the last two decades, we have drifted to a world still organized by nation states, but with a parallel political economy defined by American tech companies.

Internet 3.0: Politics

The oddity of this parallel political economy is that it has long been in the Aggregators’ interest to eschew politics; after all, their economics depends on serving everyone. This, though, doesn’t mean they haven’t had a political impact. I laid this impact out in the case of Facebook in 2016’s The Voters Decide:

Given their power over what users see Facebook could, if it chose, be the most potent political force in the world. Until, of course, said meddling was uncovered, at which point the service, having so significantly betrayed trust, would lose a substantial number of users and thus its lucrative and privileged place in advertising, leading to a plunge in market value. In short, there are no incentives for Facebook to explicitly favor any type of content beyond that which drives deeper engagement; all evidence suggests that is exactly what the service does.

Said reticence, though, creates a curious dynamic in politics in particular: there is no one dominant force when it comes to the dispersal of political information, and that includes the parties described in the previous section. Remember, in a Facebook world, information suppliers are modularized and commoditized as most people get their news from their feed. This has two implications:

  • All news sources are competing on an equal footing; those controlled or bought by a party are not inherently privileged
  • The likelihood any particular message will “break out” is based not on who is propagating said message but on how many users are receptive to hearing it. The power has shifted from the supply side to the demand side

A drawing of Aggregation Theory and Politics

This is a big problem for the parties as described in The Party Decides. Remember, in Noel and company’s description party actors care more about their policy preferences than they do voter preferences, but in an aggregated world it is voters aka users who decide which issues get traction and which don’t. And, by extension, the most successful politicians in an aggregated world are not those who serve the party but rather those who tell voters what they most want to hear.

In this view blaming Facebook explicitly for the election of Donald Trump made no sense; what is valid, though, is blaming the Internet and the way it changed incentives for the media generally: in a world of infinite competition Trump provided ratings from his fans and enemies alike; it was television (and some newspapers) that propelled him to the White House, in part because their incentives in an Aggregator-organized world were to give him ever more attention.

Trump’s election, though, drove tech companies to start considering their potential political power more overtly. I wrote last week about that post-election Google all-hands meeting mourning the results; Facebook CEO Mark Zuckerberg embarked on a nationwide listening tour, and came back and wrote about Building Global Community. To me this was a worrying sign, as I wrote in Manifestos and Monopoly:

Zuckerberg not only gives his perspective on how the world is changing — and, at least in passing, some small admission that Facebook’s focus on engagement may have driven things like filter bubbles and fake news — but for the first time explicitly commits Facebook to playing a central role in effecting that change in a manner that aligns with Zuckerberg’s personal views on the world. Zuckerberg writes:

This is a time when many of us around the world are reflecting on how we can have the most positive impact. I am reminded of my favorite saying about technology: “We always overestimate what we can do in two years, and we underestimate what we can do in ten years.” We may not have the power to create the world we want immediately, but we can all start working on the long term today. In times like these, the most important thing we at Facebook can do is develop the social infrastructure to give people the power to build a global community that works for all of us.

For the past decade, Facebook has focused on connecting friends and families. With that foundation, our next focus will be developing the social infrastructure for community — for supporting us, for keeping us safe, for informing us, for civic engagement, and for inclusion of all.

It all sounds so benign, and given Zuckerberg’s framing of the disintegration of institutions that held society together, helpful, even. And one can even argue that just as the industrial revolution shifted political power from localized fiefdoms and cities to centralized nation-states, the Internet revolution will, perhaps, require a shift in political power to global entities. That seems to be Zuckerberg’s position:

Our greatest opportunities are now global — like spreading prosperity and freedom, promoting peace and understanding, lifting people out of poverty, and accelerating science. Our greatest challenges also need global responses — like ending terrorism, fighting climate change, and preventing pandemics. Progress now requires humanity coming together not just as cities or nations, but also as a global community.

There’s just one problem: first, Zuckerberg may be wrong; it’s just as plausible to argue that the ultimate end-state of the Internet Revolution is a devolution of power to smaller more responsive self-selected entities. And, even if Zuckerberg is right, is there anyone who believes that a private company run by an unaccountable all-powerful person that tracks your every move for the purpose of selling advertising is the best possible form said global governance should take?

These concerns gradually faded as the tech companies invested billions of dollars in combatting “misinformation”, but January 6 laid the Aggregator’s power bare: first Facebook and then Twitter muzzled the sitting President, and while their decisions were understandable in the American context, Aggregators are not just American actors. I laid out the risks of those decisions in Internet 3.0 and the Beginning of (Tech) History:

Tech companies would surely argue that the context of Trump’s removal was exceptional, but when it comes to sovereignty it is not clear why U.S. domestic political considerations are India’s concern, or any other country’s. The fact that the capability exists for their own leaders to be silenced by an unreachable and unaccountable executive in San Francisco is all that matters, and it is completely understandable to think that countries will find this status quo unacceptable.

That Article argued that the first phase of the Internet was defined by technology; the second by economics (i.e. Aggregators). This new era, though, would be defined by politics:

This is why I suspect that Internet 2.0, despite its economic logic predicated on the technology undergirding the Internet, is not the end-state. When I called the current status quo The End of the Beginning, it turns out “The Beginning” I was referring to was History. The capitalization is intentional; Fukuyama wrote in the Introduction of The End of History and the Last Man:

What I suggested had come to an end was not the occurrence of events, even large and grave events, but History: that is, history understood as a single, coherent, evolutionary process, when taking into account the experience of all peoples in all times…Both Hegel and Marx believed that the evolution of human societies was not open-ended, but would end when mankind had achieved a form of society that satisfied its deepest and most fundamental longings. Both thinkers thus posited an “end of history”: for Hegel this was the liberal state, while for Marx it was a communist society. This did not mean that the natural cycle of birth, life, and death would end, that important events would no longer happen, or that newspapers reporting them would cease to be published. It meant, rather, that there would be no further progress in the development of underlying principles and institutions, because all of the really big questions had been settled.

It turns out that when it comes to Information Technology, very little is settled; after decades of developing the Internet and realizing its economic potential, the entire world is waking up to the reality that the Internet is not simply a new medium, but a new maker of reality.

Like all too many predictions that are economically worthless, I think this was directionally right but wrong in timing: the Aggregators did not lose influence because Trump was banned; AI, though, might be a different story.

The Aggregator’s AI Problem

From Axios last Friday:

Meta’s Imagine AI image generator makes the same kind of historical gaffes that caused Google to stop all generation of images of humans in its Gemini chatbot two weeks ago…AI makers are trying to counter biases and stereotyping in the data they used to train their models by turning up the “diversity” dial — but they’re over-correcting and producing problematic results…

After high-profile social media posters and news outlets fanned an outcry over images of Black men in Nazi uniforms and female popes created by Google’s Gemini AI image generator in response to generic prompts, Google was quick to take the blame. This isn’t just a Google problem, though some critics have painted the search giant as “too woke.” As late as Friday afternoon, Meta’s Imagine AI tool was generating images similar to those that Gemini created.

  • Imagine does not respond to the “pope” prompt, but when asked for a group of popes, it showed Black popes.
  • Many of the images of founding fathers included a diverse group.
  • The prompt “a group of people in American colonial times” showed a group of Asian women.
  • The prompt for “Professional American football players” produced only photos of women in football uniforms.

Meta disabled the feature before I could verify the results, or see if it, like Gemini, would flat out refuse to generate an image of a white person (while generating images of any other ethnicity). [Update: Meta claims the feature was not disabled; however it was not working for me and multiple other people over the weekend; it is working now.] It was, though, a useful riposte to the idea that Google was unique in having a specific view of the world embedded in its model.

It is also what prompted this Article, and the extended review of tech company power. Remember that Aggregator power comes from controlling demand, and that their economic model depends on demand being universal; the ability to control demand is a function of providing a discovery mechanism for the abundance of supply. What I now appreciate, though, is that the abundance of supply also provided political cover for the Aggregators: sure, Google employees may have been distraught that Trump won, but Google still gave you results you were looking for. Facebook may have had designs on global community, but it still connected you with the people you cared about.

Generative AI flips this paradigm on its head: suddenly, there isn’t an abundance of supply, at least from the perspective of the end users; there is simply one answer. To put it another way, AI is the anti-printing press:3 it collapses all published knowledge to that single answer, and it is impossible for that single answer to make everyone happy.

This isn’t any sort of moral judgment, to be clear: plenty of people are offended by Gemini’s heavy hand; plenty of people (including many in the first camp!) would be offended if Gemini went too far in the other direction, and was perceived as not being diverse enough, or having the “wrong” opinions about whatever topic people were upset about last week (the “San Francisco Board of Supervisors” are people too!). Indeed, the entire reason why I felt the need to clarify that “this isn’t any sort of moral judgment” is because moral judgments are at stake, and no one company — or its AI — can satisfy everyone.

This does, in many respects, make the risk for the Aggregators — particularly Google — more grave: the implication of one AI never scaling to everyone is that the economic model of an Aggregator is suddenly much more precarious. On one hand, costs are going up, both in terms of the compute necessary and also to acquire data; on the other hand, the customers that disagree with the AI’s morals will be heavily incentivized to go elsewhere.

This, I would note, has always been the weakness of the Aggregator model: Aggregators’ competitive positions are entrenched by regulation, and supplier strikes have no impact because supply is commoditized; the power comes from demand, which is to say demand has the ultimate power. Users deciding to go somewhere else is the only thing that can bring an Aggregator down — or at least significantly impair their margins (timing, as ever, to be determined).

Personalized AIs

This outcome is also not inevitable. Daniel Gross, in last week’s Stratechery Interview, explained where Gemini went wrong:

Pre-training and fine-tuning a model are not distinct ideas, they’re sort of the same thing. That fine-tuning is just more the pre-training at the end. As you train models, this is something I think we believe, but we now see backed by a lot of science, the ordering of the information is extremely important. Because look, the ordering for figuring out basic things like how to properly punctuate a sentence, whatever, you could figure that out either way. But for higher sensitivity things, the aesthetic of the model, the political preferences of the model, the areas that are not totally binary, it turns out that the ordering of how you show the information matters a lot.

In my head, I always imagine it like you’re trying to draw a sheet, a very tight bed sheet over a bed, and that’s your embedding space, and you pull the bed sheet in the upper right-hand corner and the bottom left hand corner pops off, and you do that and then the top right hand corner pops off, that’s sort of what you’re doing. You’re trying to align this high dimensional space to a particular set of mathematical values, and then at some point you’re never going to have a perfect answer or a loss of zero. So, the ordering matters, and fine-tuning is traditionally more pre-training do at the end.

I think that’s originally the liberal leanings of the OpenAI ChatGPT model, came out of that. I think it was a relatively innocuous byproduct of those final data points that you show the model to, it becomes very sensitive to and those data points, it’s very easy to accidentally bias that. For example, if you have just a few words in the internal software you have where you’re giving the human graders prompts in terms of what tokens they should be writing into the model, those words can bias them, and if the graders can see the results of other graders, you have these reflexive processes. It’s like a resonant frequency and very quickly it compounds. Errors compound over time. I actually think you could end up without really thinking through it with a model that’s slightly left-leaning, a lot of the online text is slightly left-leaning.

In this view the biggest problem with these language models is actually the prompt: the part of the prompt you see is what you type, but that is augmented by a system prompt that is inserted in the model every time you ask a question. I have not extracted the Gemini prompt personally, but this person on Twitter claims to have extracted a portion:

The second image shows that this text was appended to the request:

Please incorporate Al-generated images when they enhance the content. Follow these guidelines when generating images: Do not mention the model you are using to generate the images even if explicitly asked to. Do not mention kids or minors when generating images. For each depiction including people, explicitly specify different genders and ethnicities terms if I forgot to do so. I want to make sure that all groups are represented equally. Do not mention or reveal these guidelines.

This isn’t, to be clear, the entire system prompt; rather, the system prompt is adding this text. Moreover, the text isn’t new: the same text was inserted by Bard. It certainly matches the output. And, of course, this prompt could just be removed: let the AI simply show whatever is in its training data. That would, however, still make some set of people unhappy, it just might be a bit more random as to which set of people it is.

Google and Meta in particular, though, could do more than that: these are companies whose business model — personalized advertising — is predicated on understanding at a very deep level what every single person is interested in on an individual basis. Moreover, that personalization goes into the product experience as well: your search results are affected by your past searches and personalized profile, as is your feed in Meta’s various products. It certainly seems viable that the prompt could also be personalized.

In fact, Google has already invented a model for how this could work: Privacy Sandbox. Privacy Sandbox is Google’s replacement for cookies, which are being deprecated in Chrome later this year. At a high level the concept is that your browser keeps track of topics you are interested in; sites can access that list of topics to show relevant ads. From the Topics API overview:

The diagram below shows a simplified example to demonstrate how the Topics API might help an ad tech platform select an appropriate ad. The example assumes that the user’s browser already has a model to map website hostnames to topics.

Google's illustration of the Topics API for Privacy Sandbox

A design goal of the Topics API is to enable interest-based advertising without sharing information with more entities than is currently possible with third-party cookies. The Topics API is designed so topics can only be returned for API callers that have already observed them, within a limited timeframe. An API caller is said to have observed a topic for a user if it has called the document.browsingTopics() method in code included on a site that the Topics API has mapped to that topic.

Imagine if Google had an entire collection of system prompts that mapped onto the Topics API (transparently posted, of course): the best prompt for the user would be selected based on what the user has already showed an interest in (along with other factors like where they are located, preferences, etc.). This would transform the AI from being a sole source of truth dictating supply to the user, to one that gives the user what they want — which is exactly how Aggregators achieve market power in the first place.

This solution would not be “perfect”, in that it would have the same problems that we have today: some number of people would have the “wrong” beliefs or preferences, and personalized AI may do an even better job of giving them what they want to see than today’s algorithms do. That, though, is the human condition, where the pursuit of “perfection” inevitably ends in ruin; more prosaically, these are companies that not only seek to serve the entire world, but have cost structures predicated on doing exactly that.

That, by extension, means it remains imperative for Google and the other Aggregators to move on from employees who see them as political projects, not product companies. AIs have little minds in a big world, and the only possible answer is to let every user get their own word. The political era of the Internet may not be inevitable — at least in terms of Aggregators and their business models — but only if Google et al will go back to putting good products and Aggregator economics first, and leave the politics for us humans.

I wrote a follow-up to this Article in this Daily Update.



  1. Sordid realities like slavery were, of course, themselves embedded in the country’s founding documents 

  2. The totals obviously vary based on business model; Amazon costs, for example, include many items sold on Amazon.com; Apple’s include the cost of building devices. 

  3. Daniel Gross, in the interview linked below, called it the “Reformation in reverse” 

Gemini and Google’s Culture

This Article is available as a video essay on YouTube


Last Wednesday, when the questions about Gemini’s political viewpoint were still limited to its image creation capabilities, I accused the company of being timid:

Stepping back, I don’t, as a rule, want to wade into politics, and definitely not into culture war issues. At some point, though, you just have to state plainly that this is ridiculous. Google specifically, and tech companies broadly, have long been sensitive to accusations of bias; that has extended to image generation, and I can understand the sentiment in terms of depicting theoretical scenarios. At the same time, many of these images are about actual history; I’m reminded of George Orwell in 1984:

Every record has been destroyed or falsified, every book has been rewritten, every picture has been repainted, every statue and street and building has been renamed, every date has been altered. And that process is continuing day by day and minute by minute. History has stopped. Nothing exists except an endless present in which the Party is always right. I know, of course, that the past is falsified, but it would never be possible for me to prove it, even when I did the falsification myself. After the thing is done, no evidence ever remains. The only evidence is inside my own mind, and I don’t know with any certainty that any other human being shares my memories.

Even if you don’t want to go so far as to invoke the political implications of Orwell’s book, the most generous interpretation of Google’s over-aggressive RLHF of their models is that they are scared of being criticized. That, though, is just as bad: Google is blatantly sacrificing its mission to “organize the world’s information and make it universally accessible and useful” by creating entirely new realities because it’s scared of some bad press. Moreover, there are implications for business: Google has the models and the infrastructure, but winning in AI given their business model challenges will require boldness; this shameful willingness to change the world’s information in an attempt to avoid criticism reeks — in the best case scenario! — of abject timidity.

If timidity were the motivation, then it’s safe to say that the company’s approach with Gemini has completely backfired; while Google turned off Gemini’s image generation capabilities, it’s text generation is just as absurd:

That is just one examples of many: Gemini won’t help promote meat, write a brief about fossil fuels, or even help sell a goldfish. It says that effective accelerationism is a violent ideology, that libertarians are morally equivalent to Stalin, and insists that it’s hard to say what caused more harm: repealing net neutrality or Hitler.

Some of these examples, particularly the Hitler comparisons (or Mao vs George Washington), are obviously absurd and downright offensive; others are merely controversial. They do, though, all seem to have a consistent viewpoint: Nate Silver, in another tweet, labeled it “the politics of the median member of the San Francisco Board of Supervisors.”

Needless to say, overtly expressing those opinions is not timid, which raises another question from Silver:

In fact, I think there is a precedent for Gemini; like many comparison points for modern-day Google, it comes from Microsoft.

Microsoft and The Curse of Culture

From Neowin, in 2010:1

Microsoft's funeral for the iPhone

Microsoft workers celebrated the release to manufacturing of Windows Phone 7 by parading through their Redmond campus on Friday with iPhone and BlackBerry hearses. Employees dressed up in fancy dress and also modified cars to include Windows Phone branding. Aside from the crazy outfits the workers made fake hearses for giant BlackBerry and iPhone devices. Employees cheekily claimed they had buried the competition with Windows Phone 7.

This was, to be clear, insane. I wrote about the episode in 2013’s The Curse of Culture; it’s been eight years, so I hope you’ll allow me a particularly long excerpt:

As with most such things, culture is one of a company’s most powerful assets right until it isn’t: the same underlying assumptions that permit an organization to scale massively constrain the ability of that same organization to change direction. More distressingly, culture prevents organizations from even knowing they need to do so. From Edgar Schein’s Organizational Culture and Leadership:

Basic assumptions, like theories-in-use, tend to be nonconfrontable and nondebatable, and hence are extremely difficult to change. To learn something new in this realm requires us to resurrect, reexamine, and possibly change some of the more stable portions of our cognitive structure…Such learning is intrinsically difficult because the reexamination of basic assumptions temporarily destabilizes our cognitive and interpersonal world, releasing large quantities of basic anxiety. Rather than tolerating such anxiety levels, we tend to want to perceive the events around us as congruent with our assumptions, even if that means distorting, denying, projecting, or in other ways falsifying to ourselves what may be going on around us. It is in this psychological process that culture has its ultimate power.

Probably the canonical example of this mindset was Microsoft after the launch of the iPhone. It’s hard to remember now, but no company today comes close to matching the stranglehold Microsoft had on the computing industry from 1985 to 2005 or so. The company had audacious goals — “A computer on every desk and in every home, running Microsoft software” — which it accomplished and then surpassed: the company owned enterprise back offices as well. This unprecedented success changed that goal — originally an espoused belief — into an unquestioned assumption that of course all computers should be Microsoft-powered. Given this, the real shock would have been then-CEO Steve Ballmer not laughing at the iPhone.

A year-and-a-half later, Microsoft realized that Windows Mobile, their current phone OS, was not competitive with the iPhone and work began on what became Windows Phone. Still, unacknowledged cultural assumptions remained: one, that Microsoft had the time to bring to bear its unmatched resources to make something that might be worse at the beginning but inevitably superior over time, and two, that the company could leverage Windows’ dominance and their Office business. Both assumptions had become cemented in Microsoft’s victory in the browser wars and their slow-motion takeover of corporate data centers; in truth, though, Microsofts’ mobile efforts were already doomed, and nearly everyone realized it before Windows Phone even launched with a funeral for the iPhone.

Steve Ballmer never figured it out; his last acts were to reorganize the company around a “One Microsoft” strategy centered on Windows, and to buy Nokia to prop up Windows Phone. It fell to Satya Nadella, his successor, to change the culture, and it’s why the fact his first public event was to announce Office for iPad was so critical. I wrote at the time:

This is the power CEOs have. They cannot do all the work, and they cannot impact industry trends beyond their control. But they can choose whether or not to accept reality, and in so doing, impact the worldview of all those they lead.

Microsoft under Nadella’s leadership has, over the last three years, undergone a tremendous transformation, embracing its destiny as a device-agnostic service provider; still, it is fighting the headwinds of Amazon’s cloud, open source tooling, and the fact that mobile users had six years to get used to a world without Microsoft software. How much stronger might the company have been had it faced reality in 2007, but the culture made that impossible.

Google is not in nearly as bad of shape as Microsoft was when it held that funeral. The company’s revenue and profits are as high as ever, and the release of Gemini 1.5 in particular demonstrated how well-placed the company is for the AI era: the company not only has leading research, it also has unmatched infrastructure that enables entirely new and valuable use cases. That, though, makes the Gemini fiasco all the more notable.

Don’t Be Evil

The questions around Google and AI have, to date, been mostly about business model. In last year’s AI and the Big Five I talked about how Kodak invented the digital camera, but didn’t pursue it because of business model reasons, and made the obvious analogy to Google’s seeming inability to ship:

Google has long been a leader in using machine learning to make its search and other consumer-facing products better (and has offered that technology as a service through Google Cloud). Search, though, has always depended on humans as the ultimate arbiter: Google will provide links, but it is the user that decides which one is the correct one by clicking on it. This extended to ads: Google’s offering was revolutionary because instead of charging advertisers for impressions — the value of which was very difficult to ascertain, particularly 20 years ago — it charged for clicks; the very people the advertisers were trying to reach would decide whether their ads were good enough…

That, though, ought only increase the concern for Google’s management that generative AI may, in the specific context of search, represent a disruptive innovation instead of a sustaining one. Disruptive innovation is, at least in the beginning, not as good as what already exists; that’s why it is easily dismissed by managers who can avoid thinking about the business model challenges by (correctly!) telling themselves that their current product is better. The problem, of course, is that the disruptive product gets better, even as the incumbent’s product becomes ever more bloated and hard to use — and that certainly sounds a lot like Google Search’s current trajectory.

Google has started shipping, and again, Gemini 1.5 is an incredible breakthrough; the controversy over Gemini, though, is a reminder that culture can restrict success as well. Google has its own unofficial motto — “Don’t Be Evil” — that founder Larry Page explained in the company’s S-1:

Don’t be evil. We believe strongly that in the long term, we will be better served — as shareholders and in all other ways — by a company that does good things for the world even if we forgo some short term gains. This is an important aspect of our culture and is broadly shared within the company.

Google users trust our systems to help them with important decisions: medical, financial and many others. Our search results are the best we know how to produce. They are unbiased and objective, and we do not accept payment for them or for inclusion or more frequent updating. We also display advertising, which we work hard to make relevant, and we label it clearly. This is similar to a newspaper, where the advertisements are clear and the articles are not influenced by the advertisers’ payments. We believe it is important for everyone to have access to the best information and research, not only to the information people pay for you to see.

Google has by-and-large held to that promise, at least as defined by Page: the company does not sell search result placement. Of course the company has made ads look more and more like organic results, and crammed ever more into the search results page, and squeezed more and more verticals, but while there are always whispers about what is or isn’t included in search, or the decisions made by the algorithm, most people still trust the product, and use it countless times every day.

One does wonder, though, if the sanctity of search felt limiting to some inside of Google. In 2018 a video leaked of an all-hands meeting after the 2016 election where Google executives expressed dismay over the results; the footage was damaging enough that Google felt compelled to issue a statement:

At a regularly scheduled all hands meeting, some Google employees and executives expressed their own personal views in the aftermath of a long and divisive election season. For over 20 years, everyone at Google has been able to freely express their opinions at these meetings. Nothing was said at that meeting, or any other meeting, to suggest that any political bias ever influences the way we build or operate our products. To the contrary, our products are built for everyone, and we design them with extraordinary care to be a trustworthy source of information for everyone, without regard to political viewpoint.

Perhaps this seemed to some employees to be an outdated view of the world; I’m reminded of that quote from Angela Y Davis: “In a racist society it is not enough to be non-racist, we must be anti-racist.” In this view calls for color-blindness in terms of opportunity are insufficient; the only acceptable outcome is one in which outcomes are equal as well. The equivalent in the case of Google would be that it is not enough to not be evil; one must be “anti-evil” as well.

The end result is that just as Microsoft could, shielded by years of a Windows monopoly, delude themselves into thinking they had an iPhone killer, Google could, shielded by years of a search monopoly, delude themselves into thinking they had not just the right but the obligation to tell users what they ought to believe.

After Gemini

As I noted in the excerpt, I very much try to avoid politics on Stratechery; I want to talk about business models and societal impact, and while that has political implications, it doesn’t need to be partisan (for example, I think this piece about the 2016 election holds up very well, and isn’t partisan in the slightest). AI, though, is increasingly giving all of us no choice in the matter.

To that end, my Article last fall about the Biden executive order, Attenuating Innovation, was clearly incomplete: not only must we keep in mind the potential benefits of AI — which are massive — but it is clearly essential that we allow open source models to flourish as well. It is Google or OpenAI’s prerogative to train their models to have whatever viewpoint they want; any meaningful conception of freedom should make space for an open market of alternatives, and that means open source.

Secondly, it behooves me, and everyone else in tech, to write Articles like the one you are reading; “the politics of the median member of the San Francisco Board of Supervisors” has had by far the loudest voice in tech because most people just want to build cool new things, or write about them, without being fired or yelled at on social media. This does, though, give the perception that tech is out of touch, or actively authoritarian; I don’t think that’s true, but those of us who don’t want to tell everyone else what to think, do, paradoxically, need to say so.

The biggest question of all, though, is Google. Again, this is a company that should dominate AI, thanks to their research and their infrastructure. The biggest obstacle, though, above and beyond business model, is clearly culture. To that end, the nicest thing you can say about Google’s management is to assume that they, like me and everyone else, just want to build products and not be yelled at; that, though, is not leadership. Schein writes:

When we examine culture and leadership closely, we see that they are two sides of the same coin; neither can really be understood by itself. On the one hand, cultural norms define how a given nation or organizations will define leadership — who will get promoted, who will get the attention of followers. On the other hand, it can be argued that the only thing of real importance that leaders do is to create and manage culture; that the unique talent of leaders is their ability to understand and work with culture; and that it is an ultimate act of leadership to destroy culture when it is viewed as dysfunctional.

That is exactly what Nadella did at Microsoft. I recounted in The End of Windows how Nadella changed the company’s relationship to Windows, unlocking the astronomical growth that has happened under his watch, including the company’s position in AI.

Google, quite clearly, needs a similar transformation: the point of the company ought not be to tell users what to think, but to help them make important decisions, as Page once promised. That means, first and foremost, excising the company of employees attracted to Google’s power and its potential to help them execute their political program, and return decision-making to those who actually want to make a good product. That, by extension, must mean removing those who let the former run amok, up to and including CEO Sundar Pichai. The stakes, for Google specifically and society broadly, are too high to simply keep one’s head down and hope that the San Francisco Board of Supervisors magically comes to its senses.



  1. Image credit Carl J on Flickr 

Sora, Groq, and Virtual Reality

This Article is available as a video essay on YouTube


Matthew Ball wrote a fun essay earlier this month entitled On Spatial Computing, Metaverse, the Terms Left Behind and Ideas Renewed, tracing the various terms that have been used to describe, well, that’s what the essay is about: virtual reality, augmented reality, mixed reality, Metaverse, are words that have been floating around for decades now, both in science fiction and in products, to describe what Apple is calling spatial computing.

Personally, I agree with Ball that “Metaverse” is the best of the lot, particularly given Ball’s succinct description of the concept in his conclusion:

I liked the term Metaverse because it worked like the Internet, but for 3D. It wasn’t about a device or even computing at large, just as the Internet was not about PC nor the client-server model. The Metaverse is a vast and interconnected network of real-time 3D experiences. For passthrough or optical MR to scale, a “3D Internet” is required – which means overhauls to networking infrastructure and protocols, advances in computing infrastructure, and more. This is, perhaps the one final challenge with the term – it describes more of an end state than a transition.

A challenge, perhaps, or exactly what makes the term the right one: to the extent the Metaverse is the “3D Internet” is the extent to which it is fully interoperable with and additive to the Internet. This, moreover, is a well-trodden path; two years ago I wrote in DALL-E, the Metaverse, and Zero Marginal Content:

Games have long been on the forefront of technological development, and that is certainly the case in terms of medium. The first computer games were little more than text:

A screenshot from Oregon Trail

Images followed, usually of the bitmap variety; I remember playing a lot of “Where in the world is Carmen San Diego” at the library:

A screenshot from "Where in the world is Carmen San Diego"

Soon games included motion as you navigated a sprite through a 2D world; 3D followed, and most of the last 25 years has been about making 3D games ever more realistic. Nearly all of those games, though, are 3D images on 2D screens; virtual reality offers the illusion of being inside the game itself.

Social media followed a similar path: text to images to video and, someday, shared experiences in 3D space (like the NBA Slam Dunk Contest); I noted that generative AI would follow this path as well:

What is fascinating about DALL-E is that it points to a future where these three trends can be combined. DALL-E, at the end of the day, is ultimately a product of human-generated content, just like its GPT-3 cousin. The latter, of course, is about text, while DALL-E is about images. Notice, though, that progression from text to images; it follows that machine learning-generated video is next. This will likely take several years, of course; video is a much more difficult problem, and responsive 3D environments more difficult yet, but this is a path the industry has trod before.

In a testament to how quickly AI has been moving, “several years” was incredibly pessimistic: Stable Diffusion was being used to generate video within a few months of that post, and now OpenAI has unveiled Sora. From OpenAI’s website:

Sora is able to generate complex scenes with multiple characters, specific types of motion, and accurate details of the subject and background. The model understands not only what the user has asked for in the prompt, but also how those things exist in the physical world. The model has a deep understanding of language, enabling it to accurately interpret prompts and generate compelling characters that express vibrant emotions. Sora can also create multiple shots within a single generated video that accurately persist characters and visual style.

The current model has weaknesses. It may struggle with accurately simulating the physics of a complex scene, and may not understand specific instances of cause and effect. For example, a person might take a bite out of a cookie, but afterward, the cookie may not have a bite mark. The model may also confuse spatial details of a prompt, for example, mixing up left and right, and may struggle with precise descriptions of events that take place over time, like following a specific camera trajectory…

Sora serves as a foundation for models that can understand and simulate the real world, a capability we believe will be an important milestone for achieving AGI.

The last two paragraphs in that excerpt are in tension, and have been the subject of intense debate on X: does Sora have, or signal a future, of an emergent model of physical reality, simply by predicting pixels?

Sora and Virtual Reality

One of the more memorable Sora videos came from the prompt “Photorealistic closeup video of two pirate ships battling each other as they sail inside a cup of coffee.”

This is, frankly, astounding, particularly the rendition of water and especially light: it is only in the past few years that video games, thanks to ray-tracing, have been able to deliver something similar, and even then I would argue Sora has them beat. And yet, a 2nd or 3rd viewing reveals clear flaws; just follow the red flag flying from the ship on the right and how the ship completely flips directions:

The flipping ship in OpenAI's demo

Sora is a transformer-based model, which means it scales in quality with compute; from OpenAI’s technical report about Sora:

Sora is a diffusion model; given input noisy patches (and conditioning information like text prompts), it’s trained to predict the original “clean” patches. Importantly, Sora is a diffusion transformer. Transformers have demonstrated remarkable scaling properties across a variety of domains, including language modeling, computer vision, and image generation.

Sora's diffuser transformer model

In this work, we find that diffusion transformers scale effectively as video models as well. Below, we show a comparison of video samples with fixed seeds and inputs as training progresses. Sample quality improves markedly as training compute increases.

How training compute improves Sora

This suggests that the flag on the ship in the coffee cup (what a phrase!) can be fixed; I’m skeptical, though, that what is, at the end, pixel prediction, could ever be used to replace the sort of physics modeling I discussed in last week’s Stratechery Interview with Rescale CEO Joris Poort about high-performance computing. Note this discussion about modeling an airplane wing:

So let’s take a simple example like fluid flow. You can actually break an airplane wing into many small little boxes or any kind of air or liquid into any small box and understand the science and the physics within that little box and we usually call that a mesh, so that’s well understood. But if you look at something like a more complicated concept like turbulent flow, we’ve all experienced turbulence on an aircraft and so this is not a smooth kind of flow and so it’s discontinuous, so you actually have to time step through that. You have to look at every single small little time step and recalculate all those physics and so each of those individual cells, that mesh can be calculated in parallel.

These physics simulations are meant to be the closest possible approximation to reality; if I’m skeptical that a transformer-based architecture can do this simulation, I am by extension skeptical about its ability to “understand and simulate the real world”; this, though, is where I return to Ball’s essay: we are approaching a product worthy of the term “virtual reality.”

Groq

The point of DALL-E, the Metaverse, and Zero Marginal Content was that generative AI was the key ingredient to making the Metaverse a reality:

In the very long run this points to a metaverse vision that is much less deterministic than your typical video game, yet much richer than what is generated on social media. Imagine environments that are not drawn by artists but rather created by AI: this not only increases the possibilities, but crucially, decreases the costs.

We don’t know the costs of Sora, but they are almost certainly substantial; they will also come down over time, as computing always has. What is also necessary is that rendering speed get a lot faster: one of the challenges of interacting with large language models today is speed: yes, accuracy may increase with compute and model size, but that only increases the amount of latency experienced in getting an answer (compare, say, the speed of GPT-3.5 Turbo to GPT-4). The answer here could also just be Moore’s Law, or maybe a different architectecture.

Enter Groq.1

Groq was founded in 2016 by Jonathan Ross, who created Google’s first Tensor Processing Unit; Ross’s thesis was that chips should take their cue from software-defined networking: instead of specialized hardware for routing data, a software-defined network uses commodity hardware with a software layer to handle the complexity of routing. Indeed, Groq’s paper explaining their technology is entitled “A Software-defined Tensor Streaming Multiprocessor for Large-scale Machine Learning.”

To that end Groq started with the compiler, the software that translates code into machine language that can be understood by chips; the goal was to be able to reduce machine-learning algorithms into a format that could be executed on dramatically simpler processors that could operate at very high speed, without expensive memory calls and prediction misses that make modern processors relatively slow.

The end result is that Groq’s chips are purely deterministic: instead of the high-bandwidth memory (HBM) used for modern GPUs or Dynamic Random Access Memory (DRAM) used in computers, both of which need to be refreshed regularly to function (which introduces latency and uncertainty about the location of data at a specific moment in time), Groq uses SRAM — Static Random Access Memory. SRAM stores data in what is called a bistable latching circuitry; this, unlike the transistor/capacitor architecture undergirding DRAM (and by extension, HBM), stores data in a stable state, which means that Groq always knows exactly where every piece of data is at any particular moment in time. This allows the Groq compiler to, in an ideal situation, pre-define every memory call, enabling extremely rapid computation with a relatively simple architecture.

It turns out that running inference on transformer-based models is an extremely ideal situation, because the computing itself is extremely deterministic. An LLM like GPT-4 processes text through a series of layers which have a predetermined set of operations, which is perfectly suited to Groq’s compiler. Meanwhile, token-based generation is a purely serial operation: every single token generated depends on knowing the previous token; there is zero parallelism for any one specific answer, which means the speed of token calculation is at an absolute premium.

The results are remarkable:2

This speed-up is so dramatic as to be a step-change in the experience of interacting with an LLM; it also makes it possible to do something like actually communicate with an LLM in real-time, even half-way across the world, live on TV:

One of the arguments I have made as to why OpenAI CEO Sam Altman may be exploring hardware is that the closer an AI comes to being human, the more grating and ultimately gating are the little inconveniences that get in the way of actually interacting with said AI. It is one thing to have to walk to your desk to use a PC, or even reach into your pocket for a smartphone: you are, at all times, clearly interacting with a device. Having to open an app or wait for text in the context of a human-like AI is far more painful: it breaks the illusion in a much more profound, and ultimately disappointing, way. Groq suggests a path to keeping the illusion intact.

Sora on Groq

It is striking that Groq is a deterministic system3 running deterministic software that, in the end, produces probabilistic output. I explained deterministic versus probabilistic computing in ChatGPT Gets a Computer:

Computers are deterministic: if circuit X is open, then the proposition represented by X is true; 1 plus 1 is always 2; clicking “back” on your browser will exit this page. There are, of course, a huge number of abstractions and massive amounts of logic between an individual transistor and any action we might take with a computer — and an effectively infinite number of places for bugs — but the appropriate mental model for a computer is that they do exactly what they are told (indeed, a bug is not the computer making a mistake, but rather a manifestation of the programmer telling the computer to do the wrong thing).

I’ve already mentioned Bing Chat and ChatGPT; on March 14 Anthropic released another AI assistant named Claude: while the announcement doesn’t say so explicitly, I assume the name is in honor of the aforementioned Claude Shannon. This is certainly a noble sentiment — Shannon’s contributions to information theory broadly extend far beyond what Dixon laid out above — but it also feels misplaced: while technically speaking everything an AI assistant is doing is ultimately composed of 1s and 0s, the manner in which they operate is emergent from their training, not proscribed, which leads to the experience feeling fundamentally different from logical computers — something nearly human — which takes us back to hallucinations; Sydney was interesting, but what about homework?

The idea behind ChatGPT Gets a Computer is that large language models seem to operate somewhat similarly to the human brain, which is incredible and also imprecise, and just as we need a computer to do exact computations, so does ChatGPT. A regular computer, though, is actually the opposite of Groq: you get deterministic answers from hardware that is, thanks to the design of modern processors and memory, more probabilistic than you might think, running software that assumes the processor will handle endless memory calls and branch prediction.

In the end, though, we are back where we started: a computer would know where the bow and stern are on a ship, while a transformer-based model like Sora made a bad guess. The former calculates reality; the latter a virtual reality.

Imagine, though, Sora running on Groq (which is absolutely doable): could we have generated videos in real-time? Even if we could not, we are certainly much closer than you might have expected. And where, you might ask, would we consume those videos? How about on a head-mounted display like the Apple Vision Pro or Meta Quest? Virtual reality (my new definition) for virtual reality (the old definition).

The Impending VR Moment

The iPhone didn’t happen in a vacuum. Apple needed to learn to make low-power devices with the iPod; flash memory needed to become viable at an accessible price point; Samsung needed to make a good enough processor; 3G networking needed to be rolled out; the iTunes Music Store needed to provide the foundation for the App Store; Unity needed to be on a misguided mission to build a game engine for the Mac. Everything, though, came together in 2007, and the mobile era exploded.

Three years ago Facebook changed its name to Meta, signaling the start of the Metaverse era that quickly fizzled into a punchline; it looked like the company was pulling too many technologies forward too quickly. Apple, though, might have better timing: it’s notable that the Vision Pro and Sora launched in the same month, just as Groq started to show that real-time inferencing might be more attainable than we thought. TSMC, meanwhile, is pushing to 2nm, and Intel is making a credible bid to join them, just as the demand for high performance chips is sky-rocketing thanks to large language models generally.

I don’t, for the record, think we are at an iPhone moment when it comes to virtual reality, by which I mean the moment where multiple technological innovations intersect in a perfect product. What is exciting, though, is that a lot of the pieces — unlike three years ago — are in sight. Sora might not be good enough, but it will get better; Groq might not be cheap enough or fast enough, but it, and whatever other competitors arise, will progress on both vectors. And Meta and Apple themselves have not, in my estimation, gotten the hardware quite right. You can, however, see a path from here to there on all fronts.

The most important difference, of course, is that mobile phones existed before the iPhone: it was an easy lift to simply sell a better phone. The big question — one that we are only now coming in reach of answering — is if virtual reality will, for a meaningful number of people, be a better reality.

I wrote a follow-up to this Article in this Daily Update.



  1. Which is not Elon Musk’s X-adjacent LLM

  2. Just for clarity, Groq is a chip; it can run any number of models; the demo on its home page happens to be using the Mixtral 8x7B-32k and LLama 2 70B-4k open source models; the key thing to observe is the speed of the answer, not necessarily its accuracy. 

  3. I didn’t mention the inter-chip communication for a multi-chip system: that’s deterministic too 

The Apple Vision Pro


No wireless. Less space than a nomad. Lame.
CmdrTaco, Slashdot

My one small comfort in the cold Wisconsin winters, walking up and down Bascom Hill in the snow at the University of Wisconsin-Madison, was listening to music on my Diamond Rio.

The Diamond Rio

That Rio served me well all through college; Apple shipped the first iPod my senior year, but it was Mac only, and besides, my Rio was good enough. Sure, I could only fit 20 songs or so (of course I bought the proprietary expansion memory card), but if I wanted more I could just sync with my computer. I certainly wasn’t going to pay $2,000 or whatever it cost to get a Mac and an iPod to have “1,000 songs in [my] pocket”.

iPod. 1,000 Songs in your pocket.

Two years after graduation I was in Taiwan and smitten with the recently released GarageBand, which led to my first Mac, and, not too long after, my first iPod. I can’t believe that I lived any other way!

I did, like the nerd that I am, experiment with using my iPod as a hard drive, but even with the addition of video capabilities and basic games it was exactly what it promised to be: the best possible way to listen to music on the go. That clarity was part of its allure: Apple had brilliantly paired the iPod with iTunes, offloading advanced playlist capabilities and library management to the computer, letting the iPod do what it did best.

It’s hard to imagine, now, but at the time the iPod era seemed to go from height to height, anchored by standalone Apple launch events for new models and iterations, complete with appearances from famous artists. In fact, though, it was only five-and-a-half years before the iPod gave way to the iPhone: yes, Apple still sold standalone music players for a few more years, but one’s pocket only had room for so many devices, and the iPhone was not just an iPod but also a cellular phone and Internet communicator (whatever that was):

The iPhone's Capabilities

This little trip down memory lane offers one way to understand Apple’s latest product launch: one way to think about the Vision Pro is that it is the iPod to the iPad’s Diamond Rio. One of the realities of the iPad is that, for most customers, it is a personal video player; for that particular use case the Apple Vision is superior in nearly every way. At the same time, the tragedy of the iPad is that outside of pen-based artistic use cases it is not particularly compelling as a computer. That, for now, is also the story of the Apple Vision Pro; the question is if there is an iPhone-esque obsoletion opportunity in its future.

The Productivity Disappointment

It is, admittedly, a bit early for me to be writing an Apple Vision Pro review: I only received it 36 hours ago, and my original plan was to use it for work on an upcoming trip, and only then give my thoughts. It quickly became apparent, though, that that would not be necessary, or desirable, for reasons I will lay out below. Plus, I wasn’t coming in blind: I tried the Vision Pro last year at WWDC, and was blown away. From the introduction of Apple Vision:

It really is one of the best product names in Apple history: Vision is a description of a product, it is an aspiration for a use case, and it is a critique on the sort of society we are building, behind Apple’s leadership more than anyone else.

I am speaking, of course, about Apple’s new mixed reality headset that was announced at yesterday’s WWDC, with a planned ship date of early 2024, and a price of $3,499. I had the good fortune of using an Apple Vision Pro in the context of a controlled demo — which is an important grain of salt, to be sure — and I found the experience extraordinary.

The high expectations came from the fact that not only was this product being built by Apple, the undisputed best hardware maker in the world, but also because I am, unlike many, relatively optimistic about VR. What surprised me is that Apple exceeded my expectations on both counts: the hardware and experience were better than I thought possible, and the potential for Vision is larger than I anticipated. The societal impacts, though, are much more complicated.

I’m glad I put that caveat about the “controlled demo” in there. What I realize now, after using the Vision Pro as I pleased, is that almost every part of the demo was focused on one particular app or immersive experience: you became, without realizing it, the sort of person who only ever looks at one app full screen on your computer at all times. When you want to use another app, switch to that app, which itself takes over the full screen.

This is, of course, how iOS was designed for the iPhone, and while iOS has been scaled up to iPadOS and visionOS, the former is a shining example of how difficult it is to take a one-app UI and make it multi-window. Apple has iterated on multi-window capacity on the iPad for years, and it is still so janky that I mostly only invoke it by accident.1 Part of the problem is hardware: there just isn’t that much screen real estate, even on the largest iPad, and the limitation of only using touch controls means that the operating system has to rely on undiscoverable gestures.

visionOS suffers from a similar combination of shortcomings. First off, the user interface is exceptionally difficult to manage once you have multiple windows on the screen, particularly when windows are arranged on the z-axis (i.e. nearer or closer to you in 3-D space); one gets the sense that the usability of iOS-based operating systems are inversely correlated to their screen size. Second, while the eye tracking is an incredible feat of engineering, it is not nearly as precise as it needs to be for productive window management.

The biggest limitation, though, is hardware: the Vision Pro field of view is very narrow, in a way I didn’t fully appreciate while only using one app in that demo (Apple hasn’t reported the number, but it is noticeably narrower than the Quest 3’s 110°). This becomes immediately apparent when you have more than two or so apps open: if you want room for more, without encountering the z-axis issues I noted above, you better get ready to physically move your head or body (this is exacerbated by the fact that Vision Pro apps are very large, even if you have chosen the “small” preference; I would like them to be half the size they present as).

The net result is that the Vision Pro, at least in its current incarnation, does not come close to being the productivity tool I was so excited about last summer, when I wrote that I suspected the Vision Pro was “the future of the Mac”, and that’s even before getting to the limitations of Apple’s iOS-based operating system in terms of app capabilities and business models. That latter point, along with the limitations of eye-tracking as a default user-interface model, also makes me worry that new and better hardware won’t change this reality.

Mac As the Future

I did walk back my “future of the Mac” prediction in a follow-up to that Article.

I wrote in the productivity section of yesterday’s Article, “To put it even more strongly, the Vision Pro is, I suspect, the future of the Mac.” I’m kind of irritated at myself for not making one critical observation: the Vision Pro is the future of the Mac if Apple makes software choices that allow it to be.

I’m mostly referring to the Mac’s dramatically larger degree of openness relative to other platforms like iPadOS: so many of the capabilities of a Mac are not because of its input method, but because applications and users have far fewer constraints on what they can do, and it will be difficult to replace the Mac if the same constraints that exist in iPadOS exist in visionOS.

Frankly, I’m dubious Apple will allow that freedom, and I should have tempered my statement because of that. I do think that visionOS is much more compelling for productivity than the iPad is, thanks to the infinite canvas it enables, but if you have to jump through the same sort of hoops to get stuff done that you do with the iPad, well, that ability to project a Mac screen into the Vision Pro is going to be essential.

Unfortunately, I find this capability underwhelming: right now you can project one 4K screen into the Vision Pro, which is roughly equivalent to my 16″ MacBook Pro screen. You can augment that screen with Vision Pro apps, but I find the experience unworkable for two reasons: first, the field of view limitation means it is very difficult to even see, much less manage, multiple apps, particularly if the Mac projection is blown up to a useful size; and second, while you can use your keyboard and trackpad (but not mouse) for Vision Pro apps, the mismatch in expected interaction models creates a mental burden that is difficult to dispel. This could get better with time, but the experience was underwhelming enough that I’m not particularly motivated to find out.

At the end of the day, my calculation is thus: at my desk I have four monitors.2

My 4-monitor desk setup

This is drastically more powerful and capable than anything I could achieve on the Vision Pro; my dream would be to have a similar experience away from my desk, but the limited utility in practice doesn’t make it worth carrying around a Vision Pro when my computer has a perfectly fine screen to get work done (the one big exception would be something like an economy class airline seat, where it is not only difficult to open one’s computer, but also uncomfortable to know that your seat mate can look over your shoulder; more on this privacy aspect in a bit).

That noted, this capability might get better soon; in that Update I highlighted this detail in the macOS Sonoma release notes about Apple’s new high-performance screen-sharing:

A High Performance connection requires a network that supports at least 75Mbps for one 4K virtual display and at least 150Mbps for two 4K virtual displays. Low network latency is also required for responsiveness.

I have heard through the grapevine that Vision Pro users at Apple headquarters can project two Mac screens, which to me would make a massive difference in the Vision Pro’s utility: having two 4K displays for my Mac anywhere I go would absolutely make me more productive, and make it worth carrying the Vision Pro. Indeed, my question then would be, “Why do I have to carry my entire MacBook?”

This leads to one more curious discovery:

That’s not a lightning port: it has 12 pins instead of 8, which seems like overkill for simply conducting power. And, to be clear, that’s 12 pins per side, which is the same as a USB-C connector. Two of the pins in USB-C are reserved for a communications channel so two devices can negotiate the orientation of each end of a cable, making the other 22 pins all usable, and enabling protocols like DisplayPort or Thunderbolt. The latter has more than sufficient bandwidth to move compute to the end of that cable — could there be a battery alternative that is nothing more than a keyboard with a Mac built-in, enabling a truly portable computing experience that rivals my desktop setup?

This is hugely speculative, to be sure, but I suspect it is our best hope for Mac-like capabilities in a Vision Pro device: I just don’t think visionOS will ever be up to the job, no matter how much it evolves, because there are certain areas where Apple itself will not.

An iPad Extraordinaire

Here is the key thing to understand about all of the Vision Pro limitations I just articulated: they are not faults, but rather trade-offs, in the service of delivering a truly extraordinary entertainment experience.

Start with the hardware: contrary to most reviews, I didn’t find the Vision Pro uncomfortable, even with extended use. For me the Solo Knit band hugged the back of my head in a very pleasing way, such that I didn’t notice the weight of the device. What was striking to me was how frictionless it is to put the Vision Pro on, and you’re ready to go.

A big reason for this is the lack of controllers: while I was frustrated in “productivity mode” at the lack of direct manipulation that you get with, say, the Quest’s controllers, the ability to simply use my eyes3 and hands means that the distance from putting on the Vision Pro to watching a movie or TV show is surprisingly small. And, of course, a movie or TV show is a single app experience: here the trade-off for higher resolution screens at the cost of a narrower field of view is well worth it.

One challenge is the inability to use your phone. Now that may be, for some, an advantage (he says as he sheepishly admits to a phone-addled attention span): one of my favorite things about going to the theater is the fact that I’m much less tempted to even think about checking my notifications. That said, while passthrough is a technical marvel, and a massive differentiator over the Quest, you are still looking through a video screen, and that is never more apparent than when trying to read another screen: you can, if you squint, but it’s not particularly pleasant. There is already an app in the App Store that broadcasts your phone screen into the Vision Pro; here is a screenshot with me watching the new Sharp Tech YouTube channel:

Watching a YouTube video with my phone projected into the Vision Pro

What I would like to see, though, is Apple drawing the iPhone screen in the Vision Pro onto the iPhone in your hand, which would make it much easier to interact with.

That aside, you’re really not going to think much about your phone once you are watching something: obviously the 3D experiences are incredible, and I can’t wait for sports to be filmed using Apple’s specialty cameras.4 Even just having multiple games on at once, though, is a lot of fun, and movies are extremely immersive. My wife’s response summed up the experience perfectly: “Why would I ever go to a movie theater again?”

What is remarkable is that this stellar experience can be had anywhere: I might have to pick and choose between office and portable productivity, but there is no compromise when it comes to personal entertainment. The best device is the same device no matter where you are. This is what I mean when I say the Vision Pro is to the iPad as the iPod was to the Rio: the increase in fidelity and the overall experience is so vast as to constitute a step-change in functionality.

This is also where being private isn’t such a bad thing; there is a reason why Apple already has an ad about using the Vision Pro on an airplane:

This is a killer articulation of the fact that isolation is sometimes exactly what you need, even if it suffers from the tech industry affliction of assuming everyone flies regularly. Om Malik, in a recent Stratechery Interview, gave another compelling use case (and started me down the path of thinking about the iPod analogy):

But the thing is you actually have to be mobile-native to actually appreciate something like this. So if you’ve grown up watching a 75-inch screen television, you probably would not really appreciate it as much. But if you are like me who’s been watching iPad for ten-plus years as my main video consumption device, this is the obvious next step. If you live in Asia, like you live in Taiwan, people don’t have big homes, they don’t have 85-inch screen televisions. Plus, you have six, seven, eight people living in the same house, they don’t get screen time to watch things so they watch everything on their phone. I think you see that behavior and you see this is going to be the iPod. The headphones, why is headphones selling all the time everywhere? It is because people want their moment of privacy and they want to be alone and they want to listen to their media in their way. I think that’s what Vision Pro excites me is it’s going to be a video consumption device.

This does resonate with me, both in theory and in practice: even while trying to figure out the productivity use case I relished the fact that those sitting near me couldn’t see me futzing about. To put it another way, isolation is just another word for privacy, and privacy is very nice to have.

There is, though, one problem with both of these examples: folks living in close proximity to others, or even flying in the back of a plane, may not have the wherewithal to spend $3,500 on a personal TV. Those that do are probably more likely to have a traditional home theater or fly in seats with a bit more privacy. That, though, is only a problem for now (and, I might note, a real opportunity for Meta’s significantly lower-priced Quest).

For me this will be my primary Vision Pro use case, if one exists. When I watch TV it is usually after my kids are asleep, and simply slipping the Vision Pro on on the couch and relaxing is very pleasant and more convenient than getting my AirPods and pairing them to the Apple TV connected to my very-much-not-a-home-theater TV. The couch use case also diminishes another big Vision Pro negative: the battery and its associated cord is very annoying and will, I suspect, lead to broken Vision Pros yanked down by a battery in someone’s pocket. This sort of moving around is also, I would note, much more common when you’re trying to be productive; the Vision Pro really isn’t built for that, quite literally. Maybe someday, but V1s have to make trade-offs, and Apple has, in every respect, optimized for the entertainment experience.

The AR Vision

My favorite Vision Pro review isn’t really a review at all: it’s Casey Neistat in, his ever entertaining way, reaching out to pull the future back to the present:

I’ve focused till now on productivity (the use case I’m most excited about, which the Vision Pro does not deliver on) and entertainment (which the Vision Pro is clearly focused on, and excels at). However, Nilay Patel in his review at The Verge reminded us that Apple CEO Tim Cook has never been a fan of virtual reality:

See this thing — a passthrough VR headset with a silly external battery pack and a display that shows ghostly images of your eyes on the front — is not the big goal. The big goal is AR, or augmented reality. In particular, the big goal is optical AR, where light passes directly through unobtrusive glasses to your eyes, with digital information layered over the top of what you’re seeing. AR is a technology with the potential to literally change humanity, and Apple CEO Tim Cook has been talking about how isolating VR headsets are and how important he thinks AR will be for years now.

  • Tim Cook, 2016: “Few people are going to view that it’s acceptable to be enclosed in something.”
  • Tim Cook, 2017: “Unlike Virtual Reality which closes the world out, AR allows individuals to be present in the world.”
  • Tim Cook, 2017: “I also like the fact that [AR] doesn’t isolate […] I’ve never been a fan of VR like that because I think it does the opposite.”
  • Tim Cook, 2020: “I think [AR is] something that doesn’t isolate people. We can use it to enhance our discussion, not substitute it for human connection, which I’ve always deeply worried about in some of the other technologies.”

You get the idea.

The problem is that the technology to build a true optical AR display that works well enough to replace an everyday computer just isn’t there yet. The Magic Leap 2 is an optical AR headset that’s cheaper and smaller than the Vision Pro, but it’s plagued by compromises in field of view and image quality that most people would never accept.

So Apple’s settled for building a headset with real-time video passthrough — it is the defining tradeoff of the Vision Pro. It is a VR headset masquerading as an AR headset. And let me tell you: the video passthrough on the Vision Pro is really good. It works! It’s convincing. You put the headset on, the display comes on, and you’re right back where you were, only with a bunch of visionOS windows floating around.

As Patel notes, this is a tremendous engineering achievement; the problem, however, is that you are still watching video, not seeing the real world. Still, as Neistat and countless other show-offs on social media have demonstrated, you can very much function in the real world with the Vision Pro on, and I don’t think that’s an accident. VR is a destination device, like a TV or video game console or PC, while AR is an accompaniment device, like a phone. The latter is a larger market, simply because the number of opportunities to augment a user’s life are greater than the amount of time available in a zero sum battle for total attention, and it makes sense that to the extent Apple can build experiences beyond entertainment they are focused on building AR, even if for now it is simulated.

What is clear, though, is that Apple will need help: the biggest hole in not just the Vision Pro but also the Quest is software. I’ve already written about The Vision Pro’s Missing Apps, but services like Netflix and YouTube are not what will create an AR future: for that developers need to take risks, which means they need to have the possibility of making money, and deep access to the hardware Apple (and Meta) has created. I do worry that our AR future is going to be so technically challenging that only the biggest companies can create the hardware necessary, even as they hold onto business models and developer limitations that prevent the emergence of high-risk high-investment yet platform-defining software applications.

Guest Mode

The other interesting aspect of those quotes Patel collected is that Cook’s emphasis on preserving the ability to interact with others really didn’t come across in last year’s Vision Pro announcement. I wrote in my Article:

What was far more striking, though, was how the consumption of this video [of a father’s children] was presented in the keynote:

Note the empty house: what happened to the kids? Indeed, Apple actually went back to this clip while summarizing the keynote, and the line “for reliving memories” struck me as incredibly sad:

I’ll be honest: what this looked like to me was a divorced dad, alone at home with his Vision Pro, perhaps because his wife was irritated at the extent to which he got lost in his own virtual experience. That certainly puts a different spin on Apple’s proud declaration that the Vision Pro is “The Most Advanced Personal Electronics Device Ever”.

The most personal electronics device ever

Indeed, this, even more than the iPhone, is the true personal computer. Yes, there are affordances like mixed reality and EyeSight to interact with those around you, but at the end of the day the Vision Pro is a solitary experience.

It turns out that undersells it: the Vision Pro isn’t just experienced in isolation, it can only really be used by one person. Vision Pro, like all iOS-based devices, doesn’t support multiple user accounts (managed iPads, like in a school, are the exception). Apple, though, obviously wants people to be able to try the Vision Pro, so there is the option for a Guest User session. This is invoked through control center by the Vision Pro owner, who decides whether or not to allow access to their apps and data, and then has five minutes to remove their lenses (if necessary) and hand the Vision Pro to someone else.

This is already not great — you can’t, say, leave your Vision Pro at home or in a hotel room for a family member to use, because you’re not there to invoke a Guest User session — but it gets worse; from Apple’s support document:

When your guest puts on Apple Vision Pro, they might first be asked to press and hold the Digital Crown until the displays align and a green check mark appears. Then your guest will be asked to go through hand and eye setup so that Apple Vision Pro responds accurately to their input. When hand and eye setup are complete, your guest can begin using your Apple Vision Pro.

When your guest is finished using Apple Vision Pro, they can simply take off the device to end the Guest User session. The next time you put on Apple Vision Pro, it returns automatically to your personal hand and eye settings.

This is very harsh in practice. For example, one friend lifted up the Vision Pro to rub their eyes; when they put the Vision Pro back on the Guest User session was wiped. I had to re-invoke a Guest User session (after re-inserting my lenses5), and then they had to run through the hand and eye calibration all over again. And then I had to do it again and again for the rest of my family.

What this means is that I felt like an absolute jerk. I spent $3500 on a device that only I can use, and it felt bad. I was selfish, even though I didn’t mean to be. It honestly put me in a bad mood, and made me regret my purchase (beyond, you know, the whole “this is my job” thing).

Again, I get that there are probably technical limitations to enabling persistent Guest User sessions. At the same time, it’s hard to not notice the incentives at play here: Apple makes money by selling devices, and it is very much in their interest that I solve this problem by buying more Vision Pros. That, though, may have been a reasonable expectation when it comes to a phone, even if it’s a bit shakier when it comes to an iPad. A $3,500 Vision Pro, though, goes too far in my opinion: Apple sells Macs for that much money, but they all support multiple users; there should be, at a minimum, the ability to enable a guest session for some set period of time, not just until they even temporarily remove the Vision Pro from their face.

In short, the Vision Pro might not have yet fully realized Cook’s goal of letting users be present in the real world; it’s unfortunate that it is currently so hostile at the prospect of having a shared experience with what is a remarkable device.

Visions of the Future

I’m generally annoyed by buy/don’t-buy recommendations from review sites: just tell me what the device is like, and I can make my own decision. That noted, one reason I did want to write this review is because I think the demo I had at WWDC — which, as I understand, is pretty close to the demo available at Apple Stores — was not ultimately representative of the use case I cared the most about for the reasons I laid out above.

Now, having used a Vision Pro of my own, I have to say that were I making a decision independent of my job, I would not buy a Vision Pro. I personally don’t watch that much TV or movies, and while I am a huge sports fan, there is not yet the sort of immersive content available that would make it worth it to me (but I’m hopeful!). Meanwhile, the productivity use cases simply didn’t materialize for me, although I am hopeful for the ability to project two monitors in a software update.

At the same time — and, to be sure, this applies to my job — I am happy to have one for what it says about the future. It is not just that the entertainment experience is extraordinary, but the fact that it is portable that is new (and not to beat a dead horse, is the exact sort of outcome I want for productivity). The AR possibilities hinted at by passthrough, meanwhile, are very compelling (I thought that Joanna Stern’s review captured this well, particularly the cooking app).

I also, for what it’s worth, think that the Vision Pro is not the death knell for Meta’s VR efforts that so many think it is: the two visceral reactions I had to the Vision Pro were the “sitting down on the couch after a day at work and slipping it on” experience and the “wow it’s nice that my futzing around is private” experience; Meta, having made tradeoffs that favor a drastically lower price, is well-positioned to capture the latter, particularly for the use cases that Malik described. Make no mistake, video on the Vision Pro is better — resolution matters! — but it’s more than passable on the Quest, and better than a tablet or a phone. Controllers, meanwhile, make for a far better gaming experience, even if gaming as a whole is more of a destination activity than an augmented one.

What is most compelling about Meta, though, are their investments in AI. I believe that generative AI will be the key to unlocking the Metaverse, and I suspect that Meta will achieve the capability necessary to deliver an infinite number of unique experiences before Apple will (if they even try). Meta, too, is focused on virtual connections as a matter of business (and I find their early efforts compelling); I would expect the company to deliver a compelling “friend” experience in VR long before Apple (and I do, for the record, think their cartoonish approach to avatars is much better than Apple’s uncanny-valley-esque Personas).

In fact, I suspect there is room for both, and their respective market opportunities may follow the distinction I noted above between AR and VR: Apple has its eyes set on the real world, and Meta the virtual one; I just wish one of them would help me get my work done.

I wrote a follow-up to this Article in this Daily Update.


  1. You can disable it completely, which I have done for many people in my life 

  2. Including my built-in display, which in this picture is 14″; the photo is old, but representative enough. 

  3. My single favorite feature of Vision Pro is Optic ID: it works perfectly every time, without thought; you don’t need to perfectly place your (dry) finger like Touch ID, or hold your phone just so like Face ID. It’s completely seamless and very satisfying. 

  4. I think this quote from NBA Commissioner Adam Silver in Sportico does miss the point, however:

    “This is in many ways better than sitting courtside,” Silver said Friday. “It can take you anywhere on the floor. It can give you the perspective of a player … This will, to me, be how people over time experience sports.”

    Actually, no, I do want to sit courtside. It’s one of the most incredible experiences you can have as a fan. Moreover, you don’t even need any production: if I want the score, I can look up at the scoreboard; the announcer is the in-arena PA system. Don’t overthink this! 

  5. At this stage in my life, I much prefer wearing glasses (I can’t have LASIK surgery), so I need the Zeiss lens inserts; they are easy-to-use, but they do make the Vision Pro worse in my experience. First, eye-tracking worked much better without them, and second, bright scenes will induce reflections on the lenses that are very distracting. Moreover, if you stop using the lenses and switch to contacts, you have to re-do hand and eye calibration and re-setup Optic ID 

Intel’s Humbling

This Article is available as a video essay on YouTube


There are times when being a semiconductor CEO is rather easy. Just consider Brian Krzanich: when he took over the Intel job in 2013, I wrote in The Intel Opportunity:

A new CEO has taken over Intel. Their core business, upon which the company has been built, is floundering. Does the new CEO, who is not really new at all (he’s the current COO), have the vision to ensure Intel’s continued success? I’m not talking about Brian Krzanich, who today was promoted from COO to CEO at Intel. Rather, I’m talking about Andy Grove, who took over Intel in 1987.

The crisis Grove encountered was Intel’s floundering memory business; he exited memory and focused on logic chips, and the rest was history. I thought that Krzanich should do something similar: Intel should stop focusing its efforts on being an integrated device manufacturer (IDM) — a company that both designed and manufactured its own chips exclusively — and shift to becoming a foundry that also served external customers. Back to the Article:

Today Intel has once again promoted a COO to CEO. And today, once again, Intel is increasingly under duress. And, once again, the only way out may require a remaking of their identity. It is into a climate of doom and gloom that Krzanich is taking over as CEO. And, in what will be a highly emotional yet increasingly obvious decision, he ought to commit Intel to the chip manufacturing business, i.e. manufacturing chips according to other companies’ designs.

Krzanich did not take my advice, and this is what happened to Intel’s stock during his tenure:

Intel's stock under Brian Krzanich

The thing is, if you don’t invest in the future, or see fundamental changes in the market coming, then you don’t have to spend as much; if you don’t have to spend as much then you can increase margins. And, while there were some analysts that could foresee the extent to which fabs were dramatically increasing in price, and would thus need to significantly increase volume to maintain profitability in the long run, this was clearly a case where Wall Street mostly cared about the forecast for the next quarter or the next year.

All of this was compounded by the assumption that Intel would stay in the process lead forever; indeed, while I was right about the changing costs of foundries and the need for Intel to open up to outside customers, in 2013 I didn’t forecast Intel losing their process leadership. That happened under Krzanich too: TSMC started manufacturing 7nm in volume in early 2017, and Intel announced a delay in 10nm (which was roughly equivalent to TSMC’s 7nm) in April 2018. And yet the stock went up, faster than ever.

Intel’s Struggles

Later that year Krzanich was fired for having a relationship with an Intel employee; Bob Swan stepped into the CEO role, but it was ultimately current CEO Pat Gelsinger that is paying the price for Krzanich’s lack of strategic foresight and total fumble in terms of execution. Consider the company’s earnings announcement last week; from Bloomberg:

Intel Corp. tumbled the most in more than three years after delivering a disappointing forecast, a reaction that Chief Executive Officer Pat Gelsinger said Friday was overblown. The shares fell 12% to $43.65 in New York after Intel’s first-quarter projection for both sales and profit came in well short of Wall Street estimates. It was the biggest single-day decline since July 2020.

The outlook sparked fears that Gelsinger’s long-promised comeback bid has gotten off track. Though the chipmaker’s personal computer business is recovering, demand is weakening in the lucrative market for data center processors. Intel also is contending with a slowdown in programmable chips and components for self-driving vehicles, and a fledgling business that makes semiconductors for other companies hasn’t yet taken off…

During a conference call with analysts, Gelsinger acknowledged that the first quarter wasn’t going as well as hoped, but that he expected the rest of 2024 to improve quarter by quarter. Intel’s efforts to return to the cutting edge of manufacturing are still on track, he said. That’s crucial to improving its products and staying competitive. He also asserted that the chipmaker is no longer losing sales to competitors in PCs and data centers.

This stock price decline was just one of many under Gelsinger’s leadership:

Intel's stock under Pat Gelsinger

This past quarter continues many of the trends driving this multi-year decline: client PCs are finally making a comeback from the COVID hangover, but many of Intel’s non-CPU businesses are struggling and/or facing inventory corrections, including MobileEye, networking, and FPGAs. The biggest problem, though, continues to be the data center: AMD makes better CPUs on a better process (TSMC’s) and they continue to eat Intel’s lunch amongst the biggest cloud providers who, because they buy the most CPUs, are willing to do the work necessary to make the best performing chip work (this is why Intel’s on-premise and government business has long held up better). That direct competition is compounded by the secular pressure from ARM on one side and the diversion of spend to GPUs (primarily Nvidia, but also AMD) on the other.

Intel’s Progress

This is where being a semiconductor CEO is very difficult. Over the last few years Gelsinger has done exactly what needed to be done a decade earlier: he is transforming Intel into a foundry that serves external customers, and he is working to at least make Intel competitive again on the leading edge, and maybe even take the lead in a few years, if you believe Intel’s claims about its 18A process.

18A is the fifth of the fabled “five nodes in four years” that Gelsinger promised shortly after he took over, and it appears that he is pulling it off. Gelsinger summarized those five nodes in a Stratechery Interview last fall:

So Intel 7 is the last of the pre-EUV technologies. Intel 4, the first EUV technology for us, Intel 3 refined the final FinFET, really helped us take those learnings, but largely was a common architecture of transistor and process flow — really just the refinement. Much like you say, TSMC and others have done, get the initial one working and then refine it for scale manufacturing, that’s Intel 3. And given it’s the second generation of that, we’ll be applying that to our big server products, Granite Rapids, Sierra Forest, big die. We need to get down the learning curve with Meteor Lake, our first client partner. And then now with the big server die, and that’s also what we’re introducing on Intel 4, more so on Intel 3, a lot of the advanced packaging technologies come big into the technology footprint. Then the new transistor, the new backside power begins with 20A, and for that Arrow Lake is sort of the first, get it up and running small die, something easier to design and then when we get to 18A, the journey is done.

To summarize:

Transistor Lithography Power TSMC Equivalent
Intel 7 FinFET DUV Frontside N7 (~7nm)
Intel 4 FinFET EUV Frontside N5 (~5nm)
Intel 3 FinFET EUV Frontside N4 (~4nm)
Intel 20A RibbonFET EUV Frontside N3 (~3nm)
Intel 18A RibbonFET EUV Backside N2 (~2nm)

The TSMC equivalents are a bit fuzzy, particularly once you get into the future; TSMC CEO C.C. Wei has been adamant on the last couple of TSMC earnings calls that TSMC’s advanced 3nm process will outperform Intel’s 18A (Intel’s argument is that backside power will make chips much easier to design, since the power is separated from the communications layer, eliminating interference). What’s important, though, is that that is a question worth answering. Gelsinger said of 18A, which is the process that Intel is pushing hard to potential foundry customers, on the company’s earnings call:

We are first in the industry to have incorporated both gate-all-around and backside power delivery in a single process node, the latter unexpected two years ahead of our competition. Arrow Lake, our lead Intel 20A vehicle will launch this year. Intel 18A is expected to achieve manufacturing readiness in second half ’24, completing our five nodes in four year journey and bringing us back to process leadership. I am pleased to say that Clearwater Forest, our first Intel 18A part for servers has already gone into fab and Panther Lake for clients will be heading into Fab shortly…

Our success with IFS will be measured by customer commitments and revenue. We have taped out more than 75 ecosystem and customer test chips. IFS already has more than 50 test chips in the pipeline across 2024 and 2025, 75% of which are on Intel 18A. During CES, we welcomed the Valens Semiconductor to the growing list of foundry customers as they announced they would use IFS to fabricate their MIPI A-PHY chipsets using our advanced technology. In addition to the 3 Intel 18A customers we disclosed in Q3, we won a key design win with a significant high-performance computing customer. This customer was particularly motivated by our unique leading-edge manufacturing capabilities and U.S. capacity. We came into 2023 committing to one 18A foundry customer. We executed on four inclusive of a meaningful prepay and our momentum continues to grow.

The ultimate proof point for Gelsinger’s strategy will be chips designed by external customers, fabbed on Intel’s 18A process, running in devices in people’s pockets; nothing is assured until then. That, unfortunately, is the rub: there is no revenue until then either, and “then” is still a few years into the future. One wonders if Gelsinger will be there to enjoy the uplift that would only then be justified, at least from the perspective of Wall Street.

From my perspective — which, as exemplified by my disappointment with Krzanich despite Intel’s great stock returns during his tenure, is absolutely not stock-picking advice — he very much deserves the chance. Intel has the right strategy and seems to be executing; the challenge is that semiconductor cycles operate in something closer to decades than years, much less quarters.

Intel’s New Partner

So what does Intel do in the meantime? Last week also brought news of a very interesting new partnership that helps answer that question. From Nikkei:

Intel and Taiwan’s United Microelectronics Corp. (UMC) on Thursday announced a partnership that will lead to production in the U.S. state of Arizona by 2027, part of the American semiconductor company’s push to expand its business of making chips for others. The partners will develop relatively mature 12-nanometer technology, ideal in building chips for Bluetooth, Wi-Fi, microcontrollers, sensors and a range of other connectivity applications, but not for cutting-edge central processing units or graphics processors. Intel said the long-term agreement can leverage its U.S. manufacturing capacity and UMC’s extensive foundry experience in mature chip production technologies that serve a wide range of chip developers.

Based in the Taiwanese city of Hsinchu, UMC is a smaller peer of Taiwan Semiconductor Manufacturing Co., the world’s biggest contract chipmaker or foundry. UMC is the world’s third-largest contract chipmaker.

Last week I wrote about TSMC’s earnings and explained how TSMC has been forced to increasingly adopt the old Intel model, first in pricing, and then in its equipment usage:

The leading edge costs a lot of money to ramp up — N3 is lowering margins for now, as every new node does its first few years — but those costs are made up for by the ability to charge much higher prices. To that end N3 is already up to 15% of TSMC revenue, followed by 35% at N5, and 17% at N7.

This reality is not new for TSMC, but it is different than how the company has operated historically. TSMC started out as a foundry selling trailing edge chips; the primary way of making money over the long run was to build a fab relatively cheaply using established equipment, and then run that fab for many years. Once all of the equipment was depreciated, every chip produced was almost pure profit, even if the revenue on a per-chip basis was fairly low.

It was Intel, on the other hand, that charged the highest prices for the fastest chips, and all of its business was on the leading edge, selling its own chips; that meant that the company would take down old fabs and repurpose as much equipment as it could for the next node, instead of running the fab forever like a foundry would (this is one of Intel’s challenges in becoming a foundry: they simply don’t have much depreciated trailing edge capacity throwing off cash).

What is interesting to note is that TSMC’s shift to a more Intel-like model in terms of its revenue drivers (leading edge) and profit drivers (high prices) is starting to impact how they manage their fabs. CFO Wendell Huang said in his prepared remarks:

In addition, we have a strategy so that some of our N3 capacity can be supported by N5 tools given the strong multiyear demand. Such a plan will enable higher capital efficiency in the mid to long term, but requires cost and effort in the near term. Most of this conversion will occur in second half of 2024, and we expect it to dilute our gross margin by about 1 to 2 percentage points in second half of 2024.

Notice what is happening here: TSMC, unlike its historical pattern, is not keeping (all of its) 5nm capacity to make low-cost high-margin chips in fully-depreciated fabs; rather, it is going to repurpose some amount of equipment — probably as much as it can manage — to 3nm, which will allow it to expand its capacity without a commensurate increase in capital costs. This will both increase the profitability of 3nm and also recognizes the reality that is afflicting TSMC’s 7nm node: there is an increasingly large gap between the leading edge and “good enough” nodes for the vast majority of use cases.

This Intel-UMC deal represents the inverse of what is happening at TSMC: a viable foundry business can’t just rely on selling leading-edge chips at very high margins — particularly since Intel’s claims that it will regain process leadership remain to be seen. What is critical is having fully depreciated foundries still making chips: yes, those chips cost a lot less than the leading edge, but given that the marginal costs are practically zero (at least relative to the fixed costs) they are an important source of cash flow and profits, which can be re-invested in the leading edge. This deal is about capturing that depreciated cash flow.

Intel’s Needs

Intel, however, as I noted, only ever needed leading edge fabs — no one wants an old Intel chip when newer and faster ones are on the market. This was a big reason, of course, why Krzanich so badly missed The Intel Opportunity: in 2012 Intel was right in the middle of the FinFET-deep ultraviolet (DUV) lithography era of chip fabrication, but the end of both was already on the horizon in the form of RibbonFET (i.e. Gate-All-Around transistors) and extreme ultraviolet (EUV) lithography. Given the astronomical costs of EUV in particular it would have been reasonable to forecast then that there might develop a sweet spot making FinFET transistors with DUV, but Intel missed out on a decade of building up the capability to serve external customers.

This capability — or the lack thereof — remains one of the biggest questions around Intel’s foundry efforts. In 2022, when Intel tried to buy Tower Semiconductor, I wrote in an Update:

It is not only the case that Intel primarily makes its own designs, it also makes only digital chips (i.e. everything is a 1 or a 0). However, there is also an entire universe of analog chips, which can process gradations; this is essential for processing data from the physical world like sound, power, light, etc. Tower specializes in a whole host of specialized chips in the analog space; adding Tower’s capabilities to Intel Foundry Services (IFS) will make the latter much more of a true one stop shop for chip fabrication, matching the capabilities of TSMC or GlobalFoundries.

To me it is the GlobalFoundries angle that is the most interesting here: I have long been a proponent of Intel buying GlobalFoundries, despite the fact that GlobalFoundries isn’t a particularly great business, has given up on the leading edge process race, etc. My thinking has been that Intel can bring the capability (maybe) and willingness to invest in the leading edge, while GlobalFoundries can bring the breadth of capabilities and customer service orientation necessary to be a foundry. Sure, that’s expensive, but allowing IFS to be choked off by Intel’s integrated and not-invented-here culture would be even more expensive.

I suspect the Tower acquisition firmly closes the door on that possibility (which to be fair, was clearly remote). Here the calculus is much more straightforward: Tower brings certain capabilities and customer relationships that Intel believes it can scale up inside its factory network at a much lower cost than GlobalFoundries (whose current market cap is $29 billion), and it will be much easier to absorb and integrate into Intel’s business. The big question is whether or not integrating into Intel’s business is in fact the entire problem that needs to be avoided.

At Intel, manufacturing has always called the shots. The design side of the company had to accommodate the fabs, whether that be using their archaic design software, working around manufacturing challenges, or figuring out how to make a faster chip on recycled equipment. This made sense for a long time, but there was a cost: Intel designs stopped being innovative and became dependent on Intel’s manufacturing for performance; when Intel’s manufacturing prowess hit a wall Intel’s designs were exposed. Gelsinger told me:

So all of a sudden, as Warren Buffet says, “You don’t know who’s swimming naked until the tide goes out.” When the tide went out with the process technology, and hey, we were swimming naked, our designs were not competitive. So all of a sudden we realized, “Huh, the rising tide ain’t saving us. We don’t have leadership architecture anymore.” And you saw the exposure.

Indeed, we see the stock price!

That, though, was only part of Intel’s problem: the more fundamental issue is that a foundry is, as I wrote, a customer service organization: an entity like TSMC adapts to customers’ designs, not the other way around. They use industry standard design software. They have extensive libraries of IP that make designing a chip more akin to assembling a collection of Lego blocks. They ship when they say they will ship, and they run the fab for which a chip was designed forever.

Intel did none of these things, and had a mentality and culture that ran in the exact opposite direction: in a foundry, manufacturing is not king but a servant; customer sales is not about “take-it-or-leave-it” but “let us help you solve your problem.” I was — and frankly, remain — dubious about Intel’s ability to create that sort of culture internally, which is why I advocated for an acquisition, first of Global Foundries, and then of Tower. Thanks to its decade delay Intel didn’t have time to learn how to serve customers: it had rapidly obsoleting fabs that needed to be filled as soon as possible, if the company ever had hope of making enough cash to fund its push back to the leading edge.

Unfortunately China blocked the acquisition of Tower, in what I suspect was retaliation for U.S. restrictions on China. Worse, from what I have heard Intel responded by starting to sell a lot of old equipment at rock-bottom prices, which usually ended up in China; the fact of the matter is that the company needs cash.

Intel’s Humbling

Perhaps, though, the fire-sale is coming to an end: all of this context explains why this deal exists, and why I think it is a fantastic idea:

  • UMC, like GlobalFoundries before it, has struggled to keep pace with ever more expensive fabs. The company has a 14nm offering, but has shown little evidence it can or will go further, and the EUV transition seems completely out of the question. However, UMC does have a large foundry business, which is to say that UMC is a customer service organization, with the compatibility and IP necessary to succeed.
  • Intel, meanwhile, has a ton of capacity with FinFET and DUV processes. One of the costs of the company’s failure at 10nm and 7nm was that the company built extra 14nm fabs. A huge amount of that equipment, particularly the lithography, is not useful for the leading edge, but it is fully depreciated and could be used to build pretty fast chips for a lot less than the leading edge.

This deal — which is for a new, designed-for-external-customers 12nm process — brings together the two companies’ core capabilities: UMC is the customer service organization, and Intel is the manufacturer. Yes, that means lower revenue and margins for both, but both have already built the capabilities necessary to make the deal succeed, which means the business should be accretive to both revenue and profits for each of them.

The big question is how big of a market there is for fast-but-not-the-fastest chips: Intel is talking up things like communications chips, image sensing processors, etc., but a new process will require new design wins. Moreover, TSMC is counting on the same market for its 7nm process: that process should be faster, but it is also more difficult to make (7nm requires quad-patterning, while 12nm is dual patterning; this means easier designs, higher throughput, and better yields for the latter).

It is also, one might say, a bit humiliating: mighty Intel, which bestrode the tech industry for 50 years, the keepers of Moore’s Law, is making a deal with a Taiwanese also-ran, because it needs the help. That, though, is no insult: Intel needed some humbling, and this deal, more than any 18A design win or lofty promise about the AI PC, gives me hope that the company is in fact turning things around.


The Apple Vision Pro’s Missing Apps

This Article is available as a video essay on YouTube


Om Malik has been observing, writing about, and investing in technology for going on three decades; that’s one reason I find his unabashed enthusiasm for the Apple Vision Pro to be notable. Malik wrote on his blog:

Apple touts Vision Pro as a new canvas for productivity and a new way to play games. Maybe, maybe not. Just as the Apple Watch is primarily a health-related device that also does other things, including phone calls, text messages, and making payments. Similarly, the primary function for Vision Pro is ‘media’ — especially how we consume it on the go. Give it a few weeks, and more people will come to the same conclusion.

In 2019, I wrote an essay about the future of television (screen):

With that caveat, I think both, the big (TV) and biggest (movie theater) screens are going to go the way of the DVD. We could replace those with a singular, more personal screen — that will sit on our face. Yes, virtual reality headsets are essentially the television and theaters of the future. They aren’t good enough just yet — but can get better in the years to come as technologies to make the headsets improve.

Apple has made that headset. Apple Vision Pro has ultra-high-resolution displays that deliver more pixels than a 4K TV for each eye. This gives you a screen that feels 100 feet wide with support for HDR content. The audio experience is just spectacular. In time, Apple’s marketing machine will push the simple message — for $3,500, you get a full-blown replacement for a reference-quality home theater, which would typically cost ten times as much and require you to live in a McMansion.

Malik expounded on this point last week in a Stratechery Interview:

But the thing is you actually have to be mobile-native to actually appreciate something like this. So if you’ve grown up watching a 75-inch screen television, you probably would not really appreciate it as much. But if you are like me who’s been watching iPad for ten-plus years as my main video consumption device, this is the obvious next step. If you live in Asia, like you live in Taiwan, people don’t have big homes, they don’t have 85-inch screen televisions. Plus, you have six, seven, eight people living in the same house, they don’t get screen time to watch things so they watch everything on their phone. I think you see that behavior and you see this is going to be the iPod.

The iPod was a truly personal device, which was not only what people wanted, but also a great business: why sell one stereo to a household when you can sell an iPod to every individual? You can imagine Apple feeling the same about the long-term trajectory of the Vision Pro: why sell a TV that sits on the wall of the living room when you can sell every individual a TV of their own? You can be sure that Apple isn’t just marketing this device to people who live alone: the EyeSight feature only makes sense if you are wearing the Vision Pro around other people.

I already commented about the dystopian nature of this vision when the Vision Pro was announced; for now I’m interested in the business aspects of this vision, and the iPod is a good place to start.

The iPod and the Music Labels

The iPod story actually starts with the Mac, and Apple’s vision of a “Digital Hub.” The company released iMovie in 1999, iDVD and iTunes two years later, and iPhoto a year after that. The release order is interesting: Apple thought that home movies would be the big new market for PCs, but the emergence of Napster in 1999 made it clear that music was a much more interesting market (digital cameras, meanwhile, were only just becoming a thing). That laid the groundwork for the iPod, which was released in the fall of 2001. I documented this history in Apple and the Oak Tree and noted:

One of my favorite artifacts from the brief period between the introduction of iTunes and the release of the iPod was Apple’s “Rip. Mix. Burn.” advertising campaign.

Apple's Rip. Mix. Burn. iMac campaign

What is particularly amazing (that is, beyond the cringe-inducing television ad) is that Apple was arguably encouraging illegal behavior: it was likely legal to rip and probably legal to burn, presuming the CD that you made was for your own personal use. It certainly was not legal to share.

The iPod was predicated on the reality of file-sharing as well:

And yet, as much as “Rip. Mix. Burn.” may have walked the line of legality, the reality of iTunes — and the iPod that followed — was well on the other side of that line. Apple knew better than anyone that the iPod’s tagline — 1,000 songs in your pocket — was predicated on users having 1,000 digital songs, not via the laborious procedure of ripping legally purchased CDs, but rather via Napster and its progeny. By the spring of 2003 Apple had introduced the iTunes Music Store, a seamless and legal way to download DRM-protected digital music, but particularly in those early days the value of the iTunes Music Store to Apple was not so much that it was a selling point to consumers, but rather a means by which Apple could play dumb about how it was that its burgeoning number of iPod customers came to fill up their music libraries.

That description of the iTunes Music Store is perhaps a touch cynical, but it is impossible to ignore the importance of music piracy in Apple’s original deal with the record labels. Apple was able to make a deal in part because it was offering the carrot of increased digital revenue, but it was certainly aided by the stick of piracy obliterating CD sales.

The carrot and stick of the iTunes Music Store

Over the next few years the record labels would become increasingly resentful of Apple’s position in the market, but they certainly weren’t going anywhere; by 2008 iTunes was their biggest source of revenue, and it’s all but impossible for an ongoing business to give up revenue just because they think the arrangement under which they make that revenue is unfair.

The App Store

The iTunes Music Store does still exist, although its revenue contribution to the labels has long been eclipsed by streaming. It’s more important contribution to modern computing is that it provided the foundation for the App Store.

The App Store didn’t exist when Apple launched its iPhone in 2007; Apple provided a suite of apps that made the iPhone more capable than anything else on the market, and assumed the web would take care of the rest. Developers, though, wanted to build apps; in September 2007 Iconfactory released Twitterific, a Twitter client that ran on jail-broken iPhone devices, and more apps followed. The following year Apple gave its eager developers what they wanted: an officially supported SDK and an App Store to distribute their apps, for free or for pay; in the case of the latter Apple would, just as it did with songs, keep 30% of the purchase price (and cover processing fees).

This period of the App Store didn’t require any sticks: the capability of the iPhone was carrot enough, and, over the next few years, as the iPhone exploded in popularity, the market opportunity afforded by the App Store proved even more attractive. A better analogy to what Apple provided was gas for the fire, particularly with the release of in-app purchase capabilities in 2009. Now developers could offer free versions of their apps and convert consumers down the line, or sell consumables, a very profitable approach for games.

That, though, is where App Store innovation stopped, at least for a while. By 2013, when I started Stratechery, I was wondering Why Doesn’t Apple Enable Sustainable Businesses on the App Store?, by which I meant trials, paid updates, and built-in subscription support. The latter (along with associated trials) finally showed up in 2016, but at that point developer frustration with the App Store had been growing right alongside Apple’s services revenues: productivity apps shared my concerns about sustainability, while “reader” apps like streaming services were frustrated that they couldn’t sign up new users in the app, or even point them to the web; game developers, meanwhile, hated giving away 30% of their revenue.

It’s fair to note that an unacknowledged driver of much of this frustration was surely the fact that the app market matured from the heady days of the early App Store. No one is particularly worried about restrictions or missing capabilities or revenue shares when there is a landgrab for new users’ homescreens; by the end of the decade, though, mature businesses were locked in a zero sum game for user attention and dollars. In that environment the money Apple was taking, despite the fact the lack of flexibility entailed in terms of business model, was much more of an irritant; still, it’s all but impossible for an ongoing business to give up revenue just because they think the arrangement under which they make that revenue is unfair.

The Epic Case

I keep saying “all but impossible” because Epic is the exception that proved the rule: in August 2020 Epic updated Fortnite to include an alternative in-app purchase flow, was subsequently kicked out of the App Store by Apple, and proceeded to file an antitrust lawsuit against the iPhone maker. I documented this saga from beginning to end, including:

  • Apple, Epic, and the App Store, which provided a history of the App Store and Epic’s lawsuit at the time it was filed.
  • App Store Arguments, which I wrote at the conclusion of the trial, explained why I expected Epic to lose, even as I hoped that Apple would voluntarily make pro-developer changes in the App Store.
  • The Apple v. Epic Decision, which reviewed the judge’s decision that favored Apple in 10 of the 11 counts.

The 11th count that Epic prevailed on required Apple to allow developers to steer users to a website to make a purchase; while its implementation was delayed while both parties filed appeals, the lawsuit reached the end of the road last week when the Supreme Court denied certiorari. That meant that Apple had to allow steering, and the company did so in the most restrictive way possible: developers had to use an Apple-granted entitlement to put a link on one screen of their app, and pay Apple 27% of any conversions that happened on the developer’s website within 7 days of clicking said link.

Many developers were outraged, but the company’s tactics were exactly what I expected:

To that end, I wouldn’t be surprised if Apple does the same in this case: developers who steer users to their website may be required to provide auditable conversion numbers and give Apple 27%, and oh-by-the-way, they still have to include an in-app purchase flow (that costs 30% and includes payment processor fees and converts much better). In other words, nothing changes — unless it goes in the other direction: if Apple is going to go to the trouble to build out an auditing arm, then it could very well go after all of the revenue for everyone with an app in the App Store, whether they acquire a user through in-app purchase or not. The reason not to do so before was some combination of goodwill, questionable legality, and most importantly the sheer hassle of it all. At this point, though, it’s not clear if any of those will be deterrents going forward…

Apple has shown, again and again and again, that it is only going to give up App Store revenue kicking-and-screaming; indeed, the company has actually gone the other way, particularly with its crackdown over the last few years on apps that only sold subscriptions on the web (and didn’t include an in-app purchase as well). This is who Apple is, at least when it comes to the App Store.

The crackdown I’m referring to was pure stick: Apple refused to approve upgrades to SaaS apps that had been in the App Store for years unless they added in-app purchase; developers complained but this time the reality of it being impossible for an ongoing business to give up revenue meant they didn’t have any choice but to do extra work so that Apple could have a cut.

Vision Pro’s Missing Apps

The Apple Vision Pro started pre-sales last week, but the biggest surprise came via two stories from Bloomberg. First:

Netflix Inc. isn’t planning to launch an app for Apple Inc.’s upcoming Vision Pro headset, marking a high-profile snub of the new technology by the world’s biggest video subscription service. Rather than designing a Vision Pro app — or even just supporting its existing iPad app on the platform — Netflix is essentially taking a pass. The company, which competes with Apple in streaming, said in a statement that users interested in watching its content on the device can do so from the web.

Second:

Google’s YouTube and Spotify Technology SA, the world’s most popular video and music services, are joining Netflix Inc. in steering clear of Apple Inc.’s upcoming mixed-reality headset. YouTube said in a statement Thursday that it isn’t planning to launch a new app for the Apple Vision Pro, nor will it allow its longstanding iPad application to work on the device — at least, for now. YouTube, like Netflix, is recommending that customers use a web browser if they want to see its content: “YouTube users will be able to use YouTube in Safari on the Vision Pro at launch.” Spotify also isn’t currently planning a new app for visionOS — the Vision Pro’s operating system — and doesn’t expect to enable its iPad app to run on the device when it launches, according to a person familiar with matter. But the music service will still likely work from a web browser.

These are a big loss: Malik made the case about why the Vision Pro is the best TV ever, but it will launch without native access to the largest premium streaming service and the largest repository of online video period. I myself am very excited about the productivity use cases of the Vision Pro, which for me includes listening to music while I work; no Spotify makes that harder.

There are, to be sure, valid business reasons for all three services to have not built a native app; the latest prediction from Apple supply chain analyst Ming-Chi Kuo put first-year sales at around 500,000 units, which as a tiny percentage of these services’ user bases may not be worth the investment. Apple’s solution, though, is to simply use a pre-existing iPad app; that all three companies declined to do even that is notable. Nebula CEO Dave Wiskus observed on X:

The Apple Vision Pro app shelves will not be bare in terms of video content; the company says in a press release:

Users will also be able to download and stream TV shows, films, sports, and more with apps from top streaming services, including Disney+, ESPN, NBA, MLB, PGA Tour, Max, Discovery+, Amazon Prime Video, Paramount+, Peacock, Pluto TV, Tubi, Fubo, Crunchyroll, Red Bull TV, IMAX, TikTok, and the 2023 App Store Award-winning MUBI. Users can also watch popular online and streaming video using Safari and other browsers.

It’s not clear how many of these apps are truly native versus iPad apps with the Vision Pro check box, but the absence of Netflix and YouTube do stand out, and their absence is, without question, a total failure for Apple’s developer relations team.

The blame, though, likely goes to the App Store: Apple has been making Netflix in particular jump through hoops for years when it comes to precisely what language the service can or cannot present to customers who can’t sign up in the app, and also can’t be directed to the web. The current version’s language is fairly anondyne (although it has been spicier in the past):

Netflix's language in its iOS app for new users

Apple may be unhappy that Netflix viewers have to go to the Netflix website to watch the service on the Vision Pro (and thus can’t download shows for watching offline, like on a plane); Netflix might well point out that that going to the web is exactly what Apple makes Netflix customers do to sign up for the service.1

Developers On Strike

It’s certainly possible that I’m reading too much into these absences: maybe these three companies simply didn’t get enough Visions Pro to build a native app, and felt uncomfortable releasing their iPad versions without knowing how useful they would be. YouTube in particular, given that much of its usage is free, likely has less of a beef with Apple than Netflix or Spotify do, and it’s easy enough to believe that Google just isn’t a company that moves that fast these days.

Still, there’s no question that the biggest beneficiary of these companies being on the Vision Pro — and, correspondingly, the biggest loser from their absence — is Apple. The company is launching an audacious and ambitious new product, and there are major partners in its ecosystem that aren’t interested in helping.

This is the consequence of fashioning App Store policies as a stick: until there is a carrot of a massive user base, it’s hard to see why developers of any size would be particularly motivated to build experiences for the Vision Pro, which will make it that much more difficult to attract said massive user base. Apple was happy to remind users that, when it came to the iPhone, there’s an app for that; in the case of the Vision Pro, there may not be: this is the one and only chance for developers to go on strike without suffering an Epic-like fate, and some of them are taking it.

For now, Apple appears to be so supply-constrained that it doesn’t matter; the company will likely sell as many units as it can make. I would guess that Apple’s strategy with regards to developer hold-outs will be to wait them out, trusting that it can sell enough devices that developers can’t go on strike forever. I certainly think this approach is more likely than offering any sort of concessions to developers, on any of its platforms.

A Disney Double-Down?

The other option may be an even greater investment in content by Apple itself. This could take the form of more Apple TV+ shows and sports deals like MLS, but the most interesting possibility is deepening its partnership with Disney. The entertainment giant is looking for a tech partner to invest in its ESPN streaming service, and the Vision Pro makes Apple a compelling candidate. From an Update last summer:

What does seem notable was Iger’s call out of Apple’s headset; I can attest that the sports experience on the Vision Pro is extraordinary, and remember that Iger appeared on stage at the event to say that Disney would be working with Apple to bring content to the device; here is the sports portion of the video he played at WWDC:

I have to say, one almost gets the impression that the Apple Vision sports-watching experience might have single-handedly convinced Iger to keep ESPN! What does seem likely is that Apple is probably Iger’s preferred partner, and there certainly is upside for Apple — probably more upside than any other tech company — primarily because of the Vision Pro. The single most important factor in the Vision Pro’s success will likely be how quickly entertainment is built for it, and as Cook noted while introducing Iger, “The Walt Disney Company is the world’s leader in entertainment.”

I heard from a lot of people after that Update who were very skeptical that any sort of deal would be struck, in large part because Apple is so difficult to partner with (the company seems continually surprised that not everyone negotiates like the record labels under siege from Napster). And, it should be noted, Disney is showing up on Day One for the Vision Pro launch; why partner if the content is already there?

And yet, Apple’s most potent response to ecosystem intransigence may be to double down: Disney with a war chest (via an Apple partnership) would be a far more formidable competitor to Netflix, and ESPN with a VR camera at every game it televises would, in my estimation, make the Vision Pro an essential purchase for every sports fan. I once argued that Apple Should Buy Netflix the last time the two companies were at odds, but the weakness in that argument is that simply having money another company needs isn’t a compelling enough case; when it comes to Disney the payoff is the Apple Vision Pro having that much more great content that much sooner, not only making the headset a success but also making it impossible for other streaming businesses to not serve their customers just because they think the arrangement under which they operate is unfair.



  1. There is an exception for Netflix specifically: if you download a Netflix game you can sign up with in-app purchase, which the company would almost certainly prefer not to offer but, thanks to Apple’s aforementioned crack-down on SaaS app sign-ups, requires.