Political Chips

Last Friday AMD surpassed Intel in market capitalization:

Intel vs AMD market caps

This was the second time in history this happened — the first was earlier this year — and it may stick this time; AMD, in stark contrast to Intel, had stellar quarterly results. Both stocks are down in the face of a PC slump, but that is much worse news for Intel, given that they make worse chips.

It’s also not a fair comparison: AMD, thirteen years on from its spinout of Global Foundries, only designs chips; Intel both designs and manufactures them. It’s when you include AMD’s current manufacturing partner, TSMC, that Intel’s relative decline becomes particularly apparent:

Intel, AMD, and TSMC market caps

Of course an Intel partisan might argue that this comparison is unfair as well, because TSMC manufactures chips for a whole host of companies beyond AMD. That, though, is precisely Intel’s problem.

Intel’s Stumble

The late Clay Christensen, in his 2004 book Seeing What’s Next, predicted trouble for Intel:

Intel’s well-honed processes — which are almost unassailable competitive strengths in fights for undershot customers hungering for performance increases — might inhibit its ability to fight for customers clamoring for customized products. Its exacting manufacturing process could hamper its ability to deliver customized products. Its sales force could have difficulty adapting to a very different sales cycle. It would have to radically alter its marketing process. The VCE model predicts that operating “fast fabs” will be an attractively profitable point in the value chain in the future. The good news for IDMs such as IBM and Intel is that they own fabs. The bad news is that their fabs aren’t fast. Entrants without legacy processes could quite conceivably develop better proprietary processes that can rapidly deliver custom processors.

This sounds an awful lot like what happened over the ensuing years: one of TSMC’s big advantages is its customer service. Because the company was built as a pure-play foundry, it has developed processes and off-the-shelf building blocks that make it easy for partners to build custom chips. This was tremendously valuable, even if the resultant chips were slower than Intel’s.

What Christensen didn’t foresee was that Intel would lose the performance crown; rather, he assumed that performance would cease to be an important differentiator:

If history is any guide, motivated innovators will continue to do the seemingly impossible and find unanticipated ways to extend the life of Moore’s Law. Although there is much consternation that at some point Moore’s Law will run into intractable physical limits, the only thing we can predict for certain is that innovators will be motivated to figure out solutions.

But this does not address whether meeting Moore’s Law will continue to be paramount to success. Everyone always hopes for the emergence of new, unimagined applications. But the weight of history suggests the unimagined often remains just that; ultimately ever more demanding applications will stop appearing or will emerge much more slowly than anticipated. But even if new, high-end applications emerge, rocketing toward the technological frontier almost always leaves customers behind. And it is in those overshot tiers that disruptions take root.

How can we tell if customers are overshot? One signal is customers not using all of a product’s functionality. Can we see this? There are ever-growing populations of users who couldn’t care less about increases in processing power. The vast majority of consumers use their computers for word processing and e-mail. For this majority, high-end microprocessors such as Intel’s Itanium and Pentium 4 and AMD’s Athlon are clearly overkill. Windows XP runs just fine on a Pentium III microprocessor, which is roughly half as fast as the Pentium 4. This is a sign that customers may be overshot.

Obviously Christensen was wrong about a Pentium III being good enough, and not just because web pages suck; rather, the infinite malleability of software really has made it possible not just to create new kinds of applications but also to substantially rework previous analog solutions. Moreover, the need for more performance is actually accelerating with the rise of machine learning-based artificial intelligence.

Intel, despite being a chip manufacturer, understood the importance of software better than anyone. I explained in a Daily Update earlier this year how Pat Gelsinger, then a graduate student at Stanford, convinced Intel to stick with a CISC architecture because that gave the company a software advantage; from an oral history at the Computer History Museum:

Gelsinger: We had a mutual friend that found out that we had Mr. CISC working as a student of Mr. RISC, the commercial versus the university, the old versus the new, teacher versus student. We had public debates of John and Pat. And Bear Stearns had a big investor conference, a couple thousand people in the audience, and there was a public debate of RISC versus CISC at the time, of John versus Pat.

And I start laying out the dogma of instruction set compatibility, architectural coherence, how software always becomes the determinant of any computer architecture being developed. “Software follows instruction set. Instruction set follows Moore’s Law. And unless you’re 10X better and John, you’re not 10X better, you’re lucky if you’re 2X better, Moore’s Law will just swamp you over time because architectural compatibility becomes so dominant in the adoption of any new computer platform.” And this is when x86– there was no server x86. There’s no clouds at this point in time. And John and I got into this big public debate and it was so popular.

Brock: So the claim wasn’t that the CISC could beat the RISC or keep up to what exactly but the other overwhelming factors would make it the winner in the end.

Gelsinger: Exactly. The argument was based on three fundamental tenets. One is that the gap was dramatically overstated and it wasn’t an asymptotic gap. There was a complexity gap associated with it but you’re going to make it leap up and that the CISC architecture could continue to benefit from Moore’s Law. And that Moore’s Law would continue to carry that forward based on simple ones, number of transistors to attack the CISC problems, frequency of transistors. You’ve got performance for free. And if that gap was in a reasonable frame, you know, if it’s less than 2x, hey, in a Moore’s Law’s term that’s less than a process generation. And the process generation is two years long. So how long does it take you to develop new software, porting operating systems, creating optimized compilers? If it’s less than five years you’re doing extraordinary in building new software systems. So if that gap is less than five years I’m going to crush you John because you cannot possibly establish a new architectural framework for which I’m not going to beat you just based on Moore’s Law, and the natural aggregation of the computer architecture benefits that I can bring in a compatible machine. And, of course, I was right and he was wrong.

That last sentence needs a caveat: Gelsinger was right when it came to computers and servers, but not smartphones. There, performance wasn’t free, because manufacturers had to be cognizant of power consumption. More than cognizant, in fact — power usage was the overriding concern. Tony Fadell, who created the iPod and led the development of the first three generations of the iPhone, told me in an interview earlier this year:

You have to have that point of view of that every nanocoulomb is sacred and compatibility doesn’t matter, we’re going to use the best bits, but we’re not going to make sure it has to be the same look and feel. It doesn’t have to have the same principles that is designed for a laptop or a standalone desktop computer, and then bring those down to something that’s smaller form factor, and works within a certain envelope. You have to rethink all the principles. You might use the bits around, and put them together in different ways and use them differently. That’s okay. But your top concept has to be very, very different about what you’re building, why you’re building it, what you’re solving, and the needs of that new environment, which is mobile, and mobile at least for a day or longer for that battery life.

The key phrase there is “compatibility doesn’t matter”; Gelsinger’s argument for CISC over RISC rested on the idea that by the time you remade all of the software created for CISC, Intel would have long since overcome the performance delta between different architectures via its superior manufacturing, which would allow compatibility to trump the competition. Smartphones, though, provided a reason to build up the software layer from scratch, with efficiency, not performance, as the paramount goal.1
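Gelsinger’s back-of-the-envelope case can be sketched as a toy model. The specific numbers — a roughly 2x performance gap, two-year process generations, and five years to rebuild a software ecosystem — come from his argument above; the function and variable names are my own illustration, not anything from the oral history.

```python
import math

# Illustrative model of Gelsinger's arithmetic: Moore's Law doubles
# performance every process generation (~2 years), so a compatible
# architecture erases a rival's head start in log2(gap) generations.
def years_to_close_gap(initial_gap: float, generation_years: float = 2.0) -> float:
    """Years of Moore's Law doublings needed to erase a performance gap."""
    return generation_years * math.log2(initial_gap)

software_port_years = 5.0  # assumed time to port OSes, compilers, and apps

gap_closed = years_to_close_gap(2.0)  # a hypothetical 2x RISC advantage
print(f"2x gap erased in {gap_closed:.0f} years; "
      f"rival software ready in {software_port_years:.0f} years")
# The compatible incumbent catches up on raw performance years before
# the rival's incompatible software ecosystem is ready.
```

On these assumptions a 2x gap disappears in a single two-year process generation, long before the five years of software work pays off — which is exactly why compatibility, not peak performance, decided the PC era.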

All of this still fit in Christensen’s paradigm, I would note: foundries like TSMC and Samsung could accommodate new chip designs that prioritized efficiency over performance, just as Christensen predicted. What he didn’t foresee in 2004 was just how large the smartphone market would be. While there are a host of reasons why TSMC took the performance crown from Intel over the last five years, a major factor is scale: TSMC was making so many chips that it had the money and motivation to invest in Moore’s Law.

The most important decision was shifting to extreme ultraviolet lithography at a time when Intel thought it was much too expensive and difficult to implement; TSMC, backed by Apple’s commitment to buy the best chips it could make, committed to EUV in 2014, and delivered the first EUV-derived chips in 2019 for the iPhone.

Those EUV machines are made by one company — ASML. They’re worth more than Intel too (and Intel is a customer):

Intel, AMD, TSMC, and ASML market caps

The Dutch company, to an even greater degree than TSMC, is the only lithography maker that can afford to invest in the absolute cutting edge.

From Technology to Economics

In 2021’s Internet 3.0 and the Beginning of (Tech) History, I posited that the first era of the Internet was defined by technology, i.e. figuring out what was possible. Much of this technology, including standards like TCP/IP, DNS, and HTTP, was developed decades ago; this era culminated in the dot com bubble.

The second era of the Internet was about economics, specifically the unprecedented scale possible in a world of zero distribution costs.

Unlike the assumptions that undergird Internet 1.0, it turned out that the Internet does not disperse economic power but in fact centralizes it. This is what undergirds Aggregation Theory: when services compete without the constraints of geography or marginal costs, dominance is achieved by controlling demand, not supply, and winners take most.

Aggregators like Google and Facebook weren’t the only winners though; the smartphone market was so large that it could sustain a duopoly of two platforms with multi-sided networks of developers, users, and OEMs (in the case of Android; Apple was both OEM and platform provider for iOS). Meanwhile, public cloud providers could provide back-end servers for companies of all types, with scale economics that not only lowered costs and increased flexibility, but also justified far greater investment in R&D that was immediately deployable by said companies.

Chip manufacturing obviously has marginal costs, but the fixed costs are so much larger that the economics are not that dissimilar to software (indeed, this is why the venture capital industry, which originated to support semiconductor startups, so seamlessly transitioned to software); today TSMC et al invest billions of dollars into a single fab that generates millions of chips for decades.

That increase in scale is why a modular value chain ultimately outcompeted Intel’s integrated approach, and it’s why TSMC’s position seems so impregnable: sure, a chip designer like MediaTek might announce a partnership with Intel to perhaps produce some lower-end chips at some point in the future, but there is a reason the deal is neither a firm commitment nor for the leading edge. TSMC, for at least the next several years, will make the best chips, and because of that will have the most money to invest in what comes next.

Scale, though, is not the end of the story. Again from Internet 3.0 and the Beginning of (Tech) History:

This is why I suspect that Internet 2.0, despite its economic logic predicated on the technology undergirding the Internet, is not the end-state…After decades of developing the Internet and realizing its economic potential, the entire world is waking up to the reality that the Internet is not simply a new medium, but a new maker of reality…

To the extent the Internet is as meaningful a shift [as the printing press] — and I think it is! — is inversely correlated to how far along we are in the transformation that will follow — which is to say we have only gotten started. And, after last week, the world is awake to the stakes; politics — not economics — will decide, and be decided by, the Internet.

Time will tell if my contention that an increasing number of nations will push back against American Internet hegemony by developing their own less efficient but independent technological capabilities is correct; one could absolutely make the case that the U.S.’s head start is so overwhelming that attempts to undo Silicon Valley centralization won’t pan out anywhere other than China, where U.S. Internet companies have been blocked for a generation.

Chips, though, are very much entering the political era.

Politics and the End-State

Taiwan President Tsai Ing-wen shared, as one does, some pictures from lunch on social media:

Taiwan President Tsai Ing-wen's Facebook post featuring TSMC founder Morris Chang

The man with glasses and the red tie in the first picture is Morris Chang, the founder of TSMC; behind him is Mark Liu, TSMC’s chairman. They were the first guests listed in President Tsai’s write-up of the lunch with House Speaker Nancy Pelosi, which begins:


Taiwan and the United States not only share the values of democracy, freedom and human rights, but also continue to work together on economic development and democratic supply chains.

That sentence captures why Taiwan looms so large, not only on the occasion of Pelosi’s visit, but in world events for years to come. Yes, the United States supports Taiwan because of democracy, freedom and human rights; the biggest reason that support may one day entail aircraft carriers, though, is chips and TSMC. I wrote two years ago in Chips and Geopolitics:

The international status of Taiwan is, as they say, complicated. So, for that matter, are U.S.-China relations. These two things can and do overlap to make entirely new, even more complicated complications.

Geography is much more straightforward:

A map of the Pacific

Taiwan, you will note, is just off the coast of China. South Korea, home to Samsung, which also makes the highest end chips, although mostly for its own use, is just as close. The United States, meanwhile, is on the other side of the Pacific Ocean. There are advanced foundries in Oregon, New Mexico, and Arizona, but they are operated by Intel, and Intel makes chips for its own integrated use cases only.

The reason this matters is because chips matter for many use cases outside of PCs and servers — Intel’s focus — which is to say that TSMC matters. Nearly every piece of equipment these days, military or otherwise, has a processor inside. Some of these don’t require particularly high performance, and can be manufactured by fabs built years ago all over the U.S. and across the world; others, though, require the most advanced processes, which means they must be manufactured in Taiwan by TSMC.

This is a big problem if you are a U.S. military planner. Your job is not to figure out if there will ever be a war between the U.S. and China, but to plan for an eventuality you hope never occurs. And in that planning the fact that TSMC’s foundries — and Samsung’s — are within easy reach of Chinese missiles is a major issue.

China, meanwhile, is investing heavily in catching up, although Semiconductor Manufacturing International Corporation (SMIC), its Shanghai-based champion, only just started manufacturing on a 14nm process, years after TSMC, Samsung, and Intel. In the long run, though, the U.S. faced a scenario where China had its own chip supplier, even as it threatened the U.S.’s chip supply chain.

This reality is why I ultimately came down in support of the CHIPS Act, which passed Congress last week. I wrote in a Daily Update:

This is why Intel’s shift to being not simply an integrated device manufacturer but also a foundry is important: yes, it’s the right thing to do for Intel’s business, but it’s also good for the West if Intel can pull it off. That, by extension, is why I’m fine with the CHIPS bill favoring Intel…AMD, Qualcomm, Nvidia, et al, are doing just fine under the current system; they are drivers and beneficiaries of TSMC’s dominance in particular. The system is working! Which, to the point above, is precisely why Intel being helped disproportionately is in fact not a flaw but a feature: the goal should be to counteract the fundamental forces pushing manufacturing to geopolitically risky regions, and Intel is the only real conduit available to do that.

Time will tell if the CHIPS Act achieves its intended goals; the final version did, as I hoped, explicitly limit investment by recipients in China, which is already leading chip makers to rethink their investments. That this is warping the chip market is, in fact, the point: the structure of technology drives inexorably towards the most economically efficient outcomes, but the ultimate end state will increasingly be a matter of politics.

I wrote a follow-up to this Article in this Daily Update.

  1. As an example of how efficiency trumped performance, the first iPhone’s processor was actually underclocked — better battery life was more of a priority than faster performance.