Intel’s earnings are not until next Wednesday, but whatever it is that CEO Pat Gelsinger plans to discuss, it seems to me the real news about Intel came from the earnings of another company: TSMC.
From the Wall Street Journal:
Taiwan Semiconductor Manufacturing Co., the world’s largest contract chip maker, said it would increase its investment to boost production capacity by up to 47% this year from a year earlier as demand continues to surge amid a global chip crunch. TSMC said Thursday that it has set this year’s capital expenditure budget at $40 billion to $44 billion, a record high, compared with last year’s $30 billion.
Tim Culpan at Bloomberg described the massive capex figure as a “warning” to fellow chipmakers Intel and Samsung:
From a technology perspective, Samsung is the nearest rival. Yet a comparison is skewed by the fact that the South Korean company also makes display screens and puts most of its semiconductor spending toward commodity memory chips that TSMC doesn’t even bother to make.
Then there’s Intel, the U.S. would-be challenger that’s decided to join the foundry fray. In addition to manufacturing chips under its own brand, Intel Chief Executive Officer Pat Gelsinger last year decided he wants to take on TSMC and Samsung — and a handful of others — by offering to make them for external clients.
But Intel trails both of them in technology prowess, forcing the California company into the ironic position of relying on TSMC to produce its best chips. Gelsinger is confident that he can catch up. Maybe he will, but there’s no way the firm will be able to expand capacity and economies of scale to the point of being financially competitive.
It’s worse than that, actually: by becoming TSMC’s customer Intel is not only denying itself the scale of its own manufacturing needs, but also giving that scale to TSMC, improving the economics of their competitor in the process.
Gelsinger’s Design Tools
One of my favorite quotes from Michael Malone’s The Intel Trinity is about how “Moore’s Law” — the observation by Intel co-founder and second CEO Gordon Moore that transistor counts for integrated circuits doubled every two years — was not a law, but a choice:
[Moore’s Law] is a social compact, an agreement between the semiconductor industry and the rest of the world that the former will continue to strive to maintain the trajectory of the law as long as possible, and the latter will pay for the fruits of this breakneck pace. Moore’s Law has worked not because it is intrinsic to semiconductor technology. On the contrary, if tomorrow morning the world’s great chip companies were to agree to stop advancing the technology, Moore’s Law would be repealed by tomorrow evening, leaving the next few decades with the task of mopping up all of its implications.
Moore made that observation in 1965, and for the next 50 years that choice fell to Intel to make. One of the chief decision-makers was a young man in his 20s named Patrick Gelsinger. Gelsinger joined Intel straight out of high school, and worked on the team developing the 286 processor while studying electrical engineering at Stanford; he was the 4th lead for the 386 while completing his master’s. After he graduated Gelsinger became the lead of the 486 project; he was only 25.
Intel was, at this time, a fully integrated device manufacturer (IDM); while that term today refers to a company that designs and fabricates its own chips (in contrast to a company like Nvidia, which designs its own chips but doesn’t manufacture them, or TSMC, which manufactures chips but doesn’t design them), the level of integration has decreased over time as other companies have come to specialize in different parts of the manufacturing process. Back in the 1980s, though, Intel still had to figure out a lot of things for the first time, including how to actually design ever more microscopic chips. Gelsinger, along with three co-authors, described the problem in a 2012 paper entitled Coping with the Complexity of Microprocessor Design at Intel — a CAD History:
In his original 1965 paper, Gordon Moore expressed a concern that the growth rate he predicted may not be sustainable, because the requirement to define and design products at such a rapidly-growing complexity may not keep up with his predicted growth rate. However, the highly competitive business environment drove to fully exploit technology scaling. The number of available transistors doubled with every generation of process technology, which occurred roughly every two years. As shown in Table I, major architecture changes in microprocessors were occurring with a 4X increase of transistor count, approximately every second process generation. Intel’s microprocessor design teams had to come up with ways to keep pace with the size and scope of every new project.
| Processor | Intro Date | Process | Transistors | Frequency |
|---|---|---|---|---|
| 4004 | 1971 | 10 um | 2,300 | 108 KHz |
| 8080 | 1974 | 6 um | 6,000 | 2 MHz |
| 8086 | 1978 | 3 um | 29,000 | 10 MHz |
| 80286 | 1982 | 1.5 um | 134,000 | 12 MHz |
| 80386 | 1985 | 1.5 um | 275,000 | 16 MHz |
| Intel486 DX | 1989 | 1 um | 1.2 M | 33 MHz |
| Pentium | 1993 | 0.8 um | 3.1 M | 60 MHz |
This incredible growth rate could not be achieved by hiring an exponentially-growing number of design engineers. It was fulfilled by adopting new design methodologies and by introducing innovative design automation software at every processor generation. These methodologies and tools always applied principles of raising design abstraction, becoming increasingly precise in terms of circuit and parasitic modeling while simultaneously using ever-increasing levels of hierarchy, regularity, and automatic synthesis. As a rule, whenever a task became too painful to perform using the old methods, a new method and associated tool were conceived for solving the problem. This way, tools and design practices were evolving, always addressing the most labor-intensive task at hand. Naturally, the evolution of tools occurred bottom-up, from layout tools to circuit, logic, and architecture. Typically, at each abstraction level the verification problem was most painful, hence it was addressed first. The synthesis problem at that level was addressed much later.
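The cadence the paper describes can be sanity-checked against the table above with a short sketch. The figures below are taken from the table; the doubling-every-two-years model is the standard statement of Moore’s Law, not the paper’s exact data, so the fit is approximate by design:

```python
# Moore's Law as a simple model: transistor counts doubling every two
# years from the 4004 baseline, compared against the table's actual
# figures. (Illustrative sketch only; counts are from the table above.)

BASE_YEAR, BASE_COUNT = 1971, 2_300  # Intel 4004

def predicted_transistors(year: int) -> float:
    """Doubling every two years from the 4004 baseline."""
    return BASE_COUNT * 2 ** ((year - BASE_YEAR) / 2)

actual = {
    1971: 2_300,      # 4004
    1978: 29_000,     # 8086
    1985: 275_000,    # 80386
    1993: 3_100_000,  # Pentium
}

for year, count in actual.items():
    pred = predicted_transistors(year)
    print(f"{year}: actual {count:>9,} vs predicted {pred:>12,.0f}")
```

By 1993 the simple model predicts roughly 4.7 million transistors against the Pentium’s actual 3.1 million: right order of magnitude, which is the point of the “law as choice” framing, since hitting even that trajectory required the tooling investments described next.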
This feedback loop between design and implementation is exactly what is necessary at the cutting edge of innovation. Clayton Christensen explained in The Innovator’s Solution:
When there is a performance gap — when product functionality and reliability are not yet good enough to address the needs of customers in a given tier of the market — companies must compete by making the best possible products. In the race to do this, firms that build their products around proprietary, interdependent architectures enjoy an important competitive advantage against competitors whose product architectures are modular, because the standardization inherent in modularity takes too many degrees of design freedom away from engineers, and they cannot optimize performance.
This is what Gelsinger did with the 486; from the afore-linked paper:
While the 386 design heavily leveraged the logic design of the 286, the 486 was a more radical departure with the move to a fully pipelined design, the integration of a large floating point unit, and the introduction of the first on-chip cache – a whopping 8K byte cache which was a write through cache used for both code and data. Given that substantially less of the design was leveraged from prior designs and with the 4X increase in transistor counts, there was enormous pressure for yet another leap in design productivity. While we could have pursued simple increases in manpower, there were questions of the ability to afford them, find them, train them and then effectively manage a team that would have needed to be much greater than 100 people that eventually made up the 486 design team…For executing this visionary design flow, we needed to put together a CAD system which did not exist yet.
To make a new chip, Intel needed to make new tools, as part of an overall integrated effort that ran from design to manufacturing.
Fast forward three decades and Intel is no longer on the cutting edge; instead the leading chip manufacturer in the world is TSMC, a company built on the idea that it does not do design; Morris Chang told the Computer History Museum:
When I was at TI and General Instrument, I saw a lot of IC [Integrated Circuit] designers wanting to leave and set up their own business, but the only thing, or the biggest thing that stopped them from leaving those companies was that they couldn’t raise enough money to form their own company. Because at that time, it was thought that every company needed manufacturing, needed wafer manufacturing, and that was the most capital intensive part of a semiconductor company, of an IC company. And I saw all those people wanting to leave, but being stopped by the lack of ability to raise a lot of money to build a wafer fab. So I thought that maybe TSMC, a pure-play foundry, could remedy that. And as a result of us being able to remedy that then those designers would successfully form their own companies, and they will become our customers, and they will constitute a stable and growing market for us.
This was skating to Christensen’s puck; again from The Innovator’s Solution:
Overshooting does not mean that customers will no longer pay for improvements. It just means that the type of improvement for which they will pay a premium price will change. Once their requirements for functionality and reliability have been met, customers begin to redefine what is not good enough. What becomes not good enough is that customers can’t get exactly what they want exactly when they need it, as conveniently as possible. Customers become willing to pay premium prices for improved performance along this new trajectory of innovation in speed, convenience, and customization. When this happens, we say that the basis of competition in a tier of the market has changed.
TSMC was willing to build any chip that these new fabless companies could come up with; what is notable is that it was Gelsinger and Intel who made this modular approach possible, thanks to the tools they built for the 486. Again from that paper:
The combination of all these tools was stitched together into a system called RLS which was the first RTL-to-layout system ever employed in a major microprocessor development program…RLS succeeded because it combined the power of three essential ingredients:
- CMOS (which enabled the use of a cell library)
- A Hardware Description Language (providing a convenient input mechanism to capture design intent)
- Synthesis (which provided the automatic conversion from RTL to gates and layout)
This was the “magic and powerful triumvirate”. Each one of these elements alone could not revolutionize design productivity. A combination of all three was necessary! These three elements were later standardized and integrated by the EDA industry. This kind of system became the basis for all of the ASIC industry, and the common interface for the fabless semiconductor industry.
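To make the triumvirate concrete, here is a toy sketch — entirely hypothetical, and vastly simpler than the real RLS — of the core idea the quote describes: an “RTL”-style description of design intent, a small standard-cell library, and a “synthesis” pass that flattens the description into a gate-level netlist:

```python
# Toy sketch of RTL-to-gates synthesis (hypothetical; not the actual
# RLS system). "RTL" intent is a nested tuple; "synthesis" recursively
# maps it onto cells from a small library, wiring gates together.

CELL_LIBRARY = {"AND2", "OR2", "INV"}  # the available standard cells

def synthesize(expr, netlist):
    """Flatten an expression tree into a gate-level netlist.

    expr is either a signal name (str) or a tuple (gate, *inputs);
    each synthesized gate instance gets a fresh wire n1, n2, ...
    """
    if isinstance(expr, str):  # primary input signal
        return expr
    gate, *inputs = expr
    if gate not in CELL_LIBRARY:
        raise ValueError(f"no cell for {gate}")
    wires = [synthesize(i, netlist) for i in inputs]
    out = f"n{len(netlist) + 1}"  # one new wire per gate instance
    netlist.append((gate, wires, out))
    return out

# "RTL" intent for: out = (a AND b) OR (NOT c)
netlist = []
out = synthesize(("OR2", ("AND2", "a", "b"), ("INV", "c")), netlist)
for gate, ins, wire in netlist:
    print(f"{gate}({', '.join(ins)}) -> {wire}")
```

Real synthesis also handles timing, optimization, and layout; the point here is only the shape of the flow — design intent in, standard-cell gates out — which is exactly the interface that let fabless designers hand finished netlists to a foundry.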
The problem is that Intel, used to inventing its own tools and processes, gradually fell behind the curve on standardization; yes, the company had partnerships with EDA companies like Synopsys and Cadence, but most of the company’s work was done on its own homegrown tools, tuned to its own fabs. This made it very difficult to be an Intel Custom Foundry customer; worse, Intel itself was wasting time building tools that once were differentiators, but now were commodities.
This bit about tools isn’t new; Gelsinger announced Intel’s support for Synopsys and Cadence at last March’s announcement of Intel Foundry Services (IFS), and of course they did: Intel can’t expect to be a full-service foundry if it can’t use industry-standard design tools and IP libraries.
What I have come to appreciate, though, is exactly why Gelsinger announced IFS as only one part of Intel’s broader “IDM 2.0” strategy:
- Part 1 was Intel’s internal manufacturing of its own designs; this was basically IDM 1.0.
- Part 2 was Intel’s plan to use 3rd-party manufacturers like TSMC for its cutting edge products.
- Part 3 was Intel Foundry Services.
I had been calling for something like IFS since the beginning of Stratechery, and had even gone so far as to advocate a split of the company a month before Gelsinger’s presentation. “IDM 2.0” suggested that Intel wasn’t going to go quite that far — and understandably so, given the history of Part 1 — but the more I think about Part 2, and how it connects to the other pieces, I wonder if I was closer to the mark than I realized.
Microsoft and Intel
In 2018, when I traced the remarkable turnaround Satya Nadella had led at Microsoft in an article entitled The End of Windows, I noted that Nadella’s first public event was to announce Office on iPad. This was an effort begun years earlier under former CEO Steve Ballmer, but held back because the Windows Touch version wasn’t ready. That, though, was precisely why Nadella’s launch was meaningful: he was signaling to the rest of the company that Windows would no longer be the imperative for Office; that same week Nadella renamed Windows Azure to Microsoft Azure, sending the exact same message.
I thought of this timing when Gelsinger spoke about TSMC making chips for Intel; that too was an initiative launched under a predecessor (Bob Swan). The assumption made by nearly everyone, though, was that Intel’s partnership with TSMC would only be a stopgap while the company got its manufacturing house in order, such that it could compete directly; indeed, that is the assumption underlying the opening of this Article.
This, though, is why TSMC’s announcement about its increased capital expenditure was such a big deal: a major driver of that increase appears to be Intel, for whom TSMC is reportedly building a custom fab. From DigiTimes:
TSMC plans to have its new production site in the Baoshan area in Hsinchu, northern Taiwan make 3nm chips for Intel, according to industry sources. TSMC will have part of the site converted to 3nm process manufacturing, the sources said. The facilities of the site, dubbed P8 and P9, were originally designed for an R&D center for sub-3nm process technologies. The P8 and P9 of TSMC’s Baoshan site will be capable of each processing 20,000 wafers monthly, and will be dedicated to fulfilling Intel’s orders, the sources indicated.
TSMC intends to differentiate its chip production for Intel from that for Apple, and has therefore decided to separate its 3nm process fabrication lines dedicated to fulfilling orders from these two major clients, the sources noted. The moves are also to protect the customers’ respective confidential products, the sources said. Intel’s demand could be huge enough to persuade TSMC to modify the pure-play foundry’s manufacturing blueprints, the sources indicated. The pair’s partnership is also likely to be a long-term one, the sources said.
Intel and Microsoft are bound by history, of course, but obviously their businesses are at the opposite ends of the computing spectrum: Intel deals in atoms and Microsoft in bits. What was common to both, though, was an unshakeable belief in the foundations of their business model: for Microsoft, it was the leverage over not just the desktop, but also productivity and enterprise servers, delivered by Windows; for Intel it was the superiority of their manufacturing. Nadella, before he could change tactics or even strategy, had to shake the Windows hangover that had corrupted the culture; Gelsinger needed to do the same to Intel, which meant taking on his own factories.
Think about the EDA issue I explained above: it must have been a slog to cajole Intel’s engineers into abandoning their homegrown solutions in favor of industry standards when the only beneficiaries were potential foundry customers — that is one of the big reasons why Intel Custom Foundry failed previously. However, if Intel were to manufacture its chips with TSMC, then it would have no choice but to use industry standards. Moreover, just as Windows needed to learn to compete on its own merits, instead of expecting Office or Azure to prop it up, Intel’s factories, denied monopoly access to cutting edge x86 chips, will now have to compete with TSMC to earn not just 3rd-party business, but business from Intel’s own design team.
The Intel Split
When I wrote that Intel should be broken up I focused on incentives:
This is why Intel needs to be split in two. Yes, integrating design and manufacturing was the foundation of Intel’s moat for decades, but that integration has become a straitjacket for both sides of the business. Intel’s designs are held back by the company’s struggles in manufacturing, while its manufacturing has an incentive problem.
The key thing to understand about chips is that design has much higher margins; Nvidia, for example, has gross margins between 60~65%, while TSMC, which makes Nvidia’s chips, has gross margins closer to 50%. Intel has, as I noted above, traditionally had margins closer to Nvidia, thanks to its integration, which is why Intel’s own chips will always be a priority for its manufacturing arm. That will mean worse service for prospective customers, and less willingness to change its manufacturing approach to both accommodate customers and incorporate best-of-breed suppliers (lowering margins even further). There is also the matter of trust: would companies that compete with Intel be willing to share their designs with their competitor, particularly if that competitor is incentivized to prioritize its own business?
The only way to fix this incentive problem is to spin off Intel’s manufacturing business. Yes, it will take time to build out the customer service components necessary to work with third parties, not to mention the huge library of IP building blocks that make working with a company like TSMC (relatively) easy. But a standalone manufacturing business will have the most powerful incentive possible to make this transformation happen: the need to survive.
Intel is obviously not splitting up, but this TSMC investment sure makes it seem like Gelsinger recognizes the straitjacket Intel was in, and is doing everything possible to get out of it. To that end, it seems increasingly clear that the goal is to de-integrate Intel: Intel the design company is basically going fabless, giving its business to the best foundry in the world, whether or not that foundry is Intel; Intel the manufacturing company, meanwhile, has to earn its way (with exclusive access to x86 IP blocks as a carrot for hyperscalers building their own chips), including with Intel’s own CPUs.
Gelsinger’s Grovian Moment
It’s not clear that this will work, of course; indeed, it is incredibly risky, given just how expensive fabs are to build, and how critical it is that they operate at full capacity. Moreover, Intel is making TSMC stronger (while TSMC benefits from having another tentpole customer to compete with Apple). Given Intel’s performance over the last decade, though, it might have been more risky to stick with the status quo, in which Intel’s floundering fabs take down Intel’s design as well. In that regard it makes sense; in fact, one is reminded of the famous story of Moore and Gelsinger’s mentor, Andy Grove, deciding to get out of memory, which I wrote about in 2016:
Intel was founded as a memory company, and the company made its name by pioneering metal-oxide semiconductor technology in first SRAM and then in the first commercially available DRAM. It was memory that drove all of Intel’s initial revenue and profits, and the best employees and best manufacturing facilities were devoted to memory in adherence to Intel’s belief that memory was their “technology driver”, the product that made everything else — including their fledgling microprocessors — possible. As Grove wrote in Only the Paranoid Survive, “Our priorities were formed by our identity; after all, memories were us.”
The problem is that by the mid-1980s Japanese competitors were producing more reliable memory at lower costs (allegedly) backed by unlimited funding from the Japanese government, and Intel was struggling to compete…
Grove explained what happened next in Only the Paranoid Survive:
I remember a time in the middle of 1985, after this aimless wandering had been going on for almost a year. I was in my office with Intel’s chairman and CEO, Gordon Moore, and we were discussing our quandary. Our mood was downbeat. I looked out the window at the Ferris Wheel of the Great America amusement park revolving in the distance, then I turned back to Gordon and asked, “If we got kicked out and the board brought in a new CEO, what do you think he would do?” Gordon answered without hesitation, “He would get us out of memories.” I stared at him, numb, then said, “Why don’t you and I walk out the door, come back in and do it ourselves?”
Gelsinger was once thought to be next-in-line to be Intel’s CEO; he literally walked out the door in 2009 and for a decade Intel floundered under business types who couldn’t have dreamed of building the 486 or the tools that made it possible; now he has come back home, and is doing what must be done if Intel is to be both a great design company and a great manufacturing company: split them up.
I wrote a follow-up to this Article in this Daily Update.