An Interview with Intel CEO Pat Gelsinger About Intel’s Progress Towards Process Leadership

Good morning,

Today’s Stratechery Interview is with Intel CEO Pat Gelsinger. I previously spoke to Gelsinger in February 2022. At the time we discussed Intel’s ambitious IDM 2.0 strategy, including the audacious goal of shipping five nodes in four years. Two years on and Intel, after a brutal 12 months coming down off of COVID highs, is starting to look pretty interesting. The latest sign of progress came just this week in Taipei; from Nikkei:

U.S. chip group Intel is on track to deliver five upgrades to its advanced manufacturing process in four years, CEO Pat Gelsinger said on Tuesday as the company faces pressure to reassure PC and server-making clients that its technology will remain competitive. Speaking at Intel Innovation Day in Taipei, Gelsinger said the company’s most advanced chip design, the 18A, will move into the test production phase by the first quarter of 2024.

“For 18A, we have many test wafers coming out at this moment,” the CEO said. “The invention phase of the 18A is now complete, and now we’re racing to production.”

This production node represents Intel’s big bet to reclaim semiconductor manufacturing leadership by 2025. The company also announced it will use this production technology to make chips for outside customers such as Ericsson, instead of using it only for its own products.

I had the opportunity to speak with Gelsinger just minutes before he went on stage. We talked about Intel’s progress to date, why 18A is a big deal not just in terms of process size but also new transistor technology, and why the company thinks it can compete with TSMC. We also discussed the challenges Gelsinger has faced since our last interview, including mistakes that were made, problems that were fixed, and why Intel is better as one company; why Intel needs the CHIPS Act; and why AI provides an opportunity for growth.

To listen to this interview as a podcast, click the link at the top of this email to add Stratechery to your podcast player.

On to the interview:

An Interview with Intel CEO Pat Gelsinger About Intel’s Progress Towards Process Leadership

This interview is lightly edited for clarity.

Packaging

Pat Gelsinger, welcome to Taiwan, and welcome back to Stratechery.

Pat Gelsinger: Hey, always good to be with you, Ben, and I appreciate you and your work, so thank you so much.

I mean, I know you appreciate me because you flew here just to talk to me, so thank you for that.

PG: (laughing) Of course, why else would I come to Taiwan? There’s no other good reason to be here.

So when you were first on Stratechery two years ago, one of my first questions was actually about Meteor Lake specifically, which you've announced is going to be officially launched next month, and the rather remarkable fact that it included a TSMC-made chiplet. Now you're about to formally launch it and it has an Intel-made CPU, a TSMC-made GPU, a few other things going on, and a couple of things have changed since then. I think that slide actually said the TSMC chiplet would be N3, and now I believe that chiplet is going to be N5.

I’m curious today, how are you feeling about that? How are you feeling about the overall strategy? I mean, on one hand, do you wish you had made the GPU? What’s it been like being a customer of another foundry, and has that given you insight on what you need to do as the foundry going forward now that you’ve been on the other side?

PG: I think all of those aspects are true. One is some of the collateral, some of the other reasons that we went to TSMC as a foundry for some of the tiles at Meteor Lake, that’s been super helpful, and I think the design progressed more rapidly than it would’ve had we not used them, so clearly I think the design work was beneficial.

Second, as you and I talked about last time, this idea of chiplets I think is the new way that all chips get designed. The idea of advanced packaging, multiple chips into the advanced package, and whether that’s an MCP [multi-chip packaging], whether that’s a two-and-a-half or 3D construction, I do think that becomes the standard.

Well, tell me more about that. Why is advanced packaging the future? I know this has been a big focus for Intel, it's something you want to talk about, and from everything I know you have good reason to want to talk about it, your technology is leading the way. Why is that so important in addition to the traditional Moore's Law shrinking transistors, et cetera? Why do we need to start stacking these chiplets?

PG: Well, there’s about ten good reasons here, Ben.

Give me the top ones.

PG: I’ll give you the top few. One is, obviously, one of your last pieces talked about the economics of Moore’s Law on the leading edge node, well now you’re able to take the performance-sensitive transistors and move them to the leading edge node, but leverage some other technologies for other things, power delivery, graphics, IP sensitive, I/O sensitive, so you get to mix-and-match technologies more effectively this way.

Second, we can actually compose the chiplets to more appropriate die sizes to maximize defect density as well, and particularly we get to some of the bigger server chips, if you have a monster server die, well, you’re going to be dictated to be n-2, n-3, just because of the monster server die size. Now I get to carve up that server chip and leverage it more effectively on a 3D construct. So I get to move the advanced nodes for computing more rapidly and not be subject to some of the issues, defect density, early in the life of a new process technology. Additionally, we’re starting to run into some different scaling aspects of Moore’s Law as well.

Right.

PG: SRAMs in particular, SRAM scaling will become a bigger and bigger issue going forward. So I actually don't get a benefit by moving a lot of my cache to the next generation node like I do for logic, power, and performance. I actually want to have a 3D construct where I have lots of cache in a base die, and put the advanced computing on top of it in a 3D sandwich, and now you get the best of a cache architecture and the best of the next generation of Moore's Law, so it actually creates a much more effective architectural model in the future. Additionally, generally, you're struggling with the power, performance, and speed of light between chips.

Right. So how do you solve that with the chiplet when they’re no longer on the same die?

PG: Well, all of a sudden, in the chiplet construct, we’re going to be able to put tens of thousands of bond connections between different chiplets inside of an advanced package. So you’re going to be able to have very high bandwidth, low latency, low power consumption interfaces between chiplets. Racks become systems, systems become chips in this architecture, so it becomes actually a very natural scaling element as we look forward, and it also becomes very economic for design cycles. Hey, I can design a chiplet with this I/O.

You can use it for a long time.

PG: Right. And use it for multiple generations. So that was I think the top five, but I have many more.

That’s fine, we can stick with those. The yield advantages of doing smaller dedicated chiplets instead of these huge chips is super obvious, but are there increased yield challenges from putting in all these tens of thousands of bonds between the chips, or is it just a much simpler manufacturing problem that makes up for whatever challenges there might be otherwise?

PG: Clearly, there are challenges here, and that's another area where Intel actually has some quite unique advantages. One of these, we can do singulated die testing. Literally, we can carve up and do testing at the individual chiplet level before you actually get to a package, so you're able to take very high yielding chiplets into the rest of the manufacturing process. If you couldn't do that, you'd be subject to the compounding effect of defects across the individual dies, so you need to be able to have very high yielding individual chiplets, you need to be able to test those at temperature as well, so you really can produce a good part, and then you need a high yielding manufacturing process as you bring them into an advanced substrate.
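That compounding effect is worth spelling out: if you assemble untested chiplets, the package only works when every chiplet works, so their yields multiply; known-good-die testing removes those terms. A sketch with assumed numbers:

```python
# Why singulated (known-good-die) testing matters for multi-chiplet packages.
# If untested chiplets are assembled, package yield is the product of each
# chiplet's yield times the assembly yield; testing first removes those terms.
# All figures below are assumptions for illustration.

chiplet_yield = 0.90        # fraction of good dies per chiplet (assumed)
chiplets_per_package = 4
assembly_yield = 0.98       # yield of the bonding/packaging step (assumed)

untested = (chiplet_yield ** chiplets_per_package) * assembly_yield
known_good_die = assembly_yield  # only tested-good chiplets enter the package

print(f"Package yield, untested chiplets:    {untested:.1%}")        # ~64%
print(f"Package yield, known-good dies only: {known_good_die:.1%}")  # 98%
```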

Why is Intel differentiated in this? What is your advantage that you think is sustainable? You've talked about it already being the initial driver of your Foundry Services business.

PG: Yeah, it's a variety of things. We've been building big dies and servers for quite a while, so we have a lot of big die expertise. With our advanced packaging, with Foveros and EMIB (Embedded Multi-Die Interconnect Bridge) and now hybrid bonding and direct copper-to-copper interfaces, we see ourselves as years ahead of other technology providers in the industry. Also, as an integrated IDM, we were developing many of these testing techniques already. So we have unique testers that allow us to do many of these things in high yield production today at scale, and we're now building factories that allow us to do wafer-level assembly in a multi-die environment.

So it really brings together many of the things that Intel is doing as an IDM, now bringing it together in a heterogeneous environment where we're taking TSMC dies. We're going to be using other foundries in the industry, and we're standardizing that with UCIe. So I really see ourselves as the front end of this multi-chip chiplet world, doing so in the Intel way, standardizing it for the industry's participation with UCIe, and then just winning with a better technology.

You've mentioned that this is a hundreds-of-millions business versus a hundreds-of-billions business for the chips, but it's off to a strong start, and that implies some of your initial foundry customers for packaging are probably making their chips at TSMC and then bringing them to you for packaging. In the long run, you could imagine this going both ways — people make chips at Intel and package at TSMC as well, or do you feel your advantage in this specific area is such that you are going to dominate this layer, and chips may be spread out, but for packaging, basically, Intel is going to be the only game in town?

PG: I think it's going to go both ways, and clearly OSATs (Outsourced Semiconductor Assembly and Test), the lower end of assembly, test, and packaging, have generally been a lower margin, high volume game.

A commodity business, in some respects.

PG: Yeah, and at that level, I think part of what we need to do is enable that ecosystem over time. But I do think these advanced 3D wafer-level assembly and packaging techniques are an area that the industry will be moving to. We're going to become a major supplier in volume of those technologies, and as I've described it, it's a big on-ramp for my overall Foundry Services business.

Next Generation Transistors

So you’ve talked about this five nodes in four years, and I think that what’s notable is I think you’re introducing several technologies along the way, which helps explain why you’ve labeled it as five nodes, even if some of the nodes are arguably iterations of generations, or whatever it might be, which, by the way, TSMC does as well — I’m not sure how four nanometer exists when it was just a five nanometer process, but we’re not talking about them right now.

So correct me, just to make sure I have this right in my head, I have Intel 7 was where you needed to learn to use EUV effectively, Intel 4 brings this 3D packaging in, Intel 3 is Intel 4, but for Foundry customers, a more mass-market thing. Intel 20A is bringing in PowerVia, backside power, and RibbonFET transistors, and 18A is doing that at scale, for Foundry. Do I have the right moving pieces in my mind and why you’re calling them five distinct nodes?

PG: Yeah, you have it approximately correct. Intel 7, the last pre-EUV node for Intel.

Got it. Okay. So EUV is mostly in 4.

PG: Yeah, right. So that was one where, hey, we had a lot of challenges.

We’ve just got to get it out the door.

PG: Quad-patterning and so on, just had to gut it out and get it done, we had a lot of designs on it. So Intel 7 is the last of the pre-EUV technologies.

Intel 4, the first EUV technology for us. Intel 3 refined the final FinFET, really helped us take those learnings, but it was largely a common transistor architecture and process flow — really just the refinement. Much like you say TSMC and others have done, get the initial one working and then refine it for scale manufacturing, that's Intel 3. And given it's the second generation of that, we'll be applying that to our big server products, Granite Rapids, Sierra Forest, big die. We need to get down the learning curve with Meteor Lake, our first client part.

Don’t worry, I get the lakes mixed up too.

PG: And then now with the big server die, and that's also what we're introducing on Intel 4, more so on Intel 3, a lot of the advanced packaging technologies come into the technology footprint in a big way. Then the new transistor and the new backside power begin with 20A, and for that Arrow Lake is sort of the first: get it up and running on a smaller die, something easier to design, and then when we get to 18A, the journey is done.

Right. That’s the big goal. Walk me through backside power, which you’re calling PowerVia, correct?

PG: Yeah.

And then RibbonFET. Number one, explain them to me and to my readers. Number two, why are they so important? And number three, which one is more important? Are they inextricably linked?

PG: Yeah, PowerVia, let's start with the easier one first. Basically, when you look at a metal stack-up in a modern process, a leading edge technology might have fifteen to twenty metal layers. Metal one, metal two…

And the transistors all the way at the bottom.

PG: Right, and the transistors down here. So it's just an incredible skyscraper design. Well, the top layers of metal are almost entirely used for power delivery, so now you have to take signals and weave them up through this lattice. And then you want big, fat metals. Why do you want them fat? So you get good RC characteristics, you don't get inductance, and you're able to have low IR drop right across these big dies.

But then you get lots of interference.

PG: Yeah, and then they're screwing up the metal routing that you want for all of your signals. So the idea of taking them from the top and, using wafer-level assembly, moving them to the bottom is magic, right? It really is one of those things where, the first time I saw this laid out, as a former chip designer, I was like, "Hallelujah!", because now you're not struggling with a lot of the overall topology and die planning considerations, and you're going to get better metal characteristics because now I can make them really fat, really big, and right where I want them at the transistor, so this is really pretty powerful. And as we've done a lot of layout designs now, we get layout efficiency because all of my signal routing gets better, I'm able to make my power delivery and clock networks far more effective this way, and we get better IR characteristics. So less voltage drop, less guard banding requirements, so it ends up being performance and area and design efficiency, because the EDA tools —

Right. It just becomes much simpler.

PG: Everybody loves this. That’s PowerVia, and this is really an Intel innovation. The industry looked at this and said, “Wow, these guys are years ahead of anything else”, and now everybody else is racing to catch up, and so this is one where I say, “Man, we are ahead by years over the industry for backside power or PowerVia, and everybody’s racing to get their version up and running, and we’re already well underway and in our second and third generation of innovation here”.
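The "big fat metals" argument is easy to quantify with a toy IR-drop calculation; the rail dimensions and current below are illustrative assumptions rather than Intel process parameters, but they show why a thick backside rail loses far less voltage than a thin frontside one over the same run.

```python
# Toy IR-drop comparison: a thin frontside power rail versus a thick
# backside (PowerVia-style) rail over the same 100 micrometer run.
# Dimensions and current are assumptions for illustration, not process data.
RHO_CU = 1.7e-8  # resistivity of copper, ohm * meter

def rail_resistance(length_um: float, width_um: float, thickness_um: float) -> float:
    """R = rho * L / (W * T), with dimensions given in micrometers."""
    return RHO_CU * (length_um * 1e-6) / ((width_um * 1e-6) * (thickness_um * 1e-6))

current_a = 0.002  # 2 mA drawn through this rail segment (assumed)

thin_rail = rail_resistance(length_um=100, width_um=0.4, thickness_um=0.2)
thick_rail = rail_resistance(length_um=100, width_um=2.0, thickness_um=1.0)

print(f"Thin frontside rail drop: {current_a * thin_rail * 1000:.1f} mV")   # ~42 mV
print(f"Thick backside rail drop: {current_a * thick_rail * 1000:.1f} mV")  # ~1.7 mV
```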

On the transistor, that's the gate-all-around (GAA), or RibbonFET as we call it, our particular formulation of that. Samsung and TSMC have their variations of that, so I'll say on PowerVia, we're well ahead, while everybody's working on GAA, and you can say, "Why is Intel better?", well hey, when you've done every major transistor innovation for the last twenty-five years.

Right.

PG: The idea that we're going to bring out the next major transistor, like we did with high-k metal gate, like we did with strained silicon, like we did with the first FinFET transistors, we've done this before. We know what that's like, the innovative process to get it there, but I think that's a bit more of a horse race between us and the others in the industry as to who will really deliver gate-all-around at scale, and now with 20A and 18A, we're following that same formula. Get the first one running on an easier client part with 20A on Arrow Lake, and then get it into volume with 18A, and that's our server products, it's our client products, and it's going to be a major Foundry node for us.

It feels like you’ve, in many respects, cut everything to the bone and bet the company on 18A.

PG: Yeah.

Because of these technologies, is that a fair characterization?

PG: Betting the entire company? I don't know that I'd go all that way, but this is the biggest bet we have ever made as a company, because it also puts incredible stress on the financials of the company as well, because Intel 4, hey, we never really took it to high volume production until 3.

You’re doing these five nodes to move down the learning curve.

PG: Bam, bam, bam, so you’re racing through capital very rapidly, you’re driving the development teams very aggressively. “Oh, you just got your breather on getting Intel 4 into production. Okay, you got six weeks to get the next one up and running and ready for the qualification process to start and then we’re six months from right into 20A, and then six months later, right into 18A”. It’s an incredibly intense schedule, but we fell behind, we had to be intense, we had to really bet hard to make 18A the winner and every indication is, as I said, I’ve been looking at scanning electron microscope images for forty years, this is a work of art, Ben.

Taking On TSMC

Well, I mean it's super compelling. I think people forget, because they are stuck on the EUV transition, which was obviously a big deal in the last sort of decade or whatever, but before that the FinFET transition was pretty difficult and you're right, Intel had a pretty substantial advantage for quite a while there. So on one hand, this is super optimistic because if you can pull off all these transitions, both RibbonFET and backside power, then you are in, I do think, a very compelling position.

At the same time, Intel fumbled the last transition. What's different now? Why can customers, or the market, or whoever it might be, have confidence that Intel is actually not just going to catch up, but is leaping ahead in a way that its competitors are going to have a hard time matching?

PG: I do think that there’s merit in some of the skepticism. “Hey, you fumbled last time, what have you done now?” Well, I’ve changed the leadership, I’ve changed the development model, I’ve changed many of the people in it, I’ve thrown incredible amounts of capital at this to go give them what they need, run the wafers, et cetera, associated with it.

We also had a lot of pent-up innovation. We had congestion at the integration phase, but not at the invention phase, so it was sort of like we had our rich components research, and they were ready to bring these things forward, so while we were stalled in the productization, we weren’t stalled in the innovation cycle. So I had a candy store of ideas like PowerVia.

PowerVia is probably the big one there, yeah.

PG: Ready to go, right, and so they were ready to go forward. Now given enough capital, enough team, we were able to go quickly into it, and I’d also say the evidence is starting to mount. Hey, Intel 4, it’s in production, I’m ramping it, we’re going to launch it shortly. Intel 3, hey, we have the production steppings in fab now for Granite Rapids, out of fab for Sierra Forest, these are looking good. I’m demoing, I’m showing them, they’re going into a customer, so 4 and 3, there’s evidence building.

20A, we're now — I've demoed it publicly for the first time. 18A, we're getting good feedback from people like Arm who are designing their IP on it and saying, "Wow, this area and performance improvement that you get from this — pretty good." So you're getting third party affirmation and essentially, I'm not just betting the company on 18A, I'm now betting all of our products on 18A. As we take Clearwater Forest and Panther Lake, our first two major 2025 products, they are now going into the fab starting in Q1.

So I'll just say we're giving you more and more evidence to eliminate the skepticism, giving you facts, and as I like to say, transistors don't lie. Come in, build your part, run your test chips, and if you like the results, you're going to start to use us as a foundry.

TSMC has said that they feel pretty confident that even their N3P node is better than 18A in PPA: power, performance, and area. It's interesting because some of the reasons they cited were that it's technologically mature, it'll be in the market first, and it has a much better cost, and I'm interested in those vectors.

Is this an area where, again from your perspective (and 18A is not in the market yet, though you have signed some customers), particularly from that technological maturity perspective, you feel that maybe when we're getting down to 3 nanometer and 2 nanometer and whatever it might be, being technologically mature might not be the best thing? Intel was technologically mature in the 2010s, and that turned out to be a very poor idea.

PG: The way I think about this is, we created FinFET, and to say the last generation of FinFET is still pretty good? I agree. And hey, that's what we're doing with Intel 3, that's what they're doing with N3; that's the end of the fuel injection era, if you will. Hey, we're now moving to the next generation of engines, right? This is like the EV compared to — and on many characteristics, that's still a really good node. It's going to be the last of the FinFET generation, and I expect it to last for quite a while.

Right. You go back, I mean, one of the biggest nodes today is what? 28 nanometers, which is pre-FinFET itself.

PG: In no way do I think that just because we’ve now demonstrated 18A, we’ve given the first PDKs (Process Design Kit) for it, the world is going to say, “Oh, let’s stop doing all that 3 nanometer stuff and let’s move over here”, that’s not going to happen. But I am pretty dead set that we are going to capture major designs because everybody, when they finish their 3 nanometer designs, they’re going to say, “What’s next?”, and the combination of RibbonFET and PowerVia is proving to be very compelling. Compelling on area, compelling on performance, compelling on power capabilities.

Well, I'd say that to me what is compelling is the ease-of-design perspective, which seems particularly critical for a foundry; that's exactly where TSMC has been highly differentiated, and I think questions about Intel have come up. Even if your technology is there — I mean, you've been moving to standardized software for example, and I do have questions about building a customer service culture and things along those lines — but something that makes it easier to build a customer service culture is if your product is just much easier to work with because of something like backside power.

PG: Yeah, and that's what we're finding, and that's why I say the Arm example is a good one. Their designers have said, "Wow, this is really good, I get major area improvements", and they got a higher performance design, and it's easy to use. It's now supported with IP libraries and Synopsys and Cadence tool flows, people can use this. It is good.

Separating Design and Manufacturing

So what hasn’t changed quickly enough? Just to step back and look back over your two to three years, in our previous interview we talked about the importance of the Foundry business being separate from the product business. This is something that I was very anchored on looking at your announcement, and it’s why I was excited about Meteor Lake for example, because to me that was a forcing function for Intel to separate the manufacturing and design parts. At the same time, you are not actually unveiling a separate P&L for it until early next year. What took so long? Was that the area where maybe you actually were moving too slowly?

PG: Well, when you say something like separate the P&L, it’s sort of like Intel hasn’t done this in almost our 60-year history. The idea that we’re going to run fully separate operations with fully separate financials, with fully separate allocation systems at ERP and financial levels, I joke internally, Ben, that the ERP and finance systems of Intel were old when I left, that was thirteen years ago and we are rebuilding all of the corporate systems that we sedimented into IDM 1.0 over a five-decade period.

Tearing all of that apart into truly separate operational disciplines as a fabless company and as a foundry company, that's a lot of work. Do I wish it could have gone faster? Of course I do, but I wasn't naive enough to say, "Wow, I can go make this happen really fast." It was five decades of operational processes, and internally, by the time we publish the financials in Q1 of next year, we'll have gone through multiple quarters of trial-running those, and that'll be the first time that we present it to the Street that way.

As we talk to Foundry customers we're saying, "Come on in, let's show you what we're doing, test us." And with MediaTek, one of our early Foundry customers, it's "Hey, give us the feedback, give me the scorecard. How am I doing? What else do you need to see?", start giving us the NPS scores as a Foundry customer. There's a lot of work here. Yeah, I wish it would go faster, but no, I'm not disappointed that it's taken this long.

How much of a loss was Tower in this regard? Because I think one of the things that was attractive to me about that acquisition was bringing in leadership that was used to being a foundry, and even if they were more focused on the analog side of things, it was more about the mindset and mentality. Part of the reason for skepticism about Intel being a foundry is the culture; it used to be "our way or the highway", but now you have to have a different approach. Is that something that — obviously China never approved it, it is what it is — but is that something that you do look back on with regret?

PG: Yeah, I do. Obviously I thought that was going to be a good opportunity for us to bring thousands of people into the company who’ve been doing that and can start running up against the Intel Way, if you would, and helping us to transform more rapidly, that was part of the reason I wanted to do the acquisition. Obviously the whole US/China dynamic was something I underestimated when I started that.

Yeah, to me this has been the leading example of Chinese retaliation.

PG: And hey, we were just caught in purgatory, as I call it, and it was disappointing. Obviously we found a silver lining to that, since we're now bringing Tower into the New Mexico facility, and we took the breakup fee and turned it into a prepay and are building out that manufacturing line, so I'm still getting some benefit from it, but boy —

Not the cultural benefit to the extent you might have.

PG: Yeah, exactly. So I just misjudged that scenario.

That said, over the period of time, the 18 months since we began that acquisition, we've hired a lot of people with Foundry expertise and we've really built up our customer team, our EDA (electronic design automation) enablement team, our PDK standardization team, and obviously there's good news—

And you’ve been doing a lot of browbeating of the Intel side to tell them they need to listen?

PG: Oh my gosh, it goes on and on and on. And customers like MediaTek, hey, they give us customer report cards, “how are you doing”?

And that’s not about the finances necessarily.

PG: No, but this recent surge of the AI designs needing more advanced packaging, this has just been a great on-ramp accelerator to the overall Foundry transformation.

Why Intel Won’t Split

I do have some AI questions and we'll get to those in a moment, but just one more callback to our previous interview: you said at the time that if Intel wanted to separate out product from manufacturing then they hired the wrong CEO, they should have gotten some sort of private equity guy. If you were hired, you were going to do IDM 2.0.

In retrospect you were right to lock down, "This is what we're doing" — should you have done more to get the board onboard with the financial implications? "Maybe we should cut the dividend then, we have to be realistic about how much this is going to cost if we're going to actually do that"? Is that something that took you a little too long, changing the culture not just on the Intel floor but in the boardroom?

PG: Yeah, the board was very committed to the journey when I started. I wrote a strategy document, and I basically asked for all of them to unanimously hire me and unanimously agree to the strategy: "We're not going to have division, this is a hard journey." So everybody was bought into the IDM 2.0 strategy.

I'll say there were two things that I didn't judge properly when we began. I didn't anticipate the harsh economic cycle that would emerge on the other side of COVID, so because of that, hey, I should have started some of the cost savings initiatives and driven them more—

When the money was flowing in.

PG: Yeah, when I had more room to do so and the balance sheet and the cash flows to go do so. So in retrospect, I should have started that harder sooner.

Is there a bit though where it's a lot easier to make those cuts when it's super obvious to everyone that it needs to be done?

PG: Oh, it is.

Which is why it would've been good leadership to have done it earlier, if you could have pulled it off.

PG: Right, I could have done it earlier, and so that's one of the things. The other thing is I wanted to cut the dividend right away, but I didn't have agreement from my CFO. We immediately eliminated the stock buybacks that Intel was doing, so I got one done; I didn't get the other one done until we became much more acutely aware of the overall economic situation. It was much harder to come back to the shareholders and say, "After the fact, we're going to go now look at this as well." It would've been much easier for me to do that on day one, and I'd have another six billion on our balance sheet as a result.

So this sort of leads into the sense on Wall Street, or among investors, of "Man, I still wish Intel would separate the manufacturing from the product business". It's interesting because I sit here today and I have been impressed by your progress, I would say, and I think 18A is incredibly exciting for all the reasons you just articulated.

Given that, when you're in the semiconductor business, you are looking years or decades out. I consider it a feather in my cap that I came in and my third ever article on Stratechery was like, "Intel has to get into the foundry business, they're in huge trouble as an IDM". So I would say in some respects my perspective hasn't changed. I could make an argument that design and manufacturing do need to be split in the long run, not because manufacturing is an anchor on design, but because there is a scenario where design is becoming an anchor on manufacturing, because over the intervening ten years, you've had the rise of Arm in particular, not just on the client side; you have Qualcomm now coming out with chips they claim are very high performance, obviously you have Apple, but then also in the data center, Graviton and other efforts there.

Where are you at from the x86 perspective? I know that's your baby, but is there a situation where, because of these lost last ten years, there is going to be sort of a permanent scar, such that people might even be thinking about the entire business sort of backwards in terms of its long-term prospects?

PG: Yeah, so first, before we go to the other side of this, let me make two points that are super important to realizing why I think it's so critical to keep this together. The first is, what wafers am I going to be running in the factory for the rest of this decade? The vast majority of those are Intel product wafers.

You need volume to run a Foundry.

PG: Absolutely! And the reason that Ben Stratechery can’t go start a new Foundry and just say, “Hey, I’m going to go invest $50 billion in new capital, go create these process technologies, go start building those fabs” is because it takes you seven or eight years to fill those fabs. What wafers are you going to run, Ben? One is you just need volume, you need capacity to go down the learning curve, but you also need those volumes to create the cash flows to fund those factories as well.

So because of that, there just are fundamental business reasons why we need to keep those very coupled. It's the wafers, it is the cash flow, but it is also the technology drivers that you have. They're customer zero as we're going through these designs, and those are powerful cycles; even if, by the end of the decade, I well exceed my expectations for how many wafers I'm going to run for Foundry customers, the majority of my wafers at the end of the decade are still Intel business unit wafers. So that coupling is very powerful and very important, and it's the only way that I can build a world-class, at-volume foundry, either financially or technically.

Now, on the other side of it, though, I have to create clean separation between these two businesses and that’s what the internal Foundry model is all about, because I need to be able to go to Qualcomm or AMD or—

And legitimately claim you are at the same level as our internal customer.

PG: Yeah, that’s right. I’m going to run your wafers, we commit to the T’s and C’s (terms and conditions) in your capacity corridor, those are your wafers.

And we’re not going to ask you to change your design to accommodate our design over here.

PG: So we have to get a lot more standard, a lot more EDA-enabled, a lot more IP-enabled, and T’s and C’s and a culture of obsessive customer focus, and I have to build that.

By the way, the more I build that the happier my internal businesses are because they’re benefiting from those standardized PDKs, improved design IP capabilities, they’re not singularly carrying the burden of innovating every technology as well. So I really believe it becomes a positive reinforcing cycle and this takes seven, eight years to build this kind of business model.

So for anybody to say, "Hey Pat, two-and-a-half years into your journey, you don't have enough Foundry customers yet", it's sort of like, duh, this takes a while to build up, and it took fifty years to get where we are. In two-and-a-half years, creating this level of separation, starting to see the Foundry customers emerge, yeah, I feel pretty good.

Fixing Intel Design

What about x86?

PG: On the other side, we also have to say that whatever configuration of transistors you want to put on those Foundry nodes is okay with me. I've commented on the good relationship we're seeing with Arm; hey, I want to be the best partner for Arm going forward. And the majority of logic foundry wafers are Arm-based today.

Well, I think part of the cultural shift is making — you basically just said this — but making your design teams understand they don’t get to leverage the manufacturing to maintain differentiation, they have to compete on their own terms.

PG: Absolutely, they're a fabless company as we go forward. And against that, they have to fight to deliver the best product. Why is Arm interesting in the data center? It's only because of power and TCO (Total Cost of Ownership). So build a better power and TCO x86 chip — and a chip like Sierra Forest is pretty compelling that way, where we're bringing many more cores, much more efficiency, and a voltage and power focus. The only reason many of these alternatives emerged is because we weren't doing our jobs on the product side to deliver the best products, unquestioned performance.

Yeah, that’s part of the question, how much of the failure — everyone’s talked about the manufacturing — but it does feel like there were a lot of product failures along the way as well.

PG: The product teams could screw up and still win if the process was ahead, and we have tons—

And so there was probably a lot of sort of latitude and laziness that built up over three decades.

PG: And hey we had times where Intel was the unquestioned architectural leader in the industry. When we made the transition to out-of-order execution in the P6.

Right.

PG: Everybody said, "Man, not only did they have the best transistors, they have the best architecture as well". When we made the move to the second generation of speculative execution with the Core design, it was the best power-performance chip in the industry.

That was actually how you responded to the last wake-up call with Athlon.

PG: It was the best architecture in the industry, so at different periods of time, we've had the best design, tick-tock execution, the most disciplined model; we've had the best architecture at different times. But through it all, we've had the best transistors, until five years ago. So all of a sudden, this higher tide for Intel was gone, and we weren't executing on design.

You were exposed.

PG: So all of a sudden, as Warren Buffett says, "You don't know who's swimming naked until the tide goes out." When the tide went out with the process technology, hey, we were swimming naked; our designs were not competitive. So all of a sudden we realized, "Huh, the rising tide ain't saving us. We don't have leadership architecture anymore." And you saw the exposure.

Is that a harder problem to fix than building a bunch of fabs? I mean, building a fab sounds very difficult, but have you lost too much talent in design? What’s the issue?

PG: I feel like at this point, are we world-class on process technology today? We’re getting there, we’re no longer broken. Are we world-class on design technology and architecture today? No, but we’re also no longer broken. Granite Rapids, Sierra Forest, Meteor Lake, the demonstrations of Arrow Lake, Lunar Lake, sending Clearwater Forest into fab in Q1 and Panther Lake, the 2025 products. People are saying, “Oh, these guys are executing on or ahead of schedule on the design side as well.” I still have a lot of work to do to get them back to world-class and unquestioned architecture leadership, but we’re no longer broken.

Intel and the CHIPS Act

Can Intel pull this off without help? I mean, you’ve pushed vigorously, I would say, for the CHIPS Act and there was actually just a story in the Wall Street Journal, I saw it as I was driving in, that said Intel is the leading candidate for money for a national defense focused foundry, a secure enclave I think they called it, potentially in Arizona. But you mentioned the money aspect of being a foundry, and you have to be the first customer, but you’re the first customer with an also-threatened business that has— you talked about your earnings, you’re not filling your fabs currently as it is, and you don’t have trailing edge fabs spinning off cash to do this, you don’t have a customer base. Is this a situation where, “Look, if the US wants process leadership, we admit we screwed up, but we need help”?

PG: There are two things here. One is, hey, yeah, we realize that our business and our balance sheet and cash flows are not where they need to be. At the same time, there's a fundamental economic disadvantage to building in the US or Europe, and the ecosystem that has emerged here (in Taiwan) is lower cost.

Right. Which TSMC could tell you.

PG: Right. And hey, you look at some of the press that's come out around their choice of building in the US, there are grave concerns on their part about some of those cost gaps. The CHIPS Act is designed to close those cost gaps, and I'm not asking for handouts by any means, but I'm saying for me to economically build major manufacturing in the US and Europe, those cost gaps must be closed, because if I'm going to plunk down $30 billion for a major new manufacturing facility and, out of the gate, I'm at a 30%, 40% cost disadvantage —

Even without the customer acquisition challenges or whatever it might be.

PG: At that point, no shareholder should look at me and say, "Please build more in the US or Europe." They should say, "Well, move to Asia where the ecosystem is more mature and it's more cost-effective to build." That's what the CHIPS Act was about: if we want balanced, resilient supply chains, we must close that economic gap so that we can build in the US and Europe as we have been. And trust me, I am fixing our issues, but otherwise, I should go build in Asia as well, and I don't think that's the right thing for the world. We need balanced, resilient supply chains across the Americas, Europe, and Asia, so that this most important resource is delivered through supply chains around the world. That's what the CHIPS Act was about.

I am concerned though. My big concern, just to put my cards on the table, is the trailing edge, where it's basically Taiwan and China, and obviously China has its own issues, but if Taiwan were taken off the map, suddenly we're back to part of what motivated the CHIPS Act in the first place: we couldn't get chips for cars. Those are not 18A chips, and maybe those will go into self-driving cars, I don't want to muddy the waters, but that's an issue where there's no economic case to build a trailing edge fab today. Isn't that a better use of government resources?

PG: Well, I disagree that that's a better use of resources, but I also don't think the leading edge is the singular use of resources. And let me tease that apart a little bit. The first thing would be, how many 28 nanometer fabs should I be building new today?

Economically, zero.

PG: Right, yeah, and I should be building zero economically in Asia as well.

Right. But China is going to because at least they can.

PG: Exactly. The economics are being contorted by export policy, not because it's a good economic investment.

Right. And that’s my big concern about this policy, which is if China actually approaches this problem rationally, they should flood the market like the Japanese did in memory 40 years ago.

PG: For older nodes.

For older nodes, that’s right.

PG: Yeah, because that's what they're able to go do, and that does concern me as well. At the same time, as we go forward, how many people are going to be doing major new designs on 28 nanometer? Well, no. They're going to be looking at 12 nanometer, and then they're going to be looking at 7 nanometer, and eventually they will be moving their designs forward, and since it takes seven years for one of these new facilities to be built, come online, and become fully operational at that scale, let's not shoot behind the duck.

And so your sense is that you are going to keep all these 12, 14 nanometer fabs online, they’re going to be fully depreciated. Even if there was a time period where it felt like 20 nanometer was a tipping point as far as economics, a fully depreciated 14 nanometer fab—

PG: And I'm going to be capturing more of that because, even in our fab network, I have a whole lot of 10 nanometer capacity. I'm going to fill that with something, I promise you, and it's going to be deals like we just did with Tower. We're going to do other things to fill in those fabs as well, because the depreciated assets will be filled. I'm going to run those factories forever from my perspective, and I'll find good technologies to fill them.

AI and the Edge

Let's talk about AI. I know we're running short on time, but here's the question: I feel like AI is a great thing for Intel, despite the fact everyone is thinking about it being GPU-centric. On one hand, Nvidia is supply constrained and so you're getting wins. I mean, you said Gaudi is supply constrained, and Gaudi is not necessarily as fast as an Nvidia chip, I think it's safe to say. But I think the bull case, and you articulated this in your earnings call, is AI moving to the edge. Tell me that case and why it's a good thing for Intel.

PG: Well, first I do think AI moves to the edge, and there are two reasons for that. One is, how many people build weather models? How many people use weather models? That's training versus inference, and the game will be in inference. How do we use AI models over time? That'll be the case in the cloud, that'll be the case in the data center, but we see AI usage, versus AI training, becoming the dominant workload as we go into next year and beyond. The excitement of building your own model versus, "Okay, now we've built it. Now what do we do with it?"

And why does Intel win that as opposed to GPUs?

PG: For that then you say, in the data center, "Hey, we're going to add AI capabilities." And now Gen 4, Sapphire Rapids, is a really pretty good inferencing machine, as you just saw announced by Naver in Korea. The economics there: I don't now have to port my application, you get good AI performance on the portion of the workload where you're inferencing, but you have all the benefits of the software ecosystem for the whole application.

But importantly, I think edge and client AI is governed by the three laws. The laws of economics: it is cheaper to do it on the client versus in the cloud. The laws of physics: it is faster to do it on the client versus round-tripping your data to the cloud. And the third is the laws of the land: do I have data privacy? So for those three reasons, I believe there's going to be this push of inferencing to the edge and to the client, and that's where I think the action comes. That's why Meteor Lake and the AI PC is something—

Do you think though it's competitive today, or is this an issue where, because of Nvidia's supply constraints, you have time and space to catch up such that you're good enough by the time GPUs are cheap?

PG: For the big high end, hey, Gaudi is a good chip today. I ran some of the ML performance benchmarks today versus H100s and A100s, we’re starting to show up.

I have on my somewhat skeptical face, but continue.

PG: Hey, they win a lot, I win some. It just means we are now starting to show up, and I'm a lot cheaper than they are, so I win the TCO at scale.

You’re cheaper and you’re more available. So those are two good advantages.

PG: So availability, starting to get good performance, and certainly the market wants an alternative. Xeon is going to be widely deployed, and they're all going to be AI-enhanced. And then for client and edge, Xeons at the edge are going to be very dominant. We have all of our Xeon Ds and those products, and they'll get AI-enhanced. And then Meteor Lake, Lunar Lake, Arrow Lake, Panther Lake, we have a very robust roadmap on the client side. So all of that taken together, AI is a workload and we are going to make it available everywhere, and that's what our overall strategy's about. Compete at the high end, deliver it in Xeon, but make it available at the client and edge.

Well, it's been a couple years since we talked. I have to say I've been pretty impressed by the progress, but in two years it'll be very interesting to look back, and I think by then we'll see whether you made it or not.

PG: Yeah. Did we make it happen? Hey, I said it was a five-year journey, we’re halfway through the journey and I feel like we’re over halfway done with the assignment.

Thank you for coming on.

PG: Thank you.


This Daily Update Interview is also available as a podcast. To receive it in your podcast player, visit Stratechery.

The Daily Update is intended for a single recipient, but occasional forwarding is totally fine! If you would like to order multiple subscriptions for your team with a group discount (minimum 5), please contact me directly.

Thanks for being a supporter, and have a great day!