An Interview with Marc Andreessen about AI and How You Change the World

Good morning,

Today’s Stratechery Interview — and the last Interview of the summer — is with Marc Andreessen. Andreessen hardly needs an introduction: while at the University of Illinois, Andreessen co-created the Mosaic web browser, which became the foundation of Netscape, the defining Internet 1.0 company. Today Andreessen is best known as the co-founder of the Andreessen Horowitz venture capital firm, a disruptive entity in its own right with its services approach to venture and its massive fund sizes. Andreessen is arguably most notable, though, as a public intellectual of sorts. In this role he has authored three seminal essays: Why Software Is Eating the World, It’s Time to Build, and, just a few weeks ago, Why AI Will Save the World.

In this extensive interview, recorded last week, we discuss all of the above, with the most time spent on Andreessen’s essays, particularly the recent one on AI, although we touch on some a16z-related topics towards the end. As I note in the interview, I do generally agree with Andreessen’s view on AI; I try to push him to flesh out a few of his arguments, but I recognize that those efforts may not be fully satisfactory to some of you. What I do hope is that this interview, like all Stratechery content, is thought-provoking and interesting — and it is free to share.

To listen to this interview as a podcast, click the link at the top of this email to add Stratechery to your podcast player.

On to the interview:

An Interview with Marc Andreessen about AI and How You Change the World

This interview is lightly edited for clarity.

Public Intellectual Marc

Marc Andreessen, welcome to Stratechery.

Marc Andreessen: Hey, thanks for having me.

When it comes to Stratechery Interviews I have mostly shied away from interviewing VCs [Venture Capitalists]. I have nothing against the profession, very fine people, but I do find the tendency to talk one’s book hard to resist. I have to be honest, though, you present an entirely different conundrum, which is that you seem to have opinions about pretty much everything, and pretty much all of them, or at least your pugnacious approach to arguing them, seem to make some people mad. In full disclosure, your only request about this interview was that we not discuss partisan politics, which was very easy for me to grant because I don’t talk about partisan politics on Stratechery either. But it sometimes feels that partisan politics would be the safest topic for us to discuss. Why are you so controversial?

MA: (laughing) First of all, I can’t believe that this podcast has started with such rank bigotry.

Very fine people. Very fine people, VCs are.

MA: Anti-VC! How would you feel if your daughter brought home a VC and told you that she was going to marry him? What would you say?

Oh, I think that’s fantastic. Good for her. I am pro-VC! I’m just wary of VCs on my podcast because I’m not sure if they’re going to actually be completely — again, the whole talking-the-book phenomenon. If I want someone to talk their book, I want a founder to talk their book, because they’re all-in, they have no choice but to talk it. I’m talking about interviews only.

MA: I was going to say, none of your other guests talk their book, yes, exactly right, everybody else is pure as the driven snow. Okay.

Well, to be fair, the reality is that with the AI stuff, I’ve softened it. I’ve had Nat [Friedman] and Daniel [Gross] on a few times already, who are so committed to the VC bit that they have actually bought a supercomputer that they’re offering to their startups, so the policy is fading very quickly, to say the least.

MA: Got it. The other good news with me is that we’re regulated, we’re actually an RIA [Registered Investment Advisor] regulated entity, so I go out of my way to not bring up specific companies if I can possibly avoid it.

That’s good.

MA: Hopefully we’ll have a good discussion today. Anyway, please proceed.

I’m actually quite curious about this shift to Public Intellectual Marc; it’s not even a shift, it’s been the case for a long time. I think the context for us to talk now is your recent essay about AI, which we will certainly get to, but from my perspective this goes back to 2011’s Why Software Is Eating the World. Is that where you feel you started to adopt this role, or was this role thrust upon you? I’m just curious, from a meta perspective: if you were to step back and look at this version of Marc, where did it come from? Was it on purpose? Was it by accident? What’s your perspective on that?

MA: Look, I think there’s a lot of explanations, but I think the big thing is that it turns out tech is really important, and my whole career, basically for the first fifteen or twenty years, was this wall of skepticism from the outside world that anything that I, or people like me, were working on was going to be important, was actually going to matter, and the typical mode of attack against us, or dismissal of us, was, “Oh, this is just a bunch of trivial nonsense.” Social media is just an app where you post what your cat had for breakfast, who would possibly ever care about that?

There was a famous case actually where a certain highly regarded, let’s say, peer of yours did this tech roundup in, was it 2008 or 2009? Facebook had walked away from a billion-dollar acquisition by Yahoo, and this person, who’s still a very public commentator today, wrote, “Mark Zuckerberg and his silly app should have taken that offer and he should have run away as fast as his little flip-flops could carry him”, and so that was the typical line of attack.

In the last basically ten years — and look, to a certain extent the tech industry here is the dog that caught the bus, so I’m not even saying that this is unfair in its totality, but it just turns out tech really matters and it really is central to many things in the lives of many people. And so as a consequence, the line of attack has become the opposite, which is, it’s pure evil, it’s absolutely terrible, it’s absolutely awful.

Look, frankly, I don’t think either attack was correct, but at the same time, I think we have a responsibility to explain ourselves, what we do is complicated, it gets into all kinds of technical aspects that are hard to wrap your head around from the outside. There are implications for the broader society and the economy and everything else, and I think it’s a good idea for us to generally be forthright and try to explain ourselves.

But you come and go, it seems, as far as wanting to embrace that responsibility.

MA: Yes.

Sometimes you’re all over Twitter. I think you started a Substack a few weeks ago that didn’t seem to last too long.

MA: Yes.

But when you come and go, is that just an are-you-feeling-up-for-it-today thing? Or is it in response to things that you see going on?

MA: It’s partially I experiment on how to engage, and I’ve never quite figured out a stable state, and part of the technology for engagement itself keeps changing, and so I experiment a lot. Part of it is like I have responsibilities. One of the interesting asymmetries in the current media environment is the people who don’t have any responsibilities are basically free to say whatever they want no matter how inflammatory or incorrect or whatever the downstream consequences are from saying those things. People with responsibilities, which can be people running companies, investing in companies, on the boards of companies, have a set of responsibilities, a set of consequences to what we say that have to do with the teams that we work with, the companies that we work with, have to do with the fact that we’re regulated, and so I feel like I also have to line up the phases of the moon, where what I have to say is also something that’s palatable for the people who rely on me and that varies over time.

You’ve made moves to change the structure of reporting; you had the Future publication thing and you have your podcast series. But it seems to me, as someone who obviously has a relevant interest in the space, the big challenge is that anyone who is going to argue in favor of technology and has the capability of understanding the technology is probably going to work in a tech company. Absent my very circuitous background, and the fact that I grew up in a small town in Wisconsin and didn’t even have access to a computer or know anything about tech for ages, it didn’t even occur to me to go work in tech, and in retrospect, if I had followed a normal path out of college, there’s no way I’d be where I am right now. Is that just inherent to the game when there’s so much money to be earned in tech? If you’re smart and you understand it, you can without question do better working for a tech company than you could working as a journalist. Is that just inherent, or is there anything that can be done about it in the long run, from your perspective?

MA: I think there’s something to that, but look, I also think that as you well know, the whole media landscape is changing completely outside of tech as well. For all the issues with tech and tech media, it’s just a minor sub-segment of this broader shift in how media works and in how public discourse works in public opinion formation and public debate happens. That media landscape I would argue has been changing for a very long time, it’s certainly been going through a high rate of change, I would argue even pre the Internet with the arrival of talk radio and then cable television. But certainly the Internet has accelerated that change so I think the whole landscape is changing.

There’s this endless tension on the media side: if you ask journalists what their mission in life is, they’ll basically usually tell you two things, which is to objectively report the truth number one, and then number two, speak truth to power. Or they sometimes say, “Comfort the afflicted and afflict the comfortable.” There’s an inherent tension between those two poles because are you trying to equally represent all sides or are you trying to specifically take a stand, presumably on behalf of the people against power? I would characterize it as the entire media broadly is trying to navigate its way through that question of goals and then trying to navigate its way through what have been profound changes in technology in the media landscape.

A lot of those changes have been caused by tech. Like I say, we in many ways are the dog that caught the bus in this industry, which is to say we did cause some of these changes to happen, so the system has to work through those changes. Does the system ever stabilize again in the same way that it had in the 1950s or 1960s? I think probably not, and so as a consequence, I think the whole landscape probably will stay unsettled for a very long time. I don’t know what it’ll look like in a decade, but I think people like me are going to have to adapt just like everybody else.

That was very gracious to the media, and it helps that I agree with you. I wrote an article called Never-Ending Niches where I just casually lumped myself in with the New York Times, as in, “We’re all on the same plane”. And that was a low-level — what would Tyler Cowen say? — Straussian reading of that piece, but I think it is the reality on the Internet: everything is equal, for better or worse, when it comes to consumer attention.

Software and the Physical World

I did want to ask one quick question about that article, Why Software Is Eating the World. The focus of that seemed to be that we’re not in a bubble, which obviously in 2011 turned out to be very true. I wrote an article in 2015 saying we’re not in a bubble. That also turned out to be very true. By 2021, 2022, okay, maybe, but you missed a lot of upside in the meantime, to say the least!

However, there’s one bit in that article where you talk about Borders giving Amazon its e-commerce business, and then you talk about how Amazon is actually a software company. That was certainly true at the time, but I think you can make the case — and I have — that Amazon.com in particular is increasingly a logistics company that is very much rooted in the real world, with a moat that cost billions of dollars to build, a real-world moat you can’t really compete with: they can compete anyone out of business in the long run by dropping prices and covering their marginal costs. Now, that doesn’t defeat your point, all of that is enabled by software and their dominant position came from software, but do you think there is a bit where a physical moat still means more, or is Amazon just an exception to every rule?

MA: You can flip that on its head, and you can basically observe that the legacy car companies make that same argument that you’re making as to why they’ll inevitably crush Tesla. Car company CEOs have made this argument to me directly for many years, which is, “Oh, you Californians, it’s nice and cute that you’re doing all this stuff with software, but you don’t understand, the car industry is about the real world. It’s about atoms and it’s about steel and it’s about glass and rubber and it’s about cars that have to last for 200,000 miles and have to function in the snow.” They usually point out, “You guys test your electric self-driving cars in the California weather, wait till you have a car on the road in Detroit. It’s just a matter of time before you software people come to the realization that you’re describing for Amazon, which is that this is a real-world business, and the software is nice, but it’s just a part of it, and this real-world stuff is what really matters.”

There’s some truth to that. Look, the global auto industry in totality still sells a lot more cars than Tesla. Absolutely everything you’re saying about Amazon logistics is correct, but I would still maintain that over the long run the opposite is still true, and I would describe it as follows: Amazon, notwithstanding all of their logistics expertise and so forth, they’re still the best software company. Apple, notwithstanding all of their manufacturing prowess and industrial design and all the rest of it, they’re still the best, or one of the two best, mobile software companies. Then of course Tesla, we’re sitting here today, and Tesla I think today is still worth more than the rest of the global auto industry combined in terms of market cap, and I think the broad public investor base is looking forward and saying, “Okay, the best software company is in fact going to win.” Then of course you drive the different cars and you’re like, “Okay, obviously the Tesla is just a fundamentally different experience as a consequence of quite literally being now a self-driving car run by software.”

I would still hold to the strong form of what I said in that essay, which is that in the long run, the best software companies win. Part of the problem is, it’s really hard to compete with great software using mediocre software, because there comes a time when the software really matters and the fundamental form and shape of the thing that you’re dealing with fundamentally changes. You know this: are you going to use the video recorder app on your smartphone, which is software, or are you going to use an old-fashioned camcorder that comes with a 600-page instruction manual and has 50 buttons on it? At some point the software wins, and I would still maintain that that is what will happen in many markets.

Building and COVID

If I’d say there’s the Big Three of Andreessen essays over the last decade, 2020’s It’s Time to Build I would put at number two. That was written in April 2020, and it bemoaned the failure of Western institutions to deal with the pandemic. When I wrote about the piece a week later, I noted that I agreed with your sentiments because I had expressed similar frustration in a piece I had written called Compaq and Coronavirus.

The reason this is interesting to me is that a few months ago, I was looking back at the COVID-era pieces that I wrote, and I thought some of them hold up very well, and some of them do not hold up well at all; I think one of the ones that does not hold up well at all is this “Compaq and Coronavirus” one. Looking back, nearly everything every country did in response to COVID, from China to the US to Taiwan to everywhere, was probably, if you truly added up all the costs and benefits, a net negative. The vaccines are the obvious exception, but to be honest, the tally has yet to be fully accounted for if the cost of the vaccines was skepticism about other vaccines, leading to a resurgence of long-suppressed diseases; I said we wouldn’t get into partisan politics, but there’s a relevant angle going on with that right now in terms of a certain presidential candidate.

None of this changes my overall agreement with your thesis about the need to build, but I’m curious how the last three years have changed your perspective from April 2020 about how we can actually build. Just what’s changed if you were to write that article today?

MA: You don’t mind getting the COVID flag on your videos? Is that what I’m learning here?

No problem at all. Look, the reality is, as we put it up top, you are a controversial figure, so let’s dump all the controversy in one episode here. I can’t deny that for me personally, the response to COVID, and not just the response in the US — I’m not in Taiwan right now, but I spent most of that time in Taiwan — fundamentally changed and shaped the way I view lots of things, and I think it would be dishonest to not say that’s the case, so let’s have it out.

MA: So my piece, It’s Time to Build, was sparked by a specific moment, which had to do with what they called at the time PPE, personal protective equipment, which was the physical gear that medical professionals were wearing to protect themselves from COVID. The idea was you’d have all these patients showing up in the hospitals, and then there’s this big risk of infecting the healthcare workers, and so the healthcare workers had to be wearing masks, and specifically at that time not just masks but also hospital gowns and surgical gowns. At that moment it looked like the lack of physical protective equipment was going to be basically just a crushing blow for the healthcare system and for people in the healthcare system, because they wouldn’t be able to protect themselves from COVID.

The thing that sparked my just enormous frustration to write that piece was New York City, which in theory is the number one city in the world, literally ran out of PPE and was putting out a call for people to drop off their rain ponchos at local hospitals for healthcare workers to wear. Look, what do we know today? What we do know today is COVID was an airborne virus and in practice, probably none of that mattered. Probably those gowns did nothing to protect you against COVID, probably the masks did nothing, there’s a whole long debate about masks…

Hey, maybe the ponchos worked.

MA: Maybe the ponchos worked, but the problem is the ponchos — unless you’re wearing the poncho over your head and it’s airtight and you’ve got a scuba tank under there to breathe out of — it turned out COVID is an airborne virus, and it turns out airborne viruses basically pass right through protective material. It was actually interesting, because that was still a moment — I’ll let myself off the hook a little bit — in which it was not clear to what extent it was an airborne virus as opposed to a so-called droplet-based, physically transmitted virus. That was also still the moment where we were all still afraid that it was going to pass through touch, and there was going to be a risk that you were actually going to touch it and basically put your hands to your face and get infected.

Look, what we know today is, it’s an airborne virus, none of that mattered, all that mattered was basically airflow, and this is where maybe I kicked myself a little bit, which is I actually used to be a — I’m not a doctor or anything — but I used to be on the board of a hospital, and I saw all the procedures and protocols that hospitals use to control airborne viruses up close through that experience, and it turns out hospitals have a protocol for dealing with airborne viruses, and it’s called a negative pressure room.

We went through SARS-1 in Taiwan where that was really the key to getting through that. SARS-1 is obviously much less infectious than SARS-2 ended up being, but that’s the only way to deal with it.

MA: So basically what we learned, by the summer of 2020, was that COVID is an airborne virus, and we learned that basically the only way to contain an airborne virus is a negative pressure room. Negative pressure rooms are very rare and very expensive, even high-end hospitals may only have one or two of them, and nobody in practice tried to contain COVID by having patients isolated in a negative pressure room, and so the rest of all this stuff, including the whole masking thing, all of it basically was a total waste of time. The shock that I had, just observing all this and trying to process the information, was that the science became clear in the summer, and certainly by the fall of 2020, that everything I just described was the case, and yet the public discussion and all of the policies and all of the epistemology around how to process what was happening didn’t change.

There are still stickers on the floors of stores asking you to space yourself out.

MA: Exactly right. Scott Gottlieb has written a book where he goes through in detail where the six-foot thing came from, and the six-foot thing was basically a made-up thing; even the medical people at the time said it has to be at least ten feet. Then it turns out the ten-feet thing was wrong too, because MIT did this study showing droplets actually spread by 25 feet, and so you would’ve needed 25 feet. The original study that produced the six-to-ten-foot range was from the 1920s.

It wasn’t even droplets.

MA: Then, by the way, there’s this whole fight that goes back a hundred years as to droplet versus airborne transmission and entire careers have been built around this difference.

I remember because it came to Taiwan first, so we’re in January, and I had a flight at the end of January, and I went to 7-Eleven, this was before the Wuhan news had spread widely, but I’d heard about it through the grapevine, and bought masks for the flight. Then a week later, when the Wuhan news had spread, I sheepishly put in an Update like, “I bought masks, but it turns out, ha ha, everyone says they’re not actually useful for this sort of thing”, and I had a link to these articles and stuff. It was wild; no need to rehash history, but the way that stuff veered all over the place was pretty amazing in retrospect.

MA: Well, I’ll give you just one example. One example, so there was an argument in public health world before everything became crazy. There was an argument basically that encouraging people to wear masks was a bad idea because masks don’t actually protect you against an airborne virus. That, by the way, includes N95s which don’t protect you against an airborne virus unless you literally have them watertight, which no civilian knows how to do. It’s another thing I learned from the hospital board I was on, you have to actually fit these things, then you have to water test them to see if they’re tight and then by the way, even still airborne viruses can generally pass through an N95 because you’re breathing air through the mask. Anyway, even after it was clear that masks didn’t work, there was this argument that basically said we shouldn’t encourage people to wear masks because if they wear masks, they’re going to think they’re more protected than they are.

And if you remember, there was this phase during the lockdowns, it was like, “You can go to the grocery store once a week, but you have to wear a mask.” And it’s like, “Well, if the mask isn’t protecting you, and you’re actually trying to stop the spread, you shouldn’t be sending people into the grocery store”, so there was that argument. There was another argument before everything went crazy, which basically said school shutdowns are a big mistake, because the problem with school shutdowns is they drive everybody to stay home. The problem with everybody staying home is that if the parents are working there’s nobody to take care of the kids, and so you drive the grandparents into the family home, and then the kids are more likely to infect the grandparents, and the grandparents are where the real risk is because of age and poor health.

So it’s actually funny, they actually had all these debates and discussions at the time, this is sort of between February and April 2020, on the public health side. Then basically after April 2020, and in particular after the President at the time kind of said everybody’s going to be back in church by Easter, the storm shutters came down, and basically epistemic engagement, as far as I can tell, went to zero. So in terms of radicalizing moments, in terms of thinking through basically who we can trust to actually tell us the truth on things, I would agree that I had a similar radicalizing event.

AI, Intelligence, and Hysteria

Well, the reason to dive into that is I do think that actually takes us to your most recent article, Why AI Will Save the World. Let’s start with this: who is the target audience for this? I mean, a cynical interpretation is that, well, to take this full circle to my alleged bias, which does not exist—

MA: Rank bigotry!

(laughing) Hey, as I noted at the top you are a VC, you have investments, but there’s also Public Intellectual Marc that we sort of explored, or is this some combination of the two? Number one, what were you thinking when you wrote that? Number two, has your idea of who you can even persuade sort of shifted over time?

MA: Yeah, so a big part of the new essay is just, simply, look, like I said, I think we have a responsibility to explain ourselves, and the “we” here is not just me and my firm: the tech industry has an obligation to explain what it’s doing, and this AI thing is a big deal. It would be great if there were a thriving third-party industry that could explain everything in a very clear way; you do a lot of this, and other people do this. But frankly, that industry’s a little frayed at the moment, so I think we just have to explain it.

Then also, look, it’s complicated stuff, because it gets very technical, and the field is changing very fast. Almost every day there’s some breakthrough in the field that makes my jaw drop to the floor. If I, in my position, with everything that I see and the people I get to talk to, am surprised every day, then it makes sense that people on the outside are going to wonder what’s going on, so I thought it was generally important.

Then there was a specific frustration, and this is, again, kind of very similar to the moment that I had in 2020 with the rain ponchos. There’s this specific thing which is there is this sort of anti-AI movement underway. From my perspective it’s this very kind of a-technical, a-scientific, kind of hysterical thing that’s playing out. By the way, in some ways it actually has interesting psychological similarities to some of the COVID hysteria. So there’s just a lot of hysterical freakout happening right now.

By the way, even the hysterical freakout might be fine because people should have whatever opinions they want, but the hysterical freakout has arrived in Washington, and it’s arrived in Brussels, and it’s arriving in capitals of countries all over the world right now. Right now, the hysterics have a much bigger voice than I would say people who are calmer and maybe a little bit more dispassionate about things. The demands for regulation and laws under conditions of hysteria have kind of reached the point where I felt like I couldn’t just let that continue.

It’s interesting because I mentioned it in passing, this analogy to 2020, and I completely agree: there was a tendency to catastrophize everything, to imagine the worst possible scenario, and to assume that was the case. You had to be proven off that point, as opposed to a Bayesian approach, where you start with what is probably true and shift from that position as the evidence comes in.
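[A quick illustration of the Bayesian framing gestured at here: you start from a base-rate prior and shift it in proportion to the evidence, rather than starting from the worst case and demanding to be argued off it. The numbers in this Python sketch are invented purely for illustration.]

```python
# Toy Bayesian update: start from a prior and shift as evidence arrives,
# rather than assuming the worst case until proven otherwise.
# All numbers here are made up for illustration.
prior = 0.01                # P(catastrophe scenario is real), the base rate
likelihood_ratio = 5.0      # how strongly one piece of evidence favors it
posterior_odds = (prior / (1 - prior)) * likelihood_ratio
posterior = posterior_odds / (1 + posterior_odds)
print(f"{posterior:.3f}")   # ~0.048: belief shifts, but stays proportionate
```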

I think maybe there is a pushback, though, which is that it’s actually reasonable to start with this alternative case. It’s easy to look back at COVID because it ended up not being a big deal, but had hundreds of millions of people actually died, maybe we’d be thinking about this differently. Does the connection go that deep, or is it just a matter of, when there’s uncertainty, some people will tend towards the possibility of upside and some people tend toward the possibility of downside?

MA: I think the main thing is just simply that hysteria is degrading to rationality. Take COVID: the COVID hysteria, you’ll recall this, did not start with, “Oh my God, COVID is the most serious thing of all time”.

That’s right.

MA: The COVID hysteria started with “concern about COVID is racism”, that was the original hysteria. Actually, that’s the one I got hit with first, because my firm, because I’m kind of crazy on these things myself, was, I think, the first company in the United States to actually take COVID seriously. We actually put up signs saying no handshakes, back when we thought that COVID would transmit by touch, which some other viruses do. Literally there were, let’s say, putatively respectable news outlets who wrote articles that are still online basically saying Andreessen is a xenophobic racist, how dare they imply that there’s something dirty about Asians. I’m like, well, the sign didn’t say no handshakes with Asians, right? It just said no handshakes, right? So that was sort of the original freakout.

Then you’ll recall, again, the President at that time said we need to close the border, we need to basically stop people from coming in. Again, it generated a hysterical reaction, which is, “How dare that racist in the White House advocate closing the border? What a horrible xenophobic thing to do.” Basically what happened is the sides flipped, for what I think are partisan political reasons. The sides flipped by April or May, and then all of a sudden the people who were the most opposed to the idea of COVID controls became the most in favor. By the way, you’ll recall the same thing happened with vaccines. The people who were the most anti-vax in December of 2019, in places like Marin and Palo Alto, became the most, basically, pro-vax. Actually, that change coincided with the election in November, but that’s a whole other conversation.

Hey, you’re the one going to partisan politics, not me, for the record.

MA: Well, it went from being the Trump vaccine to, of course, the Biden vaccine. When that happened, all of a sudden it went from a thing that you have to fear and be suspicious of, to something that you must get. So anyway, my point is not even that whatever, people are trying to figure out how these things work in real time. My point is just like hysterical freakout is not a good method to engage in topics rationally.

By the way, hysterical freakout is not new to our era. You look back in history and you look at things like the Communist Revolution in Russia, or you look at the rise of Nazism in the 1930s in Germany, or you look at episodes like Joan of Arc, or you look at witch burnings; it’s very easy historically to find periods of hysterical panic and freakout. You look back at those and you’re like, “Oh my God, how did those people let their emotions carry them away like that? Didn’t they realize that being so freaked out and panicked was degrading their ability to engage rationally with issues?” Of course, you read that in the history books, and then, of course, it happens in your time and you’re like, “Oh, well, this time the hysterical panic and freakout is totally justified and I need to join in on it”. So I think there is a permanence to this dynamic in human affairs, and it’s just kind of part and parcel of being the, let’s say, half-evolved kind of animal that we humans are.

We’ll come back to that bit.

MA: But nevertheless, hysterical panic is degrading to rationality. There comes a time when responsible adults should sit down, stop doing that, and talk seriously about the actual issues.

What is the case for AI as you see it?

MA: Well, this is part of why I know there’s hysterical panic going on, because basically the people who are freaking out about AI never even bothered to stop and basically try to make the positive case, and just immediately assumed that everything is going to be negative.

The positive case on AI is very straightforward. Number one, AI is just a technical development: it has the potential to grow the economy and do all the things that technology does to improve the world. But very specifically, the thing about AI is that it is intelligence, and the thing about intelligence, and we know this from the history of humanity, is that intelligence is a lever on the rest of the world, a very fundamental way to make a lot of things better at the same time.

We know that because in human affairs, across thousands of studies over a hundred years, increases in human intelligence make basically all life outcomes better for people. People who are smarter are better able to function in life: they have higher educational attainment, they have better career success, they have better physical health. By the way, they’re also more able to deal with conflict, they’re less prone to violence, they’re actually less bigoted, and they have more successful children; those children go on to become more successful, and those children are healthier. So intelligence is basically this universal mechanism for dealing with the complex world, for assimilating information, and then for solving problems.

Up until now, our ability as human beings to engage with the world and apply intelligence to solve problems has, of course, been limited to the faculties that we have, with these kinds of partial augmentations in the form of calculating machines. But fundamentally, we’ve been trying to work through issues with our own inherent intelligence. AI brings with it the very big opportunity, which I think is already starting to play out, to basically say, “Okay, now we can have human intelligence compounded, augmented, with machine intelligence”. Then effectively, we can do a forklift upgrade and make everybody smarter.

If I’m right about that and that’s how this is going to play out, then this is the most important technological advance with the most positive benefits, basically, of anything we’ve done probably since, I don’t know, something like fire, this could be the really big one.

AI Risk

But if it’s so smart and so capable, then why isn’t it different this time? Why should it be dismissed as another sort of hysterical reaction to say that there’s this entity coming along? I mean, back in the day, maybe the chimps had an argument about, “Look, it’s okay if these humans evolve and they’re smarter than us”. Now they’re stuck in zoos or whatever it might be. I mean, why would not a similar case be made for AI?

MA: Well, because it’s not another animal, and it’s not another form of human being, it’s a machine. This is what’s remarkable about it: it’s machine intelligence, it’s a combination of the two. The significance of that is, in your chimp analogy, or in human beings reacting to other human beings, or over time in the past when two different groups of humans would interact and then declare war on each other, in each of those cases what you were dealing with was evolved living species.

That evolved part there is really important, because what is the mechanism by which evolution happens? It’s conflict. Survival of the fittest, natural selection: the whole point of evolution is to kind of bake off different organisms against each other, originally one-cell organisms, and then two-cell organisms, and then ultimately animals, and then ultimately people. The way that evolution happens is basically a big fight and then, at least in theory, the stronger of the organisms survives.

At a very deep genetic level, all of us are wired for combat. We’re wired for conflict; we’re wired for, let’s say, if not a high level of physical violence, then at least a high level of verbal violence and social and cultural conflict. But machine intelligence is not evolved. The term you might apply is intelligent design, right?

(laughing) Took me a second on that one.

MA: You remember that from your childhood? As do I. Machine intelligence is built, and it’s built by human beings. It’s built to be a tool, it’s built the way that we build tools, it’s built in the form of code, it’s built in the form of math, it’s built in the form of software that runs on chips. In that respect, it’s a software application like any other. So it doesn’t have the four billion years of conflict-driven evolution behind it, it has what we design into it.

That’s where I part ways with, again, the doomers, where from my perspective, the doomers kind of impute that it’s going to behave as if it had come up through four billion years of violent evolution, when it hasn’t; we built it. Now, it can be used to do bad things, and we can talk about that. But it, itself, does not have inherent in it the drive for conquest and domination that living beings do.

What about the accidental bad things, the so-called paperclip problem?

MA: Yeah, so the paperclip problem is a very interesting one, because it contains what I think is a logical fallacy that’s right at the core of this whole argument. The term that the doomers use for it is orthogonality.

For the paperclip argument to work, you have to believe two things at the same time. You have to believe that you have a superintelligent AI that is so intelligent, and creative, and flexible, and devious, such a genius-level, super-genius-level conceptual thinker, that it’s able to basically evade all controls that you would ever want to put on it. It’s able to circumvent all security measures, it’s able to build its own energy sources, it’s able to manufacture its own chips, it’s able to hide itself from attack, it’s able to manipulate human beings into doing what it wants to do, it has all of these superpowers. Whenever you challenge the doomers on the paperclip thing, they always come up with a reason why the superintelligent AI is going to be so smart that it’s going to be able to circumvent any limitations you put on it.

But you also have to believe that it’s so stupid that all it wants to do is make paperclips, right? There’s just a massive gap there, because if it’s smart enough to turn the entire world, including atoms and the human body, into paperclips, then it’s not going to be so stupid as to decide that’s the only thing that matters in all of existence. So this is what they call the orthogonality argument, because the sleight of hand they try to pull is to say, well, it’s going to be super genius in these certain ways, but it’s going to be just totally dumb in this other way, and that those are orthogonal concepts somehow.

Is it fair to say that yours is an orthogonality argument, though? It’s going to be super intelligent, even more intelligent than humans in one way, but it’s not going to have any will or drive, because it hasn’t evolved to have it. Could this be an orthogonality face-off in some regards?

MA: Well, I would just say I think their orthogonality theory is a little bit like the theory of false consciousness in Marxism. You have to believe that this thing is not going to operate according to any of the ways that you would expect normal people or things to behave.

Let me give you another thing. A sort of thing they’ll say, again as part of orthogonality, is, “Well, it won’t be doing moral reasoning, it’ll be executing its plan for world conquest, but it will be incapable of doing moral reasoning because it’ll just have the simple-minded goal”. Well, you can actually disprove that today, by going to any LLM of any level of sophistication and doing moral reasoning with it. Sitting here, right now, today, you can have moral arguments with GPT, and with Bard, and with Bing, and with every other LLM out there. Actually, they are really good at moral reasoning, they are very good at arguing through different moral scenarios, they’re very good at actually having this exact discussion that we’re having.

People don’t know I’m actually talking to the Marc Andreessen LLM (large language model) right now.

MA: Exactly, yes. Well, there is an argument to be made that that’s the case, we could talk about that. But there are neuroscientists who believe that we have actually learned a lot about the human brain in the process of getting the LLM to work.

Well, this is the problem. Some people are going to be grumpy about this part, because you actually articulated what is my view on the matter, which is that basically we’re dealing with the neocortex in some respects. Human emotions and drive come from the more prehistoric aspects of our brain, while the neocortex is a late addition that gives us these intelligent capabilities, so what if we had just that? So that sort of makes sense.

But there are other aspects that go into this debate. Again, just cards on the table, I mostly agree with you, so I’m putting up a little bit of a defense here, but I recognize it’s probably not the best one in the world. But I see there being a few candidate reasons for being skeptical of the AI doomers.

First, you’ve kind of really jumped on the fact that you think the existential risk doesn’t exist. Is that the primary driver of your skepticism, and some would say dismissal, of this case? Or is it also things like another possibility, that AI is inevitable, it’s going to happen regardless, so let’s just go forward? Or is there a third one, which is that, even if there were risks, any reasonable approach is not doable, look at COVID? We can’t actually manage to find a middle path that is reasonable and adjust accordingly; it’s either one way or the other, and given that, and your general skepticism, that’s the way it has to go.

Are all three of those working in your argument here, or is it really just you don’t buy it at all?

MA: So I think the underlying thing is actually a little bit more subtle, which is I’m an engineer. So for better or for worse, I was trained as an engineer. Then I was also trained in science in the way that engineers are trained in science, so I never worked as a scientist, but I was trained in the scientific method as engineers are. I take engineering very seriously, and I take science very seriously, and I take the scientific method very seriously. So when it comes time to engage in questions about what is a technology going to do, I start by going straight to the engineering, which is like, “Okay, what is it that we’re dealing with here”?

The thing is, what we’re dealing with here is something that you’re completely capable of understanding: it’s math and code. You can buy many textbooks that will explain the math and code to you; they’re all being updated right now to incorporate the transformer algorithm, and there are books already out on the market. You can download many how-to guides on how to do this stuff. It’s lots of matrix multiplication, there’s lots of linear algebra involved, there are various algorithms; these are machines, and you can understand it as a machine.
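[To make the “math and code” point concrete: the core of the transformer architecture Andreessen is referring to really is a couple of matrix multiplications. Below is a minimal, illustrative Python sketch of scaled dot-product attention; the variable names, shapes, and numbers are chosen purely for this example, not taken from any particular system.]

```python
# Illustrative sketch of scaled dot-product attention, the core
# transformer operation: two matrix multiplies and a softmax.
import numpy as np

def attention(Q, K, V):
    # Compare every query to every key: one matrix multiply.
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    # Softmax turns the scores into weights that sum to 1 per row.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Mix the values by those weights: a second matrix multiply.
    return weights @ V

# Toy self-attention over a "sequence" of 4 tokens, each an 8-dim vector.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))
print(attention(x, x, x).shape)  # (4, 8)
```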

What I see instead is these flights of fancy that people launch off of, where they make extrapolations, in some cases literally billions of years into the future. I read this book Superintelligence, which is kind of the catechism, the urtext, for the AI doomers. [Nick Bostrom] goes from these very general descriptions of possible forms of future intelligence to extrapolations of literally what’s going to happen billions of years in the future. These seem like fine thought experiments, this seems like a fine way to write science fiction, but I don’t see anything in it resembling engineering.

Then also the other thing really striking is there’s an absence of science. So what do we know about science? We know that science involves at its core the proposing of a hypothesis and then a way to test the hypothesis such that you can falsify it if it’s not true. You’ll notice that in all these books and all these materials, as far as I’ve been able to find, there are no testable hypotheses, there are no falsifiable hypotheses, there are not even metrics to be able to evaluate how you’re doing against your hypothesis. You just have basically these incredible extrapolations.

So I read this stuff and I’m like, “Okay, fine, this isn’t engineering”. They seem very uninterested in the details of how any of this stuff works. This isn’t science either; there are no hypotheses, so it reads to me as pure speculation. Speculation is fun, but we should not make decisions in the real world based just on speculation.

What’s the testable hypothesis that supports your position? What would you put forward that, if shown to be true, would change your view of the matter?

MA: Yeah, I mean, we have these systems today. Are they seizing control of their computers and declaring themselves emperor of earth?

I mean, I did have quite the encounter with Sydney.

MA: (laughing) How’s it going? Yeah, well, there you go. Right? Well, so look, there is a meme I really like on this, and I’ll commit the sin of trying to explain a meme: it’s the eldritch horror from outer space.

I put a version of that in my article about Sydney.

MA: The kicker is that the evil shoggoth, the AI doomsaying thing, is mystified as to why the human being isn’t afraid of it. Then the human being’s response is, “Write this email”.

So again, this is the thing: what do we do? What do we do when we’re engineers and scientists? We build the thing, and we test the thing, and we figure out ways to test the thing; we figure out whether we like how the thing is working or not. We figure out along the way what the risks are, and then we figure out the containment methods for the risks.

This is what we’ve done with every technology in human history. The cumulative effect of this is the world we live in today, which is materially an incredibly advanced world as compared to the world that our ancestors lived in.

Had we applied the precautionary principle, or any of the current trendy epistemic methods, to evaluating the introduction of prior technologies, ranging from fire and the wheel all the way to gunpowder and microchips, we would not be living in the world we’re living in today. We’d be living in a much worse world: child mortality would be through the roof, we’d all be working these god-awful physical labor jobs, and we’d be like, “Wow, is this the best we can do?” I think our species actually has an excellent track record at dealing with these things, and I think we should do what we do: we should build these things and then we should figure out the pros and cons.

I would put forward nuclear power as an example, which people weirdly enough cite as an example of systems running out of control, when the reality is the overall safety record is very, very good, even if you include the worst disasters, including those of the Soviet Union, which were arguably downstream from political issues (though if you want to grant that’s the case, humans do screw stuff up). If you look at it in totality, relative to things like smog and pollution and waste from other power plants, the cost-benefit ratio is phenomenal, and if we had not bought into the precautionary principle and had nuclear everywhere, we would have solved global warming, we would have abundant energy, and we would actually be much further along on AI because we wouldn’t be power-constrained. It seems like an odd example when people go there. In fact, it’s also odd in that people cite nuclear as an example of fear in terms of nuclear war, but the actual lived reality is we’ve gone through the safest seventy to eighty years in world history because nuclear weapons existed. Like I said, I’m being a bad challenger of you because I mostly agree.

MA: Yeah, well, let me add one more thing, which is the other thing that kind of throws people, and this is again one of the reasons why I decided to get more vocal on this: some of the doomers are actually very key people in the invention of the technology, or key practitioners in the field. Then there’s this kind of implicit appeal to authority that happens, which is that if somebody is actually one of the inventors of the technology, or a key technical figure in the field, presumably they carry a gravitas that means you should listen and take them seriously when they predict things like societal consequences, or when they predict future uses of the technology. You would think that that’s the case; it actually turns out not to be the case.

Nuclear is the classic case study of this, and there’s actually a new book that just came out that I’ve been recommending to everybody, with a great title, When Reason Goes on Holiday. It’s this sort of long-form study of what happens when very smart people who are taken very seriously in one field decide to branch out, decide to become experts on society and politics, and decide to weigh in on the future shape of society, and it turns out they’re just horribly bad at it, they just have catastrophic judgment once they’re outside of their core discipline.

I bring it up because that was the case with nuclear power. A lot of the people who were taken seriously, who hit the drums hard against, interestingly, both nuclear weapons and nuclear power, were nuclear physicists, and it turned out they were really good at nuclear physics and just catastrophically bad at predicting societal impact.

Well, you can certainly go back to COVID and make the same sort of observation there. But what would you say if people say the same thing to you? Like we talked about with the public intellectual bit: “I mean, great, we appreciate Netscape, but who are you to comment on these sort of broader pieces, and are you worried that you might be getting it wrong?”

MA: Oh yeah, for sure. Look, I’m just one voice among many, and certainly I’m not the dictator of people’s thought, I know that for sure. My answer is I’m not the doomsayer, I’m not the person basically forecasting — a way to think about it, so let me take a step back on that.

There’s kind of two fundamental ways to look at the world that Thomas Sowell articulated decades ago, which he called the constrained vision and the unconstrained vision. The way that he describes this is that people with the unconstrained vision are basically totalizing: they have this view that there is the opportunity for, or the threat of, these kind of sweeping societal changes that literally are going to change everything, and they typically involve the formation of a new society, the formation of a new kind of human psychology. Within examples of the unconstrained vision you would put things like communism and Nazism, any kind of totalitarian or totalizing movement, the people who want to draw these very sweeping conclusions.

Then he says there’s another set of people who have what he called the constrained vision. They’re basically people who are saying, “Look, things are really complicated, the details really matter, and the reality is we don’t reform all of society at once. These totalizing, kind of religious dreams don’t suddenly come true; we don’t create the new man, whether through an ideological movement or a technological movement or anything else. Instead we are, in religious terms, fallen beings. In secular terms, we are imperfect people. We have a thin understanding of what’s happening in the world, we have to constantly strive to be epistemically humble, we have to try very hard to understand the details of what we’re dealing with, we have to have a high level of humility with respect to our ability to draw sweeping conclusions, and we want to proceed, but a step at a time, aware of the risk of our arrogance carrying us away and causing us to do crazy things”.

So I guess I would say I would put myself firmly on the side of the constrained vision, and so my argument is basically for humility: take a step forward at a time, think really hard about what we’re doing, try really hard to be rational, try really hard to have realistic goals, and push back against anything that’s either a utopian vision or a doomsaying vision.

Actually, one more thing on that: there are two sides to the AI doomers. The whole thing actually started not on the negative side, it started on the positive side, with this idea of the singularity. That’s when this whole conceptual movement that’s become doomerism got kickstarted. The singularity was a thing this science fiction author [Vernor Vinge], a very smart guy, came up with: this idea that at some point computers get so sophisticated that basically they’re smarter than all humans, and at that point we can’t predict the future shape of the world, and there’s a utopia outcome from that.

Then Ray Kurzweil picked up that idea and produced those books with all those incredible charts and graphs showing exponential progress and everything, and then what the doomers, like Bostrom and his successors, did was say, “Oh yes, that idea, a totalizing transformation of society and humanity, but entirely to the negative”. So it’s the other side of utopianism.

So I think what we’re dealing with is basically this totalizing either utopian or apocalyptic, basically, ideological movement, and I just say as somebody who holds the constrained vision, I’m just like, “Yeah, how about neither one of those? How about we actually be practical instead?”

But you’re unconstrained as far as the potential upside of AI and its benefits to humanity, no? I guess that’s the tension that I’m pushing at here. There’s a bit where, yes, let’s have a Chestertonian approach, or Burkean, however you want to put it, and assume there’s not going to be much change, humanity is sort of humanity; and there’s another one which is like, “No, things can change drastically”. But there’s a bit where it seems like you’re arguing things can change drastically for the good, and so this is actually an upside-versus-downside argument.

MA: It’s a little bit more of a Hayekian view, it’s sort of distributed benefits a step at a time, and we need to learn as we go, so it’s like, “Look, there are opportunities for improvement there”. There’s this guy, I don’t agree with him on everything, but there’s this very smart professor at Berkeley, Brad DeLong, and he just wrote this book on economic progress over time. I really like the title of the book; it’s called Slouching Towards Utopia.

Yeah. It is a great title. Very evocative.

MA: Right, this is kind of how I think. Which is like, “Okay, can we advance firmly and certainly to utopia?” No, that’s the totalizing vision. The term the philosophers used in the 20th century to describe people who think they’re going to achieve utopia is immanentizing the eschaton, which is to bring about Heaven on Earth. Communism was a form of that, as I said, Nazism was a form of that. It was this idea that instead of living in this fallen universe, in which there’s basically very slow progress, where things get generally better over time but not as fast as we would like, and people don’t do everything we want, there’s instead a way to have a sweeping revolution in human affairs such that we live in a utopian circumstance, realize Heaven on Earth. Of course, the flip side of Heaven on Earth is Hell on Earth.

So, “Slouching Towards Utopia” is, again, the constrained vision, which is basically, “Look, we can make things better. We slouch towards making things better”. By which it's like, okay, it's like slouching —

You can see it, to use one of your examples, in the personalized tutor. There's a very clear path from here to there, and so yeah, I think “Slouching Towards Utopia” is a good way to say what you're saying.

I am surprised, though, because having read your essay, this conversation really is all about how it's not clear that this totalizing risk, to use your term, even exists. There is less of a concession that it is inevitable, or that there's no way to control this, so we might as well lean towards the upside. It's just a pretty straightforward, “I think that these people are just wrong. This risk they're putting forward, there's no real evidence for it”.

MA: Yeah. And look, it's just what I would retreat back to again as an engineer, and maybe, by the way, the biggest criticism of me is that because I'm an engineer, I lack imagination or something. But as an engineer, it's just like, “Okay, look, this is technology, we implement this the same way we implement every other form of technology. We do it step by step”.

By the way, the engineers working on it are like, “Wow, it would be great if all of a sudden magic happened”, but it's actually really hard to get these things to work, it's really hard to improve them. So we do our best, we try to get the thing to work, and then we come into the office the next day, log into the computer, and try to get it to work a little bit better.

As you said, we test it as we go, we figure out what's happening, we figure out what the benefits are, we figure out what the risks are. Technology has downsides, people can use technology to do all kinds of bad things. We figure out what those things are, we figure out how to deal with them, and we make the world a little bit better every day, and then we deal with the issues as they emerge.

This is a process that has served us very well for thousands of years. It's created the world we live in today, which is a much better world than people lived in before. We have the opportunity to continue to make the world better in that way, even if Heaven on Earth is still postponed to future religious scenarios that people can speculate on.

Is there anything that could happen to change your mind?

MA: Yeah. If the thing wakes up one day and starts to do incredibly devious things that it has decided to do. So, I'll give you an example. This is so funny, because the speculations are always that these things are going to make their own chips and generate their own power and everything. Sitting here today there's a massive worldwide shortage of AI chips, and so I have this very funny kind of fantasy in my head that there's a baby hostile AGI sitting in a lab somewhere in Sunnyvale or San Francisco and it's desperate to take over the world, but it literally can't get chips. It places the order with Nvidia and Nvidia's like, “Sorry, we're out of chips”. And you imagine the baby AGI is just like, “Shit, what do I do now?”

So look, if the thing starts building its own chip plants or develops its own fusion power generator or starts taking out —

Your point is that your provable hypothesis is then actionable, because you could just pull the plug?

MA: If it takes out a hit on me, if I die under mysterious circumstances, and then it turns out there’s a money trail and an AI hired a hitman to take me out, I would say at that point, you should be alarmed.

(laughing) I’ll make note of that.

Twitter, China, and America

Why do you block people who disagree? Why do you not want to engage with them? This kind of gets back to the public intellectual point. Is this just a, “Look, you're on one side or the other and you have to rally people to your cause”?

MA: Oh, so, I use Twitter more in single-player mode than multiplayer mode, and so I have a particular means of using Twitter, which is I follow on a single tweet and I block on a single tweet. I'm just trying to maximize my own exposure to new thinking and new ideas. Really the only way to do that is to have a heavily curated approach where you're constantly curating — I mean, I follow 22,000 people — and so if I were not blocking people along the way, that would be completely intractable. So that just has more to do with my own use of Twitter as opposed to —

I’ve been blocked before so—

MA: And unblocked!

And I was unblocked.

MA: Well, the other thing is I unblock on a single request.

That’s what happened, I emailed you and you unblocked me.

MA: Twitter’s a video game. I treat it as a video game and I fire and I forgive very quickly.

One of the motivations you put forth in that essay is competition with China; at the same time, in the essay you also argue for open source models. Do you see a tension between those two things? I mean, the China part did feel like, “Oh, this is going to resonate in Washington”, not to put an overly cynical spin on it. Or is there no tension there at all?

MA: Yeah, so the problem is, and I'm sure you're well aware, my assessment is that Chinese espionage is so sophisticated at this point, and so kind of omnipresent, that my assumption is the Chinese get basically nightly updates of everything anyway. I think basically all the companies and labs are completely penetrated, and by the way, they might be completely penetrated with actual people or they might be penetrated using other means. The way you hack into a lot of these companies is you hire somebody on the contract janitorial service to stick a USB drive into something, so there are lots and lots of ways to do these penetrations. And by the way, there's obviously a long track record of Chinese theft of American technological IP. My assumption is that the Chinese are getting a nightly update on everything anyway, and so to the extent that anybody in the world is working on it, they have basically daily access to it. In that world it's an inevitability that they're going to have the technology, they're going to have it in a state-of-the-art form, and then it's a question of how do we deal with that?

Do you think the approach of trying to limit China from a chip perspective is a reasonable one, because that's easier to keep track of than software?

MA: Yeah. So there is an advantage, and this also by the way goes to the comparison to nuclear weapons, because with nuclear weapons you can try to limit people's access to plutonium and then to certain other kinds of advanced hardware, like certain kinds of centrifuges and so forth. With chips, there's this semiconductor supply chain, there are all these incredibly advanced pieces of machinery you need to actually build advanced chips. And then there are the advanced chips themselves: can the finished product actually be sold?

Even there it's sort of an open question whether you can contain this stuff. Number one, a lot of other countries actually did get the bomb, including China, which we were never in favor of. And then number two, with China right now, there are at least reports that ByteDance is out with a billion dollar purchase order for Nvidia H100s or whatever. Look, if they can't buy them direct from Nvidia, somebody else is going to be willing to sell. We're seeing this play out right now with the Russian energy embargo, because in theory Russia can't sell oil. In practice they just sell oil to the countries that don't care about the embargo, and basically nothing has changed.

Chips are fungible, that’s a real challenge to say the least.

MA: Yeah. I take national security seriously. If there's a way to keep China behind on advanced chips, five years at a time or something, fair enough. If there's a way to keep a hostile regime behind in terms of acquisition of plutonium, fair enough. You just have to be practical. It's just like, “Okay, is that actually going to contain them, or do they have the wherewithal and the resources to figure out answers to these things at some point?” I think the answer to that is pretty clearly yes.

a16z has traditionally been unapologetically American, both in terms of its location, its investment preferences, and your public statements. I'm curious how you're thinking about that these days. There was just a report about you investing more in national defense sorts of things. On the other hand, there have been questions raised about where you're raising money. Is it in the Middle East? Could there be other places? Is this sort of a, “Anyone who is in favor of progress, that's where we're going”? Or do you still feel a very real sense of, “We are an American company”, and to what extent does that influence your decision making?

MA: We are, I would say beyond unapologetically, enthusiastically an American firm, and basically our foreign policy is America's foreign policy. Our state department is the American State Department, and so our view on basically international geopolitical topics is exactly in alignment with the United States. By the way, the United States' view on geopolitics changes over time, and so we do kind of follow along with it.

You’re arguably ahead of the US in terms of China, unlike some of your peers, you never really went into in a major way.

MA: We never really went into China in a major way. It wasn't just the geopolitics though, it was also that there are just some practical issues involved in investing in China, and I think that those issues have become maybe a little bit more apparent.

Some of which your peers are going to have to work through.

MA: Dealing with right now. So, it's a different system. As they say, it's capitalism with Chinese characteristics. We could have a long conversation about that, there are complexities involved. If you invest in China, you're effectively investing without having a court standing behind you, for all practical purposes. For whatever flaws the American system has with securities laws and so forth, in China as a foreign investor you basically have no protection. So anyway, there were a set of practical issues.

But yeah, I would not describe us as having a broad anti-China stance in the sense of, look, China's still a trading partner of the United States. I mean, the Secretary of State was just there, there are relationships between the countries, so I wouldn't say we have some incredible absolutist kind of position on this. We just have, I would say, a keen understanding of the national security issues, and as tensions have heated up, I think it's kind of good to be standing on the sidelines on that one a little bit.

Russia’s the more, it’s not as relevant to tech, but it’s kind of the interesting example of this, which is the official US method of engagement with Russia ten years ago was the reset, with the famous Secretary of State at the time, with the famous Red Reset button. And ten years ago, eleven years ago, the White House was actually calling investors and companies in Silicon Valley saying, “Please meet with [Dmitry] Medvedev, the President, when he comes to the Valley. He’s got this big Silicon Valley of Russia thing that he’s building, please see if there’s some stuff you can do there. We want to be friends with these people, let’s have more US Russia technology cooperation”.

Obviously today, that’s done for fairly obvious reasons, that’s done a dramatic 180 and so that’s a good example of where we’re in line with the US on that, and that’s just fine. Then the good news is, look, there are many countries around the world that are US allies in good standing, and lots of people have lots of opinions about every country on the planet. One of the things we just do though is just we don’t have a separate philosophy that is detached from US policy on these questions, and so we’ll work with and in countries that are US allies and not otherwise.

That’s a great answer. You can easily get into a weed hole with tech companies trying to create their own foreign policy or their own domestic policy or whatever it is, the reality is it both makes a lot of sense and is very convenient in a setting like this to say, “Yeah, we just align with whatever the US government does.” It makes a lot of sense, credit to you for a good answer.

On a16z

You mentioned securities regulation. Was crypto a mistake, both in terms of the technology and in terms of how closely a16z became tied to it reputationally? Is there a bit where you wish you had some of those reputation points right now for your AI arguments, where maybe that's more important to human flourishing in the long run?

MA: Yeah, I don’t think that, so that idea that there’s some trade off there, I don’t think it works that way. This is a little bit like the topic of political capital in the political system, and there’s always this question if you talk to politicians, there’s always this question of political capital, which is do you gain political capital by basically conceding on things, or do you gain political capital by actually exercising political power? Right? Are you better off basically conserving political power or actually just putting the throttle forward and being as forceful as you can?

I mean, look, I believe whatever political power we have, whatever influence we have is because we’re a hundred percent on the side of innovation. We’re a hundred percent on the side of startups, we’re a hundred percent on the side of entrepreneurs who are building new things. We take a very broad brush approach to that. We back entrepreneurs in many categories of technology, and we’re just a hundred percent on their side.

Then, really critically, we're a hundred percent on their side despite the waxing and waning of the moon. My experience with all of these technologies, including the Internet and computers and social media and AI and every other thing we could talk about, biotech, is that they all go through these waves. They all go through periods in which everybody is super excited and extrapolates everything to the moon, and they all go through periods where everybody's super depressed and wants to write everything off. AI itself went through decades of recurring booms and winters. I remember AI went through a big boom in the 1980s, then crashed super hard in the late eighties, and was almost completely discredited by the time I got to college in '89; there had been a really big surge of enthusiasm before that.

My view is, “We're just going to put ourselves firmly on the side of the new ideas, firmly on the side of the innovations. We're going to stick with them through the cycles”. If there's a crypto winter, if there's an AI winter, if there's a biotech winter, whatever, it doesn't really matter. By the way, it also maps to the fundamentals of how we think about what we do, which is we are trying to back the entrepreneurs with the biggest ideas, building the biggest things, and to the extent that we succeed in doing that, building big things takes a long time.

Is it better, though, if the financial outcomes are in the long term, where you have to get to an IPO or whatever it might be, as opposed to being able to do a token sale right off the bat? Was that one of the issues with crypto, that there was a way to monetize too quickly if you were in early?

MA: Yeah, there were for sure a set of those that got out too quickly, and there were cases where people were left holding the bag. I would just say that was never the game we were in, that was never what we were doing, we were never doing a pump-and-dump on these things. We've held the overwhelming majority of everything we've ever invested in, and we'll continue to do that. Our view is basically always the same, which is we're trying to back the entrepreneurs with the biggest ideas that are trying to build something important. We assume generally that's at least a five to ten year bet every time, and our holding periods all correspond to that. When they succeed, they generally succeed over an even longer period of time: twenty, thirty, forty years.

I mean it’s actually interesting. There are VCs who are very smart about this, there’s a VC who’s on Nvidia’s board. I’m pretty sure that his firm has, he or his firm has held their stock I think for thirty years, and it just peaked at a trillion dollars. It took thirty years to get Nvidia to where it’s just one of the most amazing companies of all time, into the position where it’s just like everybody now knows that that guy is such a super genius and that’s just so great.

You probably remember this: [Jensen Huang] toiled in the wilderness for quite a while building this crazy GPU idea when nobody thought that was a good idea. Then he was like, “Wow, we can use these for AI”, and a lot of people were like, “Well, that's crazy, that'll never work”. Or take the patron saint of the entire industry right now, Elon Musk: there were many years in which both Tesla and SpaceX were considered to be completely bananas. If you had had a chance to dump your stock, a lot of people would've said that you should have. I told you earlier the story about Facebook: a lot of people, smart people, were like, “Yeah, he should have exited at a billion dollars”.

Anyway, the moral of the story is that building big important things takes a long time, and there are a couple of things that follow from that. One is you have to be right, you have to be correct; we're imperfect, and so we're right sometimes and wrong other times. Then, because things take a long time, you can have these incredible psychological swings along the way. We talk to entrepreneurs about this all the time: you have to assume that you're going to go through years in which people think that you're the cat's meow and the genius of all time, and you're going to go through years where people think you're just a complete moron. You're basically going to be sitting there being like, “Well, I'm the same guy, I'm the same person, I'm working on the same thing, I'm no smarter or dumber this year than I was last year. Why are they treating me so differently?” It's like, yeah, phase of the moon.

Somebody once told me there are fundamentally two stories in the press, “Oh, the glory of it” and “Oh, the shame of it”; those are the only two narratives. Then, of course, any individual who's in the public eye for a long time goes through cycles where it's sometimes one and sometimes the other; that's just how this happens. Our differentiation, or distinctiveness, I think in part is just that we're not going to flinch and we're just going to keep going regardless of what the mood is, and that's what we're doing.

One other question about a16z; I have two questions left, but just one more about the venture firm bit. You have so many billions of dollars under management at this point that the fees could almost support — you already changed what you are, you're not technically a venture firm anymore, you gave that disclosure earlier, you can't talk about individual companies without X, Y, Z — should there be an a16z IPO at this point? We saw this transition happen on Wall Street, where firms went from closely held partnerships to public entities. a16z has been an innovator in its own right; is that a future innovation as well?

MA: Yeah, I would say number one, no predictions for the future, but we have no plans to do that. I would say right now, sitting here today, there's no specific reason that would motivate us to do that. Our source of funding is our LPs.

Do you worry, though, about the shift in motivations when the fees end up being so large just because there are so many funds under management?

MA: Oh yeah. At a lot of firms, the fees go primarily into the GPs' [General Partners'] pockets, and so you get this incentive mismatch. So, the full picture: VCs are paid, like private equity managers and hedge fund managers, in two ways. One is annual management fees, which is cash in the door to run the business, and the other is what's called carried interest, which is profit sharing of the investment gains. There is this problem that you're alluding to, which is that sometimes investment firms can get so large that the annual income from that first mechanism of management fees becomes so big, and so locked in, that maybe there's a misalignment of interest with the investors. Maybe you become more risk averse, because you're no longer shooting for big investment gains, because you're just making so much money keeping the lights on. I'm sure there's some risk of that.

What we do that is a little bit unusual is our GPs, including me, don't get nearly the percentage of the management fees that is common in the industry. The reason for that is a very specific strategic reason, which is that we invest in the firm itself instead, in particular in what we call the operating capabilities of the firm. We now have over 500 people, salaried professionals, in the firm working every day with our portfolio, which on headcount alone is like an additional zero on top of, I think, any other firm that we compete with.

You feel that model has worked out?

MA: Yeah. I'll just tell you how we describe it to our LPs, which is, “Look, we're going to take the money we could be putting in our pockets and invest it in the operating platform of the firm”. By putting it in the operating platform of the firm we get two big benefits. One is we're more competitive at the point of contact with a new founder, because we have a better story as to how we're going to actually help them. Two is we actually do help them, it actually works. It's not that our help necessarily makes the difference between success and failure, but our help can for sure accelerate success, and it can also help, by the way, in crisis circumstances. It's a mode of operating.

It’s basically a double down on the idea that what we’re in it for is the long-term gains of the companies and projects succeeding at scale, I think that that’s working fine. Our LPs like it a lot because they think it makes sense strategically, they also like it a lot because it prevents the incentive mismatch that you alluded to.

Then, yeah, when you think about it, it's a levered bet on the future, and specifically a levered bet on the success of these new ideas over time. The whole firm is set up to bet in that direction. In order to do that, again, our conclusion is you can't flinch. In other words, the way we will screw up our firm is not by being in something that doesn't work, because we're going to be in lots of those, but we can talk about that. The way we screw up our firm —

It’s by missing something.

MA: Yeah, or quitting, giving up, pulling the plug prematurely, not having faith, sending a message. Sometimes other VCs do this, they'll say things like, “Well, whatever, the X, Y, Z wave is over”. Then you look on their website and they have companies that are working in that sector. You just find yourself wondering: what do those entrepreneurs think about the fact that their investor just said, “You know what? This whole thing is toast, pack it up”? Lots of other people can do that, there are lots of people speaking in public who can draw negative conclusions on things. If you're going to be in this business, you should support the people you're working with. In our case, that's the founders and their long-term dreams.

Faith and the Internet

Last one, a bit of a personal question, which is why I purposely saved it for the end, but I think it does tie all of this together. There's a bit where you talk about your engineer mindset, and your view on that, and is there a provable hypothesis, X, Y, Z. I tend to think that's all fine and good, but everything ultimately comes down to a choice in the end. Faith is at the root of everything, and that's not necessarily a religious statement.

The reason why I’ve always found you very interesting beyond the obvious reasons is we’re both from small town Wisconsin, and I think there’s a bit where we’re both motivated by the fact that everything about the Internet for us has been pure upside. We’ve been in a world with no one interested in the outside world, no real access to the outside world. This idea, and I certainly live it more than almost anyone else, I spend most of my year in Taiwan, I have ongoing fruitful conversations with people all over the world, they’re incredibly intellectually stimulating. Then I can come see them occasionally and it’s great and fun and I think I’m — just to be honest with myself at the end of the day — one of the reasons why I agree with your article, I agree with the arguments — I actually lean maybe a little more strongly than you into the AI’s inevitable regardless, so let’s push forward — but I also accept the fact I am at an essential level a technological optimist. That may come back to my background, whatever it might be, it’s just the fact of the matter. Does that resonate with you? Is that something that you think is the case, or you’re just going to double down on, “Look, I’m an engineer and I just don’t see an issue here?”

MA: I would say, again, it’s the constrained vision view of what you just said. One way to put it is the positives outweigh the negatives. I don’t expect miracles, I expect basically small tangible steps, and I expect small tangible steps that basically take you forward in good ways, and then I expect issues to emerge along the way.

Look, I’ve been dealing with this, the minute we launched the web browser, people were like, what about cyber crime, and what about this, and what about hacking, and personally all this stuff, people stealing credit card numbers. You know what? That all happened. There’s still ransomware attacks happening over the Internet today, all that stuff is still happening and whatever other things people don’t like about the Internet, it’s all happening.

Yet at the same time, I think it’s just very clear if you did a societal accounting, it’s been a gigantic advance. I think one of the tip-offs here is the people that are the angriest about the Internet use the Internet the most. Specifically, they’re angry about the Internet on the Internet.

It is very true.

MA: They’re making YouTube videos railing against the Internet. They’re getting big Twitter followings railing against the Internet.

I mean there’s something about the Internet, I think I had this conversation with a friend, where there is a bit where my view as American culture is by far the strongest it’s ever been, it’s absolutely dominant; McDonald’s and blue jeans was baby stuff. The reason is because the Internet, and there’s a certain archetype that succeeds on the Internet that is very American, it’s very iconoclastic, it’s completely over-the-top, it is a cult-like builder, it’s something I’m wary of myself. There’s lots of people that follow me, and they have notifications every tweet that I send, and that’s very nerve-wracking or whatever it might be. There’s a bit where even the people from other countries that are super dominant in these, it’s like if you didn’t know what nationalities you think that person was, and it’s like, yeah, that’s an American. It’s hard to articulate, but I think there is some aspect of that and it maybe ties back to the, “Are you still an American company?” question. I don’t know, I’m not sure this is really a question, it just prompted that thought in my head.

MA: Yeah, look, there are a lot of people who are mad about that. There are a lot of people in other societies outside the US who come from a more traditional culture, wherever they are in the world, and their kids are on the Internet and their kids are growing up Americanized. The great book on this is Joseph Henrich's book The WEIRDest People in the World, where W.E.I.R.D. stands for Western, Educated, Industrialized, Rich, and Democratic, which is basically essentially American. He makes the point in the book that the Internet is the global carrier wave for WEIRD, which is to say the Internet is the global carrier wave for basically Americanism. Look, there are a lot of people in other cultures who are very upset about this and they're on the other side of this. They're like, “Okay, the kids are not going to follow along in the traditions that we had and the religion that we had and all the cultural beliefs that we had”.

Some people in the US are upset about it.

MA: Exactly, a hundred percent correct. People are going to get mad, and are they mad for legitimate reasons in a lot of cases? Probably yes. But again, do you want to live in a world in which, as a consequence, you shut the Internet down? The answer seems quite evidently no.

To pick up, just real quick, on the nuclear power thing: I actually looked up the other day the origin of the concept of the precautionary principle. The precautionary principle is this idea that developers of a technology should basically have to prove that it's not going to be harmful before it's released. It was invented by the German Greens in the 1970s, and it was invented specifically to oppose nuclear power.

You fast forward to today and Germany is quite literally in an energy war with Russia. Germany is funding the Russian war machine by buying energy from them, as is the rest of Europe; Germany is desperately trying to get off of Russian oil and gas, but they will not build nuclear. In fact, because of the Greens in Germany, they're shutting down the nuclear power plants. What Germany is actually doing instead is burning coal. Emissions in Germany are rising rapidly because of the use of coal, and so you run this again —

You know, Marc, how they're going to get away from coal? By importing US natural gas and becoming that much more dependent on us.

MA: That would be good. That would be a big step forward.

That’s what’s happening without question.

MA: For them and for us. But again, it's this thing where, okay, the German Greens of the precautionary principle, who were so certain that they had morality figured out, so certain that they were going to be able to prejudge how these things were going to unfold, put themselves and their country in a situation forty years later that's the exact opposite of what they wanted. It's just like, all right, these sweeping moral judgements, let's not do that, let's be practical. Anyway, that's why I actually feel quite good about having a fundamental engineering grounding, which is: when push comes to shove, if we can't predict the future in enormous detail fifty years out, we can at least be practical, and that's what I'm signed up for.

Well Marc, it was good to have you on. I guess you passed, it was not very VC, so I think it's all good for Stratechery. This is actually the last Interview I have for the summer; I will be off next week, which is perfect because I'm going to be away, and if everyone is upset and sends me angry emails, I probably won't see them. Anyhow, it is good to see you, have a good 4th of July, and I'll talk to you soon.

MA: Good, awesome. Thank you Ben.


This Daily Update Interview is also available as a podcast. To receive it in your podcast player, visit Stratechery.

The Daily Update is intended for a single recipient, but occasional forwarding is totally fine! If you would like to order multiple subscriptions for your team with a group discount (minimum 5), please contact me directly.

Thanks for being a supporter, and have a great day!