An Interview with Palantir CTO Shyam Sankar and Head of Global Commercial Ted Mabrey

Good morning,

This Stratechery interview is with the executives of a public company, but truthfully, I approached it as another installment of the Stratechery Founder series; as a reminder, one of the challenges in covering startups is the lack of available data. My solution is to go in the opposite direction and interview founders directly, letting them give their subjective overview of their companies, while pressing them on their business model, background, and long-term potential.

In this case, there is certainly SEC-mandated data available about Palantir, but this has always felt like a company that was mostly mysterious; to that end, I wanted to take the opportunity to interview Chief Technology Officer Shyam Sankar and Head of Global Commercial Ted Mabrey about how exactly Palantir came to be, what it is today, and the opportunities it has going forward.

I’m going to be honest: I’m incredibly intrigued by what Palantir is building, and will probably write more about the company soon. What I hope you enjoy about this interview — which is very dense — is the extent to which I felt like I was expanding my understanding of the company in real-time.

To listen to this interview as a podcast, click the link at the top of this email to add Stratechery to your podcast player.

On to the interview:

An Interview with Palantir CTO Shyam Sankar and Head of Global Commercial Ted Mabrey

This interview is lightly edited for clarity.

The Palantir Vision

Shyam Sankar and Ted Mabrey, welcome to Stratechery.

Shyam Sankar: Great to be here.

Ted Mabrey: Thanks for having us.

So Palantir is not a company that I’ve covered to date. I did write about your S-1, we’ll actually get to that in a little bit, so I just want to put upfront that while Palantir is a public company, I’m going to treat this interview a little bit more like one of the founder series of interviews I’ve done. I’m personally in somewhat of an exploration mode and wanting to better understand your business, so you’re going to get off a little easy, I think. I’m not trying to nail you to the wall or anything, but I actually am quite intrigued by what you have built.

Shyam, you reached out to me a little bit ago to talk about the business and I’d been hearing a lot of good things. I did go back and watch a few of your investor videos where I was invoked, so that was obviously a good way to get me on board. But I want to go back to even before then, back to the beginning. Shyam, you said you were employee 13, is that right?

SS: That’s right.

And Ted, when did you join the company?

TM: 2010.

2010. Okay, and so Palantir was officially started in 2003, I believe, although it’s reported as 2004. What was it like back then? What was the vision of Palantir that was presented to you? Was that the vision that persists to today or was there a shift? This is pre-product, pre-anything, what got Shyam Sankar on board back in the day?

SS: The thing that’s been hugely consistent over time is the ambition of what we’re trying to accomplish. So when a group of folks got together in a post-9/11 world and said, “Why are we arguing about what’s more important, privacy or security? They’re both obviously important. How do we build technologies that allow you to have more of both?” It’s not a simple web app, it’s not a trivial technical vision, and that’s what got me deeply wedded to this.

In that journey, the technology we had to build, the ambition that we continually upscaled and up-scoped, so while it started actually very focused on government, this idea that technology can have a profound effect on how the institutions we need in the world to function, function, and why that would extend beyond government to commercial institutions, I think that’s very clear in a post-financial crisis world, that’s been remarkably consistent.

You mentioned the upscale in ambition and you put it in the context of starting with government, and obviously the huge growth story I think around Palantir is largely around your enterprise offering, which we’ll get to in a little bit. But something you’ve talked about is the extent to which Palantir is the fattest of fat startups in many respects, where you feel like you have to build the entire system from top to bottom to deliver on your promise. Was that idea part of that initial vision, or is that something that expanded over the time that you’re talking about, as you figured out what needed to be done?

SS: As Ted sometimes says, he’s like, “What are you accountable to?” Are you accountable to some narrow definition of the software that you’re building? Because the way we think about it is our success is our customer’s success, and we just kept expanding what we thought our software needed to do based on what we thought was empirically true on the ground. That thing that you think is working is not working, so if that is below me in the stack, I’ve got to own that, because I need that to work.

If you’re trying to help soldiers come back from war zones, nobody says, “Well, I use the software that’s at the top of the Gartner quadrant.” The standard is much more absolute, and so as you go after it, it just got thicker and thicker from the perspective of what you’re learning, which is that so much of it is a Potemkin village of software. Not because I think the software literally doesn’t compile, obviously, it’s just that it just doesn’t do what the enterprise needs it to do, and when you’re at the coalface, you can find all these secrets, all these truths that somehow get lost in the bureaucracy of the organization, of what works and doesn’t work and why.

So was there a particular moment where there was a real shift to, “Okay, we’re in the post-9/11 era, people say you have to choose between security and privacy. We think we can deliver both, we’re going to do a startup that can build this.” And as you are building it, you’re like, “Okay, well we actually need to build this, we need to build this, we need to integrate into that.” Was there an aha moment where you have this concept — you use this phrase now at the beginning of all your financial reports, which is that you’re the operating system for enterprises. Now, obviously this is still the government era, but it’s interesting the S-1 uses that line, but it’s further down, it’s not the lead thing. Was that something that emerged later or was this that, “No, we have to be the interface for everything” idea in place from the beginning?

SS: I think the critical part of it was really realizing that we had built the original product presupposing that our customers had data integrated, that we could focus on the analytics that came subsequent to having your data integrated. I feel like that founding trauma was realizing that actually everyone claims that their data is integrated, but it is a complete mess and that actually the much more interesting and valuable part of our business was developing technologies that allowed us to productize data integration, instead of having it be like a five-year never ending consulting project, so that we could do the thing we actually started our business to do.

So the analytics was the goal, but it turned out that actually you had to build your own picks-and-shovels, your own pick-and-shovel factory to even start to do the analytics and that ended up being more valuable in the long run.

SS: Exactly.

Yeah, that makes sense as far as the transition goes.

Gotham and Foundry

So lay out to me, what is Gotham, and we’re going to get to Foundry in a moment, but I think just to be super clear, what is this product? Who buys it? What is it used for?

SS: Gotham is an operating system for defense and intelligence organizations, it allows them to integrate multimodal data. This could be satellite imagery, video, structured and unstructured text data, sensor data, fuse that together to create a common operating picture. How can I see what’s actually happening in the world right now, and use that as a — you can think about that as a single source of data that allows you to have a single pane of glass and then plan forward. So based on what’s happening now, what could we be doing to create the effects in the world that we want?

Both of us keep using the term operating system, but I got it from you — is this something where you had to go in and work with other defense contractors to build something that sat on top of them, or did you just have to figure that out on your own? How did you get to this idea of, again, going from being just an analytics piece of software to this operating system idea?

SS: It’s really one of the core things that Ted developed for this business, which is an obsession with decisions. Instead of going data forward, what data do I have? How do I bring it together? How do we actually squint at an institution or an operation and think about it as a series of interconnected decisions. How do I bring better intelligence, coordination, and compounding to those decisions and chain them together? What would I need to be able to do that? So that’s where the single pane of glass becomes really important, this is the place where I’m going to make the decision. Okay, well, how do I optimize what’s coming into that decision? And then how do I help you understand the consequences upstream and downstream of these decisions?

Then that’s single player mode, but that’s not interesting, really you’re talking about massive concurrent multiplayer decisions that are happening and you want to enable the technology to drive that coordination. It’s not because Shyam and Ben and Ted happen to all know each other, or that we’ve come up with some SOP [standard operating procedure], it’s that the data that is a consequence of my decision is changing how I’m doing things.

We often frame this in terms of most of these analytics systems think of data as exhaust, it’s like the exhaust is the transactional system and maybe you’re going to go analyze it, but actually data is fuel. If you flip this around and say, “This is going to fuel my operations,” what do you change in terms of how you think of your stack and how you think of the roles of the humans who are operating and using your software? This is where I think the metaphor of the operating system is quite important here. This is not a terminal dashboard, this is not analytics, it actually is about the decision making and then you’re going to get many chances at making this decision, you want tomorrow’s decision against the same problem to be better than the one before. And so if you’re thinking, “Look, I’m in a learning loop”, how do I wield everything at my disposal to get better at this every day?

TM: The founding trauma, if you will, to quote Shyam, was that the data was disintegrated. I think then the second trauma was the utility or lack thereof of insights. So starting with, I have to get this data integrated, I need to get to a single source of truth. The standard is, “Get data, organize data, analyze data, act on data”, and there’s that gap at the last step of, “Once I know something, I can do something about it”, but then you think about, “How is that insight deployed into a multiplayer mode, complex value chain?”

What we found time and time again is that the hypothetical value of an insight, when deployed operationally to the front lines of people actually making decisions, there’s an ocean of a gap. So we really need to be able to be accountable to what is the information intelligence that is needed at the point of decision? What is the frame on that information that we need? Why does the existing underlying data not provide that frame? Then when you can have that frame and make that decision, how do you do that more intelligently, more automated, more collaboratively, more compounding over time?

So how do you do that? Theoretically I can get the bit where, you make a decision now, you’ve entered the fact you made the decision, you can track the outcome of that and see what happened or didn’t. Is it just about closing the loop in that regard, or is there a communications aspect to this? This all sounds great, but how does it actually manifest itself in a military or national security agency?

TM: Well, fortunately for us, I think there are a ton of commonalities. Whether you are an operator in the defense context, or you’re trying to figure out how to drill an underground mine, or figure out how to build an airplane more effectively, discover new drugs, deliver vaccines. Whatever it might be, there’s actually a lot of commonalities in where the existing software stack breaks down in enabling operators to make those decisions, and so we think of that as a development of levels of sophistication of the workflow.

The first part of that workflow is, “Do I have 360 degree awareness of everything that I need to know to make my local decision at my fingertips, with timely data presented to me in a way that I can understand that?” and if I’m a blue collar worker on the factory floor at Fiat Chrysler or at Airbus, that view is going to be very different than if I’m a CEO making long-term strategic decisions. But there’s a lot of data coming from CRM, sensor data, ERPs, whatever it might be. Can I get that in the AI sense, that context window, that is exactly the context that I need in order to make that decision?

Now that’s all well and good, because instead of fighting a lot of different systems, swivel-chairing into different screens, having Excel macros that create my own local source of truth, different views in the data warehouse, whatever it might be, it’s giving me at the transaction level the thing that I need with all the context that is there.

I now want to start to become more intelligent as I do it. So I want to automate things, simple things as, “How do I prioritize which thing I need to do first?” Well, in order to do that prioritization, I need that integrated context window and then I need to be able to deploy models, and intelligence, and rules from simple algebra up to and including artificial intelligence models to help me drive the prioritization of how I’m making that decision.

So if I have an integrated view about my customer service workflow, what is the most pressing customer now when I have the entirety of product usage and everything I know about the value of that customer at that point of decision making? So as we think 360 degree awareness, now start to layer in intelligence with it. Maybe best described in the concrete example of, “If I’m running an offshore oil platform, I need a well 360, but then I also need alerting on when is the fiber optic data that is streaming data off the down-hole signaling that I have sanding risk?”

What you’re driving at, it seems to me, is making lots of this data that may have existed in all these different systems, basically making it much more realtime accessible in the moment. Is that the gulf that you had to bridge?

TM: That was the first-order gulf, was realtime data across all the different systems, presented in a context that is usable for an operator. But then I also need two things, I need to be able to integrate models into that view so it’s not just a representation of the data, I now need all of the intelligence that can be inserted by models into that, so that has to be co-equal. Then critically, also, the ability to act on the things that I have. So how can I literally create physical change in the world from that same application? I’m not looking at something and then figuring out how to implement it in SAP. I’m executing and writing and acting.

Got it. That was that huge gulf that you were talking about before that was missing. Okay, that makes sense. Well I think that’s a natural transition to Foundry. What was the history of Foundry? So those are your two main products, you have Gotham that is for government and Foundry that is for the enterprise. Help me understand, are they totally different software stacks? Are there a lot of commonalities? Is it Apollo, your deployment system, that is the commonality there? And was Foundry in the vision from the beginning, or was this a, “Look, our founding idea was to solve this government national security problem, turns out we have this capability that is useful elsewhere”? That was a whole bunch of stuff, but basically tell me the Foundry story.

SS: So Foundry, while it started in commercial, has now grown. You can think about it as: Apollo is at the very bottom of the stack, it’s the production infrastructure; Foundry is next and it’s kind of full stack; and Gotham will sit on top of that.

Got it. So Gotham is a government-specific manifestation of the Foundry. Foundry’s the operating system in some respects and Apollo’s the infrastructure?

SS: Yeah. We started Foundry in commercial, I’d say, almost doubling down on this realization that data integration doesn’t work. With enough tooling you can get it to work on day zero, it works on day one, but how do you deal with the entropy of the universe in data over years, and what accumulates there, and what tooling would you need? And so all of this: how do we treat data integration like code? How do we treat more and more of the enterprise as code? How do we productize that component that Ted was just talking about, that allows you to abstract and integrate not only your data into a semantic layer, but bind your models to that data? And then build that integrated decision making tooling on top of that so customers could build what they needed for the factory worker at a Detroit car factory, but also an offshore oil worker or a hospital operations person. So that is the origin of it.

But it started, I’d say, at the bottom of the stack and we just kept building as we realized, “I can deal with really complicated realtime data integration that gives you this Level 1 360 awareness”. Now how do I give you more tooling to bring the real time intelligence insight into it? Now how do I give you a digital twin that allows you to do dynamic simulation and counterfactuals of, “Oh, if I made this decision, how do I cascade it through to understand the consequences of it?” and then turn that around so that you can automatically programmatically generate scenarios. “Here are ten decisions I could make”, let me play through the consequences of them and now enable the human operator to select amongst them.

Then to Ted’s point, actually make the decision and have it filter back down into the underlying layers.

TM: And critically, when you make that decision with the integrated view, you’re also creating a data asset that allows you to learn on top of it, because I have this action that was taken, this is the contextual information of everything else that was happening in the world at that moment, I now have the training data asset for me to be able to actually train things given the context of the decision I actually made, not some aggregated analytic view that I construct later.

The Foundry Operating System

Yeah, that makes sense. So you mentioned, Shyam, that 2017 was the critical year for Palantir, and you said that was despite the fact that from a financial perspective it was arguably the worst year in the company’s history. Why was that? Tell me about the 2017 pivot point as far as Palantir is concerned.

SS: Well, that was the point where Foundry had really come to market and we were able to transition our customers in the commercial world, transition that revenue to run on top of Foundry.

So before it was just a bunch of custom integrations, custom deployments across all these different customers?

SS: Yeah. There was a period of experimenting with what the product was and where it sat in the stack. And I’d say the things that we’d built would subsequently become apps within Foundry. So Foundry gave us the infrastructure to really scale what we were doing, not only across the commercial world but across all of our government customers and it gave us the ability to go fast.

TM: And I think also critically, not just the ability to go fast, but also still be accountable to the same outcomes. So in 2017 we were able to consolidate on Foundry, but also deliver the 33% ramp-up in the acceleration of the [Airbus] A350 production. So how do you push out the efficient frontier of building exactly what people need to solve their exact critical problem, but on the same common foundation, where there’s commonalities such that you can use it for Airbus, or BP, or Swiss Re, or whoever it might be?

So basically this was really where Foundry became an operating system, to your point. You talk about this idea of where companies generate alpha and where they generate beta, and this bit that everyone wants to build their own custom software because that’ll give them advantage, no one else has that software, but there is an extent to which there’s a lot of stuff where everyone has to do the same thing over and over again. If you think about an operating system in the context of a computer, it covers the totality of the computer, but there are lots of computers, and so what sits across all those computers? At a very simplified level, it just seems to be a rethinking of what an operating system is.

I actually think there is an interesting analogy to Microsoft, where Microsoft’s in the productivity area, and one of the issues is that there’s a SaaS explosion, there’s all these different single solutions, but how do you actually tie them all together into some commonality? I think Microsoft has done very well focusing on that, and I think it’s under-appreciated that integration mediocrely done is better than no integration at all.

What I’m hearing, the analogy I have in my head, and correct me if it’s wrong, is Palantir is doing a similar thing, but basically for industrial companies, and at scale. You’ve talked a lot about oil companies and building airliners, this is a little bit different than stitching together a chat client and Word documents, it’s really at the level of SAP and ERP systems and all that. Sorry, I’m rambling a little bit, I just want to make sure I’m painting the right picture of what Palantir is. Is that a good way to think about it?

SS: That’s exactly right. Our ambition is to have the operating system that enables every institution in the world to make every decision that they make, and when you think about that, well you can blow that out in two dimensions.

One is they exist in a value chain. So from the hand of their supplier to the hand of their customers, there are a lot of decisions that happen — it’s not really linear, it’s a web of decisions — so that’s one dimension. The other dimension is between strategy and operation. So what is the top of the house trying to do and what is the process by which that gets translated into actions. Often that’s pretty disconnected and I think this is the ultimate manifestation of why we’re so focused on alpha. How do you deliver alpha and not just beta? Well, it’s by connecting strategy and operations. So there’s actually a steering wheel when the C-suite is trying to accomplish something that links that up and informs the decisions that start to get made in a way that provides a feedback loop.

So to really make sure this is clear though, where does a company like say SAP fit relative to a Palantir? Are you looking to replace them or is this more of a layering on top of, and I’m using them as a stand-in for all these industrial grade software companies that have been around for a very long time, and are very deeply integrated into these companies? Where is Palantir relative to that, is this a rip-out-and-replace or how does that work?

TM: So I think there are two ways that you can think about it. One is, you have all of these industrial grade core transactional systems, and then a question that we ask, and this is a question we ask when we start with our customers is, “How many of your decisions are made through those systems? How many of your resources are allocated every day through those systems? And how many of them are allocated around those systems? So where do they fall down?”

There’s the first thing which is the low hanging fruit of, “Well in theory, this is what my ERP does. In practice, this is what I do through Excel, emails, phone calls, the number of customizations that I have to do and then the customizations that are impossible to unwind”. So how do I deal with the fact that this beta, the conform-to-my-system approach, isn’t how every single business operates? How do you provide flexibility to use the things that are hardened, excellent, provide significant automation, but then fall over when they have to meet your customer where you need, or when you need intelligence in order to do it? So I think that’s the low hanging fruit.

The higher order bit is then, “Well, how do I reflect the way that I run my business?” Business in its natural state, trying to figure out how to allocate scarce resources, why do I have to battle it through the existing software architecture of thinking about what’s in my CRM? What’s in my PI Historian? What’s in my ERP? What’s in my MES? I build airplanes, I build cars, I want to be able to think of these things and make decisions that takes into context everything across that value chain.

So when I’m making a change to my production schedule, I’m taking into account what’s happening in the market at that given point in time and view my business in its natural state, where I can start to interact with it as an executive in the way that I think about my business, not fighting three weeks to get a report to tell me what’s going on in the frame of how I actually want to run the thing.

Yeah, I’m feeling very good about my Teams analogy here, because the point I’ve always made there is, where Microsoft really kicks the rear end of a lot of these SaaS companies in the market, is a lot of these companies forget they’re not the point. People aren’t out there to buy, to use a Slack example, chat software. They’re out there to actually run their business and they’re just looking for something to help them get it done. So when you say something like, “Look, I’m trying to build airplanes,” I think that really resonates.

Services and Scalability

Let’s talk about this, one of the things that’s really interesting about Palantir that I found very striking is this bit about how traditional software, you don’t just buy the software, then you have to have a systems integrator or maybe you contract with a company directly, they have a services division to actually implement and install the software. That’s a separate contract above and beyond the software contract. Is it still the case that Palantir, there is no installation contract, that’s just part of the service? If you sign up for Palantir, Palantir is going to come in and put the software in, is that the case still? And if that’s the case, what’s the payoff for you from doing it that way?

TM: Yes and no. I think there are two interesting dimensions to it. So one is that our FDEs [Forward Deployed Engineers], the FDE model, we think of as one of our greatest secrets, one of the greatest features of the company, and the primary reason we think of that is that it provides accountability to us that aligns us with our customers to say, “Is the software doing what the customer actually needs?” Not just is someone using it, and not just have we delivered the feature in a way that matches what the software industrial complex would want out of this feature and the current understanding of the stack, but is it actually working? And how do we walk a mile in our customer’s shoes?

And when you think about, and the way we think about, what our software is, we’re very proud of what it’s done so far. We think the outcomes that it’s delivered are really real, but we think we’ve only built 1% of the software that we need to build. And so how do we continue to create an accountability function, mentioning what Shyam had said, that is the accountability to the end outcome of our customers? And that is primarily the service that the FDE provides to Palantir: it is the institution living at the coalface, sitting there saying, “Did it actually work? Did it actually matter?”

Now also, we want to do that in a way that provides maximum scalability over time and so what we’ve been doing is investing in the — I think that innovation feedback will always be a part of Palantir, but there are other customers that also start to deliver things that provide that same level of accountability, not just install the software, but something that really works. So we can start to see other third parties are implementing the software independently, and where that’s really working for us is where those customers are very aligned with their end customer, and so we’re seeing that, for instance, scaling in the EPC engineering space, with someone like a Jacobs Engineering. They’re doing physical things, they’re signing up for the vegetation management so the wildfire doesn’t ignite. That’s a really good scaling partner for us for implementing the software, because they provide that same accountability function that our FDEs do when we’re doing the implementation.

SS: And a big part of this also plays into the product feedback loop, so that’s customer-facing. On the product side, I want the time-to-implement the software to be as short as possible, and the value you get subsequent to be compounding, and so by taking ownership of that, I’m not getting dis-intermediated from the signal, “What could I be building to make this go twice as fast, or to enable you to do twice as much?” And by internalizing that cost, we have a radical incentive to invest in that, whereas I think there’s an agency problem for lots of other companies where it’s, “Am I actually depriving something from my partners here?”

To what extent is there a product feedback loop in that you have to build particular integrations for a customer as you’re doing this and then you will reuse that for other customers?

SS: That’s certainly part of it. I think you could say, “Look, you can generically integrate any data”, but then as we started getting closer to very complicated industrial systems, it’s, “Okay, these are pretty particular” in how you want to model it. I’d say in part, what’s more interesting is what is the semantic model that you want, that helps you get from zero to value? How do I get to the use case that people are trying to do with this, so I’m not just shipping a data connector, but actually more of an integrated chain of, “This is the golden path that gets you to your first dollar of return inside of a couple days”?

A Middleware Platform

So what’s the life cycle then of a company using Palantir? It sounds like this ties into the operating system metaphor, where at the beginning you’re almost like building the computer, but you’re not going to get a new ERP system is what I’m hearing from you. You’re not getting a new CRM system, you have lots of pieces that are already there, and so the initial goal for Palantir is to tie those pieces together into, your phrase, a single pane of glass. But after someone has Palantir, is the expectation that everything going forward is just going to be on Palantir directly, and that’s the end of it — sure, they may maintain their SAP contract, but they’re not going to be buying any other enterprise software in the future — is that the viewpoint that you have?

SS: No, we live in a context of a brownfield reality. Not only is there a lot of stuff that’s there over time, but they’re going to want the optionality to go wherever they need to go in the future.

So then what concretely happens, let’s just say you’ve used this for a little bit, you’re a more mature customer, maybe you have tens of thousands of users on this, is that this becomes the logical place to solve the next use case or problem that arises in the business, because you have all of your data there modeled in your digital twin. You have access to all of the AI models in the enterprise, and it’s wired up into your transactional systems so that when you make a decision, you can actually activate it, and then you start to accumulate applications that are interconnected. I have a procurement app and a production app, and when I get a discount on raw material, I can understand how this is going to affect my production plan pretty seamlessly.

Now the last mile of this though is, of course, you don’t want this inside of a walled garden. The Ontology SDK allows you to bring all of this data, all of the digital twin simulation dynamics, to the applications you have in the enterprise, whether those are custom-built apps or third party apps that you simply want the data to flow to. That gives you kind of an enterprise buffer, which is a little bit closer to the operating system metaphor, where it’s, “Look, I can build my own application, I can pull data in and out of this.”

Our applications will handle a lot of the low-level details of which transactional system the data is coming from. How do you actually write back to these systems? How do I coordinate and integrate these things? And that liberates the application developers to know their business, which they already do, and then build applications that change their business rather than fighting with whatever technical decisions have been made in the past.

Well, because you’re being a middleware layer almost, because they’re writing to Palantir. So the question is, do you see a development of not just internal line-of-business apps where, going forward, once Palantir is in place, they write to Palantir, but a market of third party enterprise applications that you foresee in the future working with Palantir out of the box, and assuming that Palantir is in place?

SS: Yeah, I’d say we’re there already. Maybe Ted, you’d like to talk about some of the ecosystems enterprises have built.

TM: Yeah, absolutely. So take SkyWise as an example, which is the Airbus digital program that uses Foundry underneath it as its operating system. It’s deployed across 150 different airlines, where they provide different applications: one to facilitate secure exchange of information, so that the people who fly the aircraft can get it back to the people who design and fix the aircraft, but also so that the people who design and fix the aircraft can deploy predictive maintenance applications to 150 airlines, and be able to do that at scale on top of that same common ontology that abstracts away the differences in all the underlying systems across each one of those airlines.

Got it. So all those individual airlines, do they have a contract with Palantir, or is this all via Airbus?

TM: This is all via SkyWise. So SkyWise is, you could think about it as this suite of applications that Airbus built on top of Foundry, and they sell that to their airline customers.

Interesting. Well, I’m curious, what does the go-to-market for this look like? This gets back to the fat startup concept, which Palantir embodies in spades. You start out and you have this very clear national security issue around terrorism that you are out to address, and the U.S. was in the mood to open up the pocketbooks, so that was a great place to get started. But now you’re talking about these large entities, whether it be an Airbus or an oil company or another manufacturer, and it sounds like this is a pretty big ask. You’re asking them to basically transform their entire company, to your point, because you’re not going to get the benefit of Palantir by just — this isn’t one team in the company saying, “Oh, this is very useful and makes us more productive.” This is saying, “You’re not going to get the benefits unless everything is on there”. What’s the pitch? How do you go in? How do you get companies to commit to this?

TM: Yeah, so that’s not exactly true, but the way I would describe it at a high level is that the implementation essentially indexes to the ambition of the customer. As Shyam mentioned, by internalizing a lot of the cost, accelerating the speed-to-value, and productizing that data integration layer, we’ve been able to dramatically lower the floor while also raising the ceiling of what we can do. And so what we’ve done over the last two years as an example —

You’ve just accumulated so much experience of installing these, you feel you can walk into any company and get it done pretty quickly.

TM: So median time-to-production, the median time of our POCs (Proofs of Concept) — and at the end of these you don’t have a POC, you have a production application that’s used by users — is six to eight weeks, and so that allows us to start with an individual problem. It does oftentimes require that our customer says, “I recognize that I have a problem that my existing software stack does not solve”; without that we’re challenged. But that problem can be very specific and something that we can solve on the order of weeks.

Do you only solve problems? If your problem solving involves integrating lots of different software, are you effectively installing an operating system just to solve one problem? Maybe it’s only manifesting in one way, or can you go in and say, “Look, we just have a CRM problem,” “We just have…,” whatever the problem might be? What does this proof of concept bit look like? Because how do you do what you do without putting in the whole thing?

TM: Oftentimes what these things are is they start with, “What version of the problem that Palantir can solve do I realize that I have?” And that often manifests itself in a very specific way. I have a huge amount of challenge with volatility in my supply chain, so I’m either blowing OTIF [On Time In Full] or I have exploding inventory, and with high interest rates, I can’t manage that — that’s the problem I know I have. Now, in order to solve that, I have to integrate probably several different systems underneath it. But I can approach that in a software-defined way where I can say, “I can start to address your excess inventory problem with an application that is in production on the order of weeks.”

Then oftentimes what they do is they’ll see that and say, “Okay, well now that I have that, I’d also like to integrate what my pricing strategy is, how I’m doing my contract management as a function of creating these problems for myself, how I do my production planning, how I deal with managing supplier shortages.” But that classic land-and-expand strategy of horizontal data integration is very focused, very accountable: what is the ROI on the end user production application? That’s how most of these things start.

Got it. So if you can solve that first problem on the order of weeks, is it fair to say you can often solve a second or third problem almost on the order of days, because you’ve already got all the data there?

TM: Exactly. And this is where it gets fun and where really we end up being bound by the ambition of our customers, of seeing that and saying, “Okay, once I have this, what’s the next thing that I can do? What’s the next thing that I can do? What’s the next thing that I can do?”

It’s almost like an analogy of Palantir, the company, where because you start out with this grand ambition, and once you realize you’re solving all the problems, now you can go into customers and basically install your grand ambition by default, because that’s the way you solve a singular problem, and then as they appreciate the opportunity, it goes from there. I’m sounding like a bit of a fanboy, but obviously, Shyam, you sort of said this in some of your investor talks, about this idea that you’re aggregating problem solving in a certain respect, because you’ve already collected all the data, it’s already all in one place, and so the marginal cost of solving a new problem decreases dramatically.

SS: That’s exactly right. The key case that really shows this is crisis. We’re very proud of how we can help our customers respond to crisis, because the cost of solving the problem is so low: you have all the agility, you’ve already got the data that you can wield, and then you can drive it to changing your operations, changing how you make decisions.

TM: How do I know what’s happening with COVID? Well, the first thing is, I don’t have any integrated national data asset that operates across 6,000 hospitals. After that, now I need to know what’s going on with PPE, now I need to figure out how to allocate PPE. Now it’s not a PPE problem, now it’s a vaccine problem. How do I continually iterate on top of what is the specific problem of the current specific manifestation? And oftentimes in these crises, the problem rolls, what the problem is today is different than the problem tomorrow.

But if you have the data all integrated, then it becomes much easier to respond. It makes total sense. The inverse of go-to-market is churn, and once a company has Palantir in place, does anyone churn? Is it just at some point, “You solved my problem, that cost a whole lot of money. Yeah, you did a good job, but I’m not sure we can afford this going forward”? Or is this the world’s best lock-in?

TM: We really don’t have any interest in working with customers that don’t want to work with us. Like I said, we’re really at the beginning, we need customers that are forcing us to accelerate in the innovation of what our product is going to be, that only works when we’re very aligned with our customers. And so if we’re doing something that creates misalignment—

And to be clear, lock-in is not a bad thing per se. I guess the word lock-in is bad, but you could reframe it as an available set of APIs that is fully tied into my existing data, and is an obvious next step to build something on. Operating systems are the best lock-in on earth, because they are so good for everyone involved. They dramatically expand the market from every perspective — the hardware perspective, the developer perspective, the user perspective — in terms of what’s available. So it’s not a criticism, it’s praise; it’s just that we naturally think, “Oh, lock-in, that sounds bad.”

SS: Well, I’d say it’s sticky.

Fair enough, sticky.

SS: It’s a sticky proposition, and of course customers are smart about this, they recognize it — sometimes the Europeans call it the reversibility of the decision, so everything’s open. You have to walk that journey with them to understand, it’s, “Look, if I wanted to exit, could I?” And so we’re very committed to that, and to Ted’s point, we hold ourselves to that standard, but we want to be the most logical, cheapest place to solve the next problem. How are we going to continuously innovate on doing that?

Solving Hard Problems

Why do you think other companies haven’t taken this approach of being all-consuming in some respects, rather than being a point solution?

TM: It’s so hard, it’s so hard. And hard on every dimension. You have to sign up for outcomes, you have to get software engineers who have the alternative of a comfortable job in Silicon Valley to deploy downrange, or “downrange” into a Fortune 500 institution defined by giant bureaucracies and data rights access and all these different things. So one, you’ve got to motivate really talented people to do that, and that’s really hard.

Then you have to figure out how you can build product that actually solves the problems — that’s a substantively hard problem. Then you have to figure out how you do that while creating technology that is horizontal and load-bearing for the next problem you haven’t solved yet, and that’s a lot of internal friction. How do I get the FDE [Forward Deployed Engineer] to work with the dev who has a different accountability function? How do I put the dev’s work on my critical path to solve the critical need of the customer, when that dev is accountable for solving things for 350 different customers? This is very hard, and we don’t always get it right internally. I would say that I think we’re the best in the world at doing this, and we suck at it, because it’s so hard: we’re always miscalibrated between the general and the specific, and it’s very unconventional, so you get critiqued all the time in ways where the critique has nothing to do with what we think we’re doing. We don’t even care about the critique, but it requires a lot of discipline, a lot of commitment to fully align yourself with your customer.

Do you think you’re getting to the fun part now?

TM: We’re optimistic and I think maybe it’s more fun now than it was, I’ll put it that way. It’s still hard all the time but one of the things that I think is getting really fun right now, is we’re seeing a shift in the market with the advent or maybe the threshold crossing of the effectiveness of large language models.

We’ll do that in a second, but one more question on the motivation bit. How important was it, in keeping these engineers to build this massive system — if you just think about it for two minutes, you can imagine how much work you had to do and how much you had to build — to have that core original motivating factor of, we’re doing this for national security?

[CEO] Alex Karp issued that letter with the S-1, just laying it out there. “Look, we believe in Western governance, we’re aligning ourselves with democracies, we’re not going to sell to China, we don’t believe in the Silicon Valley vision of a bunch of executives sitting in their rooms telling people what to do”. Was that just a great letter for the IPO, or was that something that was an ongoing motivation? And did that tie into your ability to persevere for, it’s been almost twenty years, and build this system?

TM: We should probably both give our personal answer to that, but I was like, “I don’t think there’s any way you can do it without that.”

SS: It’s crucial.

TM: It’s fundamental, and it’s fundamental also because it permeates. I sit on the commercial side of the business, but the same mission orientation applies everywhere. When you work at a mission-oriented place, then you take on the mission of your customer. If that’s someone building a mine in Mongolia or figuring out how to price insurance in Switzerland, you still have to sign up for that mission, and the belief behind that mission is that the success of the West is the strength of the economy — the ability to employ people, the ability to continue to actually drive innovation into the core of the economy, not just in Silicon Valley, et cetera. Without that, there’s no way you sign up for this pain.

SS: You look at the longevity of people at Palantir — strangely, the people who’ve been here tend to stay here, and I think a big part of it is, “Look, of all the things in the world that I have some small part in making better and being able to touch, and the diversity of things that I get to do” — it’s why you joined the company. But then it’s how you continue to be motivated through the incredible pain: you’re dealing with the bureaucracy of your customers, you’re dealing with the imperfections of the world, and you don’t get to just put down your toys and run away, like most tech companies want to do. You’re embracing that complexity, and you are defining your reward for embracing it based on the change you’re able to manifest, and so it drives alignment through the whole company and the talent in the whole company.

Palantir AI

Okay, we are now to the AI part. I’ve not purposely put it off, as I do think it’s interesting to lay the broader foundation here. Shyam, you just gave a keynote last week — a couple weeks ago by the time this publishes — talking about your new AI platform. Walk me through it, give me the pitch. You’re obviously already plugged into all this data, so why now? Why today? And I mean that in the context of, why not previously? What is it about 2023 that makes this the time for it to come out?

SS: AIP is a core set of foundational technologies that really enable enterprises to bring LLMs to their data on their private networks, to enable their software to connect the LLMs to the tools of the enterprise — to their AI models, to their geophysical simulators, to the technology they’ve already invested in — and to do so in a way that’s controlled and governed so that they can ultimately trust it and comply with regulations.

There’s a lot of complexity — you can almost think about the LLM as being the first mile, and how you wield this to make decisions as being all of the other work to be done there. And I think in many ways, all of the work that we’d done with Foundry and Gotham was just waiting for the LLM. The mental model I have for this is that the LLM doesn’t replace your software and doesn’t replace the human. It replaces what the human was doing when they were using your software.

So if you step back to a pre-LLM world, the way that Foundry would be an operating system for the enterprise, and now you bring AIP into that, it’s going to supercharge the experiences you have around the decisions you can make. So maybe the most pithy way of saying this is that we think there’s an opportunity to enable every decision the enterprise is making with AI, and that AIP is going to bring that experience to bear.

There’s a couple pieces that are generically required, that we’ve been investing in, and that I think make us ideally suited to doing this. The first is that the more obsessed you are with type safety in your operating system, the more leverage you actually get out of the LLM. We are obsessive about this — actually, the whole stack is obsessively type-safe — but if we just talk about the layer that the human is interacting with, this enables them to go much further, much faster. One way of thinking about this is that you’re trying to essentially get the entire state machine that is your enterprise, along with the question that you’re trying to solve, somehow into the context, and I don’t think you can do that without a foundational set of technologies that have modeled the semantic layer, that have given you the digital twin, and that then give you the tools to iterate back and forth — whether that’s retrieval augmented generation or actual computational tools that allow you to get the compute to get an answer.

Are you using a third party API for the model and then layering on the enterprise’s data on top of that, or have you developed your own models?

SS: We’re bringing the models to our customers here, so that you can self-host the models, pick your own open source models. We think there’s a huge opportunity in continuing to fine-tune these models as you go along. Essentially our view of the world is that the rate of progress with the models is incredible, that you’re probably going to have a menagerie of models that are actually in your enterprise, and that you’re going to be able to very cheaply fine-tune them to your specific needs — the biggest, baddest model is not inherently the one that’s likely to win.

We think there’s this Goldilocks balance between the power of the model and the rate at which you can iterate on that model, and so if you’re living in this universe, I think you’re going to actually be able to develop models that help you with the specific applications that you’re trying to apply them towards. What we’re trying to enable is really this almost prompt-free experience. My mental model for this is that prompts are tools for developers — really, how do I enable magical interactions in the operating system itself for users, and enable the developers who are building these applications to stand on top of it and create these LLM-powered experiences?

So as I understand it, the way an LLM will work today is you have the core model, then you can fine-tune it with your own data, or for a particular use case or type of data — to your bit, if you’re doing coding for example. There’s the foundational piece, but you don’t just go straight from foundational piece to application, there’s some intervening step there. You talked about, “You can sub in any data, you can use an open source model, you can do XYZ” — does that mean that there’s a distinct — I’m a little unclear about how this works. So if you are an enterprise, do you say, “I want to use the OpenAI model,” or, “I want to use Google’s model,” and then there’s an intervening step where you fine-tune it with your data? Or is it, no, this is a completely self-contained thing, you have these open source options? Sorry, I’m just a little confused on how it actually manifests itself.

SS: Yeah, it’s a good question just to slow down, unpack it. AIP ships with predefined integrations out of the box, hitting these large foundational models and those are wired up to work with the generic experiences in the platform. Now, on top of that, you may want to build very specific applications.

Is there still going to be a step there, this fine-tuning step as part of the AIP delivery?

SS: There’s no fine-tuning presupposed out of the box. Fine-tuning only comes in when you want to capture more and more specificity — to capture more value out of your unique use cases that may not exist on the public Internet, that may not exist in the training data set for these foundational models. But it doesn’t presuppose that, and I think a big part of it is, how do I actually give those LLMs the tools?

So the first part is, I have my application, I’m using it, I’m trying to do stuff on my private data, which is clearly not in there. Then you have the weaknesses of LLMs: I’m asking it to do things that are likely to lead to hallucination, or I’m asking it to do things where I can’t trust it perfectly. How do I combine the strength of the LLM with the fact that I have a whole bunch of tools in my enterprise? If I’m asking the LLM to answer a question, that’s very different than asking it to generate the query I need, where I can use traditional IR [Information Retrieval] techniques, and the security around them, to actually answer the question properly. So how do I help you compose these, so that you get the experience of the LLM, but you get the leverage of your existing enterprise tools?
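Sankar’s distinction here (ask the LLM to generate the query rather than the answer) can be sketched roughly as follows. This is a hypothetical illustration: `call_llm`, the schema shape, and the JSON contract are all invented for the sketch, not Palantir’s actual API.

```python
# Sketch: instead of asking the LLM for an answer (which invites hallucination),
# ask it to generate a structured query, then execute that query against the
# governed data layer, where traditional IR/database techniques and access
# controls apply. All names here are illustrative stand-ins.

import json


def call_llm(prompt: str) -> str:
    # Stub standing in for any hosted or self-hosted model.
    return '{"sql": "SELECT id FROM orders WHERE status = \'delayed\'"}'


def answer_via_query(question: str, schema: dict, run_query) -> list:
    """Have the LLM translate a question into a query; we run the query."""
    prompt = (
        "Given this table schema, write a SQL query that answers the question. "
        'Return only JSON: {"sql": "..."}.\n'
        f"Schema: {json.dumps(schema)}\nQuestion: {question}"
    )
    sql = json.loads(call_llm(prompt))["sql"]  # the LLM produces the query...
    return run_query(sql)                      # ...the governed store answers it
```

The key design point is that the model never fabricates data; it only proposes a query, and the actual answer comes from the system of record.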

And what’s the exact magic to make that work? In your keynote, it almost felt like this was the world’s best SQL query building tool, which I think is actually quite compelling, that makes a lot of sense, but how does that actually manifest itself in day-to-day work?

SS: The keynote kind of breaks down into a few different sections. The first is, how do I integrate data into my ontology? You could call that, if you squint at it, the world’s best SQL building tool, where I can just say, here’s my target, here’s my source, can you connect the dots on this? But now that I have the ontology, how can I ask questions of it? The example I showed there is, I got an email saying that there’s a disruption at one of my distribution centers. Well, how do I just paste that email in and ask it to help me visualize the impacted customer orders, and then work through a series of steps to figure out what I can do about reallocating inventory given that I have this disruption?

So is a way to put it that the large language model — you’ve built this operating system, you have a way to pull in all this data to make decisions, and your hope is that basically the LLM will expose all these latent capabilities that are already there, in a way that people don’t need to know how to operate the operating system, because it’s there — to use Microsoft’s term, it’s your Copilot?

TM: Maybe I’d bring it also back to — previously in the conversation we had these two foundational traumas that I think are really relevant in the AIP context, but at a whole new order of magnitude of ambition. So the first one is, “How well integrated is my data?” In the LLM world that becomes, “How can I manage a very small context window?” If you think of every single decision that every human is making across every enterprise, there is a flow of data coming through the enterprise that provides that context window. Oftentimes they are fighting their ERP, their CRM, their Access database, their Excel macros to get the context that the human needs in order to make the decision. So if I can provide you an automated context window that is relevant to your decision as an operator, that’s the first order thing that I need.

Making sure you always have all the data that you need to make a decision at the moment that you need to make a decision.

TM: That you need — that your context window is accurate — because, as I’ll get to, now it’s an agent doing those things instead of a user. So it has to be automated, it has to be programmatic; I have to be able to constantly be providing and flowing the right context over to the user at any given point in time.
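The “automated context window” Mabrey describes might be loosely sketched as a function that selects only the decision-relevant slice of the modeled enterprise for the prompt. Every name here is invented for illustration, and real relevance ranking would be far more sophisticated than this keyword overlap.

```python
# Sketch: programmatically assemble a context window from ontology objects,
# ranked by naive relevance to the decision at hand, under a size budget.
# A real system would use the semantic model, permissions, and proper
# retrieval; this only illustrates the "automated context window" shape.

def build_context(ontology: dict, decision: str, budget_chars: int = 2000) -> str:
    # Rank ontology objects by crude keyword overlap with the decision text.
    scored = sorted(
        ontology.items(),
        key=lambda kv: -sum(word in kv[1] for word in decision.lower().split()),
    )
    context, used = [], 0
    for name, summary in scored:
        if used + len(summary) > budget_chars:
            break  # respect the model's limited context window
        context.append(f"{name}: {summary}")
        used += len(summary)
    return "\n".join(context)
```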

But then I think the second trauma is just as important. You see a lot of things that are like, “Let me summarize unstructured information, let me do semantic search” — the low hanging fruit of how people are wading into generative AI — but that really isn’t particularly interesting. We know this in an enterprise context: I have to be able to do something, and it’s not an insight, it’s an action. So how do I then provide the LLM the right context window, but also the tools to be able to say, “I want to call an order reallocation model”? That model is a tool that an LLM doesn’t have, it is one of the queries that it needs, but then it also needs to know, “Under what conditions can I act on reallocating this?”

Now I need to have the infrastructure to say, “I want to increase the automation, I want to be able to very rigorously define what the LLM can access from a data perspective, what actions it can take under what circumstances, and where a human needs to be managed in the loop.” I now have the ability to essentially move to that operating system, but where a significant amount of that operating is done by autonomous AI agents that are focused on very specific tasks, that can take that context and start to iterate across it. At the end state, what you’re going to expect is that you’re going to have hundreds of different agents, potentially, that are integrating and accomplishing components of this. So how do I have the infrastructure that manages all of the tooling, the security, the actions, et cetera, to get them to do something that is much more interesting, as Karp would say, than writing poetry — to actually do something?
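A toy sketch of that governance idea — an agent restricted to an explicit tool list, with actions above a threshold escalated to a human in the loop — might look like the following. The `Tool` and `Agent` classes and the `autonomous_limit` knob are invented here for illustration, not a real Palantir interface.

```python
# Sketch: an agent may only call tools it was explicitly granted, and any
# action exceeding a tool's autonomous limit is routed to a human handler
# instead of being executed by the agent on its own.

from dataclasses import dataclass
from typing import Callable


@dataclass
class Tool:
    name: str
    run: Callable[..., object]
    autonomous_limit: float  # e.g. max order value the agent may act on alone


class Agent:
    def __init__(self, tools: dict, escalate: Callable):
        self.tools = tools        # the agent can only see and call these
        self.escalate = escalate  # human-in-the-loop handler

    def act(self, tool_name: str, amount: float, **kwargs):
        if tool_name not in self.tools:
            raise PermissionError(f"agent has no access to {tool_name!r}")
        tool = self.tools[tool_name]
        if amount > tool.autonomous_limit:
            # Above the threshold: a human decides, not the agent.
            return self.escalate(tool_name, {"amount": amount, **kwargs})
        return tool.run(amount=amount, **kwargs)
```

The point of the sketch is that “what actions it can take under what circumstances” becomes explicit configuration rather than a property of the model itself.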

I think this ties into something you said in your keynote, Shyam. Maybe the world’s best SQL query builder was not the right way to put it. It’s the world’s greatest — I think I’m repeating what you said actually, because it’s clicked — greatest prompt engineer to some extent, and that’s how you can use a generic model: that context window, basically you could pass that in with a single query. So it’s not just what the user types, but every aspect of the user’s context.

And you said this, Shyam — WYSIWYG reimagined for AIP — or you said, “Every UI is going to change, this is the integration point.” And it sounds like what this is, if I can analogize it to the operating system, is the shift from the command-line to the GUI, where all the capabilities you’ve built are all there, but there’s still a steep learning curve for the user to even know what they’re able to do. Yes, you’ve got all the data in one place. Yes, they can do all these operations on it, you’ve done that hard work, but you have to be an advanced user to know how to use a command-line. And your bit is, “Look, all those capabilities and all those decision points can be passed into the prompt”, basically, so that it’s already constrained to what’s possible. What is it, what’s your WYSIWYG? “What you say is what you get”.

SS: “What you say is what you get”, exactly. I think that’s a very good analogy — it’s the same revolution from command-line to GUI, and LLMs are going to power it this time. And I do think there’s a part of this where the initial experimentation that has mostly been happening —

Right, just to jump in with one thing: a bit about the command-line to GUI shift is that you’re not exposed to the capabilities of the operating system and what it can do with the hardware that you’re on. In this bit it’s, “How do you expose and show someone all of the data they have access to?” — that’s the hard problem here, and that’s what AIs are uniquely able to expose. You can’t keep that all in your head, just like you can’t keep all the functionality of the computer in your head — at least most of us can’t, anyway.

SS: That’s right. I think it itself acts as a funneling mechanism to expose the data you need, the decisions that are possible and relevant in that moment, against the operating context that you actually have. And if you continue going down that path, I think it also changes how developers build software, which is the same way that the GUI changed what developers were actually building, and how they thought about what they were building. If you thought about a prompt-free user interface, you may have buttons that behind them are actually calling the LLM. When you’re using Copilot to write code, you’re not asking it to solve a function for you; based on your intent as a user, in the context of the code that you’re writing, it suggests what comes next. So really honing in on those Copilot experiences is what’s possible here, and I think what redefines the user interface.

I think this makes sense of my initial question, which is why 2023. Something you guys have both talked about is, “Look, it’s hard to understand Palantir because we’re building for something that’s five to ten years out in the future and you have to trust us on that”. And then it’s, “Oh well, here’s Palantir coming along six months after ChatGPT with an LLM product”. But your point here is that this isn’t an overnight product, because you’ve basically spent years, to your point, making very type-safe data — the best labeled data sets in the world, for all intents and purposes, custom to each enterprise. Now that LLMs have shown up, you can leverage them in a way no one else can.

SS: Exactly. Or more importantly, our customers can leverage them. When we were working with an insurance company, we were able to build an agent to subrogate insurance claims in two days. But I think a fairer version of it is, it was four years of their ontology, their set-up, running their operations on this thing plus two days.

TM: And I think to come back to it — when we look at the implementations of our software generally, the question is: does the ability to integrate LLMs into that provide an opportunity to fundamentally alter the outcomes that our customers have with that software? And if it does, then it’s something that we need to focus on, we need to run at very quickly. Where we’ve gotten confidence right now is that this will turbocharge the value of our software to our customers and the early adopters. Shyam mentioned the insurance example; you saw in the keynote what J.D. Power is doing as they think about redefining the experience of how their customers interact with their data, what Jacobs is doing when you think about how you redefine entire sewer or wastewater management networks. It’s very exciting, mostly because of, gosh, our customers are going to be really successful with this.

Well this is why I put the disclaimer at the beginning where, I’m probably insufficiently grilling you in part because I think the picture you’re painting is very compelling. This bit about actually working in the real world, actually solving real problems, building real structure around this and having this operating system approach and now you’re like, the UI has shown up, the AI-as-UI bit makes a lot of sense and it’s very compelling.

TM: And I would also say the other thing we’re excited about is that the demand for our software is showing up. So instead of something that is, “Now I have incremental efficiency in implementing a nineties on-prem data stack using cloud compute,” in terms of what I expect from data, now we have CEOs saying, “Shoot, this tech got scary. I need this for what I think will be the survival of my institution over the next five years.” That’s a market that’s very compelling for us, because we’re built to sign up for that accountability.

Is there a bit where you are worried about your addressable market, in that people are just going to be grabbing for LLMs left and right?

SS: It’s the opposite. So the alpha side of this is, “Wow, this obsession we’ve had on type safety has met its moment, and that’s unique to us.” The beta on this is, “Finally everyone expects their software to actually work fast, and that’s expanding our market.”

Interesting. So do you think there’s going to be a lot of companies that take an off-the-shelf solution, do fine-tuning, whatever it might be, but that’s actually the completely wrong direction? Because the hallucination problems are not going to be surmountable, because they haven’t properly defined their data. Whereas you guys, coming from the opposite direction, can say: use whatever model you want, but because we’ve defined what we have and what we don’t, the hallucination bit is a feature, not a bug.

TM: It is, but I also think it’s more than that, because it’s not just can it be accurate, but can it do something? And I think this is where our opinion is that the chat interface is such a limiting interface for this technology, because it makes everyone say, “Can I get an accurate answer to a chat?” Who cares? There’s so much more that you can do with it if you think of it as, “How can I operate with these things?” And so yes, accuracy is presupposed as a requirement to even get to a point where you could contemplate doing that, but then how do I get to a point where I’m doing something?

Because it’s generating stuff without you instigating the generation, it’s just doing it on its own and that requires a much higher degree of trust, because you don’t even know what went into the prompt.

SS: And the tooling to think about it, it’s like the old version of this would be a cron job. If we go a few paradigms out, it’s, “Okay, well how am I building the agent? How does the agent know when to start? How have I modeled the enterprise as a state machine? What part of that state machine do I trust them on, under what circumstances? What are the assertions I’m building in so that I know whether it’s hitting its guardrails or not?”

Then the output of the agent is going to be, realistically, a set of scenarios for a human to evaluate. That’s where it also meets the reality of change management within any institution. We’re going to develop trust experientially with it. This can’t come out of the lab, so how am I giving co-pilots or agents to these humans and creating these human-agent teams? And I think the speed with which enterprises can leverage that to achieve real transformation is exciting.
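Sankar’s framing of the enterprise as a state machine with assertions and guardrails can be made concrete. The sketch below is purely illustrative and assumes a hypothetical claim-handling workflow (the state names, the 10,000 threshold, and the function are invented for the example, not Palantir’s actual tooling): the agent proposes a transition, and it only executes if the transition is one the agent is trusted to make and every guardrail assertion passes; otherwise the claim is routed to a human.

```python
# Hypothetical claim-handling states and the transitions an agent may take.
ALLOWED = {
    "received": {"triaged"},
    "triaged": {"approved", "escalated"},
    "escalated": set(),  # escalated claims always require a human
}

# Guardrail assertions checked before any transition is applied.
GUARDRAILS = [
    # An agent may not approve large claims on its own (threshold is invented).
    lambda claim, nxt: not (nxt == "approved" and claim["amount"] > 10_000),
]

def apply_agent_action(claim: dict, proposed: str) -> dict:
    """Apply an agent-proposed transition only if it is trusted and safe."""
    current = claim["state"]
    if proposed not in ALLOWED.get(current, set()):
        return {**claim, "needs_human": True, "reason": "untrusted transition"}
    for check in GUARDRAILS:
        if not check(claim, proposed):
            return {**claim, "needs_human": True, "reason": "guardrail hit"}
    return {**claim, "state": proposed, "needs_human": False}
```

For example, an agent proposing to approve a $50,000 claim would trip the guardrail and produce a scenario for a human to evaluate, which matches the point above: the agent’s output becomes a reviewable proposal, not an unilateral action.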

Very good. Well, we have gone long and gotten, I think, very much into the weeds, but this was a very interesting overview. The overall picture is really compelling, and I appreciate you guys coming on and talking about it.

TM: Yeah, thanks for having us.

This Daily Update Interview is also available as a podcast. To receive it in your podcast player, visit Stratechery.

The Daily Update is intended for a single recipient, but occasional forwarding is totally fine! If you would like to order multiple subscriptions for your team with a group discount (minimum 5), please contact me directly.

Thanks for being a supporter, and have a great day!