Elon Musk vs. OpenAI, OpenAI’s Response, OpenAI’s Foundational Problem

Good morning,

Stratechery has a YouTube channel! Free Articles (the ones that are available on Stratechery’s homepage) are being released as video essays — you can check them out here. These do take a week or two to put together, and there aren’t any paid Updates, so they aren’t a replacement for your subscription, but I would still appreciate you subscribing on YouTube as well. You may recognize the style of these videos: I’m partnering with Asianometry, itself a fantastic resource for video essays about tech.

On yesterday’s Sharp Tech we discussed Monday’s Article, Aggregator’s AI Risk, and briefly touched on Claude 3 and Perplexity.

On to the update:

Elon Musk vs. OpenAI

From Bloomberg:

Elon Musk sued OpenAI and its Chief Executive Officer Sam Altman, alleging they violated the artificial intelligence startup’s founding mission by putting profit ahead of benefiting humanity. The 52-year-old billionaire, who was a co-founder of OpenAI but is no longer involved, said in a lawsuit filed late Thursday in San Francisco that the company’s close relationship with Microsoft Corp. has undermined its original mission of creating open-source technology that wouldn’t be subject to corporate priorities.

Musk, who is also CEO of Tesla Inc., has been among the most outspoken about the dangers of AI and artificial general intelligence, or AGI. The release of OpenAI’s ChatGPT more than a year ago popularized advances in AI technology and raised concerns about the risks surrounding the race to develop AGI, where computers are as smart as an average human. Musk also owns the social network X and is raising money for an AI venture called xAI that features its own competing chatbot Grok.

“To this day, OpenAI Inc.’s website continues to profess that its charter is to ensure that AGI ‘benefits all of humanity,’” the lawsuit said. “In reality, however, OpenAI Inc. has been transformed into a closed-source de facto subsidiary of the largest technology company in the world: Microsoft.”

Musk’s case, which was filed in California (not Delaware, where OpenAI’s various entities are incorporated), is here. There are three primary claims:

  • Breach of Contract:

    From OpenAI, Inc.’s founding in 2015 through September 2020, Plaintiff contributed tens of millions of dollars, provided integral advice on research directions, and played a key role in recruiting world-class talent to OpenAI, Inc. in exchange and as consideration for the Founding Agreement, namely, that: OpenAI, Inc. (a) would be a non-profit developing AGI for the benefit of humanity, not for a for-profit company seeking to maximize shareholder profits; and (b) would be open-source, balancing only countervailing safety considerations, and would not keep its technology closed and secret for proprietary commercial reasons. This Founding Agreement is memorialized in, among other places, OpenAI, Inc.’s founding Articles of Incorporation and in numerous written communications between Plaintiff and Defendants over a multi-year period.

  • Promissory Estoppel (i.e., going back on a promise that led to material action, even if that promise was not formalized in a contract):

    In order to induce Plaintiff to make millions of dollars in contributions to OpenAI, Inc. over a period of years, and to induce him to provide substantial time and other resources to get OpenAI, Inc. off the ground as alleged herein, Defendants repeatedly promised Plaintiff, including in writing, that OpenAI (a) would be a non-profit developing AGI for the benefit of humanity, not for a for-profit company seeking to maximize shareholder profits; and (b) would be open-source, balancing only countervailing safety considerations, and would not keep its technology closed and secret for proprietary commercial reasons. In doing so, Defendants reasonably expected that Plaintiff would (as he did) rely on their promises and provide funding, time and other resources to OpenAI, Inc.

  • Breach of Fiduciary Duty:

    Under California law, Defendants owe fiduciary duties to Plaintiff, including a duty to use Plaintiff’s contributions for the purposes for which they were made. Defendants have repeatedly breached their fiduciary duties to Plaintiff, including by:

    • Using monies received from Plaintiff, and by using intellectual property and derivative works funded by those monies, for “for-profit” purposes that directly contravene both the letter and the express intent of the parties’ agreement, thereby breaching Defendants’ contractual promises to Plaintiff, and also breaching Defendants’ promises to the express intended third-party beneficiaries of the parties’ agreement, i.e., the public…

    • Failing to disclose to the public, among other things, details on GPT-4’s architecture, hardware, training method, and training computation, and further by erecting a “paywall” between the public and GPT-4, requiring per-token payment for usage, in order to advance Defendants and Microsoft’s own private commercial interests, despite agreeing that OpenAI’s technology would be open-source, balancing only countervailing safety considerations.

    • Permitting Microsoft, a publicly traded for-profit corporation, to occupy a seat on OpenAI, Inc.’s Board of Directors and exert undue influence and control over OpenAI, Inc.’s non-profit activities including, for example, the determination of whether and to what extent to make OpenAI’s technology freely available to the public.

There is, right off the bat, one obvious problem with Musk’s complaint: there is no contract! Musk’s lawsuit includes three exhibits: OpenAI’s Certificate of Incorporation, which does say that “the corporation will seek to open source technology for the public benefit” but adds the caveat “when applicable”; an email in which Musk agreed with Altman’s proposal for OpenAI (which says “We’d have an ongoing conversation about what work should be open-sourced and what shouldn’t”); and OpenAI’s initial blog post.

OpenAI’s Response

OpenAI yesterday published a blog post addressing the other claims (beyond the lack of a contract). The relevant points include:

  • Email evidence that Musk was at the forefront of making the point that OpenAI needed to raise far more money than would be possible as a non-profit.
  • Email evidence that Musk was not opposed to a for-profit path, but that he wanted that path to be as a part of Tesla, or if a standalone company, one in which he had majority control.
  • Email evidence that Musk recognized that some of OpenAI’s work would not be open-sourced.

The Wall Street Journal had an article with many of the same details (I have a strong suspicion of where the details were leaked from!), and also noted that Musk is building a potential OpenAI competitor with xAI and has tried to hire OpenAI staff.

The part that isn’t really addressed in either piece is Microsoft: Musk’s complaint is wrong about Microsoft having a seat on the board — Microsoft has an observer position — but one can certainly understand how Microsoft, an unquestionably commercially oriented company, having exclusive access to the non-open-sourced GPT-4 would, from Musk’s perspective, be contrary to OpenAI’s original mission.

At the same time, you don’t get investment for free, and to the extent that Musk understood that OpenAI needed to take investment, what did he expect? If OpenAI were a part of Tesla, its research would obviously be used for Tesla’s benefit; perhaps a standalone company would have been cleaner, but the extent to which Musk agreed that OpenAI needed a for-profit entity to fundraise is the extent to which he implicitly understood that investors would benefit from that relationship.

That noted, I do think this is probably the area where Musk has the strongest argument. What is shakier, but also much funnier, is the argument that GPT-4 is already artificial general intelligence (AGI), and thus ought not to be licensed to Microsoft under the terms of the investment agreement (which covers only non-AGI technology). The primary evidence for this claim is a paper from Microsoft Research, published shortly after the release of GPT-4, entitled Sparks of Artificial General Intelligence: Early experiments with GPT-4. The paper is mostly a breathless hype piece about GPT-4; what makes me laugh is that, should Musk win the case and Microsoft be stripped of access to GPT-4 (which I don’t think will happen), it would be fairly incredible if Microsoft Research’s largest impact on the company’s AI efforts ended up being the publication of a paper that cost Microsoft its most important partnership.

OpenAI’s Foundational Problem

With the usual caveat that I am not a lawyer, I have a hard time seeing Musk winning this case; the lack of any formal contract to be breached looms large, and OpenAI was, from the beginning, pretty clear — particularly in its internal correspondence — that it would not in fact open source everything.

At the same time, Musk does seem to have a bit of a moral point, if not a legal one. Just look at the name: OpenAI obviously isn’t open at all, which seems like a bait-and-switch. OpenAI is also, at a functional level, not a non-profit in any way normal people understand the concept: its for-profit arm was recently valued at $86 billion! The primary beneficiary of OpenAI, meanwhile, is indeed Microsoft, which not only has a financial stake in the for-profit entity, but also hosts the models, sells access to the APIs, and incorporates the technology into its own products. There are a lot of things in this world that are not illegal but are wrong; in this case OpenAI may not have breached a non-existent contract or provably gone back on specific promises, but the spirit of the entire entity is definitely a lot different from what it seemed to be at launch.

Then again, I have been cynical about OpenAI from the very beginning. I wrote in a 2015 Update:

Elon Musk and Sam Altman, who head organizations (Tesla and YCombinator, respectively) that look a lot like the two examples I just described of companies threatened by Google and Facebook’s data advantage, have done exactly that with OpenAI, with the added incentive of making the entire thing a non-profit; I say “incentive” because being a non-profit is almost certainly a lot less about being altruistic and a lot more about the line I highlighted at the beginning: “We hope this is what matters most to the best in the field.” In other words, OpenAI may not have the best data, but at least it has a mission structure that may help idealist researchers sleep better at night. That OpenAI may help balance the playing field for Tesla and YCombinator is, I guess we’re supposed to believe, a happy coincidence…

To be clear, I think the OpenAI approach is a very smart one; given Google and Facebook’s data advantage, it makes sense to set up a structure for everyone else to cooperate in an attempt to catch up, and if said structure appeals to the better angels of the industry’s best researchers, all the better. Frankly though, I could do without the positioning of OpenAI as a gift to the world ultimately intended to keep us safe; indeed, it is by no means clear to me that making something allegedly so dangerous widely available will improve the problem as opposed to make it worse. But, like I said, I don’t think OpenAI is first and foremost about “safety”; I think it’s smart business, and if that is a motivation, I wish Musk and Altman would say so.

Looking back, what I got wrong was the focus on data; the real scarcity was compute, which ultimately led to the Microsoft partnership. What I got right, though, was the cynicism: the fundamental flaw of OpenAI from the very beginning has been the non-profit structure. This obviously came to a head last fall when Altman was ousted for a week; I wrote in OpenAI’s Misalignment and Microsoft’s Gain:

Much of the discussion on tech Twitter over the weekend has been shock that a board would incinerate so much value. First off, Altman is one of the Valley’s most-connected executives, and a prolific fund-raiser and dealmaker; second is the fact that several OpenAI employees already resigned, and more are expected to follow in the coming days. OpenAI may have had two tribes previously; it’s reasonable to assume that going forward it will only have one, led by a new CEO in Shear who puts the probability of AI doom at between 5 and 50 percent and has advocated a significant slowdown in development.

Here’s the reality of the matter, though: whether or not you agree with the Sutskever/Shear tribe, the board’s charter and responsibility is not to make money. This is not a for-profit corporation with a fiduciary duty to its shareholders; indeed, as I laid out above, OpenAI’s charter specifically states that it is “unconstrained by a need to generate financial return”. From that perspective the board is in fact doing its job, as counterintuitive as that may seem: to the extent the board believes that Altman and his tribe were not “build[ing] general-purpose artificial intelligence that benefits humanity” it is empowered to fire him; they do, and so they did.

My conclusion in that Article — which was published before Altman mostly won the power struggle — was that the non-profit structure actually compromised OpenAI’s stated goals, which would have been better served with a traditional corporate structure:

The counter to the argument I just put forth about Microsoft’s poor decision to partner with a non-profit is the reality of AI development, specifically the need for massive amounts of compute. It was the need for this compute that led OpenAI, which had barred itself from making a traditional venture capital deal, to surrender their IP to Microsoft in exchange for Azure credits. In other words, while the board may have had the charter of a non-profit, and an admirable willingness to act on and stick to their convictions, they ultimately had no leverage because they weren’t a for-profit company with the capital to be truly independent.

All of this trouble, from last year’s board fight to this lawsuit, could have been avoided if OpenAI were simply a normal company. And yet, the argument goes, OpenAI would never have gotten off the ground or recruited the talent it did had it been anything but a non-profit. How is that an excuse, though? To put it another way, the true founding principle of OpenAI was, at least judged by outcomes, deception: of the public, of OpenAI’s initial staff, and, at least according to Musk, of its initial funder. It may pass legal muster, but it doesn’t seem ideal that the very foundations of an entity seeking to create god-like powers aren’t exactly trustworthy.


This Update will be available as a podcast later today. To receive it in your podcast player, visit Stratechery.

The Stratechery Update is intended for a single recipient, but occasional forwarding is totally fine! If you would like to order multiple subscriptions for your team with a group discount (minimum 5), please contact me directly.

Thanks for being a subscriber, and have a great day!