Musk Re-Commits to Twitter; Meta, AITemplate, and Nvidia; Facebook Shutters Bulletin

Good morning,

Next Monday is National Day in Taiwan and a federal holiday in the U.S.; there will be no Update (there will be a Stratechery Interview tomorrow).

On yesterday’s Sharp Tech we revisited Cable’s Last Laugh, including why it might have been a bad idea to bet on the cable companies.

On to the update:

Musk Re-Commits to Twitter

A letter from Elon Musk’s lawyers to the SEC:

On behalf of X Holdings I, Inc., X Holdings II, Inc. and Elon R. Musk (the “Musk Parties”), we write to notify you that the Musk Parties intend to proceed to closing of the transaction contemplated by the April 25, 2022 Merger Agreement, on the terms and subject to the conditions set forth therein and pending receipt of the proceeds of the debt financing contemplated thereby, provided that the Delaware Chancery Court enter an immediate stay of the action, Twitter v. Musk, et al. (C.A. No. 2022-0613-KSJM) (the “Action”) and adjourn the trial and all other proceedings related thereto pending such closing or further order of the Court.

The Musk Parties provide this notice without admission of liability and without waiver of or prejudice to any of their rights, including their right to assert the defenses and counterclaims pending in the Action, including in the event the Action is not stayed, Twitter fails or refuses to comply with its obligations under the April 25, 2022 Merger Agreement or if the transaction contemplated thereby otherwise fails to close.

I don’t have much to add to this news other than a few quick points:

  • This does not mean the deal is closed. At this point Musk doesn’t deserve any benefit of the doubt; it’s perfectly reasonable to wonder if he is trying to run out the clock on financing, for example. That noted, Musk did tweet that Twitter would play a role in his “X everything app” vision, so perhaps he is moving forward.
  • The most obvious explanation as to why Musk is apparently giving up on his attempt to get out of the deal is that he was consistently losing in court; at least now he could potentially save on legal fees.
  • Not all of Musk’s financing is debt: he also rounded up a number of investors in the deal, who very well may have been embarrassed by the release of text messages related to the deal and who may have pushed Musk to give it up.
  • Just before the letter was released, the Court ruled that Twitter could investigate whether whistleblower Mudge contacted Musk before Musk tried to pull out of the deal.
  • Twitter is not obligated to withdraw its case; however, it may fear discovery on its side, and there remains the possibility that Musk could win, thanks in part to Mudge’s claims.

This seems like a reasonable outcome: Musk shouldn’t be allowed off the hook at this point, but avoiding the mess of a trial is probably best for everyone involved, including the Court.

Meta, AITemplate, and Nvidia

From Reuters:

Facebook parent Meta Platforms Inc said on Monday it has launched a new set of free software tools for artificial intelligence applications that could make it easier for developers to switch back and forth between different underlying chips. Meta’s new open-source AI platform is based on an open-source machine learning framework called PyTorch, and can help code run up to 12 times faster on Nvidia Corp’s (NVDA.O) flagship A100 chip or up to four times faster on Advanced Micro Devices Inc’s (AMD.O) MI250 chip, it said. But just as important as the speed boost is the flexibility the software can provide, Meta said in a blog post.

From that blog post:

GPUs play an important role in the delivery of the compute needed for deploying AI models, especially for large-scale pretrained models in computer vision, natural language processing, and multimodal learning. Currently, AI practitioners have very limited flexibility when choosing a high-performance GPU inference solution because these are concentrated in platform-specific, closed black box runtimes. A machine learning system designed for one technology provider’s GPU must be completely reimplemented in order to work on a different provider’s hardware. This lack of flexibility also makes it difficult to iterate and maintain the code that makes up these solutions, due to the hardware dependencies in the complex runtime environments. Moreover, AI production pipelines often require fast development. Developers are eager to try novel modeling techniques because the field is advancing rapidly. Although proprietary software toolkits such as TensorRT provide ways of customization, they are often not enough to satisfy this need. Furthermore, the closed, proprietary solution may make it harder to quickly debug the code, reducing development agility.

To address these industry challenges, Meta AI has developed and is open-sourcing AITemplate (AIT), a unified inference system with separate acceleration back ends for both AMD and NVIDIA GPU hardware. It delivers close to hardware-native Tensor Core (NVIDIA GPU) and Matrix Core (AMD GPU) performance on a variety of widely used AI models such as convolutional neural networks, transformers, and diffusers. With AIT, it is now possible to run performant inference on hardware from both GPU providers. We’ve used AIT to achieve performance improvements up to 12x on NVIDIA GPUs and 4x on AMD GPUs compared with eager mode within PyTorch.

AITemplate is a Python framework that transforms AI models into high-performance C++ GPU template code for accelerating inference. Our system is designed for speed and simplicity. There are two layers in AITemplate — a front-end layer, where we perform various graph transformations to optimize the graph, and a back-end layer, where we generate C++ kernel templates for the GPU target. In addition, AIT maintains a minimal dependency on external libraries. For example, the generated runtime library for inference is self-contained and only requires CUDA/ROCm runtime environments. (CUDA, NVIDIA’s Compute Unified Device Architecture, allows AI software to run efficiently on NVIDIA GPUs. ROCm is an open source software platform that does the same for AMD’s GPUs.)

This is a fascinating development with potentially big ramifications for the AI field generally and Nvidia in particular. To review, there are two parts of the AI process: the first is model training, which is still best done on Nvidia GPUs (with the exception of dedicated chips like Google’s Tensor Processing Unit); the second part is inference, which is leveraging the model to output results.
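To make that distinction concrete, here is a toy PyTorch sketch of my own (my illustration, not anything from Meta’s post): training means repeated forward-and-backward passes that update a model’s weights, while inference is just a forward pass with gradients turned off.

    import torch

    model = torch.nn.Linear(16, 1)

    # Training: repeated forward + backward passes update the weights;
    # this is the phase where Nvidia GPUs remain the default choice.
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    for _ in range(100):
        x, y = torch.randn(32, 16), torch.randn(32, 1)
        loss = torch.nn.functional.mse_loss(model(x), y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    # Inference: a single forward pass with gradients disabled; this is
    # the phase AITemplate targets.
    with torch.no_grad():
        prediction = model(torch.randn(1, 16))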

What Meta has open-sourced is a framework for running inference in a GPU-agnostic way (AITemplate currently supports Nvidia and AMD GPUs, and broader support, including for Apple’s M-series of chips, is on the way). Moreover, according to Meta’s numbers, the performance of this more generalizable solution is not, as you might expect, worse than a chip-specific solution from Nvidia or AMD, but actually better. Finally, AITemplate fits seamlessly into an existing CUDA-based toolchain.
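To give a sense of what that looks like in practice, here is a minimal sketch of defining and compiling a model with AITemplate, patterned on the examples in Meta’s GitHub repo; treat the specific names (detect_target, compile_model, run_with_tensors, and so on) as my reading of those examples rather than authoritative documentation.

    import torch

    from aitemplate.compiler import compile_model
    from aitemplate.frontend import nn, Tensor
    from aitemplate.testing import detect_target

    # Front-end layer: describe the model as a graph of AIT ops,
    # in a style that mirrors PyTorch's nn.Module.
    class TinyMLP(nn.Module):
        def __init__(self, hidden):
            super().__init__()
            self.dense1 = nn.Linear(hidden, 4 * hidden)
            self.dense2 = nn.Linear(4 * hidden, hidden)

        def forward(self, x):
            return self.dense2(self.dense1(x))

    batch, hidden = 1024, 512
    model = TinyMLP(hidden)
    model.name_parameter_tensor()  # give weights stable names for later binding

    X = Tensor(shape=[batch, hidden], dtype="float16", name="X", is_input=True)
    Y = model(X)
    Y._attrs["name"] = "Y"
    Y._attrs["is_output"] = True

    # Back-end layer: detect_target() resolves to CUDA on an Nvidia machine
    # or ROCm on an AMD machine; compile_model generates and builds the
    # self-contained C++ runtime described in Meta's post.
    module = compile_model(Y, detect_target(), "./ait_build", "tiny_mlp")

    # Inference runs against ordinary (GPU, fp16) PyTorch tensors; a real
    # pipeline would first bind trained weights to the named parameters.
    x = torch.randn(batch, hidden).cuda().half()
    y = torch.empty(batch, hidden).cuda().half()
    module.run_with_tensors({"X": x}, {"Y": y})

The point to notice is that nothing in that script names a GPU vendor: move it to a machine with a different GPU and detect_target() swaps the back end underneath it.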

CUDA, you will recall, is Nvidia’s AI toolchain, and is an essential part of the company’s business model: CUDA is the foundation of an entire ecosystem, and while it is free to use, it only works on Nvidia chips — unless you use AITemplate for inference, that is.

Meta’s motivation for creating this tool is straightforward: the company is one of the largest users of AI and thus, by extension, one of the biggest consumers of Nvidia GPUs in the world; this tool will let the company more easily set Nvidia and AMD in competition with each other to provide chips for inference, because it won’t need to rewrite its software for whichever vendor offers the best performance per dollar (or per watt). Moreover, it seems certain that the company’s in-development AI chip will be optimized for AITemplate.

Still, at the end of the day, Meta is only one Nvidia customer, and Nvidia could alter CUDA and its chips to make AITemplate less effective. That is where open-sourcing AITemplate makes sense: the extent to which AITemplate becomes the standard library for inference — and why wouldn’t companies want to adopt it, given they have the same desire to escape from Nvidia’s pricing power and lock-in? — is the extent to which Nvidia ultimately has to support AITemplate instead of trying to defeat it, which is to Meta’s long-term benefit.

There are some downsides for Meta, including making it easier for companies to compete with Meta’s AI; perhaps, though, Meta has had the same sort of realization I have had: that AI is actually going to be much more decentralized than previously thought, and that it is better to leverage that decentralization to drive its own costs lower in the long run.

As for Nvidia, time will tell how much traction this gets, but it strikes me as a big blow not only that this tool exists but also that it could be so much more performant than Nvidia’s own implementation. That makes it that much more likely AITemplate gets traction: not only are there long-term reasons to favor it, but short-term ones as well.

Facebook Shutters Bulletin

From the New York Times:

Facebook is shuttering its Bulletin subscription service, ending its attempt to compete with Substack and other newsletter services. Facebook, which is now part of the parent company Meta, has contacted writers within the program to tell them that the Bulletin platform will be wound down early next year. “Bulletin has allowed us to learn about the relationship between creators and their audiences and how to better support them in building their community on Facebook,” the company confirmed in a statement on Tuesday. “While this off-platform product itself is ending, we remain committed to supporting these and other creators’ success and growth on our platform.”

Facebook is committed to one thing: leveraging content it gets for free to drive increased engagement from its users that it can monetize with ads. That is how the platform works and, as with Google and Stadia, absolutely no one should be surprised at this news. Bulletin cost Facebook money (because it had to share revenue with creators), didn’t leverage Facebook’s algorithms (because the concept was based on direct connections with customers), and didn’t scale to any sort of meaningful business (because free + ads would always be a massively larger market).

I made the argument on Dithering last week that it’s fair to wonder if an ad-supported platform can ever build a meaningful paid product, given the different requirements in terms of both the product itself and the go-to-market motion, along with the relatively smaller addressable market that makes it hard to justify the necessary investments; put this down as another piece of evidence that the answer is no.


This Daily Update is also available as a podcast. To receive it in your podcast player, visit Stratechery.

The Daily Update is intended for a single recipient, but occasional forwarding is totally fine! If you would like to order multiple subscriptions for your team with a group discount (minimum 5), please contact me directly.

Thanks for being a supporter, and have a great day!