Substack and Platformer, Moderation and Infrastructure, Bars and Culture

Good morning,

It’s 2024, and it’s good to be back. In case you missed it, I started the year out with a (late) Article entitled The New York Times’ AI Opportunity that covered the New York Times’ copyright lawsuit against OpenAI (OpenAI issued a response yesterday).

We also covered the case on the first 2024 episode of Sharp Tech.

On to the Update:

Substack and Platformer

From Casey Newton at Platformer:

Substack is removing some publications that express support for Nazis, the company said today. The company said this did not represent a reversal of its previous stance, but rather the result of reconsidering how it interprets its existing policies. As part of the move, the company is also terminating the accounts of several publications that endorse Nazi ideology and that Platformer flagged to the company for review last week. The company will not change the text of its content policy, it says, and its new policy interpretation will not include proactively removing content related to neo-Nazis and far-right extremism. But Substack will continue to remove any material that includes “credible threats of physical harm,” it said.

I referenced this controversy briefly in yesterday’s Article:

  • The Atlantic wrote an article claiming that “Substack Has a Nazi Problem”, highlighting a number of Substacks with Nazi imagery in particular.
  • 247 Substackers wrote a letter to Substack claiming they were “putting [their] thumb on the scale” by virtue of moderating some content but not these particular sites.
  • Substack founder Hamish McKenzie wrote a Substack Note explaining that while Substack did not like Nazis, they did not believe that “censorship (including through demonetizing publications) makes the problem go away—in fact, it makes it worse.”

This is where Platformer in particular started pushing on the issue. On January 3 Newton said he was compiling a list of objectionable Substacks, and wrote:

We’re now building a database of extremist Substacks. Katz kindly agreed to share with us a full list of the extremist publications he reviewed prior to publishing his article, most of which were not named in the piece. We’re currently reviewing them to get a sense of how many accounts are active, monetized, display Nazi imagery, or use genocidal rhetoric.

We plan to share our findings both with Substack and, if necessary, its payments processor, Stripe. Stripe’s terms prohibit its service from being used by “any business or organization that a. engages in, encourages, promotes or celebrates unlawful violence or physical harm to persons or property, or b. engages in, encourages, promotes or celebrates unlawful violence toward any group based on race, religion, disability, gender, sexual orientation, national origin, or any other immutable characteristic.”

It is our hope that Substack will reverse course and remove all pro-Nazi material under its existing anti-hate policies. If it chooses not to, we will plan to leave the platform.

Note that Stripe bit — I’ll come back to it in a moment.

Bars and Culture

To return to yesterday’s post, Newton appears satisfied for now:

Substack’s removal of Nazi publications resolves the primary concern we identified here last week. At the same time, as noted above, this issue has raised concerns that go beyond the small group of publications that violate the company’s existing policy guidelines.

As we think through our next steps, we want to hear from you. If you have unsubscribed from Platformer or other publications over the Nazi issue, does the company’s new stance resolve your concerns? Or would it take more? If so, what?

Paid subscribers can comment below; everyone is welcome to email us with their thoughts.

The comments, for what it’s worth, are filled with folks declaring their intention to unsubscribe from all Substack publications anyway. I was also forwarded emails from other platforms trying to recruit Substack authors, including this one from Supporting Cast:

Competing platforms were trying to recruit Substack customers

This does seem like a validation of Mike Masnick’s “Nazi Bar” concept; from Techdirt:

The key point: your reputation as a private site is what you allow. If you allow garbage, you’re a garbage site. If you allow Nazis, you’re a Nazi site. You’re absolutely allowed to do that, but you shouldn’t pretend to be something that you’re not. You should own it, and say “these are our policies, and we realize what our reputation is”…

So this is, more or less, what I had asked them to do back in April. If you’re going to host Nazis just say “yes, we host Nazis.” And, I even think it’s fair to say that you’re doing that because you don’t think that moderation does anything valuable, and certainly doesn’t stop people from being Nazis. And, furthermore, I also think Substack is correct that its platform is slightly more decentralized than systems like ExTwitter or Facebook, where content mixes around and gets promoted. Since most of Substack is individual newsletters and their underlying communities, it’s more equivalent to Reddit, where the “moderation” questions are pushed further to the edges: you have some moderation that is centralized from the company, some that is just handled by people deciding whether or not to subscribe to certain Substacks (or subreddits), and some that is decided by the owner of each Substack (or moderators of each subreddit).

And Hamish and crew are also not wrong that censorship is frequently used by the powerful to silence the powerless. This is why we are constantly fighting for free speech rights here, and against attempts to change that, because we know how frequently those rights are abused.

But the Substack team is mixing up “free speech rights” — which involve what the government can limit — with their own expressive rights and their own reputation. I don’t support laws that stop Nazis from saying what they want to say, but that doesn’t mean I allow Nazis to put signs on my front lawn. This is the key fundamental issue anyone discussing free speech has to understand. There is a vast and important difference between (1) the government passing laws that stifle speech and (2) private property owners deciding whether or not they wish to help others, including terrible people, speak. Because, as private property owners, you have your own free speech rights in the rights of association. So while I support the rights of Nazis to speak, that does not mean I’m going to assist them in using my property to speak, or assist them in making money.

Substack has chosen otherwise. They are saying that they will not just allow Nazis to use their property, but they will help fund those Nazis. That’s a choice. And it’s a choice that should impact Substack’s own reputation.

Masnick is not wrong: the First Amendment precludes government action, and not just in terms of proscribing speech, but also in compelling it. What is implicit in this explanation, though, is the abandonment of a culture of free speech; I wrote two years ago in the context of Spotify and Joe Rogan:

I am not here to re-litigate the argument, but rather to make a point that ought to be top of mind for tech executives everywhere: it seems quite clear that the argument is over, and my position lost. Just look at Facebook: Tech and Liberty was published in the weeks after CEO Mark Zuckerberg made a speech at Georgetown entitled Standing for Voice and Free Expression; over the ensuing two years Facebook has significantly expanded the definition of what is harmful content (driven both by the pandemic and the aftermath of the 2020 election) and by extension the volume of content that it removes…

Ek is right — the slope is slippery — and the fact of the matter is that we are, at least in terms of the elite culture dominated by U.S. media, at the bottom of the hill. Yes, the First Amendment still exists as a law, but it is hard to argue free expression still exists as a value. In other words, while it used to be the case that simply declaring that you were in favor of free speech was enough to tamp down most controversies, particularly in terms of comedians or edgy personalities, today one is quickly mired in the exact sort of muck I noted above, trying to defend broad principles while distancing oneself from specific examples. It’s an impossible dance, and — not to defend any specific instances of speech (here I dance) — I’m not sure we have yet fully internalized the long-term costs of no longer accepting free speech as a blanket principle citable by everyone from cranks to CEOs.

If this wasn’t top-of-mind for Substack executives, it clearly is now (those banned accounts, by the way, had 100 active readers and 0 paid subscribers).

Moderation and Infrastructure

Newton had another post on the topic last week, entitled Why Substack is at a Crossroads:

When it was founded in 2017, Substack offered simple infrastructure for individuals to create and grow their email newsletters. From the start, it promised not to take a heavy hand with content moderation. And because it only offered software, this approach drew little criticism. If you wrote something truly awful in Word, after all, no one would blame Microsoft. Substack benefited similarly from this distance.

Over time, though, the company evolved. It began encouraging individual writers to recommend one another, funneling tens of thousands of subscribers to like-minded people. It started to send out an algorithmically ranked digest of potentially interesting posts to anyone with a Substack account, showcasing new voices from across the network. And in April of this year, the company launched Notes, a text-based social network resembling Twitter that surfaces posts in a ranked feed.

By 2023, in other words, Substack no longer could claim to be the simple infrastructure it once was. It was a platform: a network of users, promoted via a variety of ranked surfaces. The fact that it monetized through subscriptions rather than advertising did not change the fact that, just as social networks have at times offered unwitting support to extremists, Substack was now at risk of doing the same.

This echoes my Framework for Moderation:

It makes sense to think about these positions of the stack very differently: the top of the stack is about broadcasting — reaching as many people as possible — and while you may have the right to say anything you want, there is no right to be heard. Internet service providers, though, are about access — having the opportunity to speak or hear in the first place. In other words, the further down the stack, the more legality should be the sole criteria for moderation; the further up the more discretion and even responsibility there should be for content:

A drawing of The Position In the Stack Matters for Moderation

What Newton is arguing in the above excerpt is that as Substack moved up the stack its moderation responsibilities shifted. The problem, though, is that Newton also threatened to go to Stripe (in the second article I excerpted above). Stripe is not running a social network, they’re not promoting posts, they’re not algorithmically featuring content — they’re not hosting content at all. They’re an infrastructure provider! I’m not saying Stripe shouldn’t have policies — infrastructure is in the middle of the stack, not the bottom — but I think it is alarming that the idea was even mooted.

Unfortunately, it does seem inevitable that, at some point in the future, more and more demands to censor content are going to come lower and lower down the stack, and it won’t just be about Nazis; elsewhere Newton notes that Substack hosts “anti-vaccine pseudo-science, Covid conspiracy theories and other material that is generally restricted on mainstream social networks”. It’s not clear if that includes “You can still get COVID after the vaccine” or “It’s very possible that COVID was a lab leak”, both of which were in some form or another banned or demoted on various social networks for being anti-vaccine pseudo-science and a Covid conspiracy theory. In other words, free speech does still matter, even if no one wants to be in a Nazi bar.

That is why I didn’t object last fall when the Biden administration moved to reinstate Title II net neutrality rules for ISPs, even though I previously argued that Title II was the wrong type of regulation for the Internet. I still think that is true — and, I must note, all of the predictions about the demise of net neutrality in the wake of Title II’s repeal under President Trump were totally off-base — but six years on I am increasingly worried about censorship moving down the stack, not for commercial reasons, but political ones.


This Update will be available as a podcast later today. To receive it in your podcast player, visit Stratechery.

The Stratechery Update is intended for a single recipient, but occasional forwarding is totally fine! If you would like to order multiple subscriptions for your team with a group discount (minimum 5), please contact me directly.

Thanks for being a subscriber, and have a great day!