The Facebook Files, Instagram and Teens, Facebook versus App Stores

Good morning,

I have serious egg on my face, and I appreciate the many, many readers who helped administer it! California Soul was released 54 years ago by The Messengers, and has been covered multiple times since then; it does appear that the version in Apple’s keynote was a fresh cover. No excuse, though! Thank you to all of the folks who emailed in, and I can assure you I am appropriately embarrassed!

On to the update:

The Facebook Files

From the Wall Street Journal’s cover page for what it is calling the Facebook Files:

Facebook Inc. knows, in acute detail, that its platforms are riddled with flaws that cause harm, often in ways only the company fully understands. That is the central finding of a Wall Street Journal series, based on a review of internal Facebook documents, including research reports, online employee discussions and drafts of presentations to senior management.

Time and again, the documents show, Facebook’s researchers have identified the platform’s ill effects. Time and again, despite congressional hearings, its own pledges and numerous media exposés, the company didn’t fix them. The documents offer perhaps the clearest picture thus far of how broadly Facebook’s problems are known inside the company, up to the chief executive himself.

There are, as I am writing this update, three articles in the series, and it seems likely there will be at least two more over the next two days; the three published so far cover the XCheck program that exempts high-profile users from normal enforcement, Instagram’s effects on teenage girls, and the 2018 News Feed algorithm change.

The articles, at least the three that have been published so far, are best read as a set. For example, if you go back to the introduction I excerpted above, the line “the company didn’t fix them” stands out; then you read that Facebook didn’t subject famous users to automated post removal or account-banning, in part for fear of public relations disasters, and, well, yeah, of course they didn’t. The solutions to many problems have problems of their own, a fact about which critics are all too happy to be hypocritical.

Moreover, there are other factors beyond simply public relations:

At times, pulling content from a VIP’s account requires approval from senior executives on the communications and public-policy teams, or even from Mr. Zuckerberg or Chief Operating Officer Sheryl Sandberg, according to people familiar with the matter. In June 2020, a Trump post came up during a discussion about XCheck’s hidden rules that took place on the company’s internal communications platform, called Facebook Workplace. The previous month, Mr. Trump said in a post: “When the looting starts, the shooting starts.”

A Facebook manager noted that an automated system, designed by the company to detect whether a post violates its rules, had scored Mr. Trump’s post 90 out of 100, indicating a high likelihood it violated the platform’s rules. For a normal user post, such a score would result in the content being removed as soon as a single person reported it to Facebook. Instead, as Mr. Zuckerberg publicly acknowledged last year, he personally made the call to leave the post up. “Making a manual decision like this seems less defensible than algorithmic scoring and actioning,” the manager wrote.

I discussed Facebook’s handling of Trump’s post at the time; beyond the fact that the post was a re-post of a tweet — which itself was reproduced endlessly in news publications, raising the question as to what exactly was accomplished by deleting it — when the question at hand is removing a post from the democratically elected President of the United States, that seems like the canonical example of a decision that very much should be made at the CEO level. I suppose there is a certain egalitarian spirit to the idea that algorithms should make the decisions for everyone, but it is an idea utterly divorced from the reality that Facebook operates in the real world, with real considerations, both political and moral, that are ultimately the responsibility of CEO Mark Zuckerberg. To defer to an algorithm would be an abdication of responsibility.

And, of course, the veil of algorithmic ignorance assumed by the quoted manager ignores the fact that Facebook itself made the algorithm. This is where the third article comes in, about Facebook’s 2018 News Feed change that was intended to drive more engagement in the form of reactions, comments, and re-shares, and less time spent watching professionally produced content, particularly video:

Facebook’s chief executive, Mark Zuckerberg, said the aim of the algorithm change was to strengthen bonds between users and to improve their well-being. Facebook would encourage people to interact more with friends and family and spend less time passively consuming professionally produced content, which research suggested was harmful to their mental health.

Within the company, though, staffers warned the change was having the opposite effect, the documents show. It was making Facebook’s platform an angrier place. Company researchers discovered that publishers and political parties were reorienting their posts toward outrage and sensationalism. That tactic produced high levels of comments and reactions that translated into success on Facebook… They concluded that the new algorithm’s heavy weighting of reshared material in its News Feed made the angry voices louder.

The real problem, I think, was explained in two paragraphs from the Wall Street Journal series; first, from the VIP-exception article:

Facebook’s stated ambition has long been to connect people. As it expanded over the past 17 years, from Harvard undergraduates to billions of global users, it struggled with the messy reality of bringing together disparate voices with different motivations—from people wishing each other happy birthday to Mexican drug cartels conducting business on the platform. Those problems increasingly consume the company.

Second, from the conclusion of the angry algorithm article:

James Barnes, a former Facebook employee who left in 2019, said Facebook had hoped that giving priority to user engagement in the News Feed would bring people closer together. But the platform had grown so complex the company didn’t understand how the change might backfire.

This is the crux of the matter, one that both consumes the entire Facebook debate and also supersedes it: is giving the entire world frictionless access to each other a good idea, and, once accomplished, can it be controlled — or fixed — by anyone? At a minimum one hopes the anonymous manager will eventually realize that algorithms are made by people making real trade-offs.

Instagram and Teens

It is the second story, about Instagram and teenage girls, that I suspect will have the largest impact (again, pending future stories in this series):

Around that time, researchers inside Instagram, which is owned by Facebook Inc., were studying this kind of experience and asking whether it was part of a broader phenomenon. Their findings confirmed some serious problems.

“Thirty-two percent of teen girls said that when they felt bad about their bodies, Instagram made them feel worse,” the researchers said in a March 2020 slide presentation posted to Facebook’s internal message board, reviewed by The Wall Street Journal. “Comparisons on Instagram can change how young women view and describe themselves.”

For the past three years, Facebook has been conducting studies into how its photo-sharing app affects its millions of young users. Repeatedly, the company’s researchers found that Instagram is harmful for a sizable percentage of them, most notably teenage girls.

Notably, Instagram’s own research suggested the photo-centric app had worse effects than other social networks:

They came to the conclusion that some of the problems were specific to Instagram, and not social media more broadly. That is especially true concerning so-called social comparison, which is when people assess their own value in relation to the attractiveness, wealth and success of others.

“Social comparison is worse on Instagram,” states Facebook’s deep dive into teen girl body-image issues in 2020, noting that TikTok, a short-video app, is grounded in performance, while users on Snapchat, a rival photo and video-sharing app, are sheltered by jokey filters that “keep the focus on the face.” In contrast, Instagram focuses heavily on the body and lifestyle.

The features that Instagram identifies as most harmful to teens appear to be at the platform’s core. The tendency to share only the best moments, a pressure to look perfect and an addictive product can send teens spiraling toward eating disorders, an unhealthy sense of their own bodies and depression, March 2020 internal research states. It warns that the Explore page, which serves users photos and videos curated by an algorithm, can send users deep into content that can be harmful. “Aspects of Instagram exacerbate each other to create a perfect storm,” the research states.

I am (obviously) not a child psychologist (although I do have a teenage girl), but what strikes me about this research is the sense you get of something confirming your worst suspicions. That doesn’t mean it’s definitive — I’m always on the lookout for confirmation bias — but I think this piece in particular is going to resonate with the general public. I noted after a Congressional hearing last March that a huge number of lawmakers on both sides of the aisle homed in on the deleterious effects of social media on children, and this story is going to add fuel to the fire.

The story also shows the big advantage Facebook has in researching these issues relative to external researchers; on the Facebook side:

Its effort includes focus groups, online surveys and diary studies in 2019 and 2020. It also includes large-scale surveys of tens of thousands of people in 2021 that paired user responses with Facebook’s own data about how much time users spent on Instagram and what they saw there…

Meanwhile, when it comes to external studies:

Other studies also found discrepancies between the amount of time people say they use social media and the amount of time they actually use such services. Mr. Mosseri has pointed to these studies as evidence for why research using self-reported data might not be accurate.

The ability to pair detailed user data with user reports is extraordinarily powerful, but it’s unlikely that Facebook will ever give that access to external researchers, and not simply because of concerns over bad PR. Rather, it would be a pretty blatant violation of user privacy. Everything is a trade-off.

The most striking part of the article, though, was the recounting of internal Facebook debates about these studies:

In March, the researchers said Instagram should reduce exposure to celebrity content about fashion, beauty and relationships, while increasing exposure to content from close friends, according to a slide deck they uploaded to Facebook’s internal message board. A current employee, in comments on the message board, questioned that idea, saying celebrities with perfect lives were key to the app. “Isn’t that what IG is mostly about?” he wrote. Getting a peek at “the (very photogenic) life of the top 0.1%? Isn’t that the reason why teens are on the platform?” A now-former executive questioned the idea of overhauling Instagram to avoid social comparison. “People use Instagram because it’s a competition,” the former executive said. “That’s the fun part.”

It’s important to keep in mind that unvarnished internal conversations, selectively chosen, can make any company look bad. And, according to the story, a majority of teens, including girls, don’t suffer from using Instagram. Moreover, there is the caveat that applies to every product used by billions of people: it’s very hard to determine the direction of causality; how many struggling teens would be struggling regardless?

At the same time, any honest appraisal of these issues must acknowledge that Facebook itself is not starting from a position of honest inquiry, willing to follow the evidence where it might lead; the company’s base position is that Instagram is harmless, that Facebook is a positive force in the world, and that connecting people is inherently good. That’s even before you get to the fundamental business issues, which were also heavily implicated in the angry algorithm story, about Facebook’s need to drive increased engagement:

Facebook training videos and internal memos show another reason for the change — the company’s growing concern about a decline in user engagement, which typically refers to actions like commenting on or sharing posts. Engagement is viewed inside the company as an important sign for the health of the business. Comments, likes and reshares declined through 2017, while “original broadcast” posts — the paragraph and photo a person might post when a dog dies — continued a yearslong decline that no intervention seemed able to stop, according to the internal memos. The fear was that eventually users might stop using Facebook altogether.

I know a lot of folks are skeptical of media reports about Facebook; I am too. At the same time, given Facebook’s incentives, both to believe the best about its products and also to drive business results, it seems likely that, to the extent this story is true, the report really was suppressed or ignored (if anything, the fact the report exists at all is a credit to Facebook’s management).

That leads to one further point: perhaps the answer to this problem, particularly given the incentives involved, is regulation; as I noted, Congress is already on this path (it’s also worth pointing out this Wall Street Journal investigation of TikTok and how its algorithm is in many respects even worse for minors; this isn’t just a Facebook issue). Note, though, that regulation, whatever form it might take, is distinct from antitrust. The very incentives that lead Facebook to push for engagement and to fight for its share of user time are indicative of the point I made in Regulators and Reality: Facebook isn’t a monopoly, for better and, perhaps in this case, for worse.

Facebook versus App Stores

One more quick point: a private Twitter user, who wishes to remain anonymous, was asking why it is that people in tech get so much more worked up about the App Store than they do about Facebook:

Somehow arcane App Store revenue splits are a greater crime against humanity than building a business model that thrives largely if not entirely on promoting content and engagement that’s proven to have had a materially negative social and political impact.

The answer is straightforward: Facebook is an incredibly difficult problem, one that doesn’t simply touch on the company but also the human condition in a world without friction, which means there are no easy solutions; moreover, Facebook doesn’t have direct control over what businesses can or cannot be built. That is the issue with the App Store: Apple decides what is or is not allowed, and there is no escaping it (the revenue split, if anything, is a secondary issue). That’s why Apple is an antitrust issue, while Facebook is a regulatory one.


This Daily Update will be available as a podcast later today. To receive it in your podcast player, visit Stratechery.

The Daily Update is intended for a single recipient, but occasional forwarding is totally fine! If you would like to order multiple subscriptions for your team with a group discount (minimum 5), please contact me directly.

Thanks for being a supporter, and have a great day!