Child Sexual Abuse Material Online, The Problem With Community, Towards More Friction

Good morning,

It is a typhoon day in Taiwan, which is exactly what it sounds like: Typhoon Mitag is approaching, which means school and work are cancelled. Don’t worry, it’s not that big of a deal here in Taipei, but a house full of slightly stir-crazy kids means this update is a bit later than usual.

On to the update (note: this update has been made free to share):

Child Sexual Abuse Material Online

From the New York Times:

Pictures of child sexual abuse have long been produced and shared to satisfy twisted adult obsessions. But it has never been like this: Technology companies reported a record 45 million online photos and videos of the abuse last year.

More than a decade ago, when the reported number was less than a million, the proliferation of the explicit imagery had already reached a crisis point. Tech companies, law enforcement agencies and legislators in Washington responded, committing to new measures meant to rein in the scourge. Landmark legislation passed in 2008. Yet the explosion in detected content kept growing — exponentially.

An investigation by The New York Times found an insatiable criminal underworld that had exploited the flawed and insufficient efforts to contain it. As with hate speech and terrorist propaganda, many tech companies failed to adequately police sexual abuse imagery on their platforms, or failed to cooperate sufficiently with the authorities when they found it. Law enforcement agencies devoted to the problem were left understaffed and underfunded, even as they were asked to handle far larger caseloads. The Justice Department, given a major role by Congress, neglected even to write mandatory monitoring reports, nor did it appoint a senior executive-level official to lead a crackdown. And the group tasked with serving as a federal clearinghouse for the imagery — the go-between for the tech companies and the authorities — was ill equipped for the expanding demands.

A paper recently published in conjunction with that group, the National Center for Missing and Exploited Children, described a system at “a breaking point,” with reports of abusive images “exceeding the capabilities of independent clearinghouses and law enforcement to take action.” It suggested that future advancements in machine learning might be the only way to catch up with the criminals.

The article is a difficult read but an important one; it also lays bare some fundamental trade-offs in regulating technology. As so often seems to be the case, Facebook is at the center of things. Note this tweet from one of the article’s authors:

Unsurprisingly, lots of folks glommed onto this factoid as more evidence of how Facebook is bad for society. The truth, though, is that Facebook has the most reports because it is doing the most work to uncover this abuse. The article admitted:

In some sense, increased detection of the spiraling problem is a sign of progress. Tech companies are legally required to report images of child abuse only when they discover them; they are not required to look for them. After years of uneven monitoring of the material, several major tech companies, including Facebook and Google, stepped up surveillance of their platforms. In interviews, executives with some companies pointed to the voluntary monitoring and the spike in reports as indications of their commitment to addressing the problem.

Indeed, the clearest takeaway is that while tech companies, particularly Facebook, have dramatically stepped up their efforts to discover and report this horrific content, it is the government that has failed to respond to the problem, particularly from a resource perspective.

What is worth discussing though — and if you read past the flashing lights emoji, the purported point of the tweet — is Facebook’s decision to encrypt all of its messaging applications. This means that the vast majority of those 12 million reports are going to go away. As the Times article notes:

Data obtained through a public records request suggests Facebook’s plans to encrypt Messenger in the coming years will lead to vast numbers of images of child abuse going undetected. The data shows that WhatsApp, the company’s encrypted messaging app, submits only a small fraction of the reports Messenger does.

This is an issue that CEO Mark Zuckerberg admitted to in his post about turning on end-to-end encryption for Messenger:

At the same time, there are real safety concerns to address before we can implement end-to-end encryption across all of our messaging services. Encryption is a powerful tool for privacy, but that includes the privacy of people doing bad things. When billions of people use a service to connect, some of them are going to misuse it for truly terrible things like child exploitation, terrorism, and extortion. We have a responsibility to work with law enforcement and to help prevent these wherever we can. We are working to improve our ability to identify and stop bad actors across our apps by detecting patterns of activity or through other means, even when we can’t see the content of the messages, and we will continue to invest in this work. But we face an inherent tradeoff because we will never find all of the potential harm we do today when our security systems can see the messages themselves.
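To make the tradeoff concrete, here is a minimal sketch of how server-side detection of known imagery generally works. This is an illustrative simplification, not Facebook’s actual pipeline: the fingerprint database and function names are hypothetical, and real systems use perceptual hashing (like PhotoDNA) rather than an exact cryptographic hash.

```python
import hashlib
from typing import Optional, Set

# Hypothetical database of fingerprints of known abusive images.
# Real systems use perceptual hashes (e.g. PhotoDNA) so that
# near-duplicates still match; SHA-256 here is purely illustrative.
KNOWN_BAD_FINGERPRINTS: Set[str] = set()

def fingerprint(image_bytes: bytes) -> str:
    """Compute a stand-in fingerprint for an image."""
    return hashlib.sha256(image_bytes).hexdigest()

def scan_and_report(image_plaintext: Optional[bytes]) -> bool:
    """Return True if a match was found and a report filed.

    image_plaintext is None when the service only holds ciphertext,
    i.e. the conversation is end-to-end encrypted.
    """
    if image_plaintext is None:
        # With end-to-end encryption the server never sees the content,
        # so there is nothing to match against: no detection, no report.
        return False
    if fingerprint(image_plaintext) in KNOWN_BAD_FINGERPRINTS:
        print("filing a report with the clearinghouse")  # stand-in for a real reporting hook
        return True
    return False
```

The point is simply that the detection behind those millions of reports happens on the server, against plaintext; once Messenger is end-to-end encrypted, that plaintext disappears, and with it the reports.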

I was already pretty cynical about Facebook’s alleged privacy pivot, and made this broader point at the time:

Another issue is misinformation: for all of the issues surrounding misinformation on Facebook, at least misinformation is traceable; that is not the case if messages are encrypted, which has already been an issue with WhatsApp in India. One could certainly make the cynical argument that, in the process of cloaking itself in privacy, Facebook is washing its hands of misinformation.

This report about child sexual abuse makes the point much more meaningful, and leads me to reframe the question I originally raised in that piece: might it be the case that Facebook’s decision to encrypt conversations is not both good for consumers and good for itself, but rather good for itself and actively bad for society?

The Problem With Community

Last month, in A Framework for Moderation, I wrote:

Third, it is likely that at some point 8chan will come back, thanks to the help of a less scrupulous service, just as the Daily Stormer did when Cloudflare kicked them off two years ago. What, ultimately, is the point? In fact, might there be harm, since tracking these sites may end up being more difficult the further underground they go?

This third point is a valid concern, but one I, after long deliberation, ultimately reject. First, convenience matters. The truly committed may find 8chan when and if it pops up again, but there is real value in requiring that level of commitment in the first place, given said commitment is likely nurtured on 8chan itself. Second, I ultimately reject the idea that publishing on the Internet is a right that must be guaranteed by third parties. Stand on the street corner all you like; at least your terrible ideas will be limited by the physical world. The Internet, though, with its inherent ability to broadcast and congregate globally, is a fundamentally more dangerous medium that is by and large facilitated by third parties who have rights of their own. Running a website on a cloud service provider means piggy-backing off of your ISP, backbone providers, server providers, etc., and, if you are controversial, services like Cloudflare to protect you. It is magnanimous in a way for Cloudflare to commit to serving everyone, but at the end of the day Cloudflare does have a choice.

I suspect that child sexual abuse material falls into a broadly similar category to white nationalism or other terrorist movements in this respect: to have these horrific beliefs or desires is a lonely existence when everyone around you rightly judges them to be horrible. It is easier to simply adjust to social mores, because they are inescapable.

On the Internet, though, there is, to use Zuckerberg’s favorite word, a “community” for everything, including white nationalism, terrorist movements broadly, or child sexual abuse material. Suddenly horrific desires or beliefs are not so taboo; rather, they are affirmed and celebrated. One of the websites discussed in the New York Times article was called “Love Zone”, for goodness’ sake! This is what I meant above when I said the Internet is “a fundamentally more dangerous medium”; the ability to build global communities unconstrained by geography — or, by extension, by social mores — is a terrifying proposition.

You’ll note, by the way, that this site — indeed, its very existence — tends to celebrate how the Internet makes “niche” audiences accessible. The problem is that “niche” is an amoral term: there are good niches, and there are truly horrifying ones.

Towards More Friction

From another recent article, this time Privacy Fundamentalism:

Indeed, that is why my critique of Manjoo’s article specifically and the ongoing privacy hysteria broadly is not simply about definitions or philosophy. It’s about fundamental assumptions. The default state of the Internet is the endless propagation and collection of data: you have to do work to not collect data on one hand, or leave a data trail on the other. This is the exact opposite of how things work in the physical world: there data collection is an explicit positive action, and anonymity the default.

I continue to find analogies between the offline world and the online world to be not simply insufficient but actively harmful. Go back to Zuckerberg’s announcement that I linked above:

Over the last 15 years, Facebook and Instagram have helped people connect with friends, communities, and interests in the digital equivalent of a town square. But people increasingly also want to connect privately in the digital equivalent of the living room.

The problem, as I just explained, is that one person’s “living room” is another child’s crime scene. The question with Facebook, then, is where we want the defaults to be.

The fact of the matter, as I noted above, is that encryption is a real thing that exists, and it is not going anywhere. Evil folks will always be able to figure out the most efficient way to be evil. The question, though, is how much friction we want to introduce into the process. Do we want the most user-friendly way to discover your “community”, particularly if that community entails the sexual abuse of children, to be encrypted by default? Or is it better that at least some modicum of effort — and thus some chance that perpetrators will either screw up or give up — be necessary?

To take this full circle, I find those 12 million Facebook reports to be something worth celebrating, and preserving. But, if Zuckerberg follows through with his “Privacy-Focused Vision for Social Networking”, the opposite will occur. I do remain a fierce defender of encryption and an opponent of backdoors, but at the same time, we as a society do at some point have to grapple with the downside of the removal of friction.


The Daily Update is intended for a single recipient, but occasional forwarding is totally fine! If you would like to order multiple subscriptions for your team with a group discount (minimum 5), please contact me directly.

Thanks for being a supporter, and have a great day!