I’m anticipating a quiet schedule until the election; naturally Facebook is stepping up to fill the topic void.
On to the update:
Facebook and the NYU Ad Observatory
From the Wall Street Journal:
Facebook Inc. is demanding that a New York University research project cease collecting data about its political-ad-targeting practices, setting up a fight with academics seeking to study the platform without the company’s permission. The dispute involves the NYU Ad Observatory, a project launched last month by the university’s engineering school that has recruited more than 6,500 volunteers to use a specially designed browser extension to collect data about the political ads Facebook shows them.
Following a furor about the opaque nature of political advertising in the 2016 presidential campaign, Facebook launched an archive of advertisements that run on its platform, with information such as who paid for an ad, when it ran and the geographic location of people who saw it. But that library excludes information about the targeting that determines who sees the ads. The researchers behind the NYU Ad Observatory said they wanted to provide journalists, researchers, policy makers, and others with the ability to search political ads by state and contest to see what messages are targeted to specific audiences and how those ads are funded.
This is arguably the perfect Facebook story: good intentions, legitimate objections, seemingly nefarious motives, understandable justifications, elections, consent decrees, it has it all! And, in classic Facebook fashion, I think they have the core issue correct and the implementation and public relations almost completely wrong.
First, though, it is important to establish exactly what it is the Ad Observatory is doing. From the project’s FAQ:
How does the Facebook Ad Observatory collect data on Facebook ads?
The Ad Observatory includes information from the Ad Observer project and combines it with information from the Facebook Library API. The Ad Observer project is a browser plugin installed by volunteers that lets them automatically share data about the Facebook ads that they’re shown (and how those ads are targeted) with us. No personal information from volunteers is collected.
That link is to adobserver.org, which states:
How it Works
Ad Observer is a tool you add to your Web browser. It copies the ads you see on Facebook, so anyone, on any part of the political spectrum, can see them in our public database. If you want, you can enter basic demographic information about yourself in the tool to help improve our understanding of why advertisers targeted you. However, we’ll never ask for information that could identify you. It doesn’t collect your personal information. We take your privacy very seriously.
In the Chrome installation instructions it states:
- Go to this link. It will open a new tab that takes you to the Chrome Web Store.
- Click the blue button that says + ADD TO CHROME.
- A little window will pop up that says “Add ‘Ad Observer?’ It can: Read and change your data on all facebook.com sites, youtube.com sites, and observations.nyuapi.org.” This is the disclaimer Chrome requires us to use, but we promise we aren’t changing your Facebook or YouTube data or collecting anything besides the ads. If you’re OK with that, click the white “Add extension” button.
- You’ll see the Ad Observer icon in the top right corner of your browser.
Here is the exact screenshot you get in Chrome:
This warning is 100% accurate (and, interestingly enough, different than what the adobserver.org site says): browser extensions have full control over everything that is displayed on the domains to which they have access. This access is a function of how they work: extensions manipulate data that your browser has already downloaded, interrupting the display process to add or remove elements in the page.
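That warning reflects the host access an extension declares up front. As a simplified, hypothetical Manifest V2 sketch (the field values are illustrative, not the real Ad Observer manifest), a declaration producing a Chrome prompt like the one above would look something like:

```json
{
  "manifest_version": 2,
  "name": "Ad Observer (illustrative sketch)",
  "version": "1.0",
  "permissions": [
    "*://*.facebook.com/*",
    "*://*.youtube.com/*",
    "*://observations.nyuapi.org/*"
  ],
  "content_scripts": [
    {
      "matches": ["*://*.facebook.com/*", "*://*.youtube.com/*"],
      "js": ["content.js"]
    }
  ]
}
```

Those match patterns are why Chrome words the prompt as "read and change your data" on those sites: the browser cannot scope the grant to "ads only", so it discloses the full capability.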
What this means in the context of this particular extension is that Ad Observer has access not simply to public posts on Facebook, but also to whatever content you access while logged in. This might include your personal data, or, as is almost certainly the case on any page on Facebook, some amount of data from your friends.
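To make the mechanics concrete, here is a deliberately simplified TypeScript sketch of the kind of logic a content script could run over a page it has access to; the markup, class names, and function are all hypothetical, not the real Ad Observer code:

```typescript
// Hypothetical sketch: a content script sees the full rendered page,
// so a helper like this can pick "Sponsored" posts out of the HTML.
// Regexes stand in for real DOM traversal to keep the sketch self-contained.
function extractSponsoredAds(pageHtml: string): string[] {
  const ads: string[] = [];
  const postPattern = /<div class="post">([\s\S]*?)<\/div>/g;
  let match: RegExpExecArray | null;
  while ((match = postPattern.exec(pageHtml)) !== null) {
    const body = match[1];
    if (body.includes("Sponsored")) {
      // Keep only the ad's text, dropping tags and the "Sponsored" label.
      ads.push(body.replace(/<[^>]+>/g, "").replace("Sponsored", "").trim());
    }
  }
  return ads;
}

// In a real extension this would run over document.body.innerHTML;
// note that the non-sponsored post (a friend's content) is just as
// visible to the script, even though only the ad is extracted.
const sample =
  '<div class="post"><span>Sponsored</span><p>Vote for Candidate X</p></div>' +
  "<div class=\"post\"><p>A friend's birthday photos</p></div>";
console.log(extractSponsoredAds(sample)); // → ["Vote for Candidate X"]
```

The promise-versus-access gap is visible even in this toy: the function returns only the ad, but the friend's post passed through the same code all the same, and nothing but the extension author's word constrains what gets collected.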
This last point is the critical one, at least as far as modern debates about privacy are concerned. The Ad Observatory might argue that its extension is installed willingly by people who trust the group's promises about not collecting personal information, but those people's friends never agreed to the deal, yet the Ad Observer plugin has access to their information all the same.
For the record, I think this specific point is where a lot of privacy regulation has gone wrong. I wrote in the context of the then-impending GDPR in 2017:
Several folks have suggested that the GDPR’s requirements around data portability, including that it be machine accessible (i.e. not just a PDF) will help new networks form, but in fact the opposite is the case. Note this section from the Guidelines on the right to data portability…
This forbids what I proposed: the easy re-creation of one’s social graph on other networks. Moreover, it’s a reasonable regulation: my friend on Facebook didn’t give permission for their information to be given to Snapchat, for example. It does, though, make it that much more difficult to bootstrap a Facebook competitor: the most valuable data (from a business perspective, anyways) is the social graph, not the updates and pictures that must now be portable, which means that again, thanks to (reasonable!) regulation, Facebook’s position is that much more secure.
I am still working out the specifics of my grand “this is how we regulate big tech companies” approach, but a critical component will be the importance of clearly delineating between where customers have a reasonable expectation of privacy and where they do not, and my sense is that most regulatory bodies currently grant too much privacy — i.e. even more than customer expectations — not too little.
Facebook’s FTC Decrees
This sense of expectation amongst consumers is, to be sure, something that has changed over time. The original FTC consent decree against Facebook involved the company's decision to unilaterally make public posts that, by default, had been visible only to your friends; that 2011 consent decree stated:
IT IS FURTHER ORDERED that Respondent and its representatives, in connection with any product or service, in or affecting commerce, prior to any sharing of a user’s nonpublic user information by Respondent with any third party, which materially exceeds the restrictions imposed by a user’s privacy setting(s), shall:
B. obtain the user’s affirmative express consent.
Right off the bat you can see that Ad Observer's behavior runs afoul of this consent decree; of course, Ad Observer is not from Facebook, and thus is not bound by the decree, and, of course, Facebook itself violated the consent decree. Notably, though, as I noted in a 2019 Daily Update, the part of the consent decree Facebook violated was not about changing posts from "private" to "public", but rather about how 3rd parties used Facebook data. That is why the company's 2019 consent decree mandates:
IT IS ORDERED that Respondent, including Representatives of Respondent, in connection with any product or service, shall not misrepresent in any manner, expressly or by implication, the extent to which Respondent maintains the privacy or security of Covered Information, including, but not limited to:
A. Its collection, use, or disclosure of any Covered Information;
B. The extent to which a consumer can control the privacy of any Covered Information maintained by Respondent and the steps a consumer must take to implement such controls;
C. The extent to which Respondent makes or has made Covered Information accessible to third parties;
D. The steps Respondent takes or has taken to verify the privacy or security protections that any third party provides;
E. The extent to which Respondent makes or has made Covered Information accessible to any third party following deletion or termination of a User’s account with Respondent or during such time as a User’s account is deactivated or suspended; and
F. The extent to which Respondent is a member of, adheres to, complies with, is certified by, is endorsed by, or otherwise participates in any privacy or security program sponsored by a government or any self-regulatory or standard-setting organization, including but not limited to the EU-U.S. Privacy Shield framework, the Swiss-U.S. Privacy Shield framework, and the APEC Cross-Border Privacy Rules.
It is difficult to read this, or several other relevant sections of the consent decree, in any other way than as compelling Facebook to demand the Ad Observatory stop operating a browser extension that has access to a user's entire Facebook page. It is not enough for the Ad Observatory to give its word that it is not collecting anything untoward: all of those folks upset about Facebook's actions in this case were chiding the company for trusting the legally binding agreements Cambridge Analytica signed when it came to having access to non-consenting friends' data, and the FTC is pretty clear that Facebook has to take responsibility for third-party access.
Tradeoffs and Politics
That noted, Facebook seems to be on shaky legal ground, at least as far as mandating the Ad Observatory stop offering the browser extension. I wrote earlier this year about the 9th Circuit Court’s decision in hiQ Labs, Inc. v. LinkedIn Corp., which denied LinkedIn an injunction stopping hiQ from scraping user profile data (the Supreme Court is considering a writ of certiorari).
The context of that Daily Update is notable: it was about Clearview AI, the facial recognition company that had just been profiled in the New York Times. Lots of folks were outraged about a company whose true differentiation was not its technology; from the Daily Update:
The point of this history is to highlight that Clearview’s technology is not at all novel; frankly, despite Clearview’s claims that it uses “a facial recognition algorithm that was derived from academic papers” that is “a ‘state-of-the-art’ neural net”, I wouldn’t be at all surprised if Clearview were in fact using one of the aforementioned services from the big cloud providers. The company’s true differentiation is its data, and how it obtains it.
What Clearview AI did was scrape images from Facebook in particular, feed them into bog-standard facial recognition software, and sell the results to law enforcement. Suddenly LinkedIn — or Facebook for that matter — didn't look like a big meanie trying to deny hardworking businesses or startups access to data:
In short, Clearview AI, to a greater extent than almost any example I can think of, forces us to confront tradeoffs imposed on us by the lack of friction. Scraping is something new and different than human-directed access, and it can be used for both good and bad outcomes. Facial recognition itself, meanwhile, is simply math: it cannot be banned any more than you can ban encryption. Moreover, like scraping and encryption, it has both good and bad outcomes, both generally speaking and in the specific case of law enforcement.
This episode about the Ad Observatory strikes me as the bizarro version of the Clearview AI case: whereas Clearview AI was the obvious “this is bad” example, the Ad Observatory is the obvious “this is good” example. More visibility and transparency into Facebook’s impact on the political process and the power of political ads is a good thing (indeed, this is one of the reasons why Facebook ads are in fact preferable to old direct mailing approaches). As in the case of Clearview AI, though, any attempt at consistency in rules and principles becomes devilishly hard when confronted with the real world.
That is why I find Facebook in the right here, at least in a technocratic sense: both government regulators and social sentiment generally have been pushing the company to lock down user data, particularly data obtained without users' consent (as is the case with the friends' data accessible to the Ad Observer plugin), and have furthermore been very clear that Facebook is not to trust 3rd-party attestations about what data they will and will not collect.
The rub, though, is that no one actually cares about technocratic rules: the loudest objectors to Facebook’s cease-and-desist letter are the same folks that applauded the FTC’s consent decree, were aghast at Clearview AI, and still hold up Cambridge Analytica as a scandal; consistency isn’t their strong suit. To that end, Facebook should have taken a far more political approach to this issue:
- Wait until after the election.
- Offer up some sort of API that meets some of the project’s demographic needs in a way that satisfies Facebook’s mandated-privacy agreements.
- Write a pre-emptive blog post explaining the offering before sending a cease-and-desist letter.
Would any of these actions change the underlying trade-offs? No, not really, but again, that isn’t the point: right-and-wrong, at least when it comes to this issue, is not about rules and consistency, but about who says it is such. Facebook has little choice but to play the game.
This Daily Update will be available as a podcast later today. To receive it in your podcast player, visit Stratechery.
The Daily Update is intended for a single recipient, but occasional forwarding is totally fine! If you would like to order multiple subscriptions for your team with a group discount (minimum 5), please contact me directly.
Thanks for being a supporter, and have a great day!