The NSO Group, BlastDoor and Software Bugs, Apple’s Response

Good morning,

I’m taking tomorrow off as I am attending a sporting event this evening. I’ve added the date to the Daily Update Schedule as a vacation day, in addition to Thursday’s scheduled Summer Day; I will be back (with a normal four-day-a-week schedule) on Monday.

Thank you for your understanding, and Go Bucks!

On to the update:

The NSO Group

From the Washington Post:

Military-grade spyware licensed by an Israeli firm to governments for tracking terrorists and criminals was used in attempted and successful hacks of 37 smartphones belonging to journalists, human rights activists, business executives and two women close to murdered Saudi journalist Jamal Khashoggi, according to an investigation by The Washington Post and 16 media partners. The phones appeared on a list of more than 50,000 numbers that are concentrated in countries known to engage in surveillance of their citizens and also known to have been clients of the Israeli firm, NSO Group, a worldwide leader in the growing and largely unregulated private spyware industry, the investigation found.

The list does not identify who put the numbers on it, or why, and it is unknown how many of the phones were targeted or surveilled. But forensic analysis of the 37 smartphones shows that many display a tight correlation between time stamps associated with a number on the list and the initiation of surveillance, in some cases as brief as a few seconds. Forbidden Stories, a Paris-based journalism nonprofit, and Amnesty International, a human rights group, had access to the list and shared it with the news organizations, which did further research and analysis. Amnesty’s Security Lab did the forensic analyses on the smartphones.

There are two very interesting angles to this story — the NSO angle, and the Apple angle — and what makes the angles interesting is how fundamentally similar they are. Start with the NSO Group, which issued a series of statements in response to the coordinated release of stories about the phone list:

NSO does not operate the systems that it sells to vetted government customers, and does not have access to the data of its customers’ targets. NSO does not operate its technology, does not collect, nor possesses, nor has any access to any kind of data of its customers. Due to contractual and national security considerations, NSO cannot confirm or deny the identity of our government customers, as well as identity of customers of which we have shut down systems.

As NSO has previously stated, our technology was not associated in any way with the heinous murder of Jamal Khashoggi. We can confirm that our technology was not used to listen, monitor, track, or collect information regarding him or his family members mentioned in your inquiry. We previously investigated this claim, which again, is being made without validation.

These two paragraphs are, at first glance, in direct contradiction to each other: NSO first states that it doesn’t operate its software or have data on whose phones were accessed, and then states that it is certain its software was not associated with the murder of Khashoggi. Presumably there is some missing step in which NSO, in the course of investigating the claim, had access to the systems it sold to the Saudi government, but the absence of that step from this statement is striking.

There is a similar dissonance in these two paragraphs as well:

Even if Forbidden Stories were correct that an NSO Group client in Mexico targeted the journalist’s phone number in February 2017, that does not mean that the NSO Group client, or data collected by NSO Group software, were in any way connected to the journalist’s murder the following month. Correlation does not equal causation, and the gunmen who murdered the journalist could have learned of his location at a public carwash through any number of means not related to NSO Group, its technologies, or its clients…

The fact is NSO Group’s technologies have helped prevent terror attacks, gun violence, car explosions and suicide bombings. The technologies are also being used every day to break up paedophilia, sex- and drug-trafficking rings, locate missing and kidnapped children, locate survivors trapped under collapsed buildings, and protect airspace against disruptive penetration by dangerous drones. Simply put, NSO Group is on a life-saving mission, and the company will faithfully execute this mission undeterred, despite any and all continued attempts to discredit it on false grounds.

Here is the same contradiction: in the first paragraph, responding to a specific allegation about how NSO’s software was misused, the company wants to sow doubt that its software had a causal relationship with a journalist’s murder; in the second paragraph, though (which, to be clear, was further down in the statement), NSO is very confident that its software has a causal relationship with the prevention of terrorism, sex-trafficking, etc. So which is it?

The truth is that lots of things can be true at the same time: first, NSO makes a tool, Pegasus, that, like all technology, is inherently amoral; whether it is used for good or bad depends on who wields it. At the same time, though, Pegasus, like all software, is infinitely replicable; this is good for NSO Group’s business model, as the company can invest substantial amounts of money into discovering and exploiting bugs in smartphones, and then earn it back by selling the resultant software multiple times over. It also effectively guarantees, as a matter of probability, that this amoral tool will end up in the hands of some number of bad actors.

BlastDoor and Software Bugs

From another Washington Post story, entitled “Despite the hype, iPhone security no match for NSO spyware”:

The text delivered last month to the iPhone 11 of Claude Mangin, the French wife of a political activist jailed in Morocco, made no sound. It produced no image. It offered no warning of any kind as an iMessage from somebody she didn’t know delivered malware directly onto her phone — and past Apple’s security systems. Once inside, the spyware, produced by Israel’s NSO Group and licensed to one of its government clients, went to work, according to a forensic examination of her device by Amnesty International’s Security Lab. It found that between October and June, her phone was hacked multiple times with Pegasus, NSO’s signature surveillance tool, during a time when she was in France.

The examination was unable to reveal what was collected. But the potential was vast: Pegasus can collect emails, call records, social media posts, user passwords, contact lists, pictures, videos, sound recordings and browsing histories, according to security researchers and NSO marketing materials. The spyware can activate cameras or microphones to capture fresh images and recordings. It can listen to calls and voice mails. It can collect location logs of where a user has been and also determine where that user is now, along with data indicating whether the person is stationary or, if moving, in which direction. And all of this can happen without a user even touching her phone or knowing she has received a mysterious message from an unfamiliar person — in Mangin’s case, a Gmail user going by the name “linakeller2203.”

These kinds of “zero-click” attacks, as they are called within the surveillance industry, can work on even the newest generations of iPhones, after years of effort in which Apple attempted to close the door against unauthorized surveillance — and built marketing campaigns on assertions that it offers better privacy and security than rivals.

I thought this story was really good and well worth a read. The fundamental flaw being exploited is the fact that iMessage immediately processes data that it receives, whether that be a hyperlink, an image, a GIF, etc. What NSO’s software does is create a purposely malformed image, for example, that triggers a bug in iOS’s parsing code, gaining access to parts of memory that are supposed to be off-limits; that foothold is then leveraged to gain control of the phone as a whole.

The solution may seem obvious: fix the bug! Here’s the problem, though: software, at the end of the day, is not only created by humans, it is created by humans on top of humans, going back literally decades. There are bugs everywhere. This is particularly the case for some of iOS’s older code, which was written in languages like C and Objective-C that left memory management in the hands of the programmer; this made the languages very powerful, but it also dramatically increased the possibility of the sort of memory-corruption bugs that NSO’s software exploits.
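To make this class of bug concrete, here is a deliberately simplified sketch in Swift (illustrative only, and emphatically not Apple’s actual code) of a parser that trusts a length field the attacker controls, using Swift’s unsafe pointer APIs to mimic the manual memory management of that older C-era code:

```swift
import Foundation

struct MalformedPayload: Error {}

// Invented payload layout for illustration: [claimedLength][body bytes...]
func parseChunk(_ payload: [UInt8]) throws -> [UInt8] {
    guard let first = payload.first else { throw MalformedPayload() }
    let claimed = Int(first) // length field the attacker controls

    // BUG: the parser trusts `claimed` instead of checking it against the
    // payload.count - 1 bytes actually present. A payload that claims more
    // bytes than it carries makes the raw copy below read past the end of
    // the buffer, exactly the kind of out-of-bounds access that exploit
    // chains escalate into code execution. The missing fix is one line:
    //   guard claimed <= payload.count - 1 else { throw MalformedPayload() }
    let out = UnsafeMutablePointer<UInt8>.allocate(capacity: claimed)
    defer { out.deallocate() }
    payload.withUnsafeBufferPointer { src in
        _ = memcpy(out, src.baseAddress! + 1, claimed) // no bounds check
    }
    return Array(UnsafeBufferPointer(start: out, count: claimed))
}
```

The fix really is a one-line bounds check; the problem is that variations of this mistake are scattered across decades of accumulated parsing code, and every format that iMessage renders automatically is another chance to have missed one.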

The fact that iMessage had these sorts of vulnerabilities has been well-known for a while; Google’s Project Zero wrote a series of blog posts about these sorts of attacks in early 2020, and a similar zero-click attack against journalists was described by The Citizen Lab late last year. Apple, though, had a solution: BlastDoor, which Project Zero wrote about earlier this year:

One of the major changes in iOS 14 is the introduction of a new, tightly sandboxed “BlastDoor” service which is now responsible for almost all parsing of untrusted data in iMessages (for example, NSKeyedArchiver payloads). Furthermore, this service is written in Swift, a (mostly) memory safe language which makes it significantly harder to introduce classic memory corruption vulnerabilities into the code base.

What appears to be the problem is that while BlastDoor is written in a memory-safe language, it still relies on libraries that aren’t; the blog post noted:

Inside BlastDoor, the work mostly happens in BlastDoor.framework and MessagesBlastDoorService. As most of it is written in Swift, it is fairly unpleasant to statically reverse engineer it (no symbols, many virtual calls, swift runtime code sprinkled all over the place), but fortunately, that is also not really necessary for the purpose of this blog post. However, it is worth noting that while the high level control flow logic is written in Swift, some of the parsing steps still involve the existing ObjectiveC or C implementations. For example, XML is being parsed by libxml, and the NSKeyedArchiver payloads by the ObjectiveC implementation of NSKeyedUnarchiver.
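In other words, the memory-safe wrapper is only as safe as the unsafe code it calls into. Here is a minimal sketch of that boundary, with a hypothetical legacy_parse_xml function standing in for a bridged C library like libxml; this is not BlastDoor’s actual dispatch logic, just the shape of the problem:

```swift
import Foundation

// Hypothetical stand-in for a parser bridged from C (e.g. libxml); the
// Swift compiler can verify nothing about its memory behavior.
func legacy_parse_xml(_ bytes: UnsafePointer<UInt8>, _ count: Int) -> Bool {
    return count > 0 // placeholder; the real parser would live in C
}

enum PayloadKind { case text, xml }

func handleUntrustedPayload(_ data: Data, kind: PayloadKind) -> Bool {
    switch kind {
    case .text:
        // Pure Swift path: bounds-checked and memory safe.
        return String(data: data, encoding: .utf8) != nil
    case .xml:
        // Crossing the language boundary: Swift's guarantees stop here. A
        // malformed payload that triggers a memory bug inside the C parser
        // corrupts memory no matter how careful this caller is.
        guard !data.isEmpty else { return false }
        return data.withUnsafeBytes { (raw: UnsafeRawBufferPointer) -> Bool in
            let buf = raw.bindMemory(to: UInt8.self)
            return legacy_parse_xml(buf.baseAddress!, buf.count)
        }
    }
}
```

This is also why the sandboxing matters as much as the language choice: even if the C parser is compromised, the attacker lands inside a tightly restricted process and needs additional bugs to escape it.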

This is what I meant by “humans on top of humans”; very little software is written completely from scratch. Indeed, this is one of the tech industry’s biggest advantages relative to the analog world: once something is created, it can be duplicated endlessly, which means it is almost always a better use of resources to create something new than to re-invent the wheel. The problem, though, is what happens when the wheel itself is flawed, or perhaps obsolete. In this case it seems likely NSO is exploiting a bug in some of Apple’s low-level parsing code, which, for the vast majority of Apple’s customers, works perfectly fine; it is a hard choice to rewrite code that works instead of building something new on top of it.

Apple’s Response

Apple’s head of Security Engineering and Architecture, Ivan Krstić, told the Washington Post:

Apple unequivocally condemns cyberattacks against journalists, human rights activists, and others seeking to make the world a better place. For over a decade, Apple has led the industry in security innovation and, as a result, security researchers agree iPhone is the safest, most secure consumer mobile device on the market. Attacks like the ones described are highly sophisticated, cost millions of dollars to develop, often have a short shelf life, and are used to target specific individuals. While that means they are not a threat to the overwhelming majority of our users, we continue to work tirelessly to defend all our customers, and we are constantly adding new protections for their devices and data.

This is an interesting argument: basically Apple is saying that most of you just aren’t important enough to be exploited — security by insignificance! And, frankly, the company is probably right. What is worth pointing out, though, is that everyone with an iPhone is vulnerable to these exploits. After all, another way to think about Apple’s integration of software and hardware is to realize that every iPhone is identical: an exploit that works on one works on all of them. In this way Apple is the mirror image of NSO: NSO can make a tool once and sell it to anyone, because Apple makes a version of iOS once and installs it everywhere. Apple does, to be clear, invest a lot in security, which is why governments are willing to pay so much for NSO’s tool; that investment, though, is like rebar in an oak tree. Sure, the oak tree is that much less likely to blow over in a storm, but once it breaks, it is completely broken.

The good news is that there are ways that Apple can stick with its integrated model while making its security that much better: the company needs to open up iOS more to security researchers and dramatically improve its bug bounty program. The solution to a zero marginal cost problem (i.e. one bug impacts billions of devices) is to leverage zero marginal costs in your solution, in this case by getting more eyes on iOS, not fewer. The bad news, though, is that the bug bounty program in particular is kind of a mess; this bit from the Washington Post story fits with what I have heard multiple times over the years:

In its email to The Post, Apple said it uses automated tools and in-house researchers to catch the vast majority of bugs before they’re released and that it is the best in the industry. Apple also was a relative latecomer to “bug bounties,” where companies pay independent researchers for finding and disclosing software flaws that could be used by hackers in attacks. Krstić, Apple’s top security official, pushed for a bug bounty program that was added in 2016, but some independent researchers say they have stopped submitting bugs through the program because Apple tends to pay small rewards and the process can take months or years.

Last week, Nicolas Brunner, an iOS engineer for Swiss Federal Railways, detailed in a blog post how he submitted a bug to Apple that allowed someone to permanently track an iPhone user’s location without their knowledge. He said Apple was uncommunicative, slow to fix the bug and ultimately did not pay him. Asked about the blog post, an Apple spokesman referred to Apple’s email in which it said its bug bounty program is the best in the industry and that it pays higher rewards than any other company. In 2021 alone, it has paid out millions of dollars to security researchers, the email said.

Here’s the problem: millions of dollars is how much the NSO Group is charging for its software. Apple ought to be paying out multiples of that, and should do so triumphantly. Pretending that its software is perfect is a negative signal, not a positive one. Meanwhile:

Once a bug is reported to Apple, it’s given a color code, said former employees familiar with the process. Red means the bug is being actively exploited by attackers. Orange, the next level down, means the bug is serious but that there is no evidence it has been exploited yet. Orange bugs can take months to fix, and the engineering team, not security, decides when that happens. Former Apple employees recounted several instances in which bugs that were not believed to be serious were exploited against customers between the time they were reported to Apple and when they were patched.

This is the “new feature” vs. “security re-write” conundrum I noted above, but more damning to Apple (because the bugs are already identified). It also raises another philosophical conundrum endemic to tech: what difference, if any, is there between an act of commission — exploiting a bug — and an act of omission — not fixing a bug you know exists? Again, bugs are everywhere — anyone who works in software has a lot more sympathy for those who don’t fix bugs than they do for those who exploit them — but the other thing to keep in mind is that the NSO Group, for all its flaws, is at least an organization we know about that operates under some pretense of the rule of law. There are a whole host of other actors we don’t know about that are seeking out the same bugs, which is to say acts of commission done in public expose acts of omission before they lead to unseen damage. It behooves Apple — and everyone else in tech — to dramatically increase the visibility and profitability of exploiting bugs in public, or else they will, sooner or later, be exploited in private.


This Daily Update will be available as a podcast later today. To receive it in your podcast player, visit Stratechery.

The Daily Update is intended for a single recipient, but occasional forwarding is totally fine! If you would like to order multiple subscriptions for your team with a group discount (minimum 5), please contact me directly.

Thanks for being a supporter, and have a great day!