Unmasking Twitter

On March 16 Twitter posted An Update on Our Continuity Strategy During COVID-19 that included this bit on how the company was changing its policies about content moderation:

Broadening our definition of harm to address content that goes directly against guidance from authoritative sources of global and local public health information. Rather than reports, we will enforce this in close coordination with trusted partners, including public health authorities and governments, and continue to use and consult with information from those sources when reviewing content.

So what is the latest guidance from authorities? The World Health Organization (WHO) helpfully has a tweet:

So does the CDC:

So does the U.S. Surgeon General:

What I thought was particularly noteworthy, though, was this tweet from the Surgeon General yesterday:

Everyone is taking their guidance from the WHO, and that’s a problem.

WHO and China

The Wall Street Journal, in an article in February, explored why the WHO seems to act with such deference to China.

When the World Health Organization declared a global public-health emergency at the end of last month, it praised China’s “extraordinary” efforts to combat the coronavirus epidemic and urged other countries not to restrict travel. “China is actually setting a new standard for outbreak response,” WHO Director-General Tedros Adhanom Ghebreyesus said. Many governments ignored the travel advice. Other public-health experts criticized his unqualified praise for China.

Among the complaints directed at Dr. Tedros: He was bending to Beijing by lauding a Chinese response that included quarantining 60 million people—which many health experts see as inconsistent with WHO guidelines—while calling on other countries not to cut off travel and trade with China…By praising China’s response effusively, the WHO is compromising its own epidemic response standards, eroding its global authority, and sending the wrong message to other countries that might face future epidemics, they say…

Dr. Mackenzie questioned why Chinese authorities appeared to delay reporting an increase in infections in the first half of January. Many health experts believe the outbreak spread more quickly early on because local authorities tried to cover it up, including by reprimanding a local doctor who sought to raise the alarm, and then were slow to announce it could pass person to person. “China is obviously an important player,” said Dr. Mackenzie. “So everything the WHO does has to keep that in mind. At the same time, you can be overly effusive.”

The entire article is well worth a read, but the takeaway is this: at every step of this outbreak the WHO has sought to praise and accommodate China, despite the fact that news about the initial outbreak was forcibly suppressed, the fact that China violated WHO guidelines with the severity of its quarantines (which to be clear, appear to have been effective), the fact that China hid the transmission rate amongst health care workers from the WHO until February 14 and waited weeks to even allow the WHO into the country, and only then on carefully scripted and chaperoned tours.

Those tours — which, again, took weeks to negotiate, even as the coronavirus was spreading all over the globe — resulted in this report. It is, indeed, exceptionally effusive about the Chinese response, and contains no mention of China’s cover-up of human-to-human transmission in particular, which led to this tweet from the WHO:

This was particularly unfortunate given that Taiwan had told the WHO on December 31 that there was human-to-human transmission.

At the same time, much of the report is genuinely useful, particularly this warning to other countries:

COVID-19 is spreading with astonishing speed; COVID-19 outbreaks in any setting have very serious consequences; and there is now strong evidence that non-pharmaceutical interventions can reduce and even interrupt transmission. Concerningly, global and national preparedness planning is often ambivalent about such interventions. However, to reduce COVID-19 illness and death, near-term readiness planning must embrace the large-scale implementation of high-quality, non-pharmaceutical public health measures. These measures must fully incorporate immediate case detection and isolation, rigorous close contact tracing and monitoring/quarantine, and direct population/community engagement.

Had this been heeded by Western countries, all would be in far better shape than they are.

Asymptomatic Transmission

Even so, it might not have mattered, because of this paragraph:

Asymptomatic infection has been reported, but the majority of the relatively rare cases who are asymptomatic on the date of identification/report went on to develop disease. The proportion of truly asymptomatic infections is unclear but appears to be relatively rare and does not appear to be a major driver of transmission.

This is problematic in three ways:

  • First, it is at odds with evidence from the Diamond Princess (where every member of the population was tested), Iceland (where a statistically representative sample of the entire population has been tested), and South Korea (where testing was widely available even without symptoms); all show a high rate of asymptomatic infection.
  • Second, there are multiple reports from China that asymptomatic carriers spread the virus.
  • Third, there is compelling statistical evidence that asymptomatic carriers drove the majority of the virus’s spread within China (and likely, by extension, around the world). This, notably, suggests that social distancing and travel bans are particularly effective.

It seems likely this paragraph about the lack of asymptomatic transmission was strongly argued for by China. Caixin reported at the beginning of March about China’s push in this area:

China’s decision to exclude individuals who carry the new coronavirus but show no symptoms from the country’s public tally of infections has drawn debate over whether this approach obscures the scope of the epidemic, with a document received by Caixin showing a significant proportion of one province’s cases show no symptoms. Since early February, the National Health Commission (NHC) has concluded that “asymptomatic infected individuals” can infect others and demanded local authorities to report those cases. However, the commission has also decided not to include these people in its statistics for “confirmed cases” or indeed to release data on asymptomatic cases.

In an interview with Nature last week, Wu Zunyou, China’s chief epidemiologist, defended the country’s treatment of asymptomatic data. He told the magazine that a positive nucleic acid test — a genetic sequencing test used to detect the coronavirus — does not necessarily indicate an infection because viral genetic material detected through throat or nasal swabs does not confirm the virus has entered cells and begun to multiply. This notion was also echoed by Chinese representatives at the WHO.

But this view has been challenged by both domestic and overseas experts, who said that a virus must have replicated to reach detectable levels.

I am no expert, but given that a virus cannot replicate on its own, but rather must leverage the body’s cells to churn out copies of itself, it seems rather self-evident that if it is detectable it has entered cells. And yet, Director-General Tedros Adhanom Ghebreyesus argued — on Twitter! — that asymptomatic carriers were not a concern:

Again, an increasing amount of evidence is that this just isn’t true: asymptomatic carriers are a major problem.

Mask Efficacy

This is where masks come in. Much of the discussion of their efficacy has been focused on whether they keep you safe from the virus, and the evidence suggests that the answer is “probably”. SlateStarCodex has a comprehensive overview of the evidence here.

Everyone agrees, though, that those who are sick should wear masks; as the Taiwan CDC puts it, “Masks are mainly used for preventing the spread of disease and protecting people around you.” This, though, highlights the shortcomings of the “Don’t wear masks if you’re not sick” recommendations:

  • First, people are terrible in general at estimating if they are sick, particularly if their symptoms are mild.
  • Second, as Zeynep Tufekci argued in the New York Times, saying that only sick people should wear them stigmatizes the sick and makes them less likely to wear them.
  • Third, and most importantly, asymptomatic transmission means you don’t even know if you are sick in the first place.

This point was well-made by Sui Huang on Medium:

There is no scientific support for the statement that masks worn by non-professionals are “not effective”. In contrary, in view of the stated goal to “flatten the curve”, any additional, however partial reduction of transmission would be welcome — even that afforded by the simple surgical masks or home-made (DIY) masks (which would not exacerbate the supply problem). The latest biological findings on SARS-Cov-2 viral entry into human tissue and sneeze/cough-droplet ballistics suggest that the major transmission mechanism is not via the fine aerosols but large droplets, and thus, warrant the wearing of surgical masks by everyone.

This is where China’s push to exclude asymptomatic cases is so damaging: it excluded what may be the most important SARS-CoV-2 transmission vector, which resulted in the WHO not updating its guidelines, which may have resulted in far more people in the West getting sick than might have otherwise.

The good news is that the authorities appear to be listening: the Washington Post is reporting that the CDC is considering revisiting its guidelines and suggesting that people use nonmedical masks or cloth coverings (because N95 masks and surgical masks should be reserved for healthcare workers); Austria has already made masks compulsory, joining Slovakia, the Czech Republic, and Bosnia-Herzegovina (masks are, of course, widespread in most of Asia, although, contrary to popular belief, they are not compulsory by law but are often required by private businesses). Germany is considering the same.

To be very clear, N95 masks and even surgical masks, at least until they are widely available, should be saved for healthcare workers. That’s ok though: homemade masks work, and governments should be honest about that.

Twitter’s Theoretical Value

Twitter, in its guidelines, lists multiple examples of when it might enforce its new policy. The third one stood out:

Description of harmful treatments or protection measures which are known to be ineffective, do not apply to COVID-19, or are being shared out of context to mislead people, even if made in jest, such as “drinking bleach and ingesting colloidal silver will cure COVID-19.”

It sure seems like multiple health authorities — the experts Twitter is going to rely on — have told us that masks “are known to be ineffective”: is Twitter going to delete the many, many, many tweets — some of which informed this article — arguing the opposite?

The answer, obviously, is that Twitter won’t, because this is another example of where Twitter has been a welcome antidote to “experts”; what is striking, though, is how explicitly this shows that Twitter’s policy is a bad idea, not just because it allows countries like China to indirectly influence its editorial decisions, but also because it limits the search for truth.

You can think about the value of disagreeing with experts theoretically. Suppose that experts are correct 9 out of 10 times (and honestly, that’s probably low). However, if they are wrong, they have to pay out $100 (if they are right, they don’t get anything, because that is how the world works; the payout comes from being an expert in the first place). In this case, the expected cost of being an expert is:

9 x $0 + 1 x $100 = $100

An expected cost of $100: yes, you may be right most of the time, but when you get it wrong, it is going to cost you.

Now, suppose experts have to put up with Twitter and having people question them. It’s a real pain in the rear end, what with all of the trolls and misinformation. To that end, let’s suppose every episode now costs the expert $5 because they have to argue with people who aren’t experts. This suggests the cost is:

10 x $5 + 1 x $100 = $150

The problem is that this overlooks the possibility that the non-experts are sometimes right, or, perhaps more realistically, that they force the experts to re-visit their assumptions and predictions. Suppose that 10% of the time they are actually useful; now the expected cost is:

10 x $5 + 90%(1 x $100) = $140

Better than the worst case scenario, but not great.

That, though, isn’t quite right either, because it misses the fact that on a medium like Twitter, there are effectively infinite counter-arguments — indeed, that is why Twitter is so costly ($5) in the first place! The pay-off, though, is that the right argument is that much more likely to surface — let’s say 90% of the time. Now the expected cost is:

10 x $5 + 10%(1 x $100) = $60

That is a big improvement over the base case!
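The back-of-the-envelope arithmetic above can be captured in a few lines of code (a minimal sketch; the `expected_cost` function and its parameter names are my own framing of the article’s made-up numbers, not anything Twitter or the experts actually compute):

```python
def expected_cost(argue_cost, correction_prob, episodes=10, errors=1, error_cost=100):
    """Expected cost to an expert over `episodes` predictions.

    argue_cost:      per-episode cost of arguing with non-experts ($5 on Twitter)
    correction_prob: chance the open debate surfaces the right answer in time,
                     sparing the expert the cost of being wrong
    errors:          expected number of wrong predictions (1 in 10 here)
    error_cost:      payout when a prediction turns out wrong
    """
    return episodes * argue_cost + (1 - correction_prob) * errors * error_cost

# The four scenarios from the text:
print(expected_cost(0, 0.0))  # no Twitter: $100
print(expected_cost(5, 0.0))  # Twitter, but non-experts never help: $150
print(expected_cost(5, 0.1))  # non-experts useful 10% of the time: $140
print(expected_cost(5, 0.9))  # right argument surfaces 90% of the time: $60
```

The structure makes the tradeoff explicit: the per-episode cost of open argument is fixed and small, while the benefit scales with how often debate catches an expert’s error before it becomes costly.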

These numbers are, obviously, completely made up, but frankly, I think they are conservative. The cost of the coronavirus crisis in particular is so astronomical that basically any amount of investment to have avoided it or to ameliorate it is well worth it. Masks, hopefully, will be a good example: if Twitter is right, and the CDC is wrong, and economies are able to open sooner than they might have otherwise, that will be well worth all of the misinformation and terrible takes that Twitter produced in the meantime.

Internet Optimism

There is a further, even more optimistic, takeaway. In the analog world, politicians and experts needed the media to reach the general population; debates happened between experts, and the media reported their conclusions. Today, though, politicians and experts can go direct to people — note that I used nothing but tweets from experts above. That should be freeing for the media in particular, to not see Twitter as opposition, but rather as a source to challenge experts and authority figures, and make sure they are telling the truth and re-visiting their assumptions.

Indeed, while many on the right gripe that the media’s general opposition to Trump is driven by partisanship, I actually think it is a healthy approach to authority in general, particularly when that authority doesn’t need help going directly to people. Imagine if the media applied the same skepticism they give to Trump to the CDC or WHO, much less the Obama administration’s approach to the Great Financial Crisis or the Bush administration’s approach to Iraq (or, for that matter, Chinese data).

As I have argued from the beginning of this site, the Internet is an amoral force: it is up to us to decide if it is for good or bad. The best way forward is embracing Internet assumptions and using the overwhelming amount of information and free access to anyone to make things better, not trying to build a moat around what experts say is right or wrong.

I wrote a follow-up to this article in this Daily Update.

Compaq and Coronavirus

To live in a moment that will be in history books is not a particularly pleasant experience; history, though, has another cruelty: those that are not remembered at all.

Compaq’s Impact

Consider Compaq: it was one of the most important companies in tech history, and today it is all but forgotten. For example, look at this brief history of the IBM PC I wrote in 2013:

You’ve heard the phrase, “No one ever got fired for buying IBM.” That axiom in fact predates Microsoft or Apple, having originated during IBM’s System/360 heyday. But it had a powerful effect on the PC market.

In the late 1970s and very early 1980s, a new breed of personal computers were appearing on the scene, including the Commodore, MITS Altair, Apple II, and more. Some employees were bringing them into the workplace, which major corporations found unacceptable, so IT departments asked IBM for something similar. After all, “No one ever got fired…”

IBM spun up a separate team in Florida to put together something they could sell IT departments. Pressed for time, the Florida team put together a minicomputer using mostly off-the-shelf components; IBM’s RISC processors and the OS they had under development were technically superior, but Intel had a CISC processor for sale immediately, and a new company called Microsoft said their OS – DOS – could be ready in six months. For the sake of expediency, IBM decided to go with Intel and Microsoft.

The rest, as they say, is history.

But wait, there was one critical part of this story that I excluded! IBM wasn’t completely stupid: while much of the IBM PC was outsourced, the BIOS — Basic Input/Output System, which was the firmware that actually turned on the PC hardware and loaded the operating system — was copyrighted, and, IBM presumed, defensible in court. Compaq, though, figured out how to reverse-engineer the BIOS anyway. Rod Canion, who co-founded Compaq, explained on the Internet History Podcast:

What our lawyers told us was that, not only can you not use it [the copyrighted code] anybody that’s even looked at it — glanced at it — could taint the whole project. (…) We had two software people. One guy read the code and generated the functional specifications. So, it was like, reading hieroglyphics. Figuring out what it does, then writing the specification for what it does. Then, once he’s got that specification completed, he sort of hands it through a doorway or a window to another person who’s never seen IBM’s code, and he takes that spec and starts from scratch and writes our own code to be able to do the exact same function…

[We had] just a bull-headed commitment to making all the software run. We were shocked when we found out none of our competitors had done it to the same degree. We could speculate on why they had stopped short of complete compatibility: It was hard. It took a long time. And there was a natural rush to get to market. People wanted to be first. There was only one thing for us: we didn’t have a product if we couldn’t run the IBM-PC software. And if you didn’t run all of it, how would anyone be confident enough to buy your computer, if they didn’t know they were always going to be able to run new software? We took it very, very seriously.

The result was a company that came to dominate the market; in fact, Compaq was the fastest startup to hit $100 million in revenue, then the youngest firm to break into the Fortune 500, then the fastest company to hit $1 billion in revenue. By 1994 Compaq was the largest PC maker in the world.

Compaq’s Virtualization

Canion was, by that point, long gone; the board had ousted him in 1991 when the company was struggling to compete with direct-to-consumer PC makers selling “good enough” computers that were not nearly as well-engineered as Compaqs, but were faster to market and much cheaper. New CEO Eckhard Pfeiffer introduced the low-cost Presario line, which leveraged cheaper parts to break the sub-$1,000 price point, leading to Compaq achieving that first-place position. By 1996, though, growth was again slowing, and Pfeiffer needed a new plan. Part 1 was expanding into more markets; Bloomberg explains part 2:

The second part of the formula — for producing profits along with growth — will involve wider use of outsourcing and partnership deals. That’s because the new financial yardstick — return on assets — will force the divisions to slash investment in assets such as plant, inventory, and overhead wherever possible. If the $3 billion home-PC business can cut its asset base, for instance, it can still deliver a 20% annual return to the company — even though price competition in home PCs will likely keep operating margins at around 2%.

To get there, Compaq has already started “virtualizing” parts of its business. After cutting $57 off the cost of each home PC last year by building the chassis at its plant in Shenzhen, China, the company went a step further in cutting the cost of business desktop PCs: Instead of investing millions to expand the Shenzhen plant, Gregory E. Petsch, senior vice-president for operations, persuaded a Taiwanese supplier to build a new factory adjacent to Compaq’s to build the mechanicals for the business models. The best part of the deal: The Taiwanese supplier owns the inventory until it arrives at Compaq’s door in Houston. “This is the right way to do it,” says Sanford C. Bernstein & Co. computer analyst Vadim D. Zlotnikov.

It worked for a time: Compaq’s stock price surged over the next two years as the company rode the Internet wave and outsourced not only the building of PCs and eventually their design, but also their new businesses:

To compete in the big-iron business profitably, Compaq is counting on a series of relationships with other companies that can supply the kind of handholding that companies such as IBM are famous for. Instead of investing in legions of field technicians and programmers — and building up costly assets — the computer maker will use the resources of systems integrator Andersen Consulting and software maker SAP, among others. These companies have the personnel to install and maintain systems the way IBM or HP do. So Compaq gets to play in the big-iron market without incurring the costs of running its own services or software businesses. Using these partners, Compaq is already delivering packages of networks, servers, and services to big customers including General Motors, British Telecommunications, First Interstate Bancorp, and Deutsche Bundespost.

Compaq, however, may not be able to play through their intermediaries forever. “The real solution is to create your own capability. It takes longer and is more painful, but ultimately, it is more successful,” says Graham Kemp, president of G2 Research Inc.

Compaq never did bother; the engineering determination exemplified by Canion was long gone, and soon Compaq was as well: the company merged with HP in 2002 (resulting in a huge destruction in shareholder value), served as the badge for HP’s cheapest computers for a decade, and in 2012 was written down completely for $1.2 billion.

And no one even noticed.

Coronavirus Action

Compaq’s demise was, to be fair, first and foremost about the value chain within which it competed. The entire reason Compaq could build the business it did was because as long as you had an IBM-compatible BIOS, an x86 processor, and a license for Windows, you could sell a PC that was compatible with all of the software out there. That, though, meant commoditization in the long run, which is exactly what happened to Compaq and, it should be noted, basically all of its competitors.

Still, while I could not ascertain exactly which Taiwanese manufacturer it was that Compaq persuaded to build its PCs and hold them on its balance sheet, I suspect there is a good chance it is still in business: companies like Quanta and Compal took over PC manufacturing in the 1990s, and PC design entirely in the 2000s. Brand names were simply that: names, and not much more. This, of course, made for a fantastic return on assets; it was not so great for long-term sustainable revenue and profits.

It is at this point, 1400+ words in, that I must make what is probably an obvious analogy to the historical moment we are in. While there may have been an opportunity to stop SARS-CoV-2 late last year, by January (when the WHO parroted China’s insistence that there was no human-to-human transmission), worldwide spread was probably inevitable; the New York Times brilliantly illustrated the travel patterns that explain why.

Since then, though, there has been divergence between countries that acted and countries that talked. Taiwan, where I live, is perhaps the best example of the former; Dr. Jason Wang wrote an overview of Taiwan’s actions (and published a list of 124 action items), including:

  • Passengers on flights from Wuhan were screened for fever starting in December, and banned from entry in January; the rest of Hubei Province, and then China as a whole — including non-Chinese who had recently visited China — soon followed.
  • Data from the National Immigration Agency was integrated into the National Health Insurance Administration, allowing officials to quickly match up COVID-19 symptoms with recent travel history; full access was given to hospitals in late February.
  • People designated for home quarantine are tracked via their smartphones, and fined heavily for any violations.

What stood out to me was mask production; on January 23, the day that China locked down Wuhan, Taiwan had the capability of producing 2.44 million masks a day; this week Taiwan is expected to exceed 13 million masks a day, a sufficient number for not only medical workers but also the general public. The mobilization bridged government, industry, and workers, and is ongoing — the plan is for Taiwan to be able to export masks soon.

The public has done its part as well: most restaurants and buildings check the temperature of anyone who enters, and far more people than usual are wearing said masks, which worked to stop the spread of SARS in 2003, and which are likely particularly effective in the case of asymptomatic carriers of SARS-CoV-2.

The Great Resignation

The contrast with Western countries is stark: to the extent government officials across the Western world were discussing the coronavirus a month ago, it was to express support for China or insist that life carry on as before; I already praised the role Twitter played in sounding the alarm — often in the face of downplaying from the media — but even that was, by definition, talk. What does not appear to have happened anywhere across the West is any sort of meaningful action until it was far too late.

This has resulted in two problems: first, by the time Western governments acted, the only available option was widespread lockdowns. Second, the talk itself is missing even the possibility of action. For example, over the last 48 hours there has been increasing discussion about trade-offs, specifically the trade-off between limiting the spread of the coronavirus and the halt in economic activity that is required to do so. Given how much I write about tradeoffs, I must surely consider this a good thing, no?

In fact, I think it is incredibly tragic, but not for the reasons you might think. The fact of the matter is that we do make tradeoffs between human lives and economic activity all the time — speed limits are perhaps the most banal example. What is truly tragic is the utter lack of resolve and lack of a bias for action in this so-called tradeoff. The only options are to give up the economy or give in to the virus: the possibility of actually beating the damn thing is completely missing from the conversation. To put it another way, the West feels like Compaq in the 1990s, relying on its brand name and partnerships with other entities to do the actual work, forgetting that it was hard work and determination that made it great in the first place.

The best overview of how actual hard work could make a difference was written by Tomas Pueyo in this article entitled The Hammer and the Dance; to briefly summarize, the idea is to lock down now to stop the uncontrolled spread of SARS-CoV-2, and then leverage the same sort of epidemiological tools that countries like Taiwan have, including aggressive quarantining of known infections and extensive contact tracing.

This gets to the second reason why the current discussion of tradeoffs is so disappointing: not only is it debating a tradeoff that we don’t necessarily need to make, at least in the long run, it is also foreclosing discussions on tradeoffs we absolutely need to consider. Consider this picture:

Police scooters checking on a quarantined citizen

That was taken by me, outside of my apartment building; apparently one of my neighbors just returned from America and the police were checking on his home quarantine. In fact, look more closely at what Taiwan has done to contain SARS-CoV-2 to-date — you can reframe everything in a far more problematic way:

  • Restrict international movement and close borders (including banning all non-resident foreigners this week).
  • Integrate and share private data across government agencies and with hospitals.
  • Track private individual movements via their smartphones.

Even the mask production I praised required requisitioning private property by the government, and the refusal of local businesses to serve customers without masks or insist on taking their temperature is probably surprising to many in the West.

And yet, life here is normal. Kids are in school, restaurants are open, the grocery stores are well-stocked. I would be lying if I didn’t admit that the rather shocking assertions of government authority and surveillance that make this possible, all of which I would have decried a few months ago, feel pretty liberating even as they are troubling. We need to talk about this!

Policing Talk

The first problem of being a society of talk, not action, is the inability to even consider hard work as a solution; the second is a blindness to the real trade-offs at play. The third, though, is the most sinister of all: if talk is all that matters, then policing talk becomes an end in itself.

I know, for example, that I am going to get pushback on this Article, telling me to stay in my lane, and leave discussions of the coronavirus to the experts or government officials. Never mind that so many of those experts and officials have made mistake after mistake — it’s all in the memory hole now!

This is not at all to say that non-experts have the answers either; as I wrote last week the amount of misinformation is exploding. Rather, the point is that this is a situation with an unmatched-in-my-lifetime combination of massive uncertainty with unfathomable stakes. It follows, then, that the likelihood of any one person or entity having the correct answer is low, while the imperative to allow the right answer to bubble up — or, more accurately, be discovered step-by-step, idea-after-discarded-idea — is high. There is more value than ever in verifying or disproving ideas and information, and far more danger than ever in policing them.

Moreover, if the real tradeoffs to consider are about trading away civil liberties — which is exactly what has happened in Taiwan, at least to some extent — then the imperative to preserve debate about these matters is even more important. The most precious civil liberty of all is the ability to talk. Indeed, that is the terrible irony of losing the capability and will for action: it ultimately endangers the only thing we seem to be good at, and in this case, the potential writedown is too terrible to consider.

Defining Information

Last Wednesday morning, I wrote a piece called Zero Trust Information, where I lauded social media generally and Twitter specifically for functioning as an early warning system for the impending coronavirus crisis. For weeks a motley collection of folks — some epidemiologists and public health officials, but many not — had been sounding the alarm on Twitter about the exponential spread of SARS-CoV-2 and the impact the resultant COVID-19 disease would have on health care systems, culminating in a member of the Seattle Flu Study tweeting the results of an illegal test showing community transmission in Washington State. As I wrote in that piece:

Once we get through this crisis, it will be worth keeping in mind the story of Twitter and the heroic Seattle Flu Study team: what stopped them from doing critical research was too much centralization of authority and bureaucratic decision-making; what ultimately made their research materially accelerate the response of individuals and companies all over the country was first their bravery and sense of duty, and secondly the fact that on the Internet anyone can publish anything.

Later that night, after a presidential address, the infection of Tom Hanks, and the suspension of the NBA season, the rest of the country finally woke up, and along the way something interesting happened: Twitter became a much worse source of information.

Information Over Time

The biggest complaint I received about Zero Trust Information was this graph, which some folks argued misrepresented the situation online:

A drawing of The Implication of More Information

While I used a normal distribution for illustrative purposes, not as an assertion about relative volumes, I can understand why some people took it literally; in fact, my only point was to show that an increase on the negative left side of the distribution — whatever that distribution ultimately looks like — was enabled by the exact same forces that allowed for an increase on the positive right side of the distribution.

I would make two further observations: first, generally speaking, the left side of that distribution — again, whatever it looks like — is almost certainly larger in quantity than the right side; producing misinformation is cheap and can even be automated (i.e. misinformation bots on social media). At the same time, when you consider something like the coronavirus, the right side of the distribution is massively larger in impact.

What I noticed over the last week, though, is how these things change over time. Consider some variations of the above graph — none of which, I must stress, are making specific assertions about quantities, but which I suspect are directionally correct.

Here is what the coronavirus information graph might have looked like in early February:

The information landscape for the coronavirus in early February

There was a lot of valuable chatter on Twitter discussing the potential impact of the coronavirus, a bit of China-focused coverage in the media (whose attention was largely on President Trump’s impeachment trial), and relatively little misinformation. Note also that the absolute amount of information was quite small.

By late February it looked like this:

The information landscape for the coronavirus in late February

There was a huge amount of chatter on Twitter discussing the potential impact of the coronavirus, as well as an increasing number of people — and media — arguing it was “just the flu” (that is the part under misinformation); overall coverage was higher but still relatively muted.

The first week of March looked like this:

The information landscape for the coronavirus in early March

Note how the total amount of information was rising significantly, particularly valuable information on Twitter as well as increased media coverage.

Then came the events of last Wednesday, and the information graph exploded:

The information landscape for the coronavirus last week

Pretty much by definition the most growth in information happened on the left two-thirds of the graph. There were very few people who learned about the coronavirus last week who were offering meaningfully interesting new information on Twitter; there were plenty, though, that were passing along whatever information they could get their hands on without much care as to whether it was accurate or not.

(Computer) Viruses

If you will permit a digression about a very different type of virus, back in the 2000s one of the eternal debates on message boards and comment threads was the relative security of Windows versus the Mac. Apple would advertise that Macs had far fewer viruses (brace yourself for a startling lack of social distancing):

Back in 2006, when this commercial was released, there were several aspects of Unix-based Macs that were more secure than pre-Vista Windows, including a better security model, privilege escalation checks, enforced filesystem permissions, and better browser sandboxing. Just as important, though, was the fact that there just weren’t that many Macs, relatively speaking.

A virus is, after all, a program, which means that someone needs to write it, debug it, and distribute it. Given that over 90% of the PCs in the world ran Windows, writing a virus for Windows offered a far higher return on investment for hackers who were primarily looking to make money.

Notably, though, if your motivation was something other than money — status, say — you attacked the Mac. That is what earned headlines:

Hacking a Mac made headlines

I suspect we see the same sort of dynamic with information on social media in particular; there is very little motivation to create misinformation about topics that very few people are talking about, while there is a lot of motivation — money, mischief, partisan advantage, panic — to create misinformation about very popular topics.

In other words, the utility of social media as a news source is inversely correlated to how many people are interested in a given topic:

Utility versus interest on social media

This makes intuitive sense: social networks are often about friends and family, which are intensely important to you but not to anyone else, because they care about their own friends and family. Needless to say, Macedonian teens aren’t spreading rumors about Aunt Virginia or Uncle Robert.

They also weren’t talking about the coronavirus — but people who cared were.

Information Types

The title Zero Trust Information was an analogy to Zero Trust Networking, which authenticates at the level of the individual, instead of relying on the castle-and-moat model, which cares only about whether a device is behind a firewall. Generally an individual has to have both a valid password and a verified device to access sensitive information and applications from anywhere on the Internet — including on the corporate network. My argument is that information verification also has to happen at the level of the individual, but what is the equivalent of a password and verified device?

I think an understanding of the different types of information and how they are distributed gives some helpful heuristics:

  • For emergent information, like the coronavirus in February, you need a high degree of sensitivity and a high tolerance for uncertainty.
  • For facts, like the coronavirus right now, you need a much lower degree of sensitivity and a much lower tolerance of uncertainty: either something is verifiably known or it isn’t.

You could even make a two-by-two:

Sensitivity, uncertainty, facts, and emergent information

It is interesting, by the way, to consider what fits in the other two corners:

Four types of information

Narratives around ongoing stories rely on a high degree of sensitivity (in an attempt to find the narrative thread) and a low tolerance for uncertainty (in an attempt to sell the narrative). History, on the other hand, requires a low degree of sensitivity (record what matters) and a high tolerance of uncertainty (we weren’t there).

Information Business Models

There is also a business model aspect to these different types of information. To return to The Internet and the Third Estate:

The economics of printing books was fundamentally different from the economics of copying by hand. The latter was purely an operational expense: output was strictly determined by the input of labor. The former, though, was mostly a capital expense: first, to construct the printing press, and second, to set the type for a book. The best way to pay for these significant up-front expenses was to produce as many copies of a particular book that could be sold.

How, then, to maximize the number of copies that could be sold? The answer was to print using the most widely used dialect of a particular language, which in turn incentivized people to adopt that dialect, standardizing language across Europe. That, by extension, deepened the affinities between city-states with shared languages, particularly over decades as a shared culture developed around books and later newspapers.

This model was ideal for information that required a low degree of sensitivity — facts and history. It required a fair bit of expense upfront to create a newspaper or a book, and the way to gain maximum leverage on that expense was to produce things that were valuable to the most people possible.

The Internet, though, changed the cost equation on the production side too:

What makes the Internet different from the printing press? Usually when I have written about this topic I have focused on marginal costs: books and newspapers may have been a lot cheaper to produce than handwritten manuscripts, but they are still not-zero. What is published on the Internet, meanwhile, can reach anyone anywhere, drastically increasing supply and placing a premium on discovery; this shifted economic power from publications to Aggregators.

Just as important, though, particularly in terms of the impact on society, is the drastic reduction in fixed costs. Not only can existing publishers reach anyone, anyone can become a publisher. Moreover, they don’t even need a publication: social media gives everyone the means to broadcast to the entire world.

This is what made both emergent information and narratives not just financially viable, but in fact more lucrative than facts or history. Emergent information can come from anywhere, which is another way of saying anyone can publish, and most of what people have to say is really only interesting to a small circle of friends and family. That, though, scales perfectly with the Internet’s free distribution, capturing the attention of everyone individually, which can then be sold to advertisers.

As for narratives, at their best they appeal to the innate human desire for stories and our desire to make sense of the world; at their worst they appeal to people’s confirmation bias and tribal instincts. Either way, they tend to be polarizing, which is bad news in a world of fixed up-front costs, but exactly what you want when production is cheap and attention is scarce.

Again, neither emergent information nor narratives are inherently bad. Both, though, can lead to bad outcomes: emergent information can be easily overwhelmed by misinformation, particularly when the incentives are wrong, and narratives can themselves corrupt facts. Or, as I recounted last week, emergent information can reveal valuable information that would not otherwise be published.

The Clarifying Coronavirus

In some respects this discussion feels beside the point; there are a lot of people suffering right now, and everyone is scared. Some will get COVID-19, some will die, and everyone will have their lives disrupted.

Perhaps, though, that is why the coronavirus seems so clarifying when it comes to defining information. Emergent information was critical, both in terms of being censored in China, and in how it helped sound the alarm in the U.S. That success, though, was met by the failure of allowing narratives to obscure facts, whether those narratives were “just the flu”, or a suggestion of a media conspiracy, or mocking excitable tech bros on Twitter. And, looming over it all, is the reality that this moment will make it into the history books.

Zero Trust Information

Yesterday Google ordered its entire North American staff to work from home as part of an effort to limit the spread of SARS-CoV-2, the virus that leads to COVID-19. It is an appropriate move for any organization that can do so; furthermore, Google, along with the other major tech companies, also plans to pay its army of contractors that normally provide services for those employees.

Google’s larger contribution, though, happened five years ago, when the company led the move to zero trust networking for its internal applications, a model that has since been adopted by most other tech companies. While this wasn’t explicitly about working from home, it did make it a lot easier to pull off on short notice.

Zero Trust Networking

In 1974 Vint Cerf, Yogen Dalal, and Carl Sunshine published a seminal paper entitled “Specification of Internet Transmission Control Program”; it was important technologically because it laid out the specifications for the TCP protocol that undergirds the Internet, but just as notable, at least from a cultural perspective, is that it coined the term “Internet.” The name feels like an accident; most of the paper refers to the “internetwork” Transmission Control Program and “internetwork” packets, which makes sense: networks already existed, the trick was figuring out how to connect them together.

Networks came first commercially as well. In the 1980s Novell created a “network operating system” that consisted of local servers, ethernet cards, and PC software, enabling local area networks inside large corporations that could share files, printers, and other resources. Novell’s position was eventually undermined by the inclusion of network functionality in client operating systems, commoditized ethernet cards, channel mismanagement, and a full-on assault from Microsoft, but the model of the corporate intranet enabling shared resources remained.

The problem, though, was the Internet: connecting any one computer on the local area network to the Internet effectively connected all of the computers and servers on the local area network to the Internet. The solution was perimeter-based security, aka the “castle-and-moat” approach: enterprises would set up firewalls that prevented outside access to internal networks. The implication was binary: if you were on the internal network, you were trusted, and if you were outside, you were not.

Castle and Moat Network Security

This, though, presented two problems: first, if any intruder made it past the firewall, they would have full access to the entire network. Second, if any employee were not physically at work, they were blocked from the network. The solution to the second problem was a virtual private network, which utilized encryption to let a remote employee’s computer operate as if it were physically on the corporate network, but the larger point is the fundamental contradiction represented by these two problems: enabling outside access while trying to keep outsiders out.

These problems were dramatically exacerbated by the three great trends of the last decade: smartphones, software-as-a-service, and cloud computing. Now instead of the occasional salesperson or traveling executive who needed to connect their laptop to the corporate network, every single employee had a portable device that was connected to the Internet all of the time; now, instead of accessing applications hosted on an internal network, employees wanted to access applications operated by a SaaS provider; now, instead of corporate resources being on-premises, they were in public clouds run by AWS or Microsoft. What kind of moat could possibly contain all of these use cases?

The answer is to not even try: instead of trying to put everything inside of a castle, put everything in the castle outside the moat, and assume that everyone is a threat. Thus the name: zero-trust networking.

Zero Trust Networking

In this model trust is at the level of the verified individual: access (usually) depends on multi-factor authentication (such as a password and a trusted device, or temporary code), and even once authenticated an individual only has access to granularly-defined resources or applications. This model solves all of the issues inherent to a castle-and-moat approach:

  • If there is no internal network, there is no longer any concept of an outside intruder or a remote worker.
  • Individual-based authentication scales on the user side across devices and on the application side across on-premises resources, SaaS applications, or the public cloud (particularly when implemented with single-sign on services like Okta or Azure Active Directory).
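The individual-level access decision described above can be sketched in a few lines; everything here — the field names, the entitlement table, the users — is a hypothetical illustration of the model, not any particular vendor’s API:

```python
from dataclasses import dataclass

# A zero trust access check: every request is evaluated on its own merits,
# with no notion of a trusted internal network.

@dataclass
class AccessRequest:
    user: str
    password_ok: bool      # first factor: credential verified
    device_trusted: bool   # second factor: enrolled, verified device
    resource: str          # the specific application being requested

# Granular, per-user entitlements replace the castle-and-moat: authenticating
# grants nothing beyond what is explicitly listed here.
ENTITLEMENTS = {
    "alice": {"payroll", "wiki"},
    "bob": {"wiki"},
}

def allow(req: AccessRequest) -> bool:
    # Multi-factor authentication first...
    if not (req.password_ok and req.device_trusted):
        return False
    # ...then a per-resource authorization check. Note that *where* the
    # request originates never enters into the decision.
    return req.resource in ENTITLEMENTS.get(req.user, set())

print(allow(AccessRequest("alice", True, True, "payroll")))   # True
print(allow(AccessRequest("alice", True, False, "payroll")))  # False: untrusted device
print(allow(AccessRequest("bob", True, True, "payroll")))     # False: no entitlement
```

In a real deployment the credential and device checks would be delegated to an identity provider, but the shape of the decision — authenticate the individual, then authorize the specific resource — is the same.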

In short, zero trust computing starts with Internet assumptions: everyone and everything is connected, both good and bad, and leverages the power of zero transaction costs to make continuous access decisions at a far more distributed and granular level than would ever be possible when it comes to physical security, rendering the fundamental contradiction at the core of castle-and-moat security moot.

Castles and Moats

Castle-and-moat security is hardly limited to corporate information; it is the way societies have thought about information generally from, well, the times of actual castles-and-moats. I wrote last fall in The Internet and the Third Estate:

In the Middle Ages the principal organizing entity for Europe was the Catholic Church. Relatedly, the Catholic Church also held a de facto monopoly on the distribution of information: most books were in Latin, copied laboriously by hand by monks. There was some degree of ethnic affinity between various members of the nobility and the commoners on their lands, but underneath the umbrella of the Catholic Church were primarily independent city-states.

With castles and moats!

The printing press changed all of this. Suddenly Martin Luther, whose critique of the Catholic Church was strikingly similar to Jan Hus 100 years earlier, was not limited to spreading his beliefs to his local area (Prague in the case of Hus), but could rather see those beliefs spread throughout Europe; the nobility seized the opportunity to interpret the Bible in a way that suited their local interests, gradually shaking off the control of the Catholic Church.

This resulted in new gatekeepers:

Just as the Catholic Church ensured its primacy by controlling information, the modern meritocracy has done the same, not so much by controlling the press but rather by incorporating it into a broader national consensus.

Here again economics play a role: while books are still sold for a profit, over the last 150 years newspapers have become more widely read, and then television became the dominant medium. All, though, were vehicles for the “press”, which was primarily funded through advertising, which was inextricably tied up with large enterprise…More broadly, the press, big business, and politicians all operated within a broad, nationally-oriented consensus.

The Internet, though, threatens second estate gatekeepers by giving anyone the power to publish:

Just as important, though, particularly in terms of the impact on society, is the drastic reduction in fixed costs. Not only can existing publishers reach anyone, anyone can become a publisher. Moreover, they don’t even need a publication: social media gives everyone the means to broadcast to the entire world. Read again Zuckerberg’s description of the Fifth Estate:

People having the power to express themselves at scale is a new kind of force in the world — a Fifth Estate alongside the other power structures of society. People no longer have to rely on traditional gatekeepers in politics or media to make their voices heard, and that has important consequences.

It is difficult to overstate how much of an understatement that is. I just recounted how the printing press effectively overthrew the First Estate, leading to the establishment of nation-states and the creation and empowerment of a new nobility. The implication of overthrowing the Second Estate, via the empowerment of commoners, is almost too radical to imagine.

The current gatekeepers are sure it is a disaster, especially “misinformation.” Everything from Macedonian teenagers to Russian intelligence to determined partisans and politicians are held up as existential threats, and it’s not hard to see why: the current media model is predicated on being the primary source of information, and if there is false information, surely the public is in danger of being misinformed?

The Implication of More Information

The problem, of course, is that focusing on misinformation — which to be clear, absolutely exists — is to overlook the other part of the “everyone is a publisher” equation: there has been an explosion in the amount of information available, true or not. Suppose that all published information followed a normal distribution (I am using a normal distribution for illustrative purposes only, not claiming it is accurate; obviously in sheer volume, given the ease with which it is generated, there is more misinformation):

The normal distribution of information

Before the Internet, the total amount of misinformation would be low in relative and absolute terms, because the total amount of information would be low:

Less information means less misinformation

After the Internet, though, the total amount of information is so much greater that even if the total amount of misinformation remains just as low relatively speaking, the absolute amount will be correspondingly greater:

More information = more misinformation
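A toy calculation makes the same point as the graphs; the rates and totals below are purely illustrative, as the distributions themselves are:

```python
# Hold the proportions of misinformation and valuable information constant
# and vary only total volume: the absolute amount of both grows together.
misinfo_rate = 0.05    # illustrative: 5% of published items are misinformation
valuable_rate = 0.05   # illustrative: 5% are genuinely valuable

pre_internet_total = 10_000        # few publishers, little information
post_internet_total = 10_000_000   # everyone is a publisher

for total in (pre_internet_total, post_internet_total):
    misinfo = total * misinfo_rate
    valuable = total * valuable_rate
    print(f"total={total:>10,}  misinformation={misinfo:>9,.0f}  valuable={valuable:>9,.0f}")
```

The relative share of misinformation never changes in this sketch, yet the absolute count explodes — which is exactly what makes it easy to find if you go looking for it.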

It follows, then, that it is easier than ever to find bad information if you look hard enough, and helpfully, search engines are very efficient in doing just that. This makes it easy to write stories like this New York Times article on Sunday:

As the coronavirus has spread across the world, so too has misinformation about it, despite an aggressive effort by social media companies to prevent its dissemination. Facebook, Google and Twitter said they were removing misinformation about the coronavirus as fast as they could find it, and were working with the World Health Organization and other government organizations to ensure that people got accurate information.

But a search by The New York Times found dozens of videos, photographs and written posts on each of the social media platforms that appeared to have slipped through the cracks. The posts were not limited to English. Many were originally in languages ranging from Hindi and Urdu to Hebrew and Farsi, reflecting the trajectory of the virus as it has traveled around the world…The spread of false and malicious content about the coronavirus has been a stark reminder of the uphill battle fought by researchers and internet companies. Even when the companies are determined to protect the truth, they are often outgunned and outwitted by the internet’s liars and thieves. There is so much inaccurate information about the virus that the W.H.O. has said it was confronting an “infodemic.”

As I noted in the Daily Update on Monday:

The phrase “a search by The New York Times” is the tell here: the power of search in a world defined by the abundance of information is that you can find whatever it is you wish to; perhaps unsurprisingly, the New York Times wished to find misinformation on the major tech platforms, and even less surprisingly, it succeeded.

A far more interesting story, to my mind, is about the other side of that distribution. Sure, the implication of the Internet making everyone a publisher is that there is far more misinformation on an absolute basis, but that also suggests there is far more valuable information that was not previously available:

More information = more valuable information

It is hard to think of a better example than the last two months and the spread of COVID-19. From January on there has been extensive information about SARS-CoV-2 and COVID-19 shared on Twitter in particular, including supporting blog posts and links to medical papers published at astounding speed, often far ahead of traditional media. In addition, multiple experts, including epidemiologists and public health officials, have been offering up their opinions directly.

Moreover, particularly in the last several weeks, that burgeoning network has been sounding the alarm about the crisis hitting the U.S. Indeed, it is only because of Twitter that we knew that the crisis had long since started (to return to the distribution illustration, in terms of impact the skew goes in the opposite direction of the volume).

The Seattle Flu Study Story

Perhaps the single most important piece of information about the COVID-19 crisis in the United States was this March 1 tweet thread from Trevor Bedford, a member of the Seattle Flu Study team:

You can draw a direct line from this tweet thread to widespread social distancing, particularly on the West Coast: many companies are working from home, traveling has plummeted, conferences are being canceled. Yes, there should absolutely be more, but every little bit helps; information that came not from authority figures or gatekeepers but rather Twitter is absolutely going to save lives.

What is remarkable about these decisions, though, is that they were made in an absence of official data. The President has spent weeks downplaying the impending crisis, and the CDC and FDA have put handcuffs on state and private labs even as they have completely dropped the ball on test kits that would show what is surely a significant and rapidly growing number of cases. Incredibly, as this New York Times story documents, those handcuffs were quite explicitly applied to Bedford’s team:

[In late January] the Washington State Department of Health began discussions with the Seattle Flu Study already going on in the state. But there was a hitch: The flu project primarily used research laboratories, not clinical ones, and its coronavirus test was not approved by the Food and Drug Administration. And so the group was not certified to provide test results to anyone outside of their own investigators…

C.D.C. officials repeatedly said it would not be possible [to test for coronavirus]. “If you want to use your test as a screening tool, you would have to check with F.D.A.,” Gayle Langley, an officer at the C.D.C.’s National Center for Immunization and Respiratory Disease, wrote back in an email on Feb. 16. But the F.D.A. could not offer the approval because the lab was not certified as a clinical laboratory under regulations established by the Centers for Medicare & Medicaid Services, a process that could take months.

The Seattle Flu Study, led by Dr. Helen Y. Chu, finally decided to ignore the CDC:

On the other side of the country in Seattle, Dr. Chu and her flu study colleagues, unwilling to wait any longer, decided to begin running samples. A technician in the laboratory of Dr. Lea Starita who was testing samples soon got a hit…

“What we were allowed to do was to keep it to ourselves,” Dr. Chu said. “But what we felt like we needed to do was to tell public health.” They decided the right thing to do was to inform local health officials…

Later that day, the investigators and Seattle health officials gathered with representatives of the C.D.C. and the F.D.A. to discuss what happened. The message from the federal government was blunt. “What they said on that phone call very clearly was cease and desist to Helen Chu,” Dr. Lindquist remembered. “Stop testing.”

Still, the troubling finding reshaped how officials understood the outbreak. Seattle Flu Study scientists quickly sequenced the genome of the virus, finding a genetic variation also present in the country’s first coronavirus case.

And thus came Bedford’s tweetstorm, and the response from private companies and individuals that, while weeks later than it should have been, was still far earlier than it might have been in a world of gatekeepers.

The Internet and Individual Verification

The Internet, famously, grew out of a Department of Defense project called ARPANET; that was the network Cerf, Dalal, and Sunshine developed TCP for. Contrary to popular myth, though, the goal was not to build a communications network that could survive a nuclear attack, but something more prosaic: there were a limited number of high-powered computers available to researchers, and the Advanced Research Projects Agency (ARPA) wanted to make it easier to access them.

There is a reason that the nuclear war motive has stuck, though: for one, that was the motivation for the theoretical work around packet switching that became the TCP/IP protocol; for another, the Internet is in fact remarkably resilient: despite the best efforts of gatekeepers, information of all types flows freely.1 Yes, that includes misinformation, but it also includes extremely valuable information; in the case of COVID-19 it will prove to have made a very bad problem slightly better.

This is not to say that the Internet means that everything is going to be ok, either in the world generally or the coronavirus crisis specifically. But once we get through this crisis, it will be worth keeping in mind the story of Twitter and the heroic Seattle Flu Study team: what stopped them from doing critical research was too much centralization of authority and bureaucratic decision-making; what ultimately made their research materially accelerate the response of individuals and companies all over the country was first their bravery and sense of duty, and secondly the fact that on the Internet anyone can publish anything.

To that end, instead of trying to fight the Internet — to try and build a castle and moat around information, with all of the impossible tradeoffs that result — how much more value might there be in embracing the deluge? All available evidence is that young people in particular are figuring out the importance of individual verification; for example, this study from the Reuters Institute at Oxford:

We didn’t find, in our interviews, quite the crisis of trust in the media that we often hear about among young people. There is a general disbelief at some of the politicised opinion thrown around, but there is also a lot of appreciation of the quality of some of the individuals’ favoured brands. Fake news itself is seen as more of a nuisance than a democratic meltdown, especially given that the perceived scale of the problem is relatively small compared with the public attention it seems to receive. Users therefore feel capable of taking these issues into their own hands.

A previous study by Reuters Institute also found that social media exposed more viewpoints relative to offline news consumption, and another study suggested that political polarization was greatest amongst older people who used the Internet the least.

Again, this is not to say that everything is fine, either in terms of the coronavirus in the short term or social media and unmediated information in the medium term. There is, though, reason for optimism, and a belief that things will get better the more quickly we embrace the idea that fewer gatekeepers and more information mean innovation and good ideas in proportion to the flood of misinformation, a flood that people who grew up with the Internet are already learning to ignore.

  1. China is an obvious exception; I addressed the contrast in the aforelinked “The Internet and the Third Estate”.

Email Addresses and Razor Blades

At first glance, the proposed (and now-withdrawn) acquisition of Harry’s Razors by Edgewell Personal Care Co. — the makers of Schick — and Intuit’s announced acquisition of Credit Karma don’t appear to have much in common. There is, though, a common thread: digital advertising, and the dominance of Facebook and Google.

FTC Sues to Block Harry’s Acquisition

Start with Harry’s: the acquisition, which was announced last May, is off the table after the FTC filed suit to block it. Interestingly, despite the fact that Harry’s is associated with direct-to-consumer (DTC) sales, the FTC’s reasoning rested on Harry’s presence in brick-and-mortar retail. From the FTC complaint:

Harry’s and Dollar Shave Club quickly succeeded in — and largely filled — the previously untapped online space. But the successful entry by Harry’s and Dollar Shave Club with their online Direct to Consumer (“DTC”) models did not stop the price increases by P&G and Edgewell, both of which sold their products primarily through brick-and-mortar retailers.

Significant change came when Harry’s made the first — and, to date, only — successful jump from an online DTC platform into brick-and-mortar retail. In August 2016, Harry’s launched exclusively at Target with suggested retail prices several dollars below the most comparable Schick and Gillette products, a significant discount. Harry’s arrival in Target made a substantial impact, with Harry’s immediately winning customers from Edgewell and P&G. Edgewell described Harry’s trajectory as one of “[REDACTED]” and observed that Harry’s took “[REDACTED].”

Harry’s entry at Target ended the long-standing practice of reciprocal price increases by Gillette and Edgewell. Shortly after Harry’s successful launch at Target, P&G implemented a “[REDACTED]” price reduction across its portfolio of razors, reversing course on its practice of leading yearly price increases. Edgewell changed course as well, abandoning its strategy of being a “[REDACTED]” of Gillette’s pricing actions. Rather than match Gillette’s price decrease, Edgewell began tracking Harry’s growth and increased promotional spend (funding for discounts and other promotions) [REDACTED]. Edgewell hoped that this effort would “[REDACTED],” [REDACTED].

The FTC went on to note:

Harry’s significant entry into brick-and-mortar retail transformed the wet shave razor market from a comfortable duopoly to a competitive battleground. Edgewell, in particular, has found itself fighting the threat that Harry’s poses to both its branded products and its private label offerings (i.e., razors manufactured by Edgewell for a retailer partner, to be sold under the retailer’s brand). Consumers benefited from the resulting price discounts and the introduction of additional Edgewell branded and private label choices.

The Proposed Acquisition is likely to result in significant harm by eliminating competition between important head-to-head competitors. The Proposed Acquisition also will harm competition by removing a particularly disruptive competitor from the marketplace at a time when that competitor is currently expanding into additional retailers.

Finally, the FTC concluded that the acquisition was presumptively illegal using the Herfindahl-Hirschman Index, which measures concentration in a given market:

Under the 2010 U.S. Department of Justice and Federal Trade Commission Horizontal Merger Guidelines (“Merger Guidelines”), a post-acquisition market concentration level above 2,500 points, as measured by the Herfindahl-Hirschman Index (“HHI”), and an increase in HHI of more than 200 points renders an acquisition presumptively unlawful. Transactions in highly concentrated markets—markets with an HHI above 2,500 points — with an HHI increase of more than 100 points potentially raise significant competitive concerns and warrant scrutiny. The HHI is calculated by totaling the squares of the market shares of every firm in the relevant market pre- and post-acquisition.

The market for the manufacture and sale of wet shave razors in the United States is already highly concentrated, with an HHI of over 3,000. The Proposed Acquisition increases the concentration in this market by more than 200 points and is therefore presumptively illegal…Changes in HHI based on current market shares understate the competitive significance of the Proposed Acquisition because Harry’s continues to expand into additional brick-and-mortar retailers. Recognizing that the Proposed Acquisition will arrest Harry’s independent expansion, it is appropriate to analyze Harry’s competitive significance by using prior entry events to project future competitive significance. Moreover, current market shares especially understate the competitive significance of Harry’s in markets that include sales of women’s razors because Harry’s Flamingo product launched very recently.
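The HHI screen the FTC describes is simple arithmetic, and a short sketch makes the mechanics concrete. The market shares below are hypothetical, not the actual (redacted) figures from the complaint; they are chosen only to mirror the complaint's qualitative claims of a pre-merger HHI over 3,000 and an increase of more than 200 points:

```python
# Sketch of the Merger Guidelines' HHI screen. Shares are hypothetical,
# illustrative percentages, NOT the actual figures from the FTC complaint.

def hhi(shares):
    """HHI: the sum of the squares of each firm's market share (in percent)."""
    return sum(s ** 2 for s in shares)

# Hypothetical pre-merger wet shave razor market shares (percent)
pre_merger = {"Gillette": 50, "Edgewell": 25, "Harry's": 5, "Others": 20}

# Post-merger, Edgewell and Harry's combine into a single firm
post_merger = {"Gillette": 50, "Edgewell + Harry's": 30, "Others": 20}

pre = hhi(pre_merger.values())    # 50² + 25² + 5² + 20² = 3,550
post = hhi(post_merger.values())  # 50² + 30² + 20² = 3,800
delta = post - pre                # 250

# Guidelines screen: a highly concentrated market (HHI above 2,500) plus
# an increase of more than 200 points is presumptively unlawful
presumptively_unlawful = post > 2500 and delta > 200
print(pre, post, delta, presumptively_unlawful)  # 3550 3800 250 True
```

Note how squaring the shares makes the index sensitive to concentration: combining a 25% firm and a 5% firm adds 2 × 25 × 5 = 250 points, exactly the kind of jump that clears the 200-point threshold even when the acquired firm is small.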

Schick abandoned the deal a few days after the FTC filed suit; Harry’s, surprisingly, did not negotiate a breakup fee.

Acquisitions and Incentives

I quoted fairly extensively from the FTC’s complaint because, frankly, it’s quite compelling. Harry’s emergence led to lower prices for consumers, and Edgewell was almost certainly looking to relieve said downward pressure on prices, along with other more upstanding motivations like gaining new management expertise and its own DTC channel.

At the same time, you can see some of the problematic incentives inherent in blocking a merger that I discussed two weeks ago. First, given that most CPG categories are dominated by a small number of incumbents (given the scale advantages necessary to compete for shelf space globally), investors have to be increasingly wary of investing in the space given that the precedent is that an acquisition will be ruled to be anticompetitive; much will rest on Harry’s ability to fulfill the FTC’s faith in its ability to be a standalone competitor (of which I am dubious, for reasons I will explain). Second, Harry’s was in some respects punished by its leap from online-only sales to bricks-and-mortar sales; as noted in an excerpt above, Harry’s success in online sales didn’t have any appreciable impact on the bricks-and-mortar market. Is the lesson for other DTC companies to stick with online sales alone for fear of foreclosing the possibility of being acquired?

That drives to a broader question: why did Harry’s feel the need to pursue the brick-and-mortar market at all?

The Conservation of Attractive Consumer Packaged Goods

Much of the excitement around DTC was about the potential of eliminating the middleman; the margin taken by retailers could instead be devoted to a better product, lower prices, and better margins for the brand in question. The problem is that value chain transformation is far more dynamic than that. Go back to The Conservation of Attractive Profits, which I wrote about in the context of Netflix in 2015:

The Law of Conservation of Attractive Profits1 was first explained by Clayton Christensen in his 2003 book The Innovator’s Solution:

Formally, the law of conservation of attractive profits states that in the value chain there is a requisite juxtaposition of modular and interdependent architectures, and of reciprocal processes of commoditization and de-commoditization, that exists in order to optimize the performance of what is not good enough. The law states that when modularity and commoditization cause attractive profits to disappear at one stage in the value chain, the opportunity to earn attractive profits with proprietary products will usually emerge at an adjacent stage.

That’s a bit of a mouthful, but the example that follows in the book shows how powerful this observation is:

If you think about it in a hardware context, because historically the microprocessor had not been good enough, then its architecture inside was proprietary and optimized and that meant that the computer’s architecture had to be modular and conformable to allow the microprocessor to be optimized. But in a little hand held device like the RIM BlackBerry, it’s the device itself that’s not good enough, and you therefore cannot have a one-size-fits-all Intel processor inside of a BlackBerry, but instead, the processor itself has to be modular and conformable so that it has on it only the functionality that the BlackBerry needs and none of the functionality that it doesn’t need. So again, one side or the other needs to be modular and conformable to optimize what’s not good enough.

Did you catch that? That was Christensen, a full four years before the iPhone, explaining why it was that Intel was doomed in mobile even as ARM would become ascendant.2 When the basis of competition changed away from pure processor performance to a low-power system the chip architecture needed to switch from being integrated (Intel) to being modular (ARM), the latter enabling an integrated BlackBerry then, and an integrated iPhone four years later.3

The PC is a modular system whose integrated parts earn all the profit. Blackberry (and later iPhones) on the other hand was an integrated system that used modular pieces. Do note that this is a drastically simplified illustration.

More broadly, breaking up a formerly integrated system — commoditizing and modularizing it — destroys incumbent value while simultaneously allowing a new entrant to integrate a different part of the value chain and thus capture new value.

Commoditizing an incumbent’s integration allows a new entrant to create new integrations — and profit — elsewhere in the value chain.

This is exactly what is happening with Airbnb, Uber, and Netflix too.

The old value chain in consumer packaged goods (CPG) looked like this:

CPG Value Chain

CPG companies like P&G harvested most of the value by integrating research and development, manufacturing, marketing, and shelf space; raw materials, retail, and logistics were modularized and commoditized.

DTC companies, meanwhile, saw research and development as increasingly unnecessary in overserved markets (as I noted in the context of Dollar Shave Club, razors are a particularly salient example of overserving), and shelf space on the Internet was effectively infinite. Their goal was to integrate marketing, retail, and manufacturing:

Theoretical DTC Value Chain

The problem, though, is that marketing on the Internet was entirely different than the analog marketing that previously dominated the CPG industry. There, being good at advertising, whether it be coupons in the Sunday paper or television ads during the evening news, was mostly a matter of the ability to spend, which was itself a matter of scale. Digital marketing, though, didn’t really work at scale, at least relative to TV; in fact, it only made sense if you could target consumers with advertising and track how it performed.

On one hand, this was another critical factor in making DTC companies viable. The advantage of targeted advertising is that it takes a lot less money relative to TV to reach customers who are actually interested in your product; the problem, though, is that getting good at targeted advertising requires massive amounts of both research and development to build the capability, and inventory across a sufficiently large customer base to make the effort worthwhile. In the end, no DTC company was actually good at marketing; they outsourced it to Google and Facebook, which both had the inventory and the capability to spend the billions necessary to develop sophisticated targeted advertising.

The problem is that in the process of depending on Google and Facebook for marketing, the DTC companies gave up their planned integration in the value chain, and the associated profits, to Facebook and Google:

Actual DTC Value Chain

The actual integrated players — Google and Facebook — integrate customers and research and development to dominate marketing; DTC may have online retail operations, but that is a modularized — and thus commoditized — part of the value chain (and meanwhile, Amazon was in the process of integrating retail and logistics). I wrote last week when Brandless folded:

Here is the problem for DTC companies: Facebook really is better at finding them customers than anyone else. That means that the best return-on-investment for acquiring customers is on Facebook, where DTC companies are competing against all of the other DTC companies and mobile game developers and incumbent CPG companies and everyone else for user attention. That means the real winner is Facebook, while DTC companies are slowly choked by ever-increasing customer acquisition costs. Facebook is the company that makes the space work, and so it is only natural that Facebook is harvesting most of the profitability from the DTC value chain.

To be fair to the DTC companies, they are hardly the first to make this mistake: way back when the world wide web first started publishers looked at the Internet and only saw the potential of reaching new customers; they didn’t consider that because every other publisher in the world could now reach those exact same customers, the integration that drove their business — publishing and distribution in a unique geographic area — had disintegrated. It is a lesson that can be taken broadly: if some part of the value chain becomes free, that is not simply an opportunity but also a warning that the entire value chain is going to be transformed.

Harry’s Difficult Road

This takes us back to Harry’s, and the decision to pursue bricks-and-mortar retail in the first place. It’s a choice that doesn’t make much sense in the theoretical value chain I sketched out above, where DTC companies integrate marketing and retail. However, once it became apparent that Facebook and Google squeezed far more value out of the online value chain than offline, the only option left was to pursue some sort of low-end disruption in the old value chain. Or, to put it in blunter terms, be cheaper.

This, though, resulted in two problems: first, there was no technology-based reason that Harry’s razors should be cheaper than Schick’s or Gillette’s, which meant that Schick and Gillette could respond by lowering prices to match Harry’s. Second, because price, as a proxy for consumer welfare, is the most important factor driving regulatory review of acquisitions, Harry’s actually closed off its most viable exit. The fact of the matter is that Schick and Gillette specifically, and large CPG companies broadly, are unsurprisingly better suited to compete in the bricks-and-mortar markets they were built to dominate. Harry’s, despite its factory in Germany and omnichannel distribution strategy, faces a long road to actually achieving that $1.37 billion valuation on its own.

Credit Karma and Acquiring Customers

Harry’s outcome seems particularly unfair in light of yesterday’s news that Intuit is buying Credit Karma for $7.1 billion. Credit Karma doesn’t have a factory in Germany. Indeed, it doesn’t make any money from customers at all. Rather, Credit Karma offers free services that attract users to its site, and monetizes those users by directing them to credit cards and other financial products that pay an affiliate fee.

Here there is one angle where this deal looks a bit like Harry’s: one of Credit Karma’s free offerings is a free tax filing service; that is obviously a threat to TurboTax, Intuit’s biggest money-maker (which has a free version it hopes you never find). It is even possible that the FTC seeks to block the deal on these grounds. I suspect, though, that Credit Karma and Intuit will simply agree to spin off the tax filing unit, because that is not Credit Karma’s true value.

What is actually valuable are Credit Karma’s users — 90 million of them in the U.S. alone, 50% of whom are millennials. Those 90 million users don’t just visit Credit Karma directly, they have already shared substantial amounts of their personal financial data, and have consented to receiving emails about their credit scores. They are, in other words, the best possible customer acquisition channel for a company like Intuit, and for all of the reasons I just recounted, customer acquisition is the most valuable part of the digital value chain. Intuit will gladly suffer a tax filing competitor as long as it has the best possible channel to acquire the next generation of tax filers.

This gets at the real commonality between Harry’s and Credit Karma: Harry’s is less valuable than it might have been because of Facebook and Google’s dominance of digital advertising; Credit Karma is more valuable than it might seem because they offer a way to acquire customers without depending on Facebook and Google. This is a particularly notable insight given the FTC’s involvement in the Harry’s acquisition, and potential involvement in Credit Karma: one potential outcome of the greater competition that may have arisen in digital advertising absent Facebook’s acquisition of Instagram and Google’s acquisition of DoubleClick would be increased viability for DTC companies, and decreased value for simply aggregating an audience with no direct business model.

Still, I wouldn’t take the counter-factual too far: DTC makes far more sense with radically lower cost structures; if you are going to take advantage of the Internet transforming one part of the value chain, you had best ensure you are anticipating the transformations in the other parts as well. And, on the flipside, in a world of abundance being able to aggregate demand is more valuable than being able to create supply; it may offend our analog sensibilities that 90 million email addresses are more valuable than real-world factories, but such is the transformative nature of the Internet.

  1. Later renamed the Law of Conservation of Modularity. []
  2. I have my differences with Christensen about the iPhone, but as I’ve said repeatedly my criticism comes from an attempt to build on his brilliant work, not tear it down. []
  3. As I’ve noted, the iPhone is in fact modular at the component level; the integration is between the completed phone and the software. Not appreciating that the point of integration (or modularity) can be anywhere in the value chain is, I believe, at the root of a lot of mistaken analysis about the iPhone in particular []

The Daily Update Podcast

Today Stratechery is launching a new product: the Daily Update Podcast.

What is the Daily Update Podcast?

The Daily Update Podcast is the audio version of the Daily Update. The Daily Update consists of three subscriber-only posts that, in addition to the free Weekly Article, arrive in your inbox every morning. Now you can not only read the Daily Update via email or on the web, but also listen to the Daily Update (and the Weekly Article) in your favorite podcast player.

Who reads the Daily Update Podcast?

Most days, I will read the Daily Update, with an assist from Daman Rangoola (he reads the blockquotes, to make it easier to follow). If I am traveling or otherwise unable to record, then Daman will record the Daily Update Podcast.

As for who listens to the Daily Update Podcast (which, to be clear, includes the free Weekly Article), it is Daily Update subscribers only.

When does the Daily Update Podcast come out?

The Daily Update Podcast will come out a few hours after the Daily Update email. It takes some time to edit the podcast, including adding all of the cool features that make this a particularly distinctive podcast.

For example, the show notes for every podcast contain the full Daily Update post, so you can easily find links and illustrations. The Daily Update podcast also supports chapters, which correspond to the different sections of the Daily Update or Weekly Article. Plus, some podcast players, like Overcast, show Stratechery illustrations as cover art at precisely the right moments:

Stratechery Daily Update features

And yes, future Stratechery interviews will be more than just transcripts.

Where do I listen to the Daily Update Podcast?

The Daily Update Podcast can be played in any podcast player that supports the open ecosystem of podcasting (unfortunately this does not include Spotify, Google Podcasts, or Stitcher; to be very clear, this is not my choice). You can also read the Daily Update in any RSS Reader.

Why did you launch the Daily Update Podcast?

The Daily Update Podcast has been one of the most requested features since I launched the Daily Update. Many subscribers have long commutes and would like to listen to the Daily Update in the car, on the train, or while walking.

How does the Daily Update Podcast work?

Every Daily Update subscriber has access to their own individual feed. Stratechery makes it easy for you to add that feed to your favorite podcast player.

Start by visiting the Daily Update Podcast page.

If you are on your phone or tablet:

  • On iOS, simply tap the icon of your favorite podcast player and follow the prompts:

  • On Android, tap ‘Android’ and choose your preferred player:1

If you are on your PC or Mac, and wish to listen on your phone:

  • Choose your favorite podcast player, then scan the corresponding QR code with your phone’s camera. The Daily Update podcast feed will be added to your chosen player.

  • Or enter your phone number, and Stratechery will text you a link to the Daily Update Podcast page.

If you are on your PC or Mac, and wish to listen there:

  • If you are on macOS Catalina, simply click on the Apple Podcasts icon:

  • If you are on older versions of macOS or on Windows, simply click on the iTunes icon:

If you have another podcast player, or wish to read the Daily Update in your RSS Reader:

  • Copy your custom URL and paste it into your podcast player or RSS Reader

  • Please note that this will be the only way to read the Stratechery Daily Update via RSS; previous member-only RSS feeds are deprecated.

To be very clear, the Daily Update is not going anywhere; after all, it is the source material for the Daily Update Podcast! To that end, you don’t need to subscribe to one or the other — it’s the same content, just in two different forms.

Going forward, there will be a Daily Update Podcast for every Daily Update; for now the feed includes two Weekly Articles and two Daily Updates that demonstrate some of the cool features of the Daily Update Podcast.

I am extremely excited about the Daily Update Podcast: it is a product I have been hoping to launch for a very long time. I want to thank Daman Rangoola for project managing the development of this new feature, my good friends at ModelRocket for building it, and most of all, my subscribers for giving me a reason to get it done.

To learn more about the Stratechery Daily Update, please visit the updated Daily Update page. If you are already a subscriber, you can get started with the Daily Update Podcast here.

  1. Android’s approach is much better than iOS’s! []