Stratechery Plus Update

  • The Anti-Amazon Alliance

    The nonsensical tie-in has long been one of my favorite PR genres, and the coronavirus crisis has created a whole host of examples; Casey Newton posted a particularly egregious one:

    Using a current news event as cover is hardly limited to bad PR pitches;1 look no further than this announcement from Google, which used the coronavirus crisis to frame a major change in Google Shopping:

    The retail sector has faced many threats over the years, which have only intensified during the coronavirus pandemic. With physical stores shuttered, digital commerce has become a lifeline for retailers. And as consumers increasingly shop online, they’re searching not just for essentials but also things like toys, apparel, and home goods. While this presents an opportunity for struggling businesses to reconnect with consumers, many cannot afford to do so at scale.

    In light of these challenges, we’re advancing our plans to make it free for merchants to sell on Google. Beginning next week, search results on the Google Shopping tab will consist primarily of free listings, helping merchants better connect with consumers, regardless of whether they advertise on Google. With hundreds of millions of shopping searches on Google each day, we know that many retailers have the items people need in stock and ready to ship, but are less discoverable online.

    For retailers, this change means free exposure to millions of people who come to Google every day for their shopping needs. For shoppers, it means more products from more stores, discoverable through the Google Shopping tab. For advertisers, this means paid campaigns can now be augmented with free listings. If you’re an existing user of Merchant Center and Shopping ads, you don’t have to do anything to take advantage of the free listings, and for new users of Merchant Center, we’ll continue working to streamline the onboarding process over the coming weeks and months.

    This has nothing to do with the coronavirus: what this change really means is the biggest missing piece in the Anti-Amazon Alliance is now all-in.

    Swiffer and Shelf Space

    The concept of a Purchase Funnel was first published in 1925 in Edward Strong’s The Psychology of Selling and Advertising; Strong credited E. St. Elmo Lewis for originally formulating the idea:

    Many changes in selling procedure have of necessity been made in the past fifteen years. Among them is the growing recognition of the buyer’s point of view. The development of the famous slogan — “attention, interest, desire, action, satisfaction” — illustrates this. In 1898 E. St. Elmo Lewis used the slogan, “Attract attention, maintain interest, create desire,” in a course he was giving in advertising in Philadelphia. He writes that he obtained the idea from reading the psychology of William James. Later on he added to the formula, “get action.” About 1907, A.F. Sheldon made the further addition of “permanent satisfaction” as essential to the slogan. Very few in 1907 felt the need for the last phrase, but on every hand today is heard the necessity for rendering service, of securing the goodwill of the buyer, of selling him what he needs, of establishing permanent satisfaction

    These changes have taken place so gradually that many salesmen and advertisers have failed to appreciate their inherent relationship to each other or their significance. Many have not seen, that honest service to a buyer in terms of his needs so that he will feel goodwill, and be permanently satisfied, means that the buyer’s interests must be dominant, not the seller’s. And fewer still have seen that the easiest way, and in fact the only way, to guarantee that this will be achieved is for the seller to present his proposition from the buyer’s point of view.

    The AIDA model, as the purchase funnel is also known, is exactly as Lewis described it; I always understood it best in terms of problem-solving:

    • Attention: make the buyer aware of a problem they have
    • Interest: the buyer becomes interested in solving their problem
    • Desire: the buyer becomes interested in your solution to their problem
    • Action: the buyer acquires your solution

    A classic example of this model compressed into a single commercial is the roll-out campaign for the Swiffer mop:

    The entire funnel is in this ad:

    • Attention: Traditional cleaning methods stir up dirt
    • Interest: Dirt needs to be removed, not just moved
    • Desire: Swiffer cloths collect dirt and can be thrown away
    • Action: Find Swiffer in the household cleaning aisle

    The aisle reference is critical: shelf space was long the linchpin for large consumer packaged goods companies; Swiffer had widespread distribution the moment it launched because P&G could leverage its other popular products in negotiations with retailers. And, of course, simply being on shelves increases the chances customers discover you on their own, or recognize you after repeated exposure in advertising: shelf space provided both distribution and discovery.

    Facebook and Discovery

    That commercial, I should note, is actually not the best example of how the marketing funnel normally worked for P&G and its ilk; while it included Interest, Desire, and Action, it was mostly about Attention: over the months and years to come P&G would run many more campaigns across media of all types, from TV to coupons to end-caps in retailers, all with the goal of making Swiffer into a habitual purchase. The fact that that commercial compressed all parts of the funnel into 45 seconds was, though, helpful for explaining the AIDA framework!

    It was also a sign of things to come: one of the hallmarks of the Internet is that the entire funnel is often compressed into a single Facebook ad that you might only see for a fraction of a second; perhaps something will catch your eye, and you will swipe to see more, and if you are intrigued, you can complete the purchase right then and there. You might even forget about your purchase right up until a mysterious package shows up at your door a few days later.

    The reason this works is the sheer scale of Facebook; there are so many people scrolling through so many feeds and swiping through so many stories that the advertising game is basically the inverse of what worked before: instead of carefully planning a multi-pronged advertising campaign to over time move people down a funnel to a purchase decision in front of a stocked shelf, advertisers iterate their targeting criteria and ad content over time to convert customers immediately.
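
    To picture that iteration loop concretely, here is a toy epsilon-greedy sketch; the variant names and conversion rates are made up for illustration, and this is not how any ad platform actually works internally:

    ```python
    import random

    # Hypothetical conversion rates per ad variant; in reality these are
    # unknown and have to be estimated from live traffic.
    TRUE_RATES = {"video_a": 0.010, "video_b": 0.014, "carousel": 0.022}

    def serve_ads(impressions=100_000, epsilon=0.1):
        shown = {v: 0 for v in TRUE_RATES}
        converted = {v: 0 for v in TRUE_RATES}
        for _ in range(impressions):
            if random.random() < epsilon:
                # Explore: occasionally try a random variant
                variant = random.choice(list(TRUE_RATES))
            else:
                # Exploit: show the best-performing variant observed so far
                variant = max(shown, key=lambda v: converted[v] / shown[v] if shown[v] else 0)
            shown[variant] += 1
            if random.random() < TRUE_RATES[variant]:
                converted[variant] += 1
        return shown, converted

    shown, converted = serve_ads()
    # Impressions concentrate on the best-converting creative as evidence
    # accumulates: iteration toward immediate conversion rather than a
    # months-long brand campaign.
    ```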

    Facebook — experiments with Instagram checkout notwithstanding — is only one piece of the e-commerce stack that makes this possible:

    • Facebook helps find the customers
    • Shopify or WooCommerce build the storefronts
    • Stripe or PayPal handle payments
    • Third-party logistics providers package and ship the goods
    • USPS, FedEx, and UPS deliver the actual packages

    To put it another way, Facebook provides the digital shelves for customers to find what they didn’t know they wanted, and the rest of the ecosystem fills in the pieces.

    Amazon the Integrator

    When the Internet digitizes what used to be analog assets, we often find out that jobs that were once done together end up in radically different places. The classic example is newspapers: they carried both editorial and advertisements, but it turns out that was simply a function of who owned printing presses and delivery trucks; once the Internet came along, advertisers, which cared about reaching customers, not supporting journalists, switched to Facebook and Google, which had aggregated the former and commoditized the latter.

    So it is with shelves: when the supply was constrained by physical space, they were extremely valuable for not just discovery but also distribution, but once the Internet made shelf space effectively infinite, it shouldn’t be a surprise that the solutions to discovery and distribution developed differently.

    As I just noted, the answer to the former has been Facebook; the answer to the latter, though, is search. I first wrote about this transition six years ago in How Technology is Changing the World (P&G Edition):

    That’s great for Amazon, but not so great for P&G: remember, dominating shelf space was a core part of their strategy, and while I’m no mathematician, I’m pretty sure dominating an infinite resource is a losing proposition. What matters now is dominating search. That is the primary way people arrive at product pages like this:

    There are two big challenges when it comes to winning search:

    • Because search is initiated by the customer, you want that customer to not just recognize your brand (which is all that is necessary in a physical store), but to recall your brand (and enter it in the search box). This is a much stiffer challenge and makes the amount of time and money you need to spend on a brand that much greater.
    • If prospective customers do not search for your brand name but instead search for a generic term like “laundry detergent” then you need to be at the top of the search results. And, the best way to be at the top is to be the best-seller. In other words, having lots of products in the same space can work against you because you are diluting your own sales and thus hurting your search results.

    The way to deal with both challenges is the same way you break through the noise: you put more focus on fewer brands.

    The challenge for P&G and basically everyone else in the retail space is that there is no bigger brand than Amazon itself. According to eMarketer earlier this year, 49% of Internet shoppers start their searches on Amazon, and only 22% on Google; Amazon’s share is far higher for Prime subscribers, a group that includes over half of U.S. households.

    This means that Amazon has effectively integrated the entire e-commerce stack when it comes to the distribution of goods consumers are explicitly searching for:

    • Customers come to Amazon directly
    • Searches on Amazon lead to Amazon product pages or 3rd-party merchant listings that look identical to Amazon product pages
    • Amazon handles payments
    • Amazon packages and ships the goods
    • Amazon increasingly delivers the actual packages

    Given this level of integration, it is hardly a surprise that Amazon has for several years been moving into selling its own products as well; not only does it have customers and search, it has data on what people want. Notably, as the Wall Street Journal reported last week, Amazon is allegedly gathering that data from not just its own sales but also those of 3rd-party merchants:

    The online retailing giant has long asserted, including to Congress, that when it makes and sells its own products, it doesn’t use information it collects from the site’s individual third-party sellers—data those sellers view as proprietary. Yet interviews with more than 20 former employees of Amazon’s private-label business and documents reviewed by The Wall Street Journal reveal that employees did just that. Such information can help Amazon decide how to price an item, which features to copy or whether to enter a product segment based on its earning potential, according to people familiar with the practice, including a current employee and some former employees who participated in it.

    This is, to be clear, a major problem for Amazon, first and foremost because the company has long insisted it does no such thing, and told Congress the same; as John Gruber put it on Daring Fireball, “Amazon isn’t hurting for revenue (especially now), but they are hurting for trust.”

    What is truly surprising about this news, though, is that Amazon ever made such a promise in the first place, and, to be honest, that I believed them. On one hand, the reasoning is obvious: making Amazon.com into a platform lets Amazon offer many more items and get paid to do so, as opposed to carrying huge amounts of inventory on which it must expend working capital.

    The problem is that the gravitational pull of an integrated offering like Amazon.com is nearly impossible to resist; witness how Windows once spoiled Microsoft’s services, and Android Google’s. Perhaps it was too much to expect Amazon to be any different — or, to put it another way, maybe Amazon always was an integrator, just at a far grander scale.

    The Anti-Amazon Alliance

    The antidote to an integrator is modularization; that is why I wrote last year that Shopify, not Walmart, was Amazon’s true competitor. From Shopify and the Power of Platforms:

    At first glance, Shopify isn’t an Amazon competitor at all: after all, there is nothing to buy on Shopify.com. And yet, there were 218 million people that bought products from Shopify without even knowing the company existed.

    The difference is that Shopify is a platform: instead of interfacing with customers directly, 820,000 3rd-party merchants sit on top of Shopify and are responsible for acquiring all of those customers on their own.

    A drawing of The Shopify Platform

    This means they have to stand out not in a search result on Amazon.com, or simply offer the lowest price, but rather earn customers’ attention through differentiated product, social media advertising, etc. Many, to be sure, will fail at this: Shopify does not break out merchant churn specifically, but it is almost certainly extremely high. That, though, is the point. Unlike Walmart, currently weighing whether to spend additional billions after the billions it has already spent trying to attack Amazon head-on, with a binary outcome of success or failure, Shopify is massively diversified. That is the beauty of being a platform: you succeed (or fail) in the aggregate.

    I sort of skated over that customer acquisition piece, mentioning the sort of discovery that Facebook advertising enables in passing. The big missing piece, though, has been that other shelf functionality — distribution. How do you find the specific thing that you want wherever it might be on the Shopify platform, or WooCommerce or Walmart.com or anywhere else on the Internet?

    The answer is not to go to Shopify.com, which, as I noted, no customer knows about; rather the solution is the most powerful search engine on earth — Google. What Google’s announcement unlocks is the same sort of modular stack for distribution that already exists for discovery:

    • Google helps find the products on 3rd-party e-commerce sites
    • Shopify or WooCommerce build the storefronts (big box retailers like Walmart build their own)
    • Stripe or PayPal handle payments
    • Third-party logistics providers package and ship the goods
    • USPS, FedEx, and UPS deliver the actual packages

    This stack existed before, but inserting an additional payment layer into product discovery made product discovery worse (by limiting the number of products and retailers), hurting the entire ecosystem. Unsurprisingly, Shopify and WooCommerce are partnering with Google to make it easy for small retailers to get into Google’s results, and I wouldn’t be surprised if Google is working with larger retailers as well.

    This cooperation is also evidence of Amazon’s growing clout: one of the reasons why Google made Shopping pay-to-play back in 2012 was that the company deemed it the best way to get real-time data for Google Shopping; retailers paying to be featured would be motivated to give Google quality data. Today, though, additional motivation is unnecessary: everyone in commerce is, whether they realize it or not, in the Anti-Amazon Alliance, and that provides plenty of motivation.

    The Market Responds

    The regulatory angle on this is a surprising one. Start with Google: another reason to go with a pay-to-play model is that it made Google Shopping quite explicitly into something other than a shopping comparison service. That, though, didn’t stop the European Commission from handing down an ill-advised decision that found Google guilty of crowding out other shopping comparison services anyways. There is definitely a sense that Google is “damned if they do and damned if they don’t” when it comes to providing anything other than ten blue links.

    Moreover, the existence of Amazon and its clear clout in the market rather strongly suggests the European Commission missed the point: market control comes from aggregating customers; Google can no more restrict competition from sites that depend on Google than a car can restrict competition from a trailer it is towing. Winning online is not about functionality, but about what app or website customers open of their own volition. In the case of shopping, that website is increasingly Amazon, and now it is Google that is partnering with others in response.

    At the same time, as Benedict Evans detailed last December, Amazon — including 3rd-party merchant sales — accounted for about 6% of U.S. retail sales; to put that in context, Walmart accounts for 9%. Yes, e-commerce is growing as a whole even as Amazon increases its share — especially now — but to expect current growth rates to persist forever is rarely correct, particularly when physical goods (i.e. with marginal costs) are involved.

    I would also note that Walmart, like all major retailers, sells private label goods, and unquestionably depends on sales data to decide what to make, and how much; Amazon should be able to do the same. At the same time, what makes Amazon’s alleged snooping problematic is the fact that it is looking at 3rd-party merchants for which it claims it is only an agent, not a retailer.

    That, though, points to an obvious market-based response: 3rd-party merchants, particularly those with differentiated products and brands, should seek to leave Amazon’s platform sooner rather than later. It is hard to be in the Anti-Amazon Alliance if you are asking Amazon to find your customers, stock your inventory, package your products, and deliver your goods; there are alternatives and — now that Google is all-in — the only limitation is a merchant’s ability to acquire and keep customers in a world where their products are as easy to buy as bad PR pitches are easy to find.

    I wrote a follow-up to this article in this Daily Update.


    1. It’s also inaccurate: BASIC is approaching its 56th anniversary 


  • How Tech Can Build

    It was, at first glance, hard to understand how anyone could be upset at the idea that It’s Time to Build. That’s the title of a recent essay by Marc Andreessen, and of course I agree; I expressed the same sort of frustration Andreessen opens with last month in Compaq and Coronavirus:

    There has been divergence between countries that acted and countries that talked. Taiwan, where I live, is perhaps the best example of the former…The contrast with Western countries is stark: to the extent government officials across the Western world were discussing the coronavirus a month ago, it was to express support for China or insist that life carry on as before; I already praised the role Twitter played in sounding the alarm — often in the face of downplaying from the media — but even that was, by definition, talk. What does not appear to have happened anywhere across the West is any sort of meaningful action until it was far too late…

    The first problem of being a society of talk, not action, is the inability to even consider hard work as a solution; the second is a blindness to the real trade-offs at play. The third, though, is the most sinister of all: if talk is all that matters, then policing talk becomes an end to itself.

    “Action” is a different word than “build”, but, at least from my perspective, they express the same sentiment: bend the world to our will, instead of simply accepting our fate. In that light Andreessen’s article was meaningful not for the examples of what might be built, but rather for arguing for the action of building as an end goal in and of itself.

    Andreessen and Me

    Andreessen, who today is perhaps more well-known for his eponymous venture capital firm Andreessen Horowitz, is first-and-foremost a living legend for having created Mosaic, the first widely-used web browser to display inline graphics; Mosaic became the basis for Netscape Navigator, whose 1995 IPO kicked off the dot-com era.

    An irony of Andreessen’s claim to fame, though, is that while the browser provided access to information from anywhere by anyone, perhaps its most important impact on Andreessen himself was getting him out of the Midwest and to Silicon Valley. That, at least, was a theory put forward in a fascinating 2015 profile in the New Yorker:

    One afternoon, as we sat at his baronial dining table, he made an agonized but sincere effort to discuss his blue-collar childhood without mentioning his nuclear family. “I really identified with Charles Schulz in the David Michaelis biography of him, ‘Schulz and Peanuts,’ ” he said. I was struck by the parallels between Andreessen and both “Peanuts” — in which Charlie Brown has a massive bald head and the parents are kept offstage — and its creator. Charles Schulz, who grew up in Minnesota, was socially awkward, hated being embraced, and loathed his mother’s Norwegian relatives, a farming family. Andreessen went on, “Ninety-six per cent of the people who grow up like he and I did, in the Midwest, just stay there, but the ones who leave” — the cartoonist, too, moved to California — “become intensely interested in the future. In Schulz’s last ten years, he really focussed on Rerun, Linus’s younger brother—the youngest and most optimistic character.”

    I can, given my own childhood in small-town Wisconsin and current residence in a country so far West it is called East, relate to Andreessen in this regard. For me, the Internet was a way out, first to learn, and then to live abroad, and now, a way to make a living. I know it gives me a positive bias towards technology; I’m not convinced it is wholly unearned, but an easier way out should always be viewed with some amount of suspicion.

    Software Eats the World

    This perspective led to Andreessen’s most famous essay, 2011’s Why Software Is Eating The World:

    My own theory is that we are in the middle of a dramatic and broad technological and economic shift in which software companies are poised to take over large swathes of the economy. More and more major businesses and industries are being run on software and delivered as online services — from movies to agriculture to national defense. Many of the winners are Silicon Valley-style entrepreneurial technology companies that are invading and overturning established industry structures. Over the next 10 years, I expect many more industries to be disrupted by software, with new world-beating Silicon Valley companies doing the disruption in more cases than not.

    Why is this happening now?

    Six decades into the computer revolution, four decades since the invention of the microprocessor, and two decades into the rise of the modern Internet, all of the technology required to transform industries through software finally works and can be widely delivered at global scale. Over two billion people now use the broadband Internet, up from perhaps 50 million a decade ago, when I was at Netscape, the company I co-founded. In the next 10 years, I expect at least five billion people worldwide to own smartphones, giving every individual with such a phone instant access to the full power of the Internet, every moment of every day.

    On the back end, software programming tools and Internet-based services make it easy to launch new global software-powered start-ups in many industries—without the need to invest in new infrastructure and train new employees. In 2000, when my partner Ben Horowitz was CEO of the first cloud computing company, Loudcloud, the cost of a customer running a basic Internet application was approximately $150,000 a month. Running that same application today in Amazon’s cloud costs about $1,500 a month.

    With lower start-up costs and a vastly expanded market for online services, the result is a global economy that for the first time will be fully digitally wired — the dream of every cyber-visionary of the early 1990s, finally delivered, a full generation later.

    Andreessen was right, which was good for him for lots of reasons. First, it’s good to be right generally, and even better to write the defining piece of an era.

    Second, software is, as I have discussed previously, perfectly suited to venture capital: it has significant capital costs, and mostly zero marginal costs, which means there is a big need for up-front investment combined with unlimited upside. In other words, if software is eating the world, then it is the venture capitalists who are among the best positioned to get fat, at least in theory (that noted, one would have likely been better off investing in the Big 5 tech companies in 2011 — for reasons I discussed last week — than in Andreessen Horowitz’s funds).

    Third, in Andreessen’s vision, Silicon Valley was doing the disrupting from, well, Silicon Valley, which had always been the plan. Andreessen laid out that plan for Andreessen Horowitz in a 2012 interview with Wired (Chris Anderson, the interviewer, is in bold):

    Our vision was to be a throwback: a Silicon Valley venture capital firm. We were going to be a single-office firm, focusing primarily on companies in the US and then, within that, primarily companies in Silicon Valley. And — this is the crucial thing — we’re only going to invest in companies based on computer science, no matter what sector their business is in. We are looking to invest in what we call primary technology companies.

    Give me an example.

    Airbnb—the startup that lets you rent out your home or a room in your home. Ten years ago you would never have said you could build Airbnb, which is looking to transform real estate with a new primary technology. But now the market’s big enough…everything inside of how Airbnb runs has much more in common with Facebook or Google or Microsoft or Oracle than with any real estate company. What makes Airbnb function is its software engine, which matches customers to properties, sets prices, flags potential problems. It’s a tech company — a company where, if the developers all quit tomorrow, you’d have to shut the company down. To us, that’s a good thing.

    I’m probably a little bit elitist in this, but I think a “primary technology” would need to involve, you know, some fundamental new insight in code, some proprietary set of algorithms.

    Oh, I agree. I think Airbnb is building a software technology that is equivalent in complexity, power, and importance to an operating system. It’s just applied to a sector of the economy instead. This is the basic insight: Software is eating the world. The Internet has now spread to the size and scope where it has become economically viable to build huge companies in single domains, where their basic, world-changing innovation is entirely in the code.

    Software eating the world, with zero marginal costs, all from Silicon Valley.

    It’s Time to Build

    This, as far as I can tell, is where the disconnect for some comes with It’s Time to Build.1 The sort of building Andreessen calls for is very much in the real world, costs real money both up-front and on a marginal basis, and would surely make the most sense anywhere but Silicon Valley. From the essay:

    Why do we not have these things? Medical equipment and financial conduits involve no rocket science whatsoever. At least therapies and vaccines are hard! Making masks and transferring money are not hard. We could have these things but we chose not to — specifically we chose not to have the mechanisms, the factories, the systems to make these things. We chose not to *build*.

    You don’t just see this smug complacency, this satisfaction with the status quo and the unwillingness to build, in the pandemic, or in healthcare generally. You see it throughout Western life, and specifically throughout American life.

    You see it in housing and the physical footprint of our cities. We can’t build nearly enough housing in our cities with surging economic potential — which results in crazily skyrocketing housing prices in places like San Francisco, making it nearly impossible for regular people to move in and take the jobs of the future. We also can’t build the cities themselves anymore. When the producers of HBO’s “Westworld” wanted to portray the American city of the future, they didn’t film in Seattle or Los Angeles or Austin — they went to Singapore. We should have gleaming skyscrapers and spectacular living environments in all our best cities at levels way beyond what we have now; where are they?

    You see it in education…manufacturing…transportation…

    The point about Singapore — Asia broadly — could be made about every point that followed. And that includes the response to the coronavirus.

    Andreessen then states what he sees as the problems:

    The problem is desire. We need to want these things. The problem is inertia. We need to want these things more than we want to prevent these things. The problem is regulatory capture. We need to want new companies to build these things, even if incumbents don’t like it, even if only to force the incumbents to build these things. And the problem is will. We need to build these things.

    This leads to the core question about Silicon Valley and its relationship to Andreessen’s essay: has tech — specifically the software-centric tech that Andreessen has done more than anyone to proselytize — been the primary source of American innovation because it represented the future? Or has it been the future because it was the only space where innovation was possible, because of things like inertia and regulatory capture in the real world?

    I can’t speak for Andreessen, but having observed him for many years I would guess the answer was mostly the former: Andreessen’s entire career, from Mosaic to Loudcloud to Ning, has been about creating space online, obviating the constraints of the real world, which wasn’t worth much anyways. From that Wired interview:

    Think about Borders, the bookstore chain. Amazon drove Borders out of business, and the vast majority of Borders employees are not qualified to work at Amazon. That’s an actual, full-on problem. But should Amazon have been prevented from doing that? In my view, no. Because it’s so much better to live in a world where that happened, it’s so much better to live in a world where Amazon is ascendant. I told you that my childhood bookstore was something you had to drive an hour to get to. But it was a Waldenbooks, and it was, like, 800 square feet, and it sold almost nothing that you would actually want to read. It’s such a better world where we have Amazon, where everything is universally available. They’re a force for human progress and culture and economics in a way that Borders never was.

    Human progress in this view is solely online.

    What Tech Must Do

    I agree with Andreessen that much of the software revolution is inevitable; I also agree that tech’s seeming exclusivity on innovation has also been about the online space being the one place without the inertia and regulatory capture Andreessen decries. If you are talented and ambitious, what better place to be?

    What I also sense in Andreessen’s essay, though, is the acknowledgment that tech too has chosen the easier path. Instead of fighting inertia or regulatory capture, it has been easier to retreat to Silicon Valley, justify the massive costs of doing so by pursuing infinite-upside outcomes predicated on zero marginal costs, which means relying almost exclusively on software as the means of innovation. To put it another way, where did Andreessen’s personal preferences end and his vision begin? Note this paragraph:

    Building isn’t easy, or we’d already be doing all this. We need to demand more of our political leaders, of our CEOs, our entrepreneurs, our investors. We need to demand more of our culture, of our society. And we need to demand more from one another. We’re all necessary, and we can all contribute, to building.

    What it means to ask more of one another, at least in tech, is right there in the overlap between preferences and vision.

    First, tech should embrace and accelerate distributed work. It makes tech more accessible to more people. It seeds more parts of the country with potential entrepreneurs. It dramatically decreases the cost of living for employees. It creates the conditions for more stable companies that can take on less risky yet still necessary opportunities that may throw off a nice dividend instead of an IPO. And, critically, it gives tech companies a weapon to wield against overbearing regulation, because companies can always pick up and leave.

    Second, invest in real-world companies that differentiate their hardware investments with software. This hardware could be machines for factories, or factories themselves; it could be new types of transportation, or defense systems. The possibilities, at least once you let go of the requirement for 90% gross margins, are endless.

    Third — and related to both of the above — figure out an investing model that is suited to outcomes that have a higher likelihood of success along with a lower upside. This is truly the most important piece — and where Andreessen, given his position, can make the most impact. Andreessen Horowitz has thought more about how to change venture capital than anyone else, but the fundamental constraint has remained the assumption of high costs, high risk, and grand slam outcomes. We should keep that model, but surely there is room for another?
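
    To see why there might be, consider a toy expected-value comparison. Every number below is hypothetical, chosen only to illustrate that a portfolio of likelier-but-capped outcomes can pencil out against the grand-slam model:

    ```python
    # Toy expected-value comparison of two portfolio models; all numbers
    # are hypothetical illustrations, not real fund data.
    def expected_multiple(outcomes):
        # outcomes: list of (probability, multiple-of-capital-returned) pairs
        return sum(p * m for p, m in outcomes)

    # Classic venture model: most investments go to zero, one grand slam
    # pays for everything.
    venture = [(0.90, 0.0), (0.09, 3.0), (0.01, 100.0)]

    # A hypothetical "real-world" model: far higher hit rate, capped upside.
    steady = [(0.30, 0.0), (0.60, 1.5), (0.10, 5.0)]

    print(expected_multiple(venture))  # 1.27x
    print(expected_multiple(steady))   # 1.40x
    ```

    The sketch is crude, but it makes the point: lower upside does not have to mean lower returns, provided the hit rate rises accordingly.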


    I do believe that It’s Time to Build stands alone: the point is not the details, or the author, but the sentiment. The changes that are necessary in America must go beyond one venture capitalist, or even the entire tech industry. The idea that too much regulation has made tech the only place where innovation is possible is one that must be grappled with, and fixed.

    And yet, Andreessen himself said that we need to demand more from one another. We need to figure out how to fix Wisconsin, not flee from it. We need to figure out how to build real businesses that build real things, not virtualize everything. And we need to start fighting for not just infinite upside, but the sort of minute changes in cities, states, and nations that will make it possible to build the future.


    1. I see no point in bothering with those who appear to hate technology, and particularly Andreessen, as a lifestyle. 


  • The NBA and Microsoft

    From a Microsoft press release:

    The National Basketball Association (NBA) and Microsoft today announced a new multi-year collaboration, which will transform the way in which fans experience the NBA. As part of the collaboration, Microsoft will become the Official Artificial Intelligence Partner and an Official Cloud and Laptop Partner for the NBA, Women’s National Basketball Association (WNBA), NBA G League, and USA Basketball, beginning with the 2020-21 NBA season.

    Microsoft and NBA Digital — co-managed by the NBA and Turner Sports — will create a new, innovative direct-to-consumer platform on Microsoft Azure that will use machine learning and artificial intelligence to deliver next generation, personalized game broadcasts and other content offerings as well as integrate the NBA’s various products and services from across its business. The platform will re-imagine how fans engage with the NBA from their devices by customizing and localizing experiences for the NBA’s global fanbase, which includes the 1.8 billion social media followers across all league, team and player accounts.

    Beyond delivering live and on-demand game broadcasts through Microsoft Azure, the NBA’s vast array of data sources and extensive historical video archive will be surfaced to fans through state-of-the-art machine learning, cognitive search and advanced data analytics solutions. This will create a more personalized fan experience that tailors the content to the preferences of the fan, rewards participation, and provides more insights and analysis than ever before. Additionally, this platform will enable the NBA to uncover unique insights and add new dimensions to the game for fans, coaches and broadcasters. The companies will also explore additional ways technology can be used to enhance the NBA’s business and game operations.

    As part of the collaboration, Microsoft will become the entitlement partner of the NBA Draft Combine beginning next season and an associate partner of future marquee events, including NBA All-Star, MGM Resorts NBA Summer League and WNBA All-Star.

    The logic for the NBA in this deal is clear:

    • First, in my experience Turner has dropped the ball in terms of the NBA’s digital experience, particularly League Pass. Microsoft should dramatically improve the experience for the NBA’s digital customers. [UPDATE: I mostly watch International League Pass, which I now understand was not managed by Turner; my apologies to Turner for the mistake]
    • Second, the NBA is, in some respects, no different from a movie or television studio: it produces content and then sells it to the highest bidder, usually delineated by geography. Digital, though, makes it possible to own the customer relationship directly, a la Netflix. Or perhaps Disney+ is the better example, given how differentiated the NBA’s content is; this deal is clearly working towards that goal.
    • Third, that last paragraph from the press release is an important one: it seems likely that the NBA is going to make out well in this deal from a marketing perspective, even if this partnership is underwhelming.

    The Microsoft angle is equally interesting, and like many tech deals, has much higher risk/reward:

    • There are significant technical barriers to achieving what this deal entails. Microsoft is going to spend a lot of time and money on a relatively small business.
    • Microsoft, at the same time, is uniquely suited to solving these challenges: what stands out to me in the conversation below is the talk of Xbox, a division that failed to achieve Steve Ballmer’s goal of a universal “three-screens-and-a-cloud”, and has instead become a fine enough gaming option; its technologies, though, could really make this effort sing.
    • If Microsoft pulls this off, the potential to re-use the technology developed for the NBA, not only for other sports leagues, but for media entities of all types, could be massive.

    There are other angles to this as well: one thing that intrigues me is the potential for channel conflict on the NBA side. It seems a bit far-fetched to think that the NBA seeking to own the customer relationship is good for TNT or ESPN, or that the latter will help the former achieve this goal. And yet TNT and ESPN pay the NBA’s bills. This will be a project worth watching for many months to come.

    An Interview with Adam Silver and Satya Nadella

    In the run-up to this announcement I was able to spend a few minutes with NBA Commissioner Adam Silver and Microsoft CEO Satya Nadella. A lightly edited transcript is available to Daily Update subscribers.

    A podcast of the interview, though, is available for free via the Stratechery Podcast service. Here is how to listen:

    If, in the future, you would like to listen to Daily Update podcasts as well — and I hope you do! — simply visit the link in your show notes or the Daily Update Subscription Management page and subscribe. Your feed will be updated immediately with Daily Update episodes (you don’t need to add it again).


  • Listen to Free Weekly Articles in Your Podcast Player

    While Stratechery is supported by subscriptions to the Daily Update, there has long been the option to receive free Weekly Articles via email. In a similar vein, I am happy to announce that the Daily Update Podcast has now been expanded to Stratechery Podcasts: you can receive just the free Weekly Article in your podcast player if you so choose.

    Here is how to listen:

    If, in the future, you would like to listen to Daily Update podcasts as well — and I hope you do! — simply visit the link in your show notes or the Daily Update Subscription Management page and subscribe. Your feed will be updated immediately with Daily Update episodes (you don’t need to add it again).


  • Coronavirus Clarity

    Apple and Google, who last Friday jointly announced new capabilities for contact tracing coronavirus carriers at scale, released a new statement yesterday clarifying that no government would tell them what to do. Or, to put it in the gentler terms conveyed by CNBC:

    Apple and Google, normally arch-rivals, announced on Friday that they teamed up to build technology that enables public health agencies to write contact-tracing apps. The partnership is being closely watched: The two Silicon Valley giants are responsible for the two dominant mobile operating systems globally, iOS and Android, which together run almost 100% of smartphones sold, according to data from Statcounter…The fact that the apps work best when a lot of people use them has raised fears that governments could force citizens to use them. But representatives from both companies insist they won’t allow the technology to become mandatory…

    The way the system is envisioned, when someone tests positive for Covid-19, local public health agencies will verify the test, then use these apps to notify anybody who may have been within 10 or 15 feet of them in the past few weeks. The identity of the person who tested positive would never be revealed to the companies or to other users; their identity would be tracked using scrambled codes on phones that are unlocked only when they test positive. Only public health authorities will be allowed to access these APIs, the companies said. The two companies have drawn a line in the sand in one area: Governments will not be able to require their citizens to use contact-tracing software built with these APIs — users will have to opt-in to the system, senior representatives said on Monday.
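
    To make the “scrambled codes” design concrete, here is a minimal sketch of how such a system can work in principle; the key sizes, rotation schedule, and derivation scheme below are simplified assumptions for illustration, not the actual Apple/Google specification:

    ```python
    # Illustrative sketch of privacy-preserving proximity codes, in the
    # spirit of the design described above. All parameters are simplified
    # assumptions, not the real Apple/Google Exposure Notification spec.
    import os
    import hmac
    import hashlib

    class Phone:
        def __init__(self):
            self.daily_keys = []         # secrets that never leave the phone...
            self.observed_codes = set()  # codes heard from nearby phones

        def new_day(self):
            self.daily_keys.append(os.urandom(16))

        def broadcast_code(self, interval: int) -> bytes:
            # Derive a short-lived code from today's key; observers cannot
            # link codes to each other, or to an identity, without the key.
            return hmac.new(self.daily_keys[-1], interval.to_bytes(4, "big"),
                            hashlib.sha256).digest()[:16]

        def hear(self, code: bytes):
            self.observed_codes.add(code)

        def keys_if_positive(self):
            # ...until the user tests positive and consents to publish them.
            return list(self.daily_keys)

    def exposure_check(observed, published_keys, intervals_per_day=144):
        # Runs locally on every phone: re-derive codes from published keys
        # and check for overlap with what this phone actually heard.
        for key in published_keys:
            for i in range(intervals_per_day):
                code = hmac.new(key, i.to_bytes(4, "big"),
                                hashlib.sha256).digest()[:16]
                if code in observed:
                    return True
        return False

    alice, bob = Phone(), Phone()
    alice.new_day(); bob.new_day()
    bob.hear(alice.broadcast_code(42))  # the two phones were nearby
    # Later, Alice tests positive and consents to publishing her keys:
    print(exposure_check(bob.observed_codes, alice.keys_if_positive()))  # True
    ```

    Note that no identity ever leaves the phones; only the opt-in publication of keys enables matching, which is exactly the line Apple and Google say they are drawing.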

    The reality that tech companies, particularly the big five (Apple, Microsoft, Google, Amazon, and Facebook), effectively set the rules for their respective domains has been apparent for some time. You see this in debates about what content to police on Facebook or YouTube, what apps to allow and what rules to apply to them on iOS and Android, and the increasing essentiality of AWS and Azure to enterprise. What is critical to understand about this dominance is why it arises, why current laws and regulations don’t seem to matter, and what signal it is that actually drives big company decision-making.

    Scale and Zero Marginal Costs

    Tech, from the very beginning of Silicon Valley, has been about scale in a way few other industries have ever been: silicon, the core element in computer chips, is basically free, which means zero marginal costs — and, relatedly, massive investments in fixed costs — have been at the core of the business since Fairchild Semiconductor. From The Intel Trinity by Michael Malone:

    What Noyce explained and Sherman Fairchild eventually believed was that by using silicon as the substrate, the base for its transistors, the new company was tapping into the most elemental of substances. Fire, earth, water, and air had, analogously, been seen as the elements of the universe by the pre-Socratic Greek philosophers. Noyce told Fairchild that these basic substances — essentially sand and metal wire — would make the material cost of the next generation of transistors essentially zero, that the race would shift to fabrication, and that Fairchild could win that race. Moreover, Noyce explained, these new cheap but powerful transistors would make consumer products and appliances so inexpensive that it would soon be cheaper to toss out and replace them with a more powerful version than to repair them.

    This single paragraph remains the most important lens with which to understand technology. Consider the big 5:

    • Apple certainly incurs marginal costs when it comes to manufacturing devices, but those devices are sold with massively larger margins than those of Apple’s competitors thanks to software differentiation; software has huge fixed costs and zero marginal costs. That differentiation created the App Store platform, where developers differentiate Apple’s devices on Apple’s behalf without Apple having to pay them; in fact, Apple takes 30% of their revenue.
    • Microsoft built its empire on software: Windows created the same sort of platform as iOS, while Azure is first-and-foremost about spending an overwhelming amount of money on hardware and then charging companies to rent it (followed by software differentiation with platform services); Office, meanwhile, has shifted from the very profitable model of writing software and then duplicating it endlessly for license fees to the extremely profitable model of writing software and then renting it endlessly for subscription payments.
    • Google spends massively on software, data centers, and data collection to create virtuous cycles where users access its servers to gain access to 3rd-party content, whether that be web pages, videos, or ad-supported content, which incentivizes suppliers to create even more content that Google can leverage to make itself better and more valuable to users.
    • AWS is the same model as Azure; Amazon.com has invested massive amounts of money on logistic capabilities — with huge marginal costs, to be clear, which has always made Amazon unique — to create an indispensable platform for suppliers and 3rd-party merchants.
    • Facebook, like Google, spends massively on software, data centers, and data collection to create virtuous cycles where users access its servers to gain access to third-party content, but the real star of the show is first-party content that is exclusive to Facebook — making it incredibly valuable — and yet free to obtain.

    None of the activities I just detailed are illegal by any traditional reading of antitrust law (some of Google’s activities and Apple’s App Store policies come closest). The core problem is the returns to scale inherent in a world of zero marginal costs — first in the case of chips, and then in the case of software — which result in companies becoming more attractive to both users and suppliers the larger they become, not less.
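
    The underlying arithmetic is worth spelling out: if a product costs F in fixed costs to build and c per additional user to serve, then the average cost of serving N users is

    $$\mathrm{AC}(N) = \frac{F}{N} + c, \qquad \lim_{N \to \infty} \mathrm{AC}(N) = c$$

    When c is at or near zero, as with chips and software, average cost falls without bound as N grows, which is precisely why scale makes these companies more attractive to users and suppliers rather than more unwieldy.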

    Understanding Versus Approval

    Facebook, earlier this year, took this reality to its logical conclusion, at least as far as its battered image in the media was concerned. CEO Mark Zuckerberg, on the company’s earnings call in January, said:

    We’re also focused on communicating more clearly what we stand for. One critique of our approach for much of the last decade was that because we wanted to be liked, we didn’t always communicate our views as clearly because we were worried about offending people. So this led to some positive but shallow sentiment towards us and towards the company. And my goal for this next decade isn’t to be liked, but to be understood. Because in order to be trusted, people need to know what you stand for.

    So we’re going to focus more on communicating our principles, whether that’s standing up for giving people a voice against those who would censor people who don’t agree with them, standing up for letting people build their own communities against those who say that the new types of communities forming on social media are dividing us, standing up for encryption against those who say that privacy mostly helps bad people, standing up for giving small businesses more opportunity and sophisticated tools against those who say that targeted advertising is a problem, or standing up for serving every person in the world against those who say that you have to pay a premium in order to really be served.

    These positions aren’t always going to be popular, but I think it’s important for us to take these debates head-on. I know that there are a lot of people who agree with these principles, and there are a whole lot more who are open to them and want to see these arguments get made. So expect more of that this year.

    The social network, for once, was ahead of the curve, as the coronavirus showed just how critical it was to allow the free flow of information, something I detailed in Zero Trust Information:

    The implication of the Internet making everyone a publisher is that there is far more misinformation on an absolute basis, but that also suggests there is far more valuable information that was not previously available:

    A drawing of The Implication of More Information

    It is hard to think of a better example than the last two months and the spread of COVID-19. From January on there has been extensive information about SARS-CoV-2 and COVID-19 shared on Twitter in particular, including supporting blog posts, and links to medical papers published at astounding speed, often in defiance of traditional media. In addition multiple experts including epidemiologists and public health officials have been offering up their opinions directly.

    Moreover, particularly in the last several weeks, that burgeoning network has been sounding the alarm about the crisis hitting the U.S. Indeed, it is only because of Twitter that we knew that the crisis had long since started (to return to the distribution illustration, in terms of impact the skew goes in the opposite direction of the volume).

    The Problem With Experts

    If I can turn solipsistic for a moment, while preparing that piece, I warned a friend that it would be controversial, and he couldn’t understand why. In fact, though, I turned out to be right: lots of members of the traditional media didn’t like the piece at all, not because I attacked the traditional media — which I mostly didn’t, and in fact relied on its reporting, as I consistently do on Stratechery — but because I dared to suggest that a world without gatekeepers had upside, not just downside.

    I went further two weeks ago in Unmasking Twitter, arguing that the media’s overreliance on experts was precisely why social media should not be censored:

    It sure seems like multiple health authorities — the experts Twitter is going to rely on — have told us that masks “are known to be ineffective”: is Twitter going to delete the many, many, many tweets — some of which informed this article — arguing the opposite?

    The answer, obviously, is that Twitter won’t, because this is another example of where Twitter has been a welcome antidote to “experts”; what is striking, though, is how explicitly this shows that Twitter’s policy is a bad idea, not just because it allows countries like China to indirectly influence its editorial decisions, but also because it limits the search for truth.

    Interestingly, this self-reflective piece by Peter Kafka appears to agree with at least the first part of that argument:

    As we head into the next phase of the pandemic, and as the stakes mount, it’s worth looking back to ask how the media could have done better as the virus broke out of China and headed to the US. Why didn’t we see this coming sooner? And once we did, why didn’t we sound the alarm with more vigor?

    If you read the stories from that period, not just the headlines, you’ll find that most of the information holding the pieces together comes from authoritative sources you’d want reporters to turn to: experts at institutions like the World Health Organization, the CDC, and academics with real domain knowledge.

    The problem, in many cases, was that that information was wrong, or at least incomplete. Which raises the hard question for journalists scrutinizing our performance in recent months: How do we cover a story where neither we nor the experts we turn to know what isn’t yet known? And how do we warn Americans about the full range of potential risks in the world without ringing alarm bells so constantly that they’ll tune us out?

    What is striking about Kafka’s assessment — which to be clear, should be applauded for its self-awareness and honesty — is the degree to which it effectively accepts the premise that journalists ought not think for themselves, but rather rely on experts:

    But when it came to grappling with a new disease they knew nothing about, journalists most often turned to experts and institutions for information, and relayed what those experts and institutions told them to their audience.

    Again, I appreciate the honesty; it backs up my argument in Unmasking Twitter that this reflected the traditional role the media played:

    In the analog world, politicians and experts needed the media to reach the general population; debates happened between experts, and the media reported their conclusions. Today, though, politicians and experts can go direct to people — note that I used nothing but tweets from experts above. That should be freeing for the media in particular, to not see Twitter as opposition, but rather as a source to challenge experts and authority figures, and make sure they are telling the truth and re-visiting their assumptions.

    This, notably, is another area where the biggest tech companies are far ahead.

    The Waning of East Coast Media

    Yesterday the New York Times wrote an article entitled The East Coast, Always in the Spotlight, Owes a Debt to the West:

    The ongoing effort of three West Coast states to come to the aid of more hard-hit parts of the nation has emerged as the most powerful indication to date that the early intervention of West Coast governors and mayors might have mitigated, at least for now, the medical catastrophe that has befallen New York and parts of the Midwest and South.

    Their aggressive imposition of stay-at-home orders has stood in contrast to the relatively slower actions in New York and elsewhere, and drawn widespread praise from epidemiologists. As of Saturday afternoon, there had been 8,627 Covid-19 related deaths in New York, compared with 598 in California, 483 in Washington and 48 in Oregon. New York had 44 deaths per 100,000 people. California had two.

    But these accomplishments have been largely obscured by the political attention and praise directed to New York, and particularly its governor, Andrew M. Cuomo. His daily briefings — informed and reassuring — have drawn millions of viewers and mostly flattering media commentary…

    This disparity in perception reflects a longstanding dynamic in American politics: The concentration of media and commentators in Washington and New York has often meant that what happens in the West is overlooked or minimized. It is a function of the time difference — the three Pacific states are three hours behind New York — and the sheer physical distance. Jerry Brown, the former governor of California, a Democrat, found that his own attempts to run for president were complicated by the state where he worked and lived.

    Jerry Brown ran for President in 1976, 1980, and 1992; this analysis was likely correct then — before the Internet. What seems more likely, now, though, is that this article takes a dose of my previous solipsism and doubles down: the New York Times may not pay particular attention to the West, but that is not necessarily reflective of the rest of the world.

    Critically, it is not reflective of tech companies: what has been increasingly whitewashed in the story of California and Washington’s success in battling the coronavirus1 is the role tech companies played: the first work-from-home orders started around March 1st, and within a week nearly all tech companies had closed their doors; local governments followed another week later.

    This action by local governments was, to be clear, before the rest of the country, and without question saved thousands of lives; it should not be forgotten, though, that executives who listened not to the media but primarily to social and non-traditional media were the furthest ahead of the curve. In other words, it increasingly doesn’t matter who or what the media covers, or when: success comes from independent thought and judgment.

    Coronavirus Clarity

    This gets at why the biggest news to come out of Apple and Google’s announcement is, well, the lack of it. Specifically, we have a situation where two dominant companies — a clear oligopoly — are creating a means to track civilians, and there is no pushback. Moreover, it is baldly obvious that the only obstacle to this being involuntary is not the government, but rather Apple and Google. What is especially noteworthy is that the coronavirus crisis is the one time we might actually wish for central authorities to overcome privacy concerns, but these companies — at least for now — won’t do it.

    This is, in other words, the instantiation of Zuckerberg’s declaration that Facebook — and, apparently, tech broadly — would henceforth seek understanding, not necessarily approval. Apple and Google are leaning into their dominant position, not obscuring it or minimizing it. And, because it is about the coronavirus, we all accept it.

    It is, in fact, a perfect example of what I wrote about last week:

    At the same time, I think there is a general rule of thumb that will hold true: the coronavirus crisis will not so much foment drastic changes as it will accelerate trends that were already happening. Changes that might have taken 10 or 15 years, simply because of the stickiness of the status quo, may now happen in far less time.

    This seems likely to be the case when it comes to tech dominance, or at least the acceptance thereof. The truth is we have been living in a world where tech answers to no one, including the media, but we have all — both tech and the media — pretended otherwise. Those days seem over.

    The truth, though, is that this is, unequivocally, a good thing. To have pretended otherwise — for Facebook to have curried favor, or Apple to pretend like it didn’t have market power — was a convenient lie for everyone involved. The media was able to feel powerful, and tech companies were able to consolidate their position without true accountability.

    What we desperately need is a new conversation that deals with the world as it will be and increasingly is, not as we delude ourselves into what once was and wish still were. Tech companies are powerful, but antitrust laws, formulated for oil and railroad companies, don’t really apply. East coast media may dominate traditional channels, but those channels are just one of many on social media, all commoditized in personalized feeds. Centralized governments, predicated on leveraging scale, may be no match for either hyperscale tech companies or, on the flipside, the micro companies that are unlocked by the existence of platforms.

    I don’t have all of the answers here, although I think new national legislative approaches, built on the assumption of zero marginal costs, in conjunction with a dramatic reduction in local regulatory red tape, get at what better approaches might look like. Figuring out those approaches, though, means clarity about where we actually are; for that, it turns out, a virus, so difficult to understand, is tremendously helpful.


    1. Above-and-beyond the whitewashing about what happened in the San Francisco Bay Area  


  • Apple, Amazon, and Common Enemies

    Last week, without fanfare, Amazon Prime Video apps on iOS made a subtle change to the experience of purchasing or renting TV Shows and videos:

    Amazon's new purchase flow on iOS

    That screen is not Apple’s in-app purchase screen, which was, by fiat, the only way to buy digital content on iOS devices. That meant that Apple received somewhere between 15% and 30% of ebook, music, video, etc. sales and subscriptions, even if Apple had nothing to do with their creation; another word for this is rent, thanks to Apple’s dominant position in portable devices, particularly the high end.

    Apple was careful to say that this wasn’t a special deal for Amazon, noting in a statement:

    Apple has an established program for premium subscription video entertainment providers to offer a variety of customer benefits — including integration with the Apple TV app, AirPlay 2 support, tvOS apps, universal search, Siri support and, where applicable, single or zero sign-on. On qualifying premium video entertainment apps such as Prime Video, Altice One and Canal+, customers have the option to buy or rent movies and TV shows using the payment method tied to their existing video subscription.

    The Canal+ deal is likely tied to the premium channel’s decision to offer Apple TV set top boxes in lieu of its own back in 2018; Altice One made a similar deal earlier this year. Both, though, were about existing TV distributors (premium channels in the case of Canal+, and a traditional MVPD — multichannel video programming distributor — in the case of Altice One); Amazon Prime Video is an over-the-top streaming service with its own exclusive content, making this a much bigger deal than Apple’s statement might suggest.

    Moreover, it’s not simply a big deal, it’s a fascinating one as well: you can make an argument it is better for Amazon, better for Apple, better for both, or neither — it just depends on your point of view.

    Apple Versus Amazon

    View 1: A Win for Amazon

    Start with the narrowest point of view: the fact that Amazon can now sell and rent TV shows and movies on iOS devices from iPhones to iPads to Apple TV is a big win for Amazon. The reasoning is straightforward: 30% is a lot to pay to Apple, and now Amazon doesn’t need to.

    It’s not a complete win, though: Amazon still can’t sell e-books in the Kindle app. In that case (as with Prime Video previously), Amazon leaves it up to customers to figure out that they have to browse to Amazon in Safari to make a purchase; Apple doesn’t allow apps like Kindle or Spotify to even suggest that customers can subscribe or purchase on the web.

    View 2: A Quid Pro Quo

    Of course Apple didn’t make this deal in a vacuum: the company’s statement clearly implied that the Amazon Prime Video app — which, until a couple of years ago, didn’t even exist on Apple TV — integrated the full feature set of the Apple TV app (yes, it is confusing that the box, app, service, and subscription offering are all variants of the name Apple TV).

    The idea of the Apple TV app (which, again, is different from the traditional Apple TV interface, and is available not just on the Apple TV box but also on the iPhone, iPad, and Mac) is to be the overarching interface for all TV viewing. Different video providers might have their own apps, but the content within those apps is surfaced in the same interface.

    This is in certain respects a win for customers — instead of navigating multiple apps to find one show, they can simply go straight to the show they wish to watch — and definitely a win for Apple. They own the interaction point with the customer and are effectively commoditizing supply.

    Indeed, this is where most of the analysis of this deal has settled: both Amazon and Apple are getting what they want, and customers are benefitting as well — a win-win-win.

    View 3: A Win for Apple

    I think, though, this analysis is incomplete, because it does not incorporate the broader video strategies of Apple and Amazon. The inclination of most people looking at the space is to assume that everyone is trying to be Netflix: earn monthly subscription revenue for a library of shows, some number of which are created in-house.

    The truth, though, is that Amazon has been making good money in video by doing what it does best: being a distributor; Variety reported back in 2018 that Amazon accounted for 55 percent of all direct-to-consumer video subscriptions. Moreover, Roku’s entire profit margin is predicated on the same business.

    I have long argued that this was the best way to understand Apple TV+: Apple has, since the days of iTunes, understood the power and profit that comes from being a digital storefront; moreover, this approach fit squarely within the company’s Services narrative, which not only was about increasing Services revenue, but also increasing margin.

    In other words, Apple TV+ was actually about Apple TV Channels: give customers a reason to use the Apple TV app, and then sell subscriptions to HBO, Showtime, Crunchyroll, etc. That is certainly the best way to understand Amazon Prime Video: there is not nearly enough content to seriously challenge Netflix, but there are shows worth watching, which makes the Amazon Fire TV boxes worth owning, and the Prime Video App worth using — and that means more subscriptions on which Amazon can take an ongoing percentage.

    In this view, then, Apple is the clear winner: Amazon just made the Apple TV interface better, which means that Apple can sell that many more Apple TV Channels subscriptions — presumably at the expense of Amazon. I think Amazon was willing to make this tradeoff for a few reasons (that I first detailed yesterday):

    • Apple’s best customers — of which owning an Apple TV and using the Apple TV app is likely a good proxy — are unlikely to ever buy an Amazon Fire or use Prime Video as their primary interface. Therefore they aren’t really in the Amazon Channels addressable market anyways.
    • Amazon now has much better access to Apple’s best customers, both in terms of pushing Amazon Prime Video and also selling and renting individual shows and movies; moreover, it doesn’t have to share revenue with Apple. This is all additive.

    The third reason, though, suggests what I think is the real quid pro quo.

    View 4: Quid Pro Quo 2

    This isn’t the first surprising development in the Amazon-Apple relationship in recent years. In December of 2018 the two companies made an announcement that I found far more shocking than this one; from Bloomberg:

    Apple Inc. and Amazon.com Inc. announced their second partnership this month: the iPhone maker’s music-streaming service is coming to Amazon’s Echo devices in December. The move gets Apple Music onto the most-popular voice-controlled speakers, giving it distribution beyond Apple’s own devices. Subscribers will be able to control Apple Music with Amazon’s Alexa digital assistant, the first time Apple has opened up its music service to full voice control outside its own Siri technology. The decision pushes Apple’s music service into more living rooms at a time when its own internet-connected speaker, the HomePod, hasn’t sold as well as the competition. Given the breadth of Alexa-enabled speakers on the market, the move could also boost Apple’s own subscription numbers…

    Apple definitely made a trade-off here, effectively sacrificing the HomePod’s biggest selling point for greater distribution for Apple Music. The real winner, though, was Amazon, which views Alexa as a critical investment and, by virtue of adding Apple Music, gained a meaningful differentiator relative to Google Home.

    At the time I suspected that Prime Video on Apple TV was the quid pro quo; that seems even more certain now, particularly when you realize that Amazon has, in some respects, helped Apple compete against its own business.

    The Coronavirus Impact

    It is not enough, though, to only consider Apple and Amazon, particularly now, in the coronavirus crisis. There will certainly, both here and elsewhere, be plenty of missives about what will, and will not, change because of the dramatic upheaval that is happening now; it is hard to say anything for certain, in part because of the uncertainty as to when the current crisis will pass.

    At the same time, I think there is a general rule of thumb that will hold true: the coronavirus crisis will not so much foment drastic changes as it will accelerate trends that were already happening. Changes that might have taken 10 or 15 years, simply because of the stickiness of the status quo, may now happen in far less time.

    It seems very plausible that cord-cutting will be a leading example. The number of households with MVPD subscriptions has been falling steadily, but the cable business has still been a profitable one for everyone involved, from distributors to networks. What happens, though, in a severe recession without live sports, the single best reason to subscribe to an MVPD in the first place?

    I have previously made my case for the long run: MVPDs become de facto sports and live news bundles, in which a far smaller rump of customers pay similar prices for a smaller number of channels, almost all of which are devoted to live events and charge significant carriage fees. Meanwhile, almost all of the other jobs TV does, including drama, escapism, and filler, would migrate to streaming services that provided a better experience by virtue of eliminating time as a constraint.

    It is the long run that is particularly interesting in the context of Apple and Amazon’s deal, because that inevitably brings Netflix into the picture. I wrote back in January 2016 about the company’s ladder strategy:

    Netflix started by using content that was freely available (DVDs) to offer a benefit — no due dates and a massive selection — that was orthogonal to the established incumbent (Blockbuster). This built up Netflix’s user base, brand recognition, and pocketbook

    Netflix then leveraged their user base and pocketbook to acquire streaming rights in the service of a model that was, again, orthogonal to incumbents (linear television networks). This expanded Netflix’s user base, transformed their brand, and continued to increase their buying power

    With an increasingly high-profile brand, large user base, and ever deeper pockets, Netflix moved into original programming that was orthogonal to traditional programming buyers: creators had full control and a guarantee that they could create entire seasons at a time

    Each of these intermediary steps was a necessary prerequisite to everything that followed, culminating in yesterday’s announcement: Netflix can credibly offer a service worth paying for in any country on Earth, thanks to all of the IP it itself owns. This is how a company accomplishes what, at the beginning, may seem impossible: a series of steps from here to there that build on each other. Moreover, it is not only an impressive accomplishment, it is also a powerful moat; whoever wishes to compete has to follow the same time-consuming process.

    This actually understated Netflix’s approach: the company has further shifted from buying its own shows to producing its own shows, which required far more cash up front (thus the massive negative cash flow in recent years), but with far more upside down the road.

    Here’s the thing, though: I am not convinced that Netflix’s long-run optimal position is producing all of its own content. More importantly, I’m not sure this is the optimal position for content producers.

    Disney Versus Netflix

    Disney is, as it often is, the exception that proves the rule. I explained last year in Disney and the Future of TV:

    The best way to understand Disney+, which will cost only $6.99/month, starts with the name: this is a service that is not really about television, at least not directly, but rather about Disney itself. This famous chart created by Walt Disney himself remains as pertinent as ever:

    Walt Disney's Disney Map

    This is the only appropriate context in which to think about Disney+. While obviously Disney+ will compete with Netflix for consumer attention, the goals of the two services are very different: for Netflix, streaming is its entire business, the sole driver of revenue and profit. Disney, meanwhile, obviously plans for Disney+ to be profitable — the company projects that the service will achieve profitability in 2024, and that includes transfer payments to Disney’s studios — but the larger project is Disney itself.

    By controlling distribution of its content and going direct-to-consumer, Disney can deepen its already strong connections with customers in a way that benefits all parts of the business: movies can beget original content on Disney+ which begets new attractions at theme parks which begets merchandising opportunities which begets new movies, all building on each other like a cinematic universe in real life. Indeed, it is a testament to just how lucrative the traditional TV model is that it took so long for Disney to shift to this approach: it is a far better fit for their business in the long run than simply spreading content around to the highest bidder.

    Implied in this analysis is the fact that, absent alternative monetization mechanisms like theme parks or merchandise, content is indeed best sold to the highest bidder. Content has extremely high fixed costs, and zero marginal costs; it follows, then, that the fiscally responsible content producers sell that content to as many outlets as possible, in order to maximize their leverage on those fixed costs. The only reason to do otherwise is if, like Disney, you have superior money-making mechanisms internally.
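    Since the marginal cost of licensing to one more outlet is effectively zero, every additional sale is pure leverage on the same fixed cost. A quick sketch of that logic in Python, with every number invented purely for illustration:

    # Toy economics of content: high fixed costs, ~zero marginal costs.
    # All figures are invented for illustration, not actual industry numbers.

    FIXED_COST = 100_000_000   # cost to produce a season of a show
    LICENSE_FEE = 30_000_000   # per-outlet licensing fee; copies cost ~nothing

    for outlets in (1, 2, 4):
        profit = outlets * LICENSE_FEE - FIXED_COST
        print(f"{outlets} outlet(s): profit = ${profit:,}")

    # 1 outlet(s): profit = $-70,000,000
    # 2 outlet(s): profit = $-40,000,000
    # 4 outlet(s): profit = $20,000,000

    Withholding content from additional outlets only makes sense if the foregone fees can be earned back elsewhere.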

    It follows, then, that endorsing Disney+ as a strategy is unique to Disney; nearly every other content company, without Disney’s ability to monetize content across properties and over time, ought to be pursuing maximum distribution for that content. And that means selling to Netflix: the streaming service has by far the largest customer base and, by implication, the greatest willingness to pay.

    And yet, more and more content producers are seeking to launch their own streaming services, from HBO Max to NBC Peacock and multiple examples in between. The economics of all of these offerings are, at best, questionable:

    • First, these services are expensive in their own right. A company has to stand up not only its own streaming service, but also an entire infrastructure around payments and customer support.
    • Second, the cost of content is considerable: Hollywood rules (often flouted, to be fair) dictate that streaming services have to pay fair market rates for their own content, which are pricey; that is before considering the cost of content produced elsewhere.
    • Third, and most importantly, is the opportunity cost. Content on one’s own streaming service not only needs to be paid for, it also needs to not be sold to another streaming service, like Netflix.

    I am very skeptical that these costs will be sustainable for most streaming services; my prediction is that most fold within 5 years, and that Netflix is there to pick up the pieces.

    A Common Enemy

    This is the most compelling lens through which to view Apple and Amazon’s recent partnerships. Both, given their desire to be a platform for over-the-top services, are on the same side when it comes to a potential Netflix-dominated future: neither wants it to happen. Netflix dominating means that shows are sold directly to Netflix; channels are pointless. Apple and Amazon both, though, want channels to exist, if only so that they can sell subscriptions to them.

    This, by extension, is a reason why Amazon might be willing to strengthen Apple’s platform, even as it competes with Amazon’s; it would also be a reason for a further quid pro quo — Apple offering access to its shows on Amazon’s devices. This remains to be seen. [Editor: it appears this has already happened.]

    Ultimately, though, I favor Netflix in the long run. Apple’s and Amazon’s strategies both entail replacing MVPDs with a streaming alternative that preserves the existing value chain; value chain transformation, though, inexorably alters the point of integration within the value chain. It seems safer to bet on the company that is predicated on a completely altered future than on those hoping for mere substitutes.


  • Unmasking Twitter

    On March 16 Twitter posted An Update on Our Continuity Strategy During COVID-19 that included this bit on how the company was changing its policies about content moderation:

    Broadening our definition of harm to address content that goes directly against guidance from authoritative sources of global and local public health information. Rather than reports, we will enforce this in close coordination with trusted partners, including public health authorities and governments, and continue to use and consult with information from those sources when reviewing content.

    So what is the latest guidance from authorities? The World Health Organization (WHO) helpfully has a tweet:

    So does the CDC:

    So does the U.S. Surgeon General:

    What I thought was particularly noteworthy, though, was this tweet from the Surgeon General yesterday:

    Everyone is taking their guidance from the WHO, and that’s a problem.

    WHO and China

    The Wall Street Journal, in an article in February, explored why the WHO seems to act with such deference to China.

    When the World Health Organization declared a global public-health emergency at the end of last month, it praised China’s “extraordinary” efforts to combat the coronavirus epidemic and urged other countries not to restrict travel. “China is actually setting a new standard for outbreak response,” WHO Director-General Tedros Adhanom Ghebreyesus said. Many governments ignored the travel advice. Other public-health experts criticized his unqualified praise for China.

    Among the complaints directed at Dr. Tedros: He was bending to Beijing by lauding a Chinese response that included quarantining 60 million people—which many health experts see as inconsistent with WHO guidelines—while calling on other countries not to cut off travel and trade with China…By praising China’s response effusively, the WHO is compromising its own epidemic response standards, eroding its global authority, and sending the wrong message to other countries that might face future epidemics, they say…

    Dr. Mackenzie questioned why Chinese authorities appeared to delay reporting an increase in infections in the first half of January. Many health experts believe the outbreak spread more quickly early on because local authorities tried to cover it up, including by reprimanding a local doctor who sought to raise the alarm, and then were slow to announce it could pass person to person. “China is obviously an important player,” said Dr. Mackenzie. “So everything the WHO does has to keep that in mind. At the same time, you can be overly effusive.”

    The entire article is well worth a read, but the takeaway is this: at every step of this outbreak the WHO has sought to praise and accommodate China, despite the fact that news about the initial outbreak was forcibly suppressed, that China violated WHO guidelines with the severity of its quarantines (which, to be clear, appear to have been effective), that China hid the transmission rate amongst health care workers from the WHO until February 14, and that China waited weeks to even allow the WHO into the country, and then only on carefully scripted and chaperoned tours.

    Those tours — which, again, took weeks to negotiate, even as the coronavirus was spreading all over the globe — resulted in this report. It is, indeed, exceptionally effusive about the Chinese response, and contains no mention of China’s cover-up of human-to-human transmission in particular, which led to this tweet from the WHO:

    This was particularly unfortunate given that Taiwan had told the WHO on December 31 that there was human-to-human transmission.

    At the same time, much of the report is genuinely useful, particularly this warning to other countries:

    COVID-19 is spreading with astonishing speed; COVID-19 outbreaks in any setting have very serious consequences; and there is now strong evidence that non-pharmaceutical interventions can reduce and even interrupt transmission. Concerningly, global and national preparedness planning is often ambivalent about such interventions. However, to reduce COVID-19 illness and death, near-term readiness planning must embrace the large-scale implementation of high-quality, non-pharmaceutical public health measures. These measures must fully incorporate immediate case detection and isolation, rigorous close contact tracing and monitoring/quarantine, and direct population/community engagement.

    Had this been heeded by Western countries, all would be in far better shape than they are.

    Asymptomatic Transmission

    Even so, it might not have mattered, because of this paragraph:

    Asymptomatic infection has been reported, but the majority of the relatively rare cases who are asymptomatic on the date of identification/report went on to develop disease. The proportion of truly asymptomatic infections is unclear but appears to be relatively rare and does not appear to be a major driver of transmission.

    This is problematic in three ways:

    • First, it is at odds with evidence from the Diamond Princess (where every member of the population was tested), Iceland (where a statistically representative sample of the entire population has been tested), and South Korea (where testing was widely available even without symptoms); all show a high rate of asymptomatic infection.
    • Second, there are multiple reports from China that asymptomatic carriers spread the virus.
    • Third, there is compelling statistical evidence that asymptomatic carriers drove the majority of the virus’s spread within China (and likely, by extension, around the world). This, notably, suggests that social distancing and travel bans are particularly effective.

    It seems likely this paragraph about the lack of asymptomatic transmission was strongly argued for by China. Caixin reported at the beginning of March about China’s push in this area:

    China’s decision to exclude individuals who carry the new coronavirus but show no symptoms from the country’s public tally of infections has drawn debate over whether this approach obscures the scope of the epidemic, with a document received by Caixin showing a significant proportion of one province’s cases show no symptoms. Since early February, the National Health Commission (NHC) has concluded that “asymptomatic infected individuals” can infect others and demanded local authorities to report those cases. However, the commission has also decided not to include these people in its statistics for “confirmed cases” or indeed to release data on asymptomatic cases.

    In an interview with Nature last week, Wu Zunyou, China’s chief epidemiologist, defended the country’s treatment of asymptomatic data. He told the magazine that a positive nucleic acid test — a genetic sequencing test used to detect the coronavirus — does not necessarily indicate an infection because viral genetic material detected through throat or nasal swabs does not confirm the virus has entered cells and begun to multiply. This notion was also echoed by Chinese representatives at the WHO.

    But this view has been challenged by both domestic and overseas experts, who said that a virus must have replicated to reach detectable levels.

    I am no expert, but given that a virus cannot replicate on its own, but rather must leverage the body’s cells to churn out copies of itself, it seems rather self-evident that if it is detectable it has entered cells. And yet, Director General Tedros Adhanom Ghebreyesus argued — on Twitter! — that asymptomatic carriers were not a concern:

    Again, an increasing amount of evidence is that this just isn’t true: asymptomatic carriers are a major problem.

    Mask Efficacy

    This is where masks come in. Much of the discussion of their efficacy has been focused on whether they keep you safe from the virus, and the evidence suggests that the answer is “probably”. SlateStarCodex has a comprehensive overview of the evidence here.

    Everyone agrees, though, that those who are sick should wear masks; as the Taiwan CDC puts it, “Masks are mainly used for preventing the spread of disease and protecting people around you.” This, though, highlights the shortcomings of the “Don’t wear masks if you’re not sick” recommendations:

    • First, people are terrible in general at estimating if they are sick, particularly if their symptoms are mild.
    • Second, as Zeynep Tufekci argued in the New York Times, saying that only sick people should wear them stigmatizes the sick and makes them less likely to wear them.
    • Third, and most importantly, asymptomatic transmission means you don’t even know if you are sick in the first place.

    This point was well-made by Sui Huang on Medium:

    There is no scientific support for the statement that masks worn by non-professionals are “not effective”. In contrary, in view of the stated goal to “flatten the curve”, any additional, however partial reduction of transmission would be welcome — even that afforded by the simple surgical masks or home-made (DIY) masks (which would not exacerbate the supply problem). The latest biological findings on SARS-Cov-2 viral entry into human tissue and sneeze/cough-droplet ballistics suggest that the major transmission mechanism is not via the fine aerosols but large droplets, and thus, warrant the wearing of surgical masks by everyone.

    This is where China’s push to exclude asymptomatic cases is so damaging: it excluded what may be the most important SARS-CoV-2 transmission vector, which resulted in the WHO not updating its guidelines, which may have resulted in far more people in the West getting sick than might have otherwise.

    The good news is that the authorities appear to be listening: the Washington Post is reporting that the CDC is considering revisiting its guidelines and suggesting that people use nonmedical masks or cloth coverings (because N95 masks and surgical masks should be reserved for healthcare workers); Austria has already made masks compulsory, joining Slovakia, the Czech Republic, and Bosnia-Herzegovina (masks are, of course, widespread in most of Asia, though, contrary to popular belief, they are not compulsory by law; they are often required by private businesses). Germany is considering the same.

    To be very clear, N95 masks and even surgical masks, at least until they are widely available, should be saved for healthcare workers. That’s OK, though: homemade masks work, and governments should be honest about that.

    Twitter’s Theoretical Value

    Twitter, in its guidelines, lists multiple examples of when it might enforce its new policy. The third one stood out:

    Description of harmful treatments or protection measures which are known to be ineffective, do not apply to COVID-19, or are being shared out of context to mislead people, even if made in jest, such as “drinking bleach and ingesting colloidal silver will cure COVID-19.”

    It sure seems like multiple health authorities — the experts Twitter is going to rely on — have told us that masks “are known to be ineffective”: is Twitter going to delete the many, many, many tweets — some of which informed this article — arguing the opposite?

    The answer, obviously, is that Twitter won’t, because this is another example of where Twitter has been a welcome antidote to “experts”; what is striking, though, is how explicitly this shows that Twitter’s policy is a bad idea, not just because it allows countries like China to indirectly influence its editorial decisions, but also because it limits the search for truth.

    You can think about the value of disagreeing with experts theoretically. Suppose that experts are correct 9 out of 10 times (and honestly, that’s probably low). However, if they are wrong, they have to pay out $100 (if they are right, they don’t get anything, because that is how the world works; the payout comes from being an expert in the first place). In this case, the expected cost of being an expert is:

    9 x $0 + 1 x $100 = $100

    A $100 expected cost: yes, you may be right most of the time, but when you get it wrong, it is going to cost you.

    Now, suppose experts have to put up with Twitter and having people question them. It’s a real pain in the rear end, what with all of the trolls and misinformation. To that end, let’s suppose every episode now costs the expert $5 because they have to argue with people who aren’t experts. This suggests the cost is:

    10 x $5 + 1 x $100 = $150

    The problem is that this overlooks the possibility that the non-experts are sometimes right, or, perhaps more realistically, that they force the experts to re-visit their assumptions and predictions. Suppose that 10% of the time they are actually useful; now the expected cost is:

    10 x $5 + 90%(1 x $100) = $140

    Better than the worst case scenario, but not great.

    That, though, isn’t quite right either, because it misses the fact that on a medium like Twitter, there are effectively infinite counter-arguments — indeed, that is why Twitter is so costly ($5) in the first place! The pay-off, though, is that the right argument is that much more likely to surface — let’s say 90% of the time. Now the expected cost is:

    10 x $5 + 10%(1 x $100) = $60

    That is a big improvement over the base case!
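    To make the arithmetic explicit, here is the same toy model as a few lines of Python; the function and parameter names are mine, and the figures are the made-up ones above:

    # Toy model of expert cost over ten predictions, as sketched above.
    # friction: per-episode cost of arguing with non-experts on Twitter
    # p_correction: chance that pushback surfaces the right answer in time

    def expected_cost(episodes=10, penalty=100, friction=0, p_correction=0.0):
        # One wrong call out of `episodes`; pushback discounts its penalty.
        return round(episodes * friction + penalty * (1 - p_correction), 2)

    print(expected_cost())                              # no Twitter: $100
    print(expected_cost(friction=5))                    # Twitter as pure cost: $150
    print(expected_cost(friction=5, p_correction=0.1))  # pushback useful 10%: $140
    print(expected_cost(friction=5, p_correction=0.9))  # pushback useful 90%: $60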

    These numbers are, obviously, completely made up, but frankly, I think they are conservative. The cost of the coronavirus crisis in particular is so astronomical that basically any amount of investment to have avoided it or to ameliorate it is well worth it. Masks, hopefully, will be a good example: if Twitter is right, and the CDC is wrong, and economies are able to open sooner than they might have otherwise, that will be well worth all of the misinformation and terrible takes that Twitter produced in the meantime.

    Internet Optimism

    There is a further, even more optimistic, takeaway. In the analog world, politicians and experts needed the media to reach the general population; debates happened between experts, and the media reported their conclusions. Today, though, politicians and experts can go direct to people — note that I used nothing but tweets from experts above. That should be freeing for the media in particular, to not see Twitter as opposition, but rather as a source to challenge experts and authority figures, and make sure they are telling the truth and re-visiting their assumptions.

    Indeed, while many on the right gripe that the media’s general opposition to Trump is driven by partisanship, I actually think it is a healthy approach to authority in general, particularly when that authority doesn’t need help going directly to people. Imagine if the media applied the same skepticism they give to Trump to the CDC or WHO, much less the Obama administration’s approach to the Great Financial Crisis or the Bush administration’s approach to Iraq (or, for that matter, Chinese data).

    As I have argued from the beginning of this site, the Internet is an amoral force: it is up to us to decide if it is for good or bad. The best way forward is embracing Internet assumptions and using the overwhelming amount of information and free access to anyone to make things better, not trying to build a moat around what experts say is right or wrong.

    I wrote a follow-up to this article in this Daily Update.


  • Compaq and Coronavirus

    To live in a moment that will be in history books is not a particularly pleasant experience; history, though, has another cruelty: those that are not remembered at all.

    Compaq’s Impact

    Consider Compaq: it was one of the most important companies in tech history, and today it is all-but forgotten. For example, look at this brief history of the IBM PC I wrote in 2013:

    You’ve heard the phrase, “No one ever got fired for buying IBM.” That axiom in fact predates Microsoft or Apple, having originated during IBM’s System/360 heyday. But it had a powerful effect on the PC market.

    In the late 1970s and very early 1980s, a new breed of personal computers were appearing on the scene, including the Commodore, MITS Altair, Apple II, and more. Some employees were bringing them into the workplace, which major corporations found unacceptable, so IT departments asked IBM for something similar. After all, “No one ever got fired…”

    IBM spun up a separate team in Florida to put together something they could sell IT departments. Pressed for time, the Florida team put together a microcomputer using mostly off-the-shelf components; IBM’s RISC processors and the OS they had under development were technically superior, but Intel had a CISC processor for sale immediately, and a new company called Microsoft said their OS, DOS, could be ready in six months. For the sake of expediency, IBM decided to go with Intel and Microsoft.

    The rest, as they say, is history.

    But wait, there was one critical part of this story that I excluded! IBM wasn’t completely stupid: while much of the IBM PC was outsourced, the BIOS — Basic Input/Output System, the firmware that actually turned on the PC hardware and loaded the operating system — was copyrighted, and, IBM presumed, defensible in court. Compaq, though, figured out how to reverse-engineer the BIOS anyways. Rod Canion, who co-founded Compaq, explained on the Internet History Podcast:

    What our lawyers told us was that, not only can you not use it [the copyrighted code] anybody that’s even looked at it — glanced at it — could taint the whole project. (…) We had two software people. One guy read the code and generated the functional specifications. So, it was like, reading hieroglyphics. Figuring out what it does, then writing the specification for what it does. Then, once he’s got that specification completed, he sort of hands it through a doorway or a window to another person who’s never seen IBM’s code, and he takes that spec and starts from scratch and writes our own code to be able to do the exact same function…

    [We had] just a bull-headed commitment to making all the software run. We were shocked when we found out none of our competitors had done it to the same degree. We could speculate on why they had stopped short of complete compatibility: It was hard. It took a long time. And there was a natural rush to get to market. People wanted to be first. There was only one thing for us: we didn’t have a product if we couldn’t run the IBM-PC software. And if you didn’t run all of it, how would anyone be confident enough to buy your computer, if they didn’t know they were always going to be able to run new software? We took it very, very seriously.

    The result was a company that came to dominate the market; in fact, Compaq was the fastest startup to hit $100 million in revenue, then the youngest firm to break into the Fortune 500, then the fastest company to hit $1 billion in revenue. By 1994 Compaq was the largest PC maker in the world.

    Compaq’s Virtualization

    Canion was, by that point, long gone; the board had ousted him in 1991 when the company was struggling to compete with direct-to-consumer PC makers selling “good enough” computers that were not nearly as well-engineered as Compaqs, but were faster to market and much cheaper. New CEO Eckhard Pfeiffer introduced the low-cost Presario line, which leveraged cheaper parts to break the $1,000 price point, leading to Compaq achieving that first-place position. By 1996, though, growth was again slowing, and Pfeiffer needed a new plan. Part 1 was expanding into more markets; Bloomberg explains part 2:

    The second part of the formula — for producing profits along with growth — will involve wider use of outsourcing and partnership deals. That’s because the new financial yardstick — return on assets — will force the divisions to slash investment in assets such as plant, inventory, and overhead wherever possible. If the $3 billion home-PC business can cut its asset base, for instance, it can still deliver a 20% annual return to the company — even though price competition in home PCs will likely keep operating margins at around 2%.

    To get there, Compaq has already started “virtualizing” parts of its business. After cutting $57 off the cost of each home PC last year by building the chassis at its plant in Shenzhen, China, the company went a step further in cutting the cost of business desktop PCs: Instead of investing millions to expand the Shenzhen plant, Gregory E. Petsch, senior vice-president for operations, persuaded a Taiwanese supplier to build a new factory adjacent to Compaq’s to build the mechanicals for the business models. The best part of the deal: The Taiwanese supplier owns the inventory until it arrives at Compaq’s door in Houston. “This is the right way to do it,” says Sanford C. Bernstein & Co. computer analyst Vadim D. Zlotnikov.

    It worked for a time: Compaq’s stock price surged over the next two years as the company rode the Internet wave and outsourced not only the building of PCs and eventually their design, but also their new businesses:

    To compete in the big-iron business profitably, Compaq is counting on a series of relationships with other companies that can supply the kind of handholding that companies such as IBM are famous for. Instead of investing in legions of field technicians and programmers — and building up costly assets — the computer maker will use the resources of systems integrator Andersen Consulting and software maker SAP, among others. These companies have the personnel to install and maintain systems the way IBM or HP do. So Compaq gets to play in the big-iron market without incurring the costs of running its own services or software businesses. Using these partners, Compaq is already delivering packages of networks, servers, and services to big customers including General Motors, British Telecommunications, First Interstate Bancorp, and Deutsche Bundespost.

    Compaq, however, may not be able to play through their intermediaries forever. “The real solution is to create your own capability. It takes longer and is more painful, but ultimately, it is more successful,” says Graham Kemp, president of G2 Research Inc.

    Compaq never did bother; the engineering determination exemplified by Canion was long gone, and soon Compaq was as well: the company merged with HP in 2002 (resulting in a huge destruction in shareholder value), served as the badge for HP’s cheapest computers for a decade, and in 2012 was written down completely for $1.2 billion.

    And no one even noticed.

    Coronavirus Action

    Compaq’s demise was, to be fair, first and foremost about the value chain within which it competed. The entire reason Compaq could build the business it did was because as long as you had an IBM-compatible BIOS, an x86 processor, and a license for Windows, you could sell a PC that was compatible with all of the software out there. That, though, meant commoditization in the long-run, which is exactly what happened to Compaq and, it should be noted, basically all of its competitors.

    Still, while I could not ascertain exactly which Taiwanese manufacturer it was that Compaq persuaded to build its PCs and hold them on its balance sheet, I suspect there is a good chance it is still in business: companies like Quanta and Compal took over PC manufacturing in the 1990s, and PC design entirely in the 2000s. Brand names were simply that: names, and not much more. This, of course, made for a fantastic return on assets; it was not so great for long-term sustainable revenue and profits.

    It is at this point, 1,400+ words in, that I must make what is probably an obvious analogy to the historical moment we are in. While there may have been an opportunity to stop SARS-CoV-2 late last year, by January (when the WHO parroted China’s insistence that there was no human-to-human transmission), worldwide spread was probably inevitable; the New York Times brilliantly illustrated the travel patterns that explain why.

    Since then, though, there has been divergence between countries that acted and countries that talked. Taiwan, where I live, is perhaps the best example of the former; Dr. Jason Wang wrote an overview of Taiwan’s actions (and published a list of 124 action items), including:

    • Passengers on flights from Wuhan were screened for fever starting in December, and banned from entry in January; the rest of Hubei Province, and then China as a whole — including non-Chinese who had recently visited China — soon followed.
    • Data from the National Immigration Agency was integrated into the National Health Insurance Administration, allowing officials to quickly match up COVID-19 symptoms with recent travel history; full access was given to hospitals in late February.
    • People designated for home quarantine are tracked via their smartphones, and fined heavily for any violations.

    What stood out to me was mask production; on January 23, the day that China locked down Wuhan, Taiwan had the capability of producing 2.44 million masks a day; this week Taiwan is expected to exceed 13 million masks a day, a sufficient number for not only medical workers but also the general public. The mobilization bridged government, industry, and workers, and is ongoing — the plan is for Taiwan to be able to export masks soon.

    The public has done its part as well: most restaurants and buildings check the temperature of anyone who enters, and far more people than usual are wearing said masks, which worked to stop the spread of SARS in 2003, and which are likely particularly effective in the case of asymptomatic carriers of SARS-CoV-2.

    The Great Resignation

    The contrast with Western countries is stark: to the extent government officials across the Western world were discussing the coronavirus a month ago, it was to express support for China or insist that life carry on as before; I already praised the role Twitter played in sounding the alarm — often in the face of downplaying from the media — but even that was, by definition, talk. What does not appear to have happened anywhere across the West is any sort of meaningful action until it was far too late.

    This has resulted in two problems: first, by the time Western governments acted, the only available option was widespread lockdowns. Second, the talk itself is missing even the possibility of action. For example, over the last 48 hours there has been increasing discussion about trade-offs, specifically the trade-off between limiting the spread of the coronavirus and the halt in economic activity that is required to do so. Given how much I write about tradeoffs, I must surely consider this a good thing, no?

    In fact, I think it is incredibly tragic, but not for the reasons you might think. The fact of the matter is that we do make tradeoffs between human lives and economic activity all the time — speed limits are perhaps the most banal example. What is truly tragic is the utter lack of resolve and lack of a bias for action in this so-called tradeoff. The only options are to give up the economy or give in to the virus: the possibility of actually beating the damn thing is completely missing from the conversation. To put it another way, the West feels like Compaq in the 1990s, relying on its brand name and partnerships with other entities to do the actual work, forgetting that it was hard work and determination that made it great in the first place.

    The best overview of how actual hard work could make a difference was written by Tomas Pueyo in an article entitled The Hammer and the Dance; to briefly summarize, the idea is to lock down now to stop the uncontrolled spread of SARS-CoV-2, and then leverage the same sort of epidemiological tools that countries like Taiwan have, including aggressive quarantining of known infections and extensive contact tracing.

    This gets to the second reason why the current discussion of tradeoffs is so disappointing: not only is it debating a tradeoff that we don’t necessarily need to make, at least in the long run, it is also foreclosing discussions on tradeoffs we absolutely need to consider. For example, one of my neighbors just returned from America and the police were checking on his home quarantine. In fact, look more closely at what Taiwan has done to contain SARS-CoV-2 to-date — you can reframe everything in a far more problematic way:

    • Restrict international movement and close borders (including banning all non-resident foreigners this week).
    • Integrate and share private data across government agencies and with hospitals.
    • Track private individual movements via their smartphones.

    Even the mask production I praised required the government to requisition private property, and the willingness of local businesses to refuse customers without masks, or to insist on taking their temperature, is probably surprising to many in the West.

    And yet, life here is normal. Kids are in school, restaurants are open, the grocery stores are well-stocked. I would be lying if I didn’t admit that the rather shocking assertions of government authority and surveillance that make this possible, all of which I would have decried a few months ago, feel pretty liberating even as they are troubling. We need to talk about this!

    Policing Talk

    The first problem of being a society of talk, not action, is the inability to even consider hard work as a solution; the second is a blindness to the real trade-offs at play. The third, though, is the most sinister of all: if talk is all that matters, then policing talk becomes an end in itself.

    I know, for example, that I am going to get pushback on this Article, telling me to stay in my lane and leave discussions of the coronavirus to the experts or government officials. Never mind that so many of those experts and officials have made mistake after mistake — it’s all in the memory hole now!

    This is not at all to say that non-experts have the answers either; as I wrote last week the amount of misinformation is exploding. Rather, the point is that this is a situation with an unmatched-in-my-lifetime combination of massive uncertainty with unfathomable stakes. It follows, then, that the likelihood of any one person or entity having the correct answer is low, while the imperative to allow the right answer to bubble up — or, more accurately, be discovered step-by-step, idea-after-discarded-idea — is high. There is more value than ever in verifying or disproving ideas and information, and far more danger than ever in policing them.

    Moreover, if the real tradeoffs to consider are about trading away civil liberties — which is exactly what has happened in Taiwan, at least to some extent — then the imperative to preserve debate about these matters is even more important. The most precious civil liberty of all is the ability to talk. Indeed, that is the terrible irony of losing the capability and will for action: it ultimately endangers the only thing we seem to be good at, and in this case, the potential writedown is too terrible to consider.


  • Defining Information

    Last Wednesday morning, I wrote a piece called Zero Trust Information, where I lauded social media generally and Twitter specifically for functioning as an early warning system for the impending coronavirus crisis. For weeks a motley collection of folks — some epidemiologists and public health officials, but many not — had been sounding the alarm on Twitter about the exponential spread of SARS-CoV-2 and the impact the resultant COVID-19 disease would have on health care systems, culminating in a member of the Seattle Flu Study tweeting the results of an illegal test showing community transmission in Washington State. As I wrote in that piece:

    Once we get through this crisis, it will be worth keeping in mind the story of Twitter and the heroic Seattle Flu Study team: what stopped them from doing critical research was too much centralization of authority and bureaucratic decision-making; what ultimately made their research materially accelerate the response of individuals and companies all over the country was first their bravery and sense of duty, and secondly the fact that on the Internet anyone can publish anything.

    Later that night, after a presidential address, the infection of Tom Hanks, and the suspension of the NBA, the rest of the country finally woke up, and along the way, something interesting happened: Twitter became a much worse source of information.

    Information Over Time

    The biggest complaint I received about Zero Trust Information was this graph, which some folks argued misrepresented the situation online:

    A drawing of The Implication of More Information

    While I used a normal distribution for illustrative purposes, not as an assertion about relative volumes, I can understand why some people took it literally; in fact, my only point was to show that an increase on the negative left side of the distribution — whatever that distribution ultimately looks like — was enabled by the exact same forces that allowed for an increase on the positive right side of the distribution.

    I would make two further observations: first, generally speaking, the left side of that distribution — again, whatever it looks like — is almost certainly larger in quantity than the right side; producing misinformation is cheap and can even be automated (i.e. misinformation bots on social media). At the same time, when you consider something like the coronavirus, the right side of the distribution is massively larger in impact.

    What I noticed over the last week, though, is how these things change over time. Consider some variations of the above graph — none of which, I must stress, are making specific assertions about quantities, but which I suspect are directionally correct.

    Here is what the coronavirus information graph might have looked like in early February:

    The information landscape for the coronavirus in early February

    There was a lot of valuable chatter on Twitter discussing the potential impact of the coronavirus, a bit of China-focused coverage in the media (which was largely focused on President Trump’s impeachment trial), and relatively little misinformation. Note also that the absolute amount of information was quite small.

    By late February it looked like this:

    The information landscape for the coronavirus in late February

    There was a huge amount of chatter on Twitter discussing the potential impact of the coronavirus, as well as an increasing number of people — and media — arguing it was “just the flu” (that is the part under misinformation); overall coverage was higher but still relatively muted.

    The first week of March looked like this:

    The information landscape for the coronavirus in early March

    Note how the total amount of information was rising significantly, particularly valuable information on Twitter as well as increased media coverage.

    Then came the events of last Wednesday, and the information graph exploded:

    The information landscape for the coronavirus last week

    Pretty much by definition the most growth in information happened on the left two-thirds of the graph. There were very few people who learned about the coronavirus last week who were offering meaningfully interesting new information on Twitter; there were plenty, though, that were passing along whatever information they could get their hands on without much care as to whether it was accurate or not.

    (Computer) Viruses

    If you will permit a digression about a very different type of virus, back in the 2000s one of the eternal debates on message boards and comment threads was the relative security of Windows versus the Mac. Apple would advertise that Macs had far fewer viruses (brace yourself for a startling lack of social distancing):

    Back in 2006, when this commercial was released, there were several aspects of Unix-based Macs that were more secure than pre-Vista Windows, including a better security model and privilege escalation checks, enforced filesystem permissions, better browser sandboxing, etc. Just as important, though, was the fact that there just weren’t that many Macs, relatively speaking.

    A virus is, after all, a program, which means that someone needs to write it, debug it, and distribute it. Given that over 90% of the PCs in the world ran Windows, writing a virus for Windows offered a far higher return on investment for hackers who were primarily looking to make money.

    Notably, though, if your motivation was something other than money — status, say — you attacked the Mac. That is what earned headlines:

    Hacking a Mac made headlines

    I suspect we see the same sort of dynamic with information on social media in particular; there is very little motivation to create misinformation about topics that very few people are talking about, while there is a lot of motivation — money, mischief, partisan advantage, panic — to create misinformation about very popular topics.

    In other words, the utility of social media as a news source is inversely correlated to how many people are interested in a given topic:

    Utility versus interest on social media

    This makes intuitive sense: social networks are often about friends and family, which are intensely important to you but not to anyone else, because they care about their own friends and family. Needless to say, Macedonian teens aren’t spreading rumors about Aunt Virginia or Uncle Robert.

    They also weren’t talking about the coronavirus — but people who cared were.

    Information Types

    The title Zero Trust Information was an analogy to Zero Trust Networking, which authenticates at the level of the individual instead of relying on the castle-and-moat model, which cares only about whether or not a device is behind a firewall. Generally an individual has to have both a valid password and a verified device to access sensitive information and applications from anywhere on the Internet — including on the corporate network. My argument is that information verification also has to happen at the level of the individual, but what is the equivalent of a password and verified device?

    I think an understanding of the different types of information and how they are distributed gives some helpful heuristics:

    • For emergent information, like the coronavirus in February, you need a high degree of sensitivity and a high tolerance for uncertainty.
    • For facts, like the coronavirus right now, you need a much lower degree of sensitivity and a much lower tolerance of uncertainty: either something is verifiably known or it isn’t.

    You could even make a two-by-two:

    Sensitivity, uncertainty, facts, and emergent information

    It is interesting, by the way, to consider what fits in the other two corners:

    Four types of information

    Narratives around ongoing stories rely on a high degree of sensitivity (in an attempt to find the narrative thread) and a low tolerance for uncertainty (in an attempt to sell the narrative). History, on the other hand, requires a low degree of sensitivity (record what matters) and a high tolerance of uncertainty (we weren’t there).

    Information Business Models

    There is also a business model aspect to these different types of information. To return to The Internet and the Third Estate:

    The economics of printing books was fundamentally different from the economics of copying by hand. The latter was purely an operational expense: output was strictly determined by the input of labor. The former, though, was mostly a capital expense: first, to construct the printing press, and second, to set the type for a book. The best way to pay for these significant up-front expenses was to produce as many copies of a particular book that could be sold.

    How, then, to maximize the number of copies that could be sold? The answer was to print using the most widely used dialect of a particular language, which in turn incentivized people to adopt that dialect, standardizing language across Europe. That, by extension, deepened the affinities between city-states with shared languages, particularly over decades as a shared culture developed around books and later newspapers.

    This model was ideal for information that required a low degree of sensitivity — facts and history. It required a fair bit of expense upfront to create a newspaper or a book, and the way to gain maximum leverage on that expense was to produce things that were valuable to the most people possible.

    The Internet, though, changed the cost equation on the production side too:

    What makes the Internet different from the printing press? Usually when I have written about this topic I have focused on marginal costs: books and newspapers may have been a lot cheaper to produce than handwritten manuscripts, but they are still not-zero. What is published on the Internet, meanwhile, can reach anyone anywhere, drastically increasing supply and placing a premium on discovery; this shifted economic power from publications to Aggregators.

    Just as important, though, particularly in terms of the impact on society, is the drastic reduction in fixed costs. Not only can existing publishers reach anyone, anyone can become a publisher. Moreover, they don’t even need a publication: social media gives everyone the means to broadcast to the entire world.

    This is what made both emergent information and narratives not just financially viable, but in fact more lucrative than facts or history. Emergent information can come from anywhere, which is another way of saying anyone can publish, and most of what people have to say is really only interesting to a small circle of friends and family. That, though, scales perfectly with the Internet’s free distribution, capturing the attention of everyone individually, which can then be sold to advertisers.

    As for narratives, at their best they appeal to the innate human desire for stories and our desire to make sense of the world; at their worst they appeal to people’s confirmation bias and tribal instincts. Either way, they tend to be polarizing, which is bad news in a world of fixed up-front costs, but exactly what you want when production is cheap and attention is scarce.

    Again, neither emergent information nor narratives are inherently bad. Both, though, can lead to bad outcomes: emergent information can be easily overwhelmed by misinformation, particularly when the incentives are wrong, and narratives can themselves corrupt facts. Or, as I narrated last week, they can reveal valuable information that would not otherwise be published.

    The Clarifying Coronavirus

    In some respects this discussion feels beside the point; there are a lot of people suffering right now, and everyone is scared. Some will get COVID-19, some will die, and everyone will have their lives disrupted.

    Perhaps, though, that is why the coronavirus seems so clarifying when it comes to defining information. Emergent information was critical, both in terms of being censored in China, and in how it helped sound the alarm in the U.S. That success, though, was met by the failure of allowing narratives to obscure facts, whether those narratives were “just the flu”, or a suggestion of a media conspiracy, or mocking excitable tech bros on Twitter. And, looming over it all, is the reality that this moment will make it into the history books.


  • Zero Trust Information

    Yesterday Google ordered its entire North American staff to work from home as part of an effort to limit the spread of SARS-CoV-2, the virus that leads to COVID-19. It is an appropriate move for any organization that can do so; Google, along with the other major tech companies, also plans to pay its army of contractors that normally provide services for those employees.

    Google’s larger contribution, though, happened five years ago, when the company led the move to zero trust networking for its internal applications, an approach that has since been adopted by most other tech companies. While this wasn’t explicitly about working from home, it did make it a lot easier to pull off on short notice.

    Zero Trust Networking

    In 1974 Vint Cerf, Yogen Dalal, and Carl Sunshine published a seminal paper entitled “Specification of Internet Transmission Control Program”; it was important technologically because it laid out the specifications for the TCP protocol that undergirds the Internet, but just as notable, at least from a cultural perspective, is that it coined the term “Internet.” The name feels like an accident; most of the paper refers to the “internetwork” Transmission Control Program and “internetwork” packets, which makes sense: networks already existed, the trick was figuring out how to connect them together.

    Networks came first commercially as well. In the 1980s Novell created a “network operating system” that consisted of local servers, ethernet cards, and PC software, enabling local area networks inside large corporations that could share files, printers, and other resources. Novell’s position was eventually undermined by the inclusion of network functionality in client operating systems, commoditized ethernet cards, channel mismanagement, and a full-on assault from Microsoft, but the model of the corporate intranet enabling shared resources remained.

    The problem, though, was the Internet: connecting any one computer on the local area network to the Internet effectively connected all of the computers and servers on the local area network to the Internet. The solution was perimeter-based security, aka the “castle-and-moat” approach: enterprises would set up firewalls that prevented outside access to internal networks. The implication was binary: if you were on the internal network, you were trusted, and if you were outside, you were not.

    Castle and Moat Network Security
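
    To make the binary nature of this trust decision concrete, here is a minimal sketch in Python; the address range and function name are invented for illustration, not drawn from any particular firewall’s configuration:

    ```python
    from ipaddress import ip_address, ip_network

    # Hypothetical corporate LAN, trusted as a single unit
    INTERNAL_NETWORK = ip_network("10.0.0.0/8")

    def castle_and_moat_allows(source_ip: str) -> bool:
        """Perimeter security in one line: inside the moat means trusted,
        outside means blocked, regardless of who the user actually is."""
        return ip_address(source_ip) in INTERNAL_NETWORK

    print(castle_and_moat_allows("10.1.2.3"))     # True: any device on the LAN
    print(castle_and_moat_allows("203.0.113.7"))  # False: even a legitimate employee at home
    ```

    Note that the user never appears in the decision at all; location is everything.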

    This, though, presented two problems: first, if any intruder made it past the firewall, they would have full access to the entire network. Second, if any employee were not physically at work, they were blocked from the network. The solution to the second problem was a virtual private network, which utilized encryption to let a remote employee’s computer operate as if it were physically on the corporate network, but the larger point is the fundamental contradiction represented by these two problems: enabling outside access while trying to keep outsiders out.

    These problems were dramatically exacerbated by the three great trends of the last decade: smartphones, software-as-a-service, and cloud computing. Now instead of the occasional salesperson or traveling executive who needed to connect their laptop to the corporate network, every single employee had a portable device that was connected to the Internet all of the time; now, instead of accessing applications hosted on an internal network, employees wanted to access applications operated by a SaaS provider; now, instead of corporate resources being on-premises, they were in public clouds run by AWS or Microsoft. What kind of moat could possibly contain all of these use cases?

    The answer is to not even try: instead of trying to put everything inside of a castle, put everything in the castle outside the moat, and assume that everyone is a threat. Thus the name: zero-trust networking.

    Zero Trust Networking

    In this model trust is at the level of the verified individual: access (usually) depends on multi-factor authentication (such as a password and a trusted device, or temporary code), and even once authenticated an individual only has access to granularly-defined resources or applications. This model solves all of the issues inherent to a castle-and-moat approach:

    • If there is no internal network, there is no longer the concept of an outside intruder or a remote worker.
    • Individual-based authentication scales on the user side across devices and on the application side across on-premises resources, SaaS applications, or the public cloud (particularly when implemented with single sign-on services like Okta or Azure Active Directory).

    In short, zero trust computing starts with Internet assumptions (everyone and everything is connected, both good and bad) and leverages the power of zero transaction costs to make continuous access decisions at a far more distributed and granular level than would ever be possible when it comes to physical security, rendering the fundamental contradiction at the core of castle-and-moat security moot.
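
    As a counterpart to the perimeter sketch above, here is a minimal illustration of the per-request, per-resource decision this model implies; the user names, resources, and entitlement store are hypothetical, not any real product’s API:

    ```python
    from dataclasses import dataclass

    @dataclass
    class Request:
        user: str
        password_ok: bool       # first factor: something you know
        second_factor_ok: bool  # second factor: trusted device or temporary code
        resource: str

    # Granular, per-user entitlements (hypothetical entries); in practice
    # these would live in a single sign-on service rather than in code
    ENTITLEMENTS = {
        "alice": {"payroll-app", "wiki"},
        "bob": {"wiki"},
    }

    def zero_trust_allows(req: Request) -> bool:
        """Every request, from anywhere on the Internet, is authenticated and
        authorized individually; network location never enters the decision."""
        authenticated = req.password_ok and req.second_factor_ok
        authorized = req.resource in ENTITLEMENTS.get(req.user, set())
        return authenticated and authorized

    # Even a fully authenticated user only reaches granularly-defined resources:
    print(zero_trust_allows(Request("bob", True, True, "wiki")))         # True
    print(zero_trust_allows(Request("bob", True, True, "payroll-app")))  # False
    ```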

    Castles and Moats

    Castle-and-moat security is hardly limited to corporate information; it is the way societies have thought about information generally from, well, the times of actual castles-and-moats. I wrote last fall in The Internet and the Third Estate:

    In the Middle Ages the principal organizing entity for Europe was the Catholic Church. Relatedly, the Catholic Church also held a de facto monopoly on the distribution of information: most books were in Latin, copied laboriously by hand by monks. There was some degree of ethnic affinity between various members of the nobility and the commoners on their lands, but underneath the umbrella of the Catholic Church were primarily independent city-states.

    With castles and moats!

    The printing press changed all of this. Suddenly Martin Luther, whose critique of the Catholic Church was strikingly similar to Jan Hus 100 years earlier, was not limited to spreading his beliefs to his local area (Prague in the case of Hus), but could rather see those beliefs spread throughout Europe; the nobility seized the opportunity to interpret the Bible in a way that suited their local interests, gradually shaking off the control of the Catholic Church.

    This resulted in new gatekeepers:

    Just as the Catholic Church ensured its primacy by controlling information, the modern meritocracy has done the same, not so much by controlling the press but rather by incorporating it into a broader national consensus.

    Here again economics play a role: while books are still sold for a profit, over the last 150 years newspapers have become more widely read, and then television became the dominant medium. All, though, were vehicles for the “press”, which was primarily funded through advertising, which was inextricably tied up with large enterprise…More broadly, the press, big business, and politicians all operated within a broad, nationally-oriented consensus.

    The Internet, though, threatens second estate gatekeepers by giving anyone the power to publish:

    Just as important, though, particularly in terms of the impact on society, is the drastic reduction in fixed costs. Not only can existing publishers reach anyone, anyone can become a publisher. Moreover, they don’t even need a publication: social media gives everyone the means to broadcast to the entire world. Read again Zuckerberg’s description of the Fifth Estate:

    People having the power to express themselves at scale is a new kind of force in the world — a Fifth Estate alongside the other power structures of society. People no longer have to rely on traditional gatekeepers in politics or media to make their voices heard, and that has important consequences.

    It is difficult to overstate how much of an understatement that is. I just recounted how the printing press effectively overthrew the First Estate, leading to the establishment of nation-states and the creation and empowerment of a new nobility. The implication of overthrowing the Second Estate, via the empowerment of commoners, is almost too radical to imagine.

    The current gatekeepers are sure it is a disaster, especially “misinformation.” Everyone from Macedonian teenagers to Russian intelligence to determined partisans and politicians is held up as an existential threat, and it’s not hard to see why: the current media model is predicated on being the primary source of information, and if there is false information, surely the public is in danger of being misinformed?

    The Implication of More Information

    The problem, of course, is that focusing on misinformation — which to be clear, absolutely exists — is to overlook the other part of the “everyone is a publisher” equation: there has been an explosion in the amount of information available, true or not. Suppose that all published information followed a normal distribution (I am using a normal distribution for illustrative purposes only, not claiming it is accurate; obviously in sheer volume, given the ease with which it is generated, there is more misinformation):

    The normal distribution of information

    Before the Internet, the total amount of misinformation would be low in relative and absolute terms, because the total amount of information would be low:

    Less information means less misinformation

    After the Internet, though, the total amount of information is so much greater that even if the total amount of misinformation remains just as low relatively speaking, the absolute amount will be correspondingly greater:

    More information = more misinformation
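
    The arithmetic is simple enough to spell out; a trivial sketch, with wholly invented volumes and a misinformation share held constant by assumption, shows the absolute amount growing in lockstep with the total:

    ```python
    MISINFO_SHARE = 0.01  # assumption: misinformation stays a constant 1% of everything published

    # Invented volumes, purely to illustrate pre-Internet versus Internet scale
    for total_items in (10_000, 10_000_000, 10_000_000_000):
        misinfo = int(total_items * MISINFO_SHARE)
        print(f"{total_items:>14,} items published -> {misinfo:>12,} pieces of misinformation")
    ```

    The same multiplication applies to the valuable tail of the distribution, a point I will return to below.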

    It follows, then, that it is easier than ever to find bad information if you look hard enough, and helpfully, search engines are very efficient at doing just that. This makes it easy to write stories like this New York Times article on Sunday:

    As the coronavirus has spread across the world, so too has misinformation about it, despite an aggressive effort by social media companies to prevent its dissemination. Facebook, Google and Twitter said they were removing misinformation about the coronavirus as fast as they could find it, and were working with the World Health Organization and other government organizations to ensure that people got accurate information.

    But a search by The New York Times found dozens of videos, photographs and written posts on each of the social media platforms that appeared to have slipped through the cracks. The posts were not limited to English. Many were originally in languages ranging from Hindi and Urdu to Hebrew and Farsi, reflecting the trajectory of the virus as it has traveled around the world…The spread of false and malicious content about the coronavirus has been a stark reminder of the uphill battle fought by researchers and internet companies. Even when the companies are determined to protect the truth, they are often outgunned and outwitted by the internet’s liars and thieves. There is so much inaccurate information about the virus, the W.H.O. has said, that it was confronting an “infodemic.”

    As I noted in the Daily Update on Monday:

    The phrase “a search by The New York Times” is the tell here: the power of search in a world defined by the abundance of information is that you can find whatever it is you wish to; perhaps unsurprisingly, the New York Times wished to find misinformation on the major tech platforms, and even less surprisingly, it succeeded.

    A far more interesting story, to my mind, is about the other side of that distribution. Sure, the implication of the Internet making everyone a publisher is that there is far more misinformation on an absolute basis, but that also suggests there is far more valuable information that was not previously available:

    More information = more valuable information

    It is hard to think of a better example than the last two months and the spread of COVID-19. From January on there has been extensive information about SARS-CoV-2 and COVID-19 shared on Twitter in particular, including supporting blog posts and links to medical papers published at astounding speed, often in defiance of traditional media. In addition, multiple experts, including epidemiologists and public health officials, have been offering their opinions directly.

    Moreover, particularly in the last several weeks, that burgeoning network has been sounding the alarm about the crisis hitting the U.S. Indeed, it is only because of Twitter that we knew that the crisis had long since started (to return to the distribution illustration, in terms of impact the skew goes in the opposite direction of the volume).

    The Seattle Flu Study Story

    Perhaps the single most important piece of information about the COVID-19 crisis in the United States was this March 1 tweet thread from Trevor Bedford, a member of the Seattle Flu Study team:

    You can draw a direct line from this tweet thread to widespread social distancing, particularly on the West Coast: many companies are working from home, travel has plummeted, and conferences are being canceled. Yes, there should absolutely be more, but every little bit helps; information that came not from authority figures or gatekeepers but rather from Twitter is absolutely going to save lives.

    What is remarkable about these decisions, though, is that they were made in an absence of official data. The President has spent weeks downplaying the impending crisis, and the CDC and FDA have put handcuffs on state and private labs even as they have completely dropped the ball on test kits that would show what is surely a significant and rapidly growing number of cases. Incredibly, as this New York Times story documents, those handcuffs were quite explicitly applied to Bedford’s team:

    [In late January] the Washington State Department of Health began discussions with the Seattle Flu Study already going on in the state. But there was a hitch: The flu project primarily used research laboratories, not clinical ones, and its coronavirus test was not approved by the Food and Drug Administration. And so the group was not certified to provide test results to anyone outside of their own investigators…

    C.D.C. officials repeatedly said it would not be possible [to test for coronavirus]. “If you want to use your test as a screening tool, you would have to check with F.D.A.,” Gayle Langley, an officer at the C.D.C.’s National Center for Immunization and Respiratory Disease, wrote back in an email on Feb. 16. But the F.D.A. could not offer the approval because the lab was not certified as a clinical laboratory under regulations established by the Centers for Medicare & Medicaid Services, a process that could take months.

    The Seattle Flu Study, led by Dr. Helen Y. Chu, finally decided to ignore the CDC:

    On the other side of the country in Seattle, Dr. Chu and her flu study colleagues, unwilling to wait any longer, decided to begin running samples. A technician in the laboratory of Dr. Lea Starita who was testing samples soon got a hit…

    “What we were allowed to do was to keep it to ourselves,” Dr. Chu said. “But what we felt like we needed to do was to tell public health.” They decided the right thing to do was to inform local health officials…

    Later that day, the investigators and Seattle health officials gathered with representatives of the C.D.C. and the F.D.A. to discuss what happened. The message from the federal government was blunt. “What they said on that phone call very clearly was cease and desist to Helen Chu,” Dr. Lindquist remembered. “Stop testing.”

    Still, the troubling finding reshaped how officials understood the outbreak. Seattle Flu Study scientists quickly sequenced the genome of the virus, finding a genetic variation also present in the country’s first coronavirus case.

    And thus came Bedford’s tweetstorm, and the response from private companies and individuals that, while weeks later than it should have been, was still far earlier than it might have been in a world of gatekeepers.

    The Internet and Individual Verification

    The Internet, famously, grew out of a Department of Defense project called ARPANET; that was the network Cerf, Dalal, and Sunshine developed TCP for. Contrary to popular myth, though, the goal was not to build a communications network that could survive a nuclear attack, but something more prosaic: there were a limited number of high-powered computers available to researchers, and the Advanced Research Projects Agency (ARPA) wanted to make it easier to access them.

    There is a reason that the nuclear war motive has stuck, though: for one, it was the motivation for the theoretical work around packet switching that became the TCP/IP protocol. For another, the Internet really is that resilient: despite the best efforts of gatekeepers, information of all types flows freely.1 Yes, that includes misinformation, but it also includes extremely valuable information; in the case of COVID-19 it will prove to have made a very bad problem slightly better.

    This is not to say that the Internet means that everything is going to be ok, either in the world generally or the coronavirus crisis specifically. But once we get through this crisis, it will be worth keeping in mind the story of Twitter and the heroic Seattle Flu Study team: what stopped them from doing critical research was too much centralization of authority and bureaucratic decision-making; what ultimately made their research materially accelerate the response of individuals and companies all over the country was first their bravery and sense of duty, and secondly the fact that on the Internet anyone can publish anything.

    To that end, instead of trying to fight the Internet — to try and build a castle and moat around information, with all of the impossible tradeoffs that result — how much more value might there be in embracing the deluge? All available evidence is that young people in particular are figuring out the importance of individual verification; for example, this study from the Reuters Institute at Oxford:

    We didn’t find, in our interviews, quite the crisis of trust in the media that we often hear about among young people. There is a general disbelief at some of the politicised opinion thrown around, but there is also a lot of appreciation of the quality of some of the individuals’ favoured brands. Fake news itself is seen as more of a nuisance than a democratic meltdown, especially given that the perceived scale of the problem is relatively small compared with the public attention it seems to receive. Users therefore feel capable of taking these issues into their own hands.

    A previous study by Reuters Institute also found that social media exposed more viewpoints relative to offline news consumption, and another study suggested that political polarization was greatest amongst older people who used the Internet the least.

    Again, this is not to say that everything is fine, either in terms of the coronavirus in the short term or social media and unmediated information in the medium term. There is, though, reason for optimism, and a belief that things will get better the more quickly we embrace the idea that fewer gatekeepers and more information mean more innovation and good ideas in proportion to the flood of misinformation, which people who grew up with the Internet are already learning to ignore.


    1. China is an obvious exception; I addressed the contrast in the aforelinked “The Internet and the Third Estate”.