Stratechery Plus Update

  • Zero Trust Information

    Yesterday Google ordered its entire North American staff to work from home as part of an effort to limit the spread of SARS-CoV-2, the virus that causes COVID-19. It is an appropriate move for any organization that can do so; furthermore, Google, along with the other major tech companies, plans to pay its army of contractors that normally provide services for those employees.

    Google’s larger contribution, though, happened five years ago, when the company led the move to zero trust networking for its internal applications, an approach that has since been adopted by most other tech companies. While this wasn’t explicitly about working from home, it did make working from home a lot easier to pull off on short notice.

    Zero Trust Networking

    In 1974 Vint Cerf, Yogen Dalal, and Carl Sunshine published a seminal paper entitled “Specification of Internet Transmission Control Program”; it was important technologically because it laid out the specifications for the TCP protocol that undergirds the Internet, but just as notable, at least from a cultural perspective, is that it coined the term “Internet.” The name feels like an accident; most of the paper refers to the “internetwork” Transmission Control Program and “internetwork” packets, which makes sense: networks already existed, the trick was figuring out how to connect them together.

    Networks came first commercially as well. In the 1980s Novell created a “network operating system” consisting of local servers, ethernet cards, and PC software that enabled local area networks inside large corporations, making it possible to share files, printers, and other resources. Novell’s position was eventually undermined by the inclusion of network functionality in client operating systems, commoditized ethernet cards, channel mismanagement, and a full-on assault from Microsoft, but the model of the corporate intranet enabling shared resources remained.

    The problem, though, was the Internet: connecting any one computer on the local area network to the Internet effectively connected all of the computers and servers on the local area network to the Internet. The solution was perimeter-based security, aka the “castle-and-moat” approach: enterprises would set up firewalls that prevented outside access to internal networks. The implication was binary: if you were on the internal network, you were trusted, and if you were outside, you were not.

    Castle and Moat Network Security

    This, though, presented two problems: first, if any intruder made it past the firewall, they would have full access to the entire network. Second, if any employee were not physically at work, they were blocked from the network. The solution to the second problem was a virtual private network, which utilized encryption to let a remote employee’s computer operate as if it were physically on the corporate network, but the larger point is the fundamental contradiction represented by these two problems: enabling outside access while trying to keep outsiders out.

    These problems were dramatically exacerbated by the three great trends of the last decade: smartphones, software-as-a-service, and cloud computing. Now instead of the occasional salesperson or traveling executive who needed to connect their laptop to the corporate network, every single employee had a portable device that was connected to the Internet all of the time; now, instead of accessing applications hosted on an internal network, employees wanted to access applications operated by a SaaS provider; now, instead of corporate resources being on-premises, they were in public clouds run by AWS or Microsoft. What kind of moat could possibly contain all of these use cases?

    The answer is to not even try: instead of trying to put everything inside of a castle, put everything in the castle outside the moat, and assume that everyone is a threat. Thus the name: zero-trust networking.

    Zero Trust Networking

    In this model trust is at the level of the verified individual: access (usually) depends on multi-factor authentication (such as a password and a trusted device, or temporary code), and even once authenticated an individual only has access to granularly-defined resources or applications. This model solves all of the issues inherent to a castle-and-moat approach:

    • If there is no internal network, there is no longer any concept of an outside intruder or a remote worker.
    • Individual-based authentication scales on the user side across devices and on the application side across on-premises resources, SaaS applications, or the public cloud (particularly when implemented with single-sign on services like Okta or Azure Active Directory).

    In short, zero trust computing starts with Internet assumptions: everyone and everything is connected, both good and bad, and leverages the power of zero transaction costs to make continuous access decisions at a far more distributed and granular level than would ever be possible when it comes to physical security, rendering the fundamental contradiction at the core of castle-and-moat security moot.
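    As a concrete (and heavily simplified) sketch of such a per-request access decision, consider the following Python; the user names, resource names, and POLICY table are hypothetical illustrations for this article, not any real product’s API:

```python
from dataclasses import dataclass

@dataclass
class Request:
    user: str            # identity established by authentication
    mfa_verified: bool   # e.g. password plus a trusted device or one-time code
    device_trusted: bool # result of a device posture check
    resource: str        # the specific application being requested

# Granular, per-resource grants for verified individuals (illustrative only);
# note that there is no notion of an "internal network" anywhere.
POLICY = {
    "payroll-app": {"alice"},
    "wiki": {"alice", "bob"},
}

def allow(req: Request) -> bool:
    """Every request is evaluated individually; nothing is trusted by default."""
    return (
        req.mfa_verified
        and req.device_trusted
        and req.user in POLICY.get(req.resource, set())
    )
```

    The design point the sketch captures is that every access decision turns on who the verified individual is, how they authenticated, and whether policy grants them that specific resource; being “on the network” confers nothing.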

    Castles and Moats

    Castle-and-moat security is hardly limited to corporate information; it is the way societies have thought about information generally from, well, the times of actual castles-and-moats. I wrote last fall in The Internet and the Third Estate:

    In the Middle Ages the principal organizing entity for Europe was the Catholic Church. Relatedly, the Catholic Church also held a de facto monopoly on the distribution of information: most books were in Latin, copied laboriously by hand by monks. There was some degree of ethnic affinity between various members of the nobility and the commoners on their lands, but underneath the umbrella of the Catholic Church were primarily independent city-states.

    With castles and moats!

    The printing press changed all of this. Suddenly Martin Luther, whose critique of the Catholic Church was strikingly similar to Jan Hus 100 years earlier, was not limited to spreading his beliefs to his local area (Prague in the case of Hus), but could rather see those beliefs spread throughout Europe; the nobility seized the opportunity to interpret the Bible in a way that suited their local interests, gradually shaking off the control of the Catholic Church.

    This resulted in new gatekeepers:

    Just as the Catholic Church ensured its primacy by controlling information, the modern meritocracy has done the same, not so much by controlling the press but rather by incorporating it into a broader national consensus.

    Here again economics play a role: while books are still sold for a profit, over the last 150 years newspapers have become more widely read, and then television became the dominant medium. All, though, were vehicles for the “press”, which was primarily funded through advertising, which was inextricably tied up with large enterprise…More broadly, the press, big business, and politicians all operated within a broad, nationally-oriented consensus.

    The Internet, though, threatens second estate gatekeepers by giving anyone the power to publish:

    Just as important, though, particularly in terms of the impact on society, is the drastic reduction in fixed costs. Not only can existing publishers reach anyone, anyone can become a publisher. Moreover, they don’t even need a publication: social media gives everyone the means to broadcast to the entire world. Read again Zuckerberg’s description of the Fifth Estate:

    People having the power to express themselves at scale is a new kind of force in the world — a Fifth Estate alongside the other power structures of society. People no longer have to rely on traditional gatekeepers in politics or media to make their voices heard, and that has important consequences.

    It is difficult to overstate how much of an understatement that is. I just recounted how the printing press effectively overthrew the First Estate, leading to the establishment of nation-states and the creation and empowerment of a new nobility. The implication of overthrowing the Second Estate, via the empowerment of commoners, is almost too radical to imagine.

    The current gatekeepers are sure it is a disaster, especially “misinformation.” Everything from Macedonian teenagers to Russian intelligence to determined partisans and politicians is held up as an existential threat, and it’s not hard to see why: the current media model is predicated on being the primary source of information, and if there is false information, surely the public is in danger of being misinformed?

    The Implication of More Information

    The problem, of course, is that focusing on misinformation — which to be clear, absolutely exists — is to overlook the other part of the “everyone is a publisher” equation: there has been an explosion in the amount of information available, true or not. Suppose that all published information followed a normal distribution (I am using a normal distribution for illustrative purposes only, not claiming it is accurate; obviously in sheer volume, given the ease with which it is generated, there is more misinformation):

    The normal distribution of information

    Before the Internet, the total amount of misinformation would be low in relative and absolute terms, because the total amount of information would be low:

    Less information means less misinformation

    After the Internet, though, the total amount of information is so much greater that even if the total amount of misinformation remains just as low relatively speaking, the absolute amount will be correspondingly greater:

    More information = more misinformation

    It follows, then, that it is easier than ever to find bad information if you look hard enough, and helpfully, search engines are very efficient at doing just that. This makes it easy to write stories like this New York Times article on Sunday:

    As the coronavirus has spread across the world, so too has misinformation about it, despite an aggressive effort by social media companies to prevent its dissemination. Facebook, Google and Twitter said they were removing misinformation about the coronavirus as fast as they could find it, and were working with the World Health Organization and other government organizations to ensure that people got accurate information.

    But a search by The New York Times found dozens of videos, photographs and written posts on each of the social media platforms that appeared to have slipped through the cracks. The posts were not limited to English. Many were originally in languages ranging from Hindi and Urdu to Hebrew and Farsi, reflecting the trajectory of the virus as it has traveled around the world…The spread of false and malicious content about the coronavirus has been a stark reminder of the uphill battle fought by researchers and internet companies. Even when the companies are determined to protect the truth, they are often outgunned and outwitted by the internet’s liars and thieves. There is so much inaccurate information about the virus, the W.H.O. has said it was confronting an “infodemic.”

    As I noted in the Daily Update on Monday:

    The phrase “a search by The New York Times” is the tell here: the power of search in a world defined by the abundance of information is that you can find whatever it is you wish to; perhaps unsurprisingly, the New York Times wished to find misinformation on the major tech platforms, and even less surprisingly, it succeeded.

    A far more interesting story, to my mind, is about the other side of that distribution. Sure, the implication of the Internet making everyone a publisher is that there is far more misinformation on an absolute basis, but that also suggests there is far more valuable information that was not previously available:

    More information = more valuable information
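    The arithmetic behind these illustrations is easy to make concrete. In the following sketch the distribution, thresholds, and volumes are arbitrary assumptions chosen only to mirror the normal-distribution illustration above: hold the relative share of misinformation and of valuable information constant, and a 1,000x increase in publishing volume produces a 1,000x increase in the absolute amount of both.

```python
from statistics import NormalDist  # Python standard library (3.8+)

# Purely illustrative: score each published item on a notional "quality"
# scale drawn from a standard normal distribution.
quality = NormalDist(mu=0, sigma=1)

bad_frac = quality.cdf(-2)       # share below a "misinformation" threshold (~2.3%)
good_frac = 1 - quality.cdf(2)   # share above a "highly valuable" threshold (~2.3%)

for label, total in [("pre-Internet", 1_000), ("post-Internet", 1_000_000)]:
    print(f"{label}: {total:>9,} items -> "
          f"~{total * bad_frac:,.0f} misinformation, "
          f"~{total * good_frac:,.0f} highly valuable")
```

    The relative shares never change; only the absolute counts do, on both tails at once.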

    It is hard to think of a better example than the last two months and the spread of COVID-19. From January on there has been extensive information about SARS-CoV-2 and COVID-19 shared on Twitter in particular, including supporting blog posts and links to medical papers published at astounding speed, often in defiance of traditional media. In addition, multiple experts, including epidemiologists and public health officials, have been offering up their opinions directly.

    Moreover, particularly in the last several weeks, that burgeoning network has been sounding the alarm about the crisis hitting the U.S. Indeed, it is only because of Twitter that we knew that the crisis had long since started (to return to the distribution illustration, in terms of impact the skew goes in the opposite direction of the volume).

    The Seattle Flu Study Story

    Perhaps the single most important piece of information about the COVID-19 crisis in the United States was this March 1 tweet thread from Trevor Bedford, a member of the Seattle Flu Study team:

    You can draw a direct line from this tweet thread to widespread social distancing, particularly on the West Coast: many companies are working from home, travel has plummeted, and conferences are being canceled. Yes, there should absolutely be more, but every little bit helps; information that came not from authority figures or gatekeepers but rather from Twitter is absolutely going to save lives.

    What is remarkable about these decisions, though, is that they were made in an absence of official data. The President has spent weeks downplaying the impending crisis, and the CDC and FDA have put handcuffs on state and private labs even as they have completely dropped the ball on test kits that would show what is surely a significant and rapidly growing number of cases. Incredibly, as this New York Times story documents, those handcuffs were quite explicitly applied to Bedford’s team:

    [In late January] the Washington State Department of Health began discussions with the Seattle Flu Study already going on in the state. But there was a hitch: The flu project primarily used research laboratories, not clinical ones, and its coronavirus test was not approved by the Food and Drug Administration. And so the group was not certified to provide test results to anyone outside of their own investigators…

    C.D.C. officials repeatedly said it would not be possible [to test for coronavirus]. “If you want to use your test as a screening tool, you would have to check with F.D.A.,” Gayle Langley, an officer at the C.D.C.’s National Center for Immunization and Respiratory Disease, wrote back in an email on Feb. 16. But the F.D.A. could not offer the approval because the lab was not certified as a clinical laboratory under regulations established by the Centers for Medicare & Medicaid Services, a process that could take months.

    The Seattle Flu Study, led by Dr. Helen Y. Chu, finally decided to ignore the CDC:

    On the other side of the country in Seattle, Dr. Chu and her flu study colleagues, unwilling to wait any longer, decided to begin running samples. A technician in the laboratory of Dr. Lea Starita who was testing samples soon got a hit…

    “What we were allowed to do was to keep it to ourselves,” Dr. Chu said. “But what we felt like we needed to do was to tell public health.” They decided the right thing to do was to inform local health officials…

    Later that day, the investigators and Seattle health officials gathered with representatives of the C.D.C. and the F.D.A. to discuss what happened. The message from the federal government was blunt. “What they said on that phone call very clearly was cease and desist to Helen Chu,” Dr. Lindquist remembered. “Stop testing.”

    Still, the troubling finding reshaped how officials understood the outbreak. Seattle Flu Study scientists quickly sequenced the genome of the virus, finding a genetic variation also present in the country’s first coronavirus case.

    And thus came Bedford’s tweetstorm, and the response from private companies and individuals that, while weeks later than it should have been, was still far earlier than it might have been in a world of gatekeepers.

    The Internet and Individual Verification

    The Internet, famously, grew out of a Department of Defense project called ARPANET; that was the network Cerf, Dalal, and Sunshine developed TCP for. Contrary to popular myth, though, the goal was not to build a communications network that could survive a nuclear attack, but something more prosaic: there were a limited number of high-powered computers available to researchers, and the Advanced Research Projects Agency (ARPA) wanted to make it easier to access them.

    There is a reason that the nuclear war motive has stuck, though: for one, it was the motivation for the theoretical work around packet switching that became the TCP/IP protocol; for another, the Internet really is that resilient. Despite the best efforts of gatekeepers, information of all types flows freely.1 Yes, that includes misinformation, but it also includes extremely valuable information; in the case of COVID-19 it will prove to have made a very bad problem slightly better.

    This is not to say that the Internet means that everything is going to be ok, either in the world generally or the coronavirus crisis specifically. But once we get through this crisis, it will be worth keeping in mind the story of Twitter and the heroic Seattle Flu Study team: what stopped them from doing critical research was too much centralization of authority and bureaucratic decision-making; what ultimately made their research materially accelerate the response of individuals and companies all over the country was first their bravery and sense of duty, and secondly the fact that on the Internet anyone can publish anything.

    To that end, instead of trying to fight the Internet — to build a castle and moat around information, with all of the impossible tradeoffs that result — how much more value might there be in embracing the deluge? All available evidence suggests that young people in particular are figuring out the importance of individual verification; see, for example, this study from the Reuters Institute at Oxford:

    We didn’t find, in our interviews, quite the crisis of trust in the media that we often hear about among young people. There is a general disbelief at some of the politicised opinion thrown around, but there is also a lot of appreciation of the quality of some of the individuals’ favoured brands. Fake news itself is seen as more of a nuisance than a democratic meltdown, especially given that the perceived scale of the problem is relatively small compared with the public attention it seems to receive. Users therefore feel capable of taking these issues into their own hands.

    A previous study by Reuters Institute also found that social media exposed more viewpoints relative to offline news consumption, and another study suggested that political polarization was greatest amongst older people who used the Internet the least.

    Again, this is not to say that everything is fine, either in terms of the coronavirus in the short term or social media and unmediated information in the medium term. There is, though, reason for optimism, and a belief that things will get better the more quickly we embrace the idea that fewer gatekeepers and more information mean innovation and good ideas in proportion to the flood of misinformation that people who grew up with the Internet are already learning to ignore.


    1. China is an obvious exception; I addressed the contrast in the aforelinked “The Internet and the Third Estate”. 


  • Email Addresses and Razor Blades

    At first glance, the proposed (and now-withdrawn) acquisition of Harry’s Razors by Edgewell Personal Care Co. — the makers of Schick — and Intuit’s announced acquisition of Credit Karma don’t appear to have much in common. There is, though, a common thread: digital advertising, and the dominance of Facebook and Google.

    FTC Sues to Block Harry’s Acquisition

    Start with Harry’s: the acquisition, which was announced last May, is off the table after the FTC filed a suit to block the acquisition. Interestingly, despite the fact that Harry’s is associated with direct-to-consumer (DTC), the reasoning the FTC used was Harry’s presence in brick-and-mortar retail. From the FTC complaint:

    Harry’s and Dollar Shave Club quickly succeeded in — and largely filled — the previously untapped online space. But the successful entry by Harry’s and Dollar Shave Club with their online Direct to Consumer (“DTC”) models did not stop the price increases by P&G and Edgewell, both of which sold their products primarily through brick-and-mortar retailers.

    Significant change came when Harry’s made the first — and, to date, only — successful jump from an online DTC platform into brick-and-mortar retail. In August 2016, Harry’s launched exclusively at Target with suggested retail prices several dollars below the most comparable Schick and Gillette products, a significant discount. Harry’s arrival in Target made a substantial impact, with Harry’s immediately winning customers from Edgewell and P&G. Edgewell described Harry’s trajectory as one of “[REDACTED]” and observed that Harry’s took “[REDACTED].”

    Harry’s entry at Target ended the long-standing practice of reciprocal price increases by Gillette and Edgewell. Shortly after Harry’s successful launch at Target, P&G implemented a “[REDACTED]” price reduction across its portfolio of razors, reversing course on its practice of leading yearly price increases. Edgewell changed course as well, abandoning its strategy of being a “[REDACTED]” of Gillette’s pricing actions. Rather than match Gillette’s price decrease, Edgewell began tracking Harry’s growth and increased promotional spend (funding for discounts and other promotions) [REDACTED]. Edgewell hoped that this effort would “[REDACTED],” [REDACTED].

    The FTC went on to note:

    Harry’s significant entry into brick-and-mortar retail transformed the wet shave razor market from a comfortable duopoly to a competitive battleground. Edgewell, in particular, has found itself fighting the threat that Harry’s poses to both its branded products and its private label offerings (i.e., razors manufactured by Edgewell for a retailer partner, to be sold under the retailer’s brand). Consumers benefited from the resulting price discounts and the introduction of additional Edgewell branded and private label choices.

    The Proposed Acquisition is likely to result in significant harm by eliminating competition between important head-to-head competitors. The Proposed Acquisition also will harm competition by removing a particularly disruptive competitor from the marketplace at a time when that competitor is currently expanding into additional retailers.

    Finally, the FTC concluded that the acquisition was presumptively illegal using the Herfindahl-Hirschman Index, which measures concentration in a given market:

    Under the 2010 U.S. Department of Justice and Federal Trade Commission Horizontal Merger Guidelines (“Merger Guidelines”), a post-acquisition market concentration level above 2,500 points, as measured by the Herfindahl-Hirschman Index (“HHI”), and an increase in HHI of more than 200 points renders an acquisition presumptively unlawful. Transactions in highly concentrated markets—markets with an HHI above 2,500 points — with an HHI increase of more than 100 points potentially raise significant competitive concerns and warrant scrutiny. The HHI is calculated by totaling the squares of the market shares of every firm in the relevant market pre- and post-acquisition.

    The market for the manufacture and sale of wet shave razors in the United States is already highly concentrated, with an HHI of over 3,000. The Proposed Acquisition increases the concentration in this market by more than 200 points and is therefore presumptively illegal…Changes in HHI based on current market shares understate the competitive significance of the Proposed Acquisition because Harry’s continues to expand into additional brick-and-mortar retailers. Recognizing that the Proposed Acquisition will arrest Harry’s independent expansion, it is appropriate to analyze Harry’s competitive significance by using prior entry events to project future competitive significance. Moreover, current market shares especially understate the competitive significance of Harry’s in markets that include sales of women’s razors because Harry’s Flamingo product launched very recently.
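    The HHI arithmetic the complaint describes is simple to reproduce. In the sketch below the market shares are hypothetical placeholders (the FTC’s actual figures are redacted); they are chosen only to be consistent with the complaint’s stated thresholds of a pre-merger HHI over 3,000 and an increase of more than 200 points:

```python
def hhi(shares):
    """Herfindahl-Hirschman Index: the sum of squared market shares,
    with shares expressed in percentage points (0-100)."""
    return sum(s ** 2 for s in shares)

# Hypothetical, illustrative shares only -- not the FTC's actual figures.
pre_merger = {"Gillette": 47, "Edgewell": 29, "Harry's": 9, "Others": 15}
post_merger = {"Gillette": 47, "Edgewell + Harry's": 38, "Others": 15}

pre = hhi(pre_merger.values())    # 47^2 + 29^2 + 9^2 + 15^2 = 3356
post = hhi(post_merger.values())  # 47^2 + 38^2 + 15^2 = 3878
increase = post - pre             # 522, well above the 200-point threshold
```

    Note that merging two firms always raises the HHI by twice the product of their shares (here 2 × 29 × 9 = 522), which is why combining sizable head-to-head competitors trips the presumption so easily.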

    Schick abandoned the deal a few days after the FTC filed suit; Harry’s, surprisingly, did not negotiate a breakup fee.

    Acquisitions and Incentives

    I quoted fairly extensively from the FTC’s complaint because, frankly, it’s quite compelling. Harry’s emergence led to lower prices for consumers, and Edgewell was almost certainly looking to relieve said downward pressure on prices, along with other more upstanding motivations like gaining new management expertise and its own DTC channel.

    At the same time, you can see some of the problematic incentives inherent in blocking a merger that I discussed two weeks ago. First, given that most CPG categories are dominated by a small number of incumbents (given the scale advantages necessary to compete for shelf space globally), investors have to be increasingly wary of investing in the space now that the precedent is that an acquisition will be ruled anticompetitive; much will rest on Harry’s ability to fulfill the FTC’s faith in it as a standalone competitor (of which I am dubious, for reasons I will explain). Second, Harry’s was in some respects punished for its leap from online-only sales to brick-and-mortar sales; as noted in the excerpt above, Harry’s success in online sales didn’t have any appreciable impact on the brick-and-mortar market. Is the lesson for other DTC companies to stick with online sales alone, for fear of foreclosing the possibility of being acquired?

    That drives to a broader question: why did Harry’s feel the need to pursue the brick-and-mortar market at all?

    The Conservation of Attractive Consumer Packaged Goods

    Much of the excitement around DTC was about the potential of eliminating the middleman; the margin taken by retailers could instead be devoted to a better product, lower prices, and better margins for the brand in question. The problem is that value chain transformation is far more dynamic than that. Go back to The Conservation of Attractive Profits, which I wrote about in the context of Netflix in 2015:

    The Law of Conservation of Attractive Profits was first explained by Clayton Christensen in his 2003 book The Innovator’s Solution:

    Formally, the law of conservation of attractive profits states that in the value chain there is a requisite juxtaposition of modular and interdependent architectures, and of reciprocal processes of commoditization and de-commoditization, that exists in order to optimize the performance of what is not good enough. The law states that when modularity and commoditization cause attractive profits to disappear at one stage in the value chain, the opportunity to earn attractive profits with proprietary products will usually emerge at an adjacent stage.

    That’s a bit of a mouthful, but the example that follows in the book shows how powerful this observation is:

    If you think about it in a hardware context, because historically the microprocessor had not been good enough, then its architecture inside was proprietary and optimized and that meant that the computer’s architecture had to be modular and conformable to allow the microprocessor to be optimized. But in a little hand held device like the RIM BlackBerry, it’s the device itself that’s not good enough, and you therefore cannot have a one-size-fits-all Intel processor inside of a BlackBerry, but instead, the processor itself has to be modular and conformable so that it has on it only the functionality that the BlackBerry needs and none of the functionality that it doesn’t need. So again, one side or the other needs to be modular and conformable to optimize what’s not good enough.

    Did you catch that? That was Christensen, a full four years before the iPhone, explaining why it was that Intel was doomed in mobile even as ARM would become ascendant. When the basis of competition changed away from pure processor performance to a low-power system, the chip architecture needed to switch from being integrated (Intel) to being modular (ARM), the latter enabling an integrated BlackBerry then, and an integrated iPhone four years later.

    The PC is a modular system whose integrated parts earn all the profit. Blackberry (and later iPhones) on the other hand was an integrated system that used modular pieces. Do note that this is a drastically simplified illustration.

    More broadly, breaking up a formerly integrated system — commoditizing and modularizing it — destroys incumbent value while simultaneously allowing a new entrant to integrate a different part of the value chain and thus capture new value.

    Commoditizing an incumbent’s integration allows a new entrant to create new integrations — and profit — elsewhere in the value chain.

    This is exactly what is happening with Airbnb, Uber, and Netflix too.

    The old value chain in consumer packaged goods (CPG) looked like this:

    CPG Value Chain

    CPG companies like P&G harvested most of the value by integrating research and development, manufacturing, marketing, and shelf space; raw materials, retail, and logistics were modularized and commoditized.

    DTC companies, meanwhile, saw research and development as increasingly unnecessary in overserved markets (as I noted in the context of Dollar Shave Club, razors are a particularly salient example of overserving), and shelf space on the Internet was effectively infinite. Their goal was to integrate marketing, retail, and manufacturing:

    Theoretical DTC Value Chain

    The problem, though, is that marketing on the Internet was entirely different from the analog marketing that previously dominated the CPG industry. Previously, being good at advertising, whether it was coupons in the Sunday paper or television ads during the evening news, was mostly a matter of the ability to spend, which was itself a matter of scale. Digital marketing, though, didn’t really work at scale, at least relative to TV; in fact, it only made sense if you could target consumers with advertising and track how it performed.

    On one hand, this was another critical factor in making DTC companies viable. The advantage of targeted advertising is that it takes a lot less money relative to TV to reach customers who are actually interested in your product; the problem, though, is that getting good at targeted advertising requires massive amounts of both research and development to build the capability, and inventory across a sufficiently large customer base to make the effort worthwhile. In the end, no DTC company was actually good at marketing; they outsourced it to Google and Facebook, which both had the inventory and the capability to spend the billions necessary to develop sophisticated targeted advertising.

    The problem is that in the process of depending on Google and Facebook for marketing, the DTC companies gave up their planned integration in the value chain, and the associated profits, to Facebook and Google:

    Actual DTC Value Chain

    The actual integrated players — Google and Facebook — integrate customers and research and development to dominate marketing; DTC companies may have online retail operations, but that is a modularized — and thus commoditized — part of the value chain (and meanwhile, Amazon was in the process of integrating retail and logistics). I wrote last week when Brandless folded:

    Here is the problem for DTC companies: Facebook really is better at finding them customers than anyone else. That means that the best return-on-investment for acquiring customers is on Facebook, where DTC companies are competing against all of the other DTC companies and mobile game developers and incumbent CPG companies and everyone else for user attention. That means the real winner is Facebook, while DTC companies are slowly choked by ever-increasing customer acquisition costs. Facebook is the company that makes the space work, and so it is only natural that Facebook is harvesting most of the profitability from the DTC value chain.

    To be fair to the DTC companies, they are hardly the first to make this mistake: way back when the world wide web first started, publishers looked at the Internet and only saw the potential of reaching new customers; they didn’t consider that because every other publisher in the world could now reach those exact same customers, the integration that drove their business — publishing and distribution in a unique geographic area — had disintegrated. It is a lesson that can be taken broadly: if some part of the value chain becomes free, that is not simply an opportunity but also a warning that the entire value chain is going to be transformed.

    Harry’s Difficult Road

    This takes us back to Harry’s, and the decision to pursue bricks-and-mortar retail in the first place. It’s a choice that doesn’t make much sense in the theoretical value chain I sketched out above, where DTC companies integrate marketing and retail. However, once it became apparent that Facebook and Google squeezed far more value out of the online value chain than offline, the only option left was to pursue some sort of low-end disruption in the old value chain. Or, to put it in blunter terms, be cheaper.

    This, though, resulted in two problems: first, there was no technologically-based reason that Harry’s razors should be cheaper than Schick’s or Gillette’s; that meant that Schick and Gillette responded by lowering prices to match Harry’s. Second, because price as a proxy for consumer welfare is the most important factor driving regulatory review of acquisitions, Harry’s actually closed off their most viable exit. The fact of the matter is that Schick and Gillette specifically, and large CPG companies broadly, are unsurprisingly better suited to compete in the bricks-and-mortar markets they were built to dominate. Harry’s, despite its factory in Germany and omnichannel distribution strategy, faces a long road to actually achieving that $1.37 billion valuation on their own.

    Credit Karma and Acquiring Customers

    Harry’s outcome seems particularly unfair in light of yesterday’s news that Intuit is buying Credit Karma for $7.1 billion. Credit Karma doesn’t have a factory in Germany. Indeed, they don’t make any money from customers at all. Rather, Credit Karma offers free services that attract users to their site, and monetizes those users by directing them to credit cards and other financial products that pay an affiliate fee.

    Here there is one angle where this deal looks a bit like Harry’s: one of Credit Karma’s free offerings is a free tax filing service; that is obviously a threat to TurboTax, Intuit’s biggest money-maker (which has a free version it hopes you never find). It is even possible that the FTC seeks to block the deal on these grounds. I suspect, though, that Credit Karma and Intuit will simply agree to spin off the tax filing unit, because that is not Credit Karma’s true value.

    What is actually valuable are Credit Karma’s users — 90 million of them in the U.S. alone, 50% of whom are millennials. Those 90 million users don’t just visit Credit Karma directly; they have already shared substantial amounts of their personal financial data, and have consented to receiving emails about their credit scores. They are, in other words, the best possible customer acquisition channel for a company like Intuit, and for all of the reasons I just recounted, customer acquisition is the most valuable part of the digital value chain. Intuit will gladly suffer a tax filing competitor as long as it has the best possible channel to acquire the next generation of tax filers.

    This gets at the real commonality between Harry’s and Credit Karma: Harry’s is less valuable than it might have been because of Facebook and Google’s dominance of digital advertising; Credit Karma is more valuable than it might seem because they offer a way to acquire customers without depending on Facebook and Google. This is a particularly notable insight given the FTC’s involvement in the Harry’s acquisition, and potential involvement in Credit Karma: one potential outcome of the greater competition that may have arisen in digital advertising absent Facebook’s acquisition of Instagram and Google’s acquisition of DoubleClick would be increased viability for DTC companies, and decreased value for simply aggregating an audience with no direct business model.

    Still, I wouldn’t take the counter-factual too far: DTC makes far more sense with radically lower cost structures; if you are going to take advantage of the Internet transforming one part of the value chain, you had best ensure you are anticipating the transformations in the other parts as well. And, on the flipside, in a world of abundance being able to aggregate demand is more valuable than being able to create supply; it may offend our analog sensibilities that 90 million email addresses are more valuable than real-world factories, but such is the transformative nature of the Internet.


    1. Later renamed the Law of Conservation of Modularity. 

    2. I have my differences with Christensen about the iPhone, but as I’ve said repeatedly my criticism comes from an attempt to build on his brilliant work, not tear it down. 

    3. As I’ve noted, the iPhone is in fact modular at the component level; the integration is between the completed phone and the software. Not appreciating that the point of integration (or modularity) can be anywhere in the value chain is, I believe, at the root of a lot of mistaken analysis about the iPhone in particular.


  • The Daily Update Podcast

    Today Stratechery is launching a new product: the Daily Update Podcast.

    What is the Daily Update Podcast?

    The Daily Update Podcast is the audio version of the Daily Update. The Daily Update consists of three subscriber-only posts that, in addition to the free Weekly Article, arrive in your inbox every morning. Now, you can not only read the Daily Update via email or the web, but also listen to the Daily Update (and the Weekly Article) in your favorite podcast player.

    Who reads the Daily Update Podcast?

    Most days, I will read the Daily Update, with an assist from Daman Rangoola (he reads the blockquotes, to make it easier to follow). If I am traveling or otherwise unable to record, then Daman will record the Daily Update Podcast.

    As for who listens to the Daily Update Podcast (which, to be clear, includes the free Weekly Article), it is Daily Update subscribers only.

    When does the Daily Update Podcast come out?

    The Daily Update Podcast will come out a few hours after the Daily Update email. It takes some time to edit the podcast, including adding all of the cool features that make this a particularly distinctive podcast.

    For example, the show notes for every podcast contain the full Daily Update post, so you can easily find links and illustrations. The Daily Update podcast also supports chapters, which correspond to the different sections of the Daily Update or Weekly Article. Plus, some podcast players, like Overcast, show Stratechery illustrations as cover art at precisely the right moments:

    Stratechery Daily Update features

    And yes, future Stratechery interviews will be more than just transcripts.

    Where do I listen to the Daily Update Podcast?

    The Daily Update Podcast can be played in any podcast player that supports the open ecosystem of podcasting (unfortunately this does not include Spotify, Google Podcasts, or Stitcher; to be very clear, this is not my choice). You can also read the Daily Update in any RSS Reader.

    Why did you launch the Daily Update Podcast?

    The Daily Update Podcast has been one of the most requested features since I launched the Daily Update. Many subscribers have long commutes and would like to listen to the Daily Update in the car, on the train, or while walking.

    How does the Daily Update Podcast work?

    Every Daily Update subscriber has access to their own individual feed. Stratechery makes it easy for you to add that feed to your favorite podcast player.

    Start by visiting the Daily Update Podcast page.

    If you are on your phone or tablet:

    • On iOS, simply tap the icon of your favorite podcast player and follow the prompts:

    • On Android, tap ‘Android’ and choose your preferred player:1

    If you are on your PC or Mac, and wish to listen on your phone:

    • Choose your favorite podcast player, then scan the corresponding QR code with your phone’s camera. The Daily Update podcast feed will be added to your chosen player.

    • Or enter your phone number, and Stratechery will text you a link to the Daily Update Podcast page.

    If you are on your PC or Mac, and wish to listen there:

    • If you are on macOS Catalina, simply click on the Apple Podcasts icon:

    • If you are on older versions of macOS or Windows, simply click on the iTunes icon:

    If you have another podcast player, or wish to read the Daily Update in your RSS Reader:

    • Copy your custom URL and paste it into your podcast player or RSS Reader

    • Please note that this will be the only way to read the Stratechery Daily Update via RSS; previous member-only RSS feeds are deprecated.
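    Because the personalized feed is standard RSS, any compliant podcast player or RSS reader can consume it. As a minimal sketch (the feed document below is a generic RSS 2.0 example, not Stratechery’s actual feed schema or URL), extracting episode titles with Python’s standard library looks like this:

    ```python
    import xml.etree.ElementTree as ET

    # A minimal podcast-style RSS 2.0 document. This structure is a generic
    # illustration, not Stratechery's actual feed.
    SAMPLE_FEED = """<?xml version="1.0" encoding="UTF-8"?>
    <rss version="2.0">
      <channel>
        <title>Example Daily Update Podcast</title>
        <item>
          <title>Episode 2</title>
          <enclosure url="https://example.com/ep2.mp3" type="audio/mpeg" length="123"/>
        </item>
        <item>
          <title>Episode 1</title>
          <enclosure url="https://example.com/ep1.mp3" type="audio/mpeg" length="456"/>
        </item>
      </channel>
    </rss>"""

    def episode_titles(feed_xml: str) -> list:
        """Return the <title> of every <item> in an RSS 2.0 feed, in document order."""
        root = ET.fromstring(feed_xml)
        return [item.findtext("title") for item in root.iter("item")]

    print(episode_titles(SAMPLE_FEED))  # ['Episode 2', 'Episode 1']
    ```

    This is the essence of what any podcast player does when you paste in your custom URL: fetch the XML, enumerate the items, and play the enclosures.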

    To be very clear, the Daily Update is not going anywhere; after all, it is the source material for the Daily Update Podcast! To that end, you don’t need to subscribe to one or the other — it’s the same content, just in two different forms.

    Going forward, there will be a Daily Update Podcast for every Daily Update; for now the feed includes two Weekly Articles and two Daily Updates that demonstrate some of the cool features of the Daily Update Podcast.

    I am extremely excited about the Daily Update Podcast: it is a product I have been hoping to launch for a very long time. I want to thank Daman Rangoola for project managing the development of this new feature, my good friends at ModelRocket for building it, and most of all, my subscribers for giving me a reason to get it done.

    To learn more about the Stratechery Daily Update, please visit the updated Daily Update page. If you are already a subscriber, you can get started with the Daily Update Podcast here.


    1. Android’s approach is much better than iOS’s! 


  • First, Do No Harm

    While primum non nocere — Latin for “First, do no harm” — is commonly associated with the Hippocratic Oath taken by physicians, its actual provenance is uncertain; the phrase likely originated with the English doctor Thomas Sydenham in the 1600s, but didn’t appear in writing until 1860. The uncertainty is just as well: core to the idea of primum non nocere is the danger of unintended consequences; sometimes it is better for a doctor to not do anything than to risk causing more harm than good.

    I was reminded of this phrase yesterday when the FTC announced it was requesting data from the big tech companies about small scale acquisitions made over the last decade. From the Financial Times:

    The Federal Trade Commission has demanded information from the five largest US companies — Alphabet, Amazon, Apple, Facebook and Microsoft — about acquisitions of smaller companies as part of a review into possible anti-competitive behaviour in the technology sector. The US antitrust enforcement agency wants to know if the tech groups bought start-ups in deals during the past 10 years that may have been anti-competitive but which were too small to be reportable under a federal merger notification law…

    Under a US law called Hart-Scott-Rodino, companies are required to notify the FTC and the Department of Justice about mergers and other acquisitions above a certain size threshold. The threshold for 2020 is $94m. The FTC said the orders were made under a rule designed to enable studies separate from antitrust enforcement investigations. “These orders are not being issued for law enforcement purposes,” Mr Simons told reporters in a conference call. “This is a research and policy project.”

    I am glad for that clarification: I am all for the Federal Trade Commission being better informed about the tech industry; what increasingly concerns me is the potential unintended consequences of the government getting involved in tech acquisitions, particularly the small-scale ones implicated in this investigation.

    Facebook and Instagram

    In 2018 I declared at the Code Conference that “Facebook’s acquisition of Instagram was the greatest regulatory failure of the past decade”; it’s a line that I both believe and also regret, simply because there is a tremendous amount of nuance involved.

    First, regulators didn’t actually make a mistake, at least according to the law; at the time of the acquisition Instagram had 30 million users and $0 of revenue. While it may have been clear to some that Facebook was acquiring a competitor, that required forecasting well into the future with a fair degree of uncertainty.

    Second is the FTC announcement above, and other campaigns to limit acquisitions by large tech companies in particular: I tend to think that acquisitions in tech are largely good for everyone, from users to startups to tech companies, and I am increasingly concerned that focusing on one deal obscures that fact.

    To back up for a moment, the problem in my eyes with Facebook acquiring Instagram was as follows:

    • First, the power of an Aggregator comes from the number of users that are voluntarily on their platform; it follows, then, that an Aggregator acquiring a company with its own distinct user base, particularly one driven by network effects, is increasing its power.
    • Second, while the user experience of Facebook and Instagram are distinct, the underlying monetization engine is shared. That means that for an advertiser already on Facebook, it became much easier to advertise on Instagram.
    • Third, the combination of Facebook and Instagram provides a one-stop shop for advertisers who wish to reach any demographic, limiting the monetization potential of competing products like Snapchat and Twitter, and limiting investment in the consumer space broadly as Google and Facebook consume the majority of advertising dollars.

    I explored an alternate reality where Facebook did not acquire Instagram in this Daily Update:

    The monetization point is equally straightforward. It’s not just that, as I noted yesterday, Instagram didn’t need to build all of the infrastructure of an ad business, a considerable undertaking. The company also didn’t need to acquire any advertisers — a task that is just as if not more difficult than acquiring customers — because the advertisers were all already on Facebook. Moreover, converting these advertisers from Facebook to Instagram was in some respects even easier than converting Facebook users: not only did Instagram and Facebook share both a salesforce as well as a self-service back-end, Instagram ads could be bundled with Facebook ads, making it not just easy to try Instagram but also ROI-positive in a way that a standalone Instagram offering at a similar stage of development wouldn’t be. And, of course, the ad unit was the same, reducing creative costs.

    This is where it is critical to consider the entire ecosystem. Had Instagram continued as a standalone company I do believe it would have been successful in building out an advertising business; it just would have taken a lot more time and effort…What is more important, though, is that an independent Instagram would have been the best possible thing that could have happened to Snapchat. The fundamental problem facing Snapchat is that it wasn’t enough for the company to have higher usage or deeper engagement with teens and young adults, demographic groups advertisers are desperate to reach. As long as Instagram was using Facebook’s ad infrastructure, it would always be more cost effective to reach those groups using Facebook’s ad engine.

    On the other hand, an independent Instagram, combined with Facebook’s relative weakness with those demographic groups, would have forced advertisers to diversify, and once an advertiser is building products for two different platforms, it’s much easier to add another — or, if they only wanted two, to perhaps choose Snap ads instead of Instagram ones. Indeed, those that argue that an independent Instagram would be like Snap may be right, not because Instagram would be as weak as Snap appears to be, but because Snap would be far stronger.

    Facebook would argue that this undersells the degree to which it helped Instagram grow, or the fact that relatively unsophisticated advertisers that primarily use Facebook in fact benefit from Instagram reaching different demographics. What is certain, though, is that for whatever advances Snapchat and the rest of the consumer ad ecosystem are able to scratch out, they do not and will not operate at the same scale as Facebook, and given that, how much appetite is there to invest in future networks?

    That noted, the Instagram acquisition is arguably the exception that proves the rule: most other acquisitions, particularly the small-scale ones that the FTC wants to look into, are a win for everyone.

    Acquisition Benefits

    The first group that benefits from large tech company acquisitions is end users. The fastest possible way for a new technology or feature to be diffused to users broadly is for it to be incorporated by one of the large platforms or Aggregators. Suddenly, instead of reaching a few thousand or even a few million people, a new technology can reach billions of people. It’s difficult to overstate how compelling this point is from a consumer welfare perspective: banning acquisitions means denying billions of people access to a particular technology for years, if not forever.

    The second group that benefits from large tech company acquisitions is investors. If one of their startups creates something useful, that investment can earn a return even if said startup does not have a clear business model or user acquisition strategy. To put it another way, investors have the freedom to be more speculative in their investments, and pay more attention to technological breakthroughs and less to monetization, because there is always the possibility of exiting via acquisition. This benefit accrues broadly: more money going to more initiatives is ultimately good for society.

    The third group that benefits from large tech company acquisitions is entrepreneurs and startup employees. Trying to build something new is difficult and draining; it makes the effort — which will likely fail — far more palatable knowing that if it doesn’t work out, an acquihire is a likely outcome. Sure, it might have been easier to simply apply for a job at Google or Facebook, but being handed one because you worked for a failed startup reduces the risk of going to work for that startup in the first place.

    It’s important to note that the sort of acquisitions the FTC is looking at almost certainly fall predominantly in this third group. Acquihires of failed startups are arguably the most tangible way that big tech companies contribute to Silicon Valley’s durable startup culture: there is more reason for entrepreneurs, early employees, and investors to take a chance on new ideas because the big tech companies provide a backstop.

    Another way to consider these benefits, meanwhile, is to think about a world where acquisitions by large tech companies are severely constricted or banned:

    • New technology would be diffused far more slowly (as the new startup scales), if at all (if the startup goes out of business).
    • The amount of investment in risky technologies without obvious avenues to go-to-market would decrease, simply because it would be far less likely that investors would earn a return even if the technology worked.
    • The risk of working for a startup would increase significantly, both because the startup would be less likely to succeed and also because the failure scenario is unemployment.

    Still, isn’t this all worth it to have new competitors to the biggest tech companies?

    The Implications of The End of the Beginning

    Last month I wrote in The End of the Beginning:

    What is notable is that the current environment appears to be the logical endpoint of all of these changes: from batch-processing to continuous computing, from a terminal in a different room to a phone in your pocket, from a tape drive to data centers all over the globe. In this view the personal computer/on-premises server era was simply a stepping stone between two ends of a clearly defined range.

    The implication of this view should at this point be obvious, even if it feels a tad bit heretical: there may not be a significant paradigm shift on the horizon, nor the associated generational change that goes with it. And, to the extent there are evolutions, it really does seem like the incumbents have insurmountable advantages: the hyperscalers in the cloud are best placed to handle the torrent of data from the Internet of Things, while new I/O devices like augmented reality, wearables, or voice are natural extensions of the phone.

    In other words, today’s cloud and mobile companies — Amazon, Microsoft, Apple, and Google — may very well be the GM, Ford, and Chrysler of the 21st century. The beginning era of technology, where new challengers were started every year, has come to an end; however, that does not mean the impact of technology is somehow diminished: it in fact means the impact is only getting started. Indeed, this is exactly what we see in consumer startups in particular: few companies are pure “tech” companies seeking to disrupt the dominant cloud and mobile players; rather, they take their presence as an assumption, and seek to transform society in ways that were previously impossible when computing was a destination, not a given.

    What if this is right, and regulators severely limit acquisitions anyway? That would mean we would be limited to whatever innovations the biggest players come up with on their own, without the benefit of the creativity and ability to try new things inherent to startups.

    Worse, a no-acquisition strategy would actually make it even more difficult to challenge the biggest tech companies. Take Snapchat for example: many of the service’s innovative features, from Lenses (Looksery) to SnapCode (ScanMe) to Bitmoji (Bitstrips), came from acquisitions. Any sort of policy that would limit Snapchat from getting better would hurt competition, not help it.

    Threading the Needle

    There is, at least in theory, a finely tailored approach to merger review that could get this right: any company that derives dominant market power from the size of its user base should not be allowed to acquire a company that has a significant and growing user base. The problem is that almost every part of that sentence gets really complicated really quickly when you start trying to make it specific. What is dominant market power? What is a significant user base, and how does that number change over time? How do you forecast the ultimate ceiling of a startup?

    One possibility is retroactive enforcement, which the FTC has suggested is a possible outcome of this review. This, though, is problematic for all sorts of reasons.

    • First, it weakens the idea of the rule of law: if the rules can be changed retroactively, then how can you trust the rules in the first place?
    • Second, it almost certainly discounts the very real impact of the acquiring company contributing expertise, capital, management, etc. to the startup in question. Instagram is a good example in this regard: the service is far larger and far more profitable than it would have been on its own because of Facebook’s contributions, and it would be fundamentally misguided for regulators to effectively penalize Facebook for its contributions to Instagram’s success.
    • Third, these first two reasons would chill investment in startups, for all of the reasons noted above. Suddenly acquisitions would be riskier, not just in the moment, but for years in the future. Adding regulatory uncertainty into a space defined by uncertainty seems like a very poor idea.

    This, then, is how I arrive at primum non nocere: as much as I regret that the Instagram acquisition happened, I am deeply concerned about upending how Silicon Valley operates in response. The way things work now has massive consumer benefit and even larger benefits to the United States: regulators should be exceptionally careful to “First, do no harm”, rather than pursue the perfect at the expense of the good.


  • Facebook’s Platform Opportunity

    George Soros is famous for his timing.

    In 1992 Soros built a massive short position in pound sterling, betting that the United Kingdom had entered the European Exchange Rate Mechanism (ERM) at an unsustainably high rate, particularly given British inflation and interest rates relative to Germany; when the pound fell below the minimum level allowed by the ERM, Soros pounced, selling so much sterling that the government could not prop up the currency. The United Kingdom withdrew from the ERM, the pound plummeted, and Soros pocketed over £1 billion in profit.

    That, though, was then; last week Soros penned an opinion column in the New York Times that basically stated that Facebook CEO Mark Zuckerberg was actively working to re-elect President Trump. Much of it reads like a conspiracy theory — and the part on Section 230 is so mistaken it is, ironically enough, bordering on disinformation — but what was particularly striking was how poor the timing was; Soros concluded:

    I repeat and reaffirm my accusation against Facebook under the leadership of Mr. Zuckerberg and Ms. Sandberg. They follow only one guiding principle: maximize profits irrespective of the consequences. One way or another, they should not be left in control of Facebook.

    In fact, Facebook reported its financial results two days before Soros’ op-ed, and the stock lost 10% of its value. The general consensus was concern about the ongoing slowdown in profit growth, which decelerated even more last quarter — traditionally the quarter with the most growth:

    Facebook's Revenue and Operating Profit

    The issue is costs, which have grown faster than revenue in each of the last seven quarters:

    Facebook's Revenue and Costs

    To be perfectly honest, the slowdown in revenue growth was just as likely to be a factor in the stock’s slide, especially because Facebook’s costs have been growing so rapidly. Whatever the cause, if Zuckerberg’s only guiding principle is maximizing profits, he is extremely bad at it.

    Facebook’s Security Investments

    The fact of the matter is that Facebook, more than any other tech company, has put its money where its mouth is as far as security is concerned. Zuckerberg said on the company’s Q3 2017 earnings call:

    I’ve directed our teams to invest so much in security on top of the other investments we’re making that it will significantly impact our profitability going forward, and I wanted our investors to hear that directly from me. I believe this will make our society stronger, and in doing so will be good for all of us over the long term. But I want to be clear about what our priority is. Protecting our community is more important than maximizing our profits.

    Nine months later, the growth rate of Facebook’s costs exceeded the growth rate of the company’s revenue for the first time, leading to the largest one-day loss by any company in U.S. stock market history; CFO Dave Wehner said on that Q2 2018 earnings call:

    Turning now to expenses; we continue to expect that full-year 2018 total expenses will grow in the range of 50% to 60% compared to last year…Looking beyond 2018, we anticipate that total expense growth will exceed revenue growth in 2019. Over the next several years, we would anticipate that our operating margins will trend towards the mid-30s on a percentage basis.

    That is exactly what has happened.1 Facebook has increased spending on security (i.e. more people) more quickly than it has increased revenue — and it has increased revenue quite a bit! Vice President Andrew Bosworth expressed confidence in an internal memo2 that the money will prove to be well-spent; after noting that the role of foreign interference and misinformation on Facebook was extremely small relative to the content people saw over the last election cycle (a fair thing to note), Bosworth wrote:

    Most of the information floating around that is widely believed isn’t accurate. But who cares? It is certainly true that we should have been more mindful of the role both paid and organic content played in democracy and been more protective of it. On foreign interference, Facebook has made material progress and while we may never be able to fully eliminate it I don’t expect it to be a major issue for 2020.

    Misinformation was also real and related but not the same as Russian interference. The Russians may have used misinformation alongside real partisan messaging in their campaigns, but the primary source of misinformation was economically motivated. People with no political interest whatsoever realized they could drive traffic to ad-laden websites by creating fake headlines and did so to make money. These might be more adequately described as hoaxes that play on confirmation bias or conspiracy theory. In my opinion this is another area where the criticism is merited. This is also an area where we have made dramatic progress and don’t expect it to be a major issue for 2020.

    Bosworth went on to note that President Trump ran a far superior digital advertising operation last campaign,3 and that he is worried that he will win in 2020 by doing the same. It is an admittedly self-serving but still crucial point to make: the effectiveness of Facebook’s expenditures should be based on the extent of illicit activity on Facebook, not the results of the next presidential election.

    That, of course, is unlikely to happen, particularly if President Trump does indeed win re-election: Facebook will almost certainly be held responsible because they are the easiest target, even if there are no meaningful foreign interference or disinformation campaigns. Critics will point to the company’s refusal to fact-check politicians, even if it is right on principle, and all of those expenses won’t make up for it.

    Again, how is this profit-maximizing? If anything this is an argument for founder control: Facebook is spending billions of dollars and taking regular hits in the stock market for something they will almost certainly get no credit for, primarily because Zuckerberg believes it is the right thing to do.

    That noted, Zuckerberg may not be entirely altruistic.

    Facebook’s Missing Platform

    At the beginning of the year I wrote The End of the Beginning, where I posited that the current tech giants would likely be dominant for some time to come:

    There may not be a significant paradigm shift on the horizon, nor the associated generational change that goes with it. And, to the extent there are evolutions, it really does seem like the incumbents have insurmountable advantages: the hyperscalers in the cloud are best placed to handle the torrent of data from the Internet of Things, while new I/O devices like augmented reality, wearables, or voice are natural extensions of the phone.

    In other words, today’s cloud and mobile companies — Amazon, Microsoft, Apple, and Google — may very well be the GM, Ford, and Chrysler of the 21st century. The beginning era of technology, where new challengers were started every year, has come to an end; however, that does not mean the impact of technology is somehow diminished: it in fact means the impact is only getting started.

    Careful readers would have noted that I left out one of the tech giants — Facebook. The reason is straightforward: Facebook isn’t a platform, but rather an Aggregator. I explained the differences in A Framework for Regulating Competition on the Internet:

    The name “platform” is a descriptive one: it is the foundation on which entire ecosystems are built. The most famous example of a platform — one with which regulators are intimately familiar — is Microsoft Windows. Windows provided an operating system for personal computers, a set of APIs for developers, and a user interface for end users, to the benefit of all three groups: developers could write applications that made personal computers useful to end users, thanks to the Windows platform tying everything together…

    “Aggregator” is also descriptive: Aggregators collect a critical mass of users and leverage access to those users to extract value from suppliers. The best example of an Aggregator is Google. Google offered a genuine technological breakthrough with Google Search that made the abundance of the Internet accessible to users; as more and more users began their Internet sessions with Google, suppliers — in this case websites — competed to make their results more attractive to and better suited to Google, the better to acquire end users from Google, which made Google that much better and more attractive to end users.

    This excerpt raises a fair question: why did I include Google as a foundational company and not Facebook if they are both Aggregators? Three reasons:

    1. Google controls the largest mobile platform in the world (Android).
    2. While Google Search is not essential to connecting users and websites, both users and websites behave as if it is, suggesting the sort of multi-sided network effects that are strong moats.
    3. Google has an ad platform that supports not only Google properties, but also YouTube and websites across the Internet.

    This last point is a crucial one: the word “platform” tends to evoke developers, but Google plays the same role connecting consumers, advertisers, and websites (both its own and 3rd-party) that Windows played connecting users, developers, and OEMs.

    Google's Ad Platform

    Facebook, meanwhile, has always been much more of a closed garden. Its most important content comes not from 3rd-parties, but rather its own users. Similarly, its advertising uses Facebook data on Facebook properties. This self-containment helped protect Facebook from Google and made it into the giant that it is, but it is a fundamentally more fragile position than the other big tech companies.

    This is also why investing in security is, in the long run, not simply altruistic. Facebook depends on users using Facebook properties because they choose to use Facebook properties; Facebook connects advertisers to those users, but the advertisers are not a reason to stay on the platform. Absent 3rd-parties that make Facebook essential, Facebook has to do whatever it takes to ensure users don’t leave the platform.

    This is also why Facebook has invested so heavily in virtual and augmented reality. Zuckerberg knows the importance of platforms — remember, the entire reason Facebook ended up in the Cambridge Analytica scandal was in an attempt to make Facebook into a platform — and is betting that being early to the next paradigm will secure the company’s position.

    In fact, though, Facebook has a much larger opportunity.

    FAN’s Failed Promise

    Facebook Audience Network is Facebook’s ad platform for 3rd-party mobile apps, mobile websites, and video. It quite obviously exists, but it definitely doesn’t seem to be getting much attention internally: the last time it was mentioned on an earnings call was in Q1 2018, and then only in a passing comment about increased transparency; the last substantive discussion was way back in Q2 2016.

    It’s easy to figure out why: any company, even one the size of Facebook, has to choose what to spend resources on; doing one thing means not doing another. And, in a competition between Facebook’s own ad products and Facebook Audience Network, it was inevitable that Facebook Audience Network would lose:

    • First and foremost, Facebook Audience Network ads have lower margins. That is because Facebook has to share revenue with the site or app that shows an ad.
    • Secondly, Facebook Audience Network ads have lower revenues because the best ad units are on Facebook properties! Any advertiser, for the same price, would rather advertise in Facebook’s feed than on a 3rd-party app or site, because it performs better; that means the ads are never the same price.
    • Third, the nature of digital advertising is such that Facebook has effectively unlimited inventory on its own properties, particularly with the explosion of Stories. That means the first two factors are always true.
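    To make these economics concrete, here is a toy comparison of an owned-and-operated ad versus an audience-network ad; the prices and the publisher’s revenue share are illustrative assumptions, not Facebook’s actual figures:

```python
# Toy comparison: an ad on Facebook's own feed vs. one placed through an
# audience network. All prices and revenue shares are illustrative
# assumptions, not Facebook's actual economics.

def net_revenue(gross_price: float, publisher_share: float) -> float:
    """What the ad network keeps after paying out the publisher's share."""
    return gross_price * (1 - publisher_share)

# Owned-and-operated inventory: no revenue share, and a higher price
# because the inventory performs better (assumed).
feed_ad = net_revenue(gross_price=10.00, publisher_share=0.0)

# Network inventory: assume a 30% publisher share and a lower price.
network_ad = net_revenue(gross_price=6.00, publisher_share=0.30)

assert network_ad < feed_ad  # lower price AND lower margin, compounded
```

    Under any plausible assumptions the network ad nets less: the revenue share and the inferior inventory compound, which is why Facebook’s own properties always win the internal competition for resources.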

    You can see a similar dynamic at Google: DoubleClick, its 3rd-party advertising business, was an acquisition, not home-grown, and even so the percentage of revenue generated on Google’s own properties continues to grow. It’s hard to resist focusing efforts on the ad products that make more money with better margins!

    Still, it is DoubleClick that, more than anything, makes Google into an ad platform. DoubleClick introduces a third stakeholder — 3rd party websites and apps — into the equation, making Google that much stickier and essential. This is exactly what Facebook should do with Audience Network.

    Facebook’s Opportunity

    The relative worth of investing in Facebook Audience Network versus Facebook’s own ads will never change; there are, though, good reasons for Facebook to invest anyway.

    First, privacy regulation like GDPR or California’s CCPA is much more challenging for 3rd-party advertising networks that rely on collecting user information across non-owned-and-operated sites than for Facebook or Google. Facebook and Google already have superior targeting capabilities, and that advantage is only going to increase.

    Second, Facebook’s data is much better for display advertising; Google is superior at identifying and capitalizing on purchase intent, particularly through search but also re-targeting, while Facebook excels at building brands and surfacing things you didn’t know you wanted. These categories are likely to be much more effective in most websites or apps, which are not necessarily about immediate conversions.

    Third, the biggest reason to be bullish on Facebook is its dominance in digital advertising. As long as it has access to most customers, it will always be the default choice for advertisers; spending more time and attention on extending its advertising to 3rd-parties also extends its responsibility for attracting those customers. Yes, this costs margin, but the payback is an even better moat.

    The reason to bring this up now is the pressure Facebook is under, from PR to politics to the stock market:

    • A meaningful investment in the Facebook Audience Network would mean lower margins in the long run, so best to make the investment when investors are already grumpy about margins.
    • So much of the media only sees Facebook as a competitor; Facebook is uniquely placed to be their benefactor.
    • The PR angle is not obvious, but I do think that Facebook in the long run is likely to be recognized as the company that has made the greatest investment in security. This will make regulators more comfortable with Facebook being one of the few companies constructed to leverage user data for advertising.

    This article is not, by the way, my opinion on what is best for the world; rather, despite all of the company’s bad press, my point is that Facebook is better positioned for the future than it appears. More privacy regulation, more attention on security issues, more concerns about Google leveraging its own position: all of these are opportunities for Facebook. The question is whether it will leverage investor discontent to make the sort of shift that gives up margin to build moats. Facebook can finally have its platform — the timing is right — if it is willing to take the risk.


    1. Note that the above charts show operating margin; net margin is indeed 35%. 

    2. That, to be fair, seemed prepared for external consumption 

    3. Notably, Facebook offered help to both campaigns; Hillary Clinton declined, which Soros turns into evidence that Facebook is pro-Trump 


  • The Tragic iPad

    From The Verge:

    Steve Jobs stepped onstage 10 years ago today to introduce the world to the iPad. It was, by his own admission, a third category of device that sits somewhere between a smartphone and a laptop. Jobs unveiled the iPad just days after the annual Consumer Electronics Show ended in Las Vegas and at a time when netbooks were dominating personal computing sales…

    Apple had an answer to the netbook: a 9.7-inch tablet that allowed you to hold the internet in your hands…Apple was also looking to create a third category of device that was better at certain tasks than a laptop or smartphone. The iPad was designed to be better at web browsing, email, photos, video, music, games, and ebooks. “If there’s going to be a third category of device it’s going to have to be better at these kinds of tasks than a laptop or a smartphone, otherwise it has no reason for being,” said Jobs.

    Stratechery wasn’t my first (or second) blog; back in 2010 I had a Tumblr and I imported some of the posts to Stratechery, including this piece that I wrote when the iPad was announced:

    What the iPad does is give Apple a product that offers a superior experience in every dimension of the mobile experience, namely, content creation, content consumption and mobility.

    Apple's mobile device offerings

    The reason this matters is that the vast majority of users are primarily content consumers. These are the people buying netbooks as their primary computers, or simply avoiding computers as much as possible. They simply want to go on Facebook, check their email, watch YouTube, and at most, upload pictures. Apple’s value proposition to these customers is: The iPad is a superior content consumption experience with sufficient creation capabilities to meet your needs. That is why iWork figured so prominently into the Keynote — it was reassurance that the iPad can pass as your only computer (more on iWork in just a moment).

    The post holds up pretty well, if I might say so myself, but it is where it is wrong that is the most interesting.

    The iPad Disappointment

    John Gruber is disappointed in the current state of the iPad:

    Ten years later, though, I don’t think the iPad has come close to living up to its potential. By the time the Mac turned 10, it had redefined multiple industries. In 1984 almost no graphic designers or illustrators were using computers for work. By 1994 almost all graphic designers and illustrators were using computers for work. The Mac was a revolution. The iPhone was a revolution. The iPad has been a spectacular success, and to tens of millions it is a beloved part of their daily lives, but it has, to date, fallen short of revolutionary…

    Software is where the iPad has gotten lost. iPadOS’s “multitasking” model is far more capable than the iPhone’s, yes, but somehow Apple has painted it into a corner in which it is far less consistent and coherent than the Mac’s, while also being far less capable. iPad multitasking: more complex, less powerful. That’s quite a combination.

    I could not agree more with Gruber’s critique. In my opinion, multi-tasking on the iPad is an absolute mess, and it has ruined the entire interface; I actively dislike using the iPad now, and use it exclusively to watch video and make the drawings for Stratechery. Its saving grace is that it is hard to discover.

    What is fascinating — and, in my opinion, tragic, in both the literal and literary sense — is how the iPad arrived in its current state. That initial announcement featured Jobs reclining on a couch — it wasn’t very difficult to come up with the “content consumption” angle! Still, you could see the potential for something more. I wrote at the end of that piece:

    It’s the long-term picture that is particularly fascinating, and gets back to my contention at the beginning of this post. For while the laptop has all but reached its potential — the consumption experience will never improve beyond what it is now — the creation experience on the iPad will only get better with time. In fact, I believe the iPad will be looked back upon as the pioneer of what will become the default way of interacting with computers just like the Macintosh.

    Go back and watch the Keynote again, especially the iWork demonstration that begins 57 minutes in. The iPad doesn’t just let you create documents. It lets you create documents in a way that is simply impossible on a normal computer. It is so much more natural, so much more intuitive, that users accustomed to a keyboard-and-mouse will adapt quickly, and more importantly, users accustomed to multitouch will never understand the attachment to a mouse. I truly believe my two year-old daughter, who has already taught herself to use my iPhone, will never seriously use a mouse.

    For the record, my now 12-year-old daughter still doesn’t use a mouse, but that is because she has a laptop and uses a touchpad. That was a clear miss by me. A year later, though, when Steve Jobs, in his second-to-last keynote, announced the iPad 2, the future I envisioned looked like it was right on track. The most amazing part of the launch was GarageBand:

    This is the entire demo, but the most important part is Steve Jobs’ reaction to it — jump to 12:30 if you don’t have the time or inclination to watch the whole thing:

    Jobs’ look of wonderment says more than his words:

    I’m blown away with this stuff. Playing your own instruments, or using the smart instruments, anyone can make music now, in something that is this thick and weighs 1.3 pounds. It’s unbelievable…this is no toy. This is something you can use for real work.

    GarageBand, even more than iWork the year before, was the sort of app that was only possible on an iPad. Sure, it shared a name with its Mac counterpart, but the magic came from the fact that it had little else in common.

    And then Jobs died, and I’ve never been able to shake the sense that this particular vision of the iPad died with him.

    iPad’s Missing Ecosystem

    There was one final part of that GarageBand introduction that, in retrospect, was an inauspicious sign for the future:

    GarageBand for iPad's launch price

    It’s tempting to dwell on the Jobs point — I really do think the iPad is the product that misses him the most — but the truth is that the long-term sustainable source of innovation on the iPad should have come from 3rd-party developers. Look at Gruber’s example for the Mac of graphic designers and illustrators: while MacPaint showed what was possible, the revolution was led by software from Aldus (PageMaker), Quark (QuarkXPress), and Adobe (Illustrator, Photoshop, Acrobat). By the time the Mac turned 10, Apple was a $2 billion company, while Adobe was worth $1 billion.

    There are, needless to say, no companies built on the iPad that are worth anything approaching $1 billion in 2020 dollars, much less in 1994 dollars, even as the total addressable market has exploded, and one big reason is that $4.99 price point. Apple set the standard that highly complex, innovative software that was only possible on the iPad could only ever earn 5 bucks from a customer forever (updates, of course, were free).

    This remains one of Apple’s biggest mistakes; in 2015, when Apple first released the iPad Pro, I wrote in From Products to Platforms:

    When it comes to the iPad Apple’s product development hammer is not enough. Cook described the iPad as “A simple multi-touch piece of glass that instantly transforms into virtually anything that you want it to be”; the transformation of glass is what happens when you open an app. One moment your iPad is a music studio, the next a canvas, the next a spreadsheet, the next a game. The vast majority of these apps, though, are made by 3rd-party developers, which means, by extension, 3rd-party developers are even more important to the success of the iPad than Apple is: Apple provides the glass, developers provide the experience.

    That, then, means that Cook’s conclusion that Apple could best improve the iPad by making a new product isn’t quite right: Apple could best improve the iPad by making it a better platform for developers. Specifically, being a great platform for developers is about more than having a well-developed SDK, or an App Store: what is most important is ensuring that said developers have access to sustainable business models that justify building the sort of complicated apps that transform the iPad’s glass into something indispensable.

    That simply isn’t the case on iOS. Note carefully the apps that succeed on the iPhone in particular: either the apps are ad-supported (including the social networks that dominate usage) or they are a specific type of game that utilizes in-app purchasing to sell consumables to a relatively small number of digital whales. Neither type of app is appreciably better on an iPad than on an iPhone; given the former’s inferior portability they are in fact worse.

    A very small number of apps are better on the iPad though: Paper, the app used to create the illustrations on this blog, is a brilliantly conceived digital whiteboard that unfortunately makes no money; its maker, FiftyThree, derives the majority of its income from selling a physical stylus called the Pencil (now eclipsed in both name and function by Apple’s new stylus). Apple’s apps like Garageband and iMovie are spectacular, but neither has the burden of making money.

    The situation has improved slightly since then, primarily with the addition of subscription pricing for apps. Still, from a customer perspective that is far inferior to the previous “Pay for Version 2” model that sustained developers on the Mac for decades; we never did get upgrade pricing or time-limited trial functionality for regular paid apps.

    Instead, as Apple is so wont to do, it tried to fix the problem itself, by making the iPad into an inferior Mac. Thus the multi-tasking disaster Gruber decries, which is not only hard to use for consumers, but also dramatically ups the difficulty for developers, making the chances of earning a positive return on investment for an iPad app even more remote. Indeed, the top two developers making in-depth iPad apps are Microsoft and Adobe, in service to their own subscription models; the tragedy of the iPad is that their successors were never given the space to be born, which has ultimately kept the iPad from truly succeeding the Mac.


    To be fair, would that we all could “fail” like the iPad; it was a $21 billion business last fiscal year, nearly as much as the Mac’s $26 billion.1 That, though, is why I did not call it a failure: the tragedy of the iPad is not that it flopped, it is that it never did, and likely never will, reach that potential so clearly seen ten years ago.


    1. This sentence originally included a revenue number that was totally wrong due to a complete brain fart on my end 


  • Visa, Plaid, Networks, and Jobs

    Before the network, there is the job.

    In the case of Visa, or, to be more exact, the BankAmericard that would eventually be spun out and renamed Visa, the job for consumers was obvious: instant credit for anything, without the need for a merchant-specific account or a visit to the bank for a personal loan. And so, when Bank of America dropped 60,000 BankAmericards on its customers in Fresno, California, in 1958, those customers had an immediate reason to give this new-fangled financial product a try.

    What may be less obvious is why Fresno’s merchants might have been interested, particularly since Bank of America planned to charge them 6% of sales. Remember, this is before the network: it was not at all obvious, as it is today, that the increase in sales enabled by the convenience of credit cards would more than make up for credit cards’ attendant fees. However, it turned out that for small merchants in particular there was a major job-to-be-done; Joe Nocera explained in his 1994 book, A Piece of the Action:

    Fresno’s shop owners knew for a fact that, on the day the program began, some 60,000 people would be holding BankAmericards. That was a powerful number, and it had its intended effect. Merchants began to sign on. Not the big merchants, like Sears, which had its own proprietary credit card and saw the bank’s entry into the credit card business as a form of poaching. Rather, it was the smaller merchants who first came around. Larkin remembers visiting a drug store in Bakersfield, hoping to persuade its owner to accept BankAmericard. “When I explained the concept of our credit card,” he says, “the man almost knelt down and kissed my feet. ‘You’ll be the savior of my business,’ he said. We went into his back office,” Larkin continues. “He had three girls working on Burroughs bookkeeping machines, each handling 1,000 to 1,500 accounts. I looked at the size of the accounts: $4.58. $12.82. And he was sending out monthly bills on these accounts. Then the customers paid him maybe three or four months later. Think of what this man was spending on postage, labor, envelopes, stationery! His accounts receivables were dragging him under.”

    A store owner who accepted the credit card was, in effect, handing his back office headaches over to the Bank of America. The bank would guarantee him payment — within days instead of months — and would take over the role of collecting from the customers. As for the bank, in addition to taking its 6 percent cut, the card was a way to get its hooks into businessmen who were not yet Bank of America customers.

    It’s easy to forget just how many things a business that takes credit cards does not need to do: it does not need to extend credit, it does not need to collect payment, it does not need to handle excess amounts of cash. It does not, as Nocera noted, need to have much back office functionality at all. Instead banks provide the credit, Visa provides the infrastructure, and merchants pay around 3% of their sales.

    Visa’s Network

    Not that merchants have much choice in the matter these days. Credit cards are perhaps the best possible example of the power of a multi-sided network; Visa sits in the middle of banks, consumers, and merchants:

    The Visa network

    Everyone benefits from each other:

    Customers and Banks:

    • Customers want to have always-available credit
    • Banks want to provide credit with high fees and interest rates

    Customers and Merchants:

    • Customers want to have a card that works everywhere
    • Merchants want to be able to accept payments from anyone

    Merchants and Banks:

    • Merchants want to be able to sell on credit while still getting their money immediately
    • Banks want to collect a fee on every purchase in exchange for managing credit and pooling risk

    Visa and Mastercard, the other major credit card network,1 sit in the middle of each of these relationships, across billions of customers, millions of merchants, and thousands of banks, collecting a network fee of about 0.05% on every purchase, on top of the interchange fee paid to banks. The total revenue collected — $20.6 billion in 2018 — is rather small, particularly relative to Visa’s market cap of $420 billion, but that multiple is a testament to just how durable Visa’s position is in the network it created.
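    The arithmetic of those fees can be sketched for a single purchase; the ~3% merchant total and ~0.05% network fee come from the discussion above, while treating the remainder as interchange is a simplifying assumption:

```python
# Rough split of the fees on a single $100 credit card purchase.
# The ~3% merchant total and ~0.05% network fee come from the text;
# treating the remainder as interchange is a simplifying assumption.

purchase = 100.00      # what the customer pays
merchant_rate = 0.03   # total fees the merchant bears (~3%)
network_rate = 0.0005  # Visa's network fee (~0.05% of the purchase)

total_fees = purchase * merchant_rate      # ~3.00
network_fee = purchase * network_rate      # ~0.05
bank_fees = total_fees - network_fee       # ~2.95, mostly interchange
merchant_receives = purchase - total_fees  # ~97.00
```

    The per-purchase toll is tiny; the trillions of dollars of volume it applies to is what makes the business enormous.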

    Plaid’s Network

    There are some obvious parallels to be drawn between Visa’s network, particularly in its earliest days, and Plaid, the fintech startup Visa acquired yesterday. From the Wall Street Journal:

    Visa Inc. said Monday it would buy Plaid Inc. for $5.3 billion, as part of an effort by the card giant to tap into consumers’ growing use of financial-technology apps and noncard payments. More consumers over the past decade have been using financial-services apps to manage their savings and spending, and Plaid sits in the middle of those relationships, providing software that gives the apps access to financial accounts. Venmo, PayPal Holdings Inc.’s money-transfer service, is one of privately held Plaid’s biggest customers.

    Visa is the largest U.S. card network, handling $3.4 trillion of credit, debit and prepaid-card transactions in the first nine months of 2019, according to the Nilson Report. Its clients are largely comprised of banks that issue credit and debit cards, but the company is looking to expand its presence in the burgeoning field of electronic payments, where trillions of dollars are sent by wire transfer or between bank accounts globally each year.

    Plaid has its own three-sided network, but it operates a bit differently than Visa’s:

    The Plaid network

    The benefits to some parts of the network are more obvious than others:

    • Developers are able to immediately connect to their customers’ bank accounts without having to implement custom integrations with thousands of banks or wait several days for traditional verification methods (making two deposits of less than a dollar and having the customer report the amounts).
    • Consumers are able to use new fintech apps like Venmo immediately, without having to wait several days.
    • Banks…well this is where it gets messy.
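    The traditional verification method mentioned above can be sketched as follows; this is a generic illustration of the micro-deposit flow, with hypothetical function names, not any particular provider’s API:

```python
import random

def send_micro_deposits() -> tuple:
    """Send two random deposits of less than a dollar (amounts in cents)."""
    return (random.randint(1, 99), random.randint(1, 99))

def verify(sent, reported) -> bool:
    """Days later, the customer reads the two amounts off their bank
    statement and reports them back; a match proves account ownership."""
    return sorted(sent) == sorted(reported)

deposits = send_micro_deposits()
# ...several days pass while the ACH transfers clear...
assert verify(deposits, deposits)      # correct report links the account
assert not verify((12, 47), (12, 48))  # a wrong amount fails
```

    That multi-day round trip is exactly the latency Plaid’s instant linking removes.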

    Plaid’s Product

    To understand how the banks fit into this network it is important to understand what exactly Plaid does. Many banks in the U.S. do not have APIs (Application Programming Interfaces) that offer a programmatic means of accessing a particular account; those that do are not consistent with each other in either implementation or features. Plaid gets around this by effectively acting as a deputy for consumers: the latter give Plaid their username and password for their bank account, and Plaid uses those credentials to log in to the bank’s website on the user’s behalf.
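    The service Plaid sells is, in essence, one consistent interface in front of thousands of inconsistent banks. A minimal sketch of that idea, with hypothetical bank and function names and stand-in values, not Plaid’s actual implementation:

```python
from abc import ABC, abstractmethod

class BankConnector(ABC):
    """One adapter per bank, hiding whether the data comes from a real
    API or from logging in to the bank's website with the user's
    credentials (screen-scraping)."""

    @abstractmethod
    def balance(self, username: str, password: str) -> float: ...

class ModernBankAPI(BankConnector):
    """Hypothetical bank that exposes a real API."""
    def balance(self, username: str, password: str) -> float:
        return 1250.00  # stand-in for an authenticated API call

class LegacyBankScraper(BankConnector):
    """Hypothetical bank with no API: log in as the user and scrape."""
    def balance(self, username: str, password: str) -> float:
        return 86.10  # stand-in for parsing the bank's web pages

CONNECTORS = {
    "modern_bank": ModernBankAPI(),
    "legacy_bank": LegacyBankScraper(),
}

def get_balance(bank: str, username: str, password: str) -> float:
    """The single call a developer makes, whatever the bank behind it."""
    return CONNECTORS[bank].balance(username, password)
```

    Developers integrate once; the aggregator absorbs the per-bank mess and, in the screen-scraping case, the user’s raw credentials, which is precisely the security concern.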

    If this sounds a bit shady, well, it kind of is! Bank login information is among the most sensitive credentials consumers have, and apparently one in four people in the U.S. with a bank account have shared those credentials with Plaid. Nearly all did so without knowing any better; here, for example, is the interface Betterment offers when you try and add a bank account:

    Betterment's add bank account flow

    That is not an interface for Chase; it is Plaid,2 effectively training end users to enter their bank credentials in an app and/or on a site that is not their bank! Oh, and because this is very much a hack, Plaid fails between 5 and 10 percent of the time.

    Developers, it should be noted, aren’t particularly bothered by this: in fact, they are paying Plaid for every successful log-in. Users, meanwhile, are likely unaware of just how much access and data they are giving away, but at the same time have a real desire to access new financial services that require a connection to their bank account. The big problem is that the banks aren’t too sure if they want to participate.

    The reticence is understandable. There is, for example, the fact that many banks’ technical infrastructure is ancient and built around assumptions that did not include APIs for 3rd-parties. More important, though, is the power of inertia: the harder it is to move money around, the more likely it is that that money will stay in the bank, collecting minuscule interest; or, if customers need value-added services, the path of least resistance will be simply getting them from their bank.

    An API-based world could change this dramatically: suddenly consumers could commission robo-advisors to move their cash to whoever is offering the best rates, or to automatically refinance debt. Value-added services from multiple vendors would be equally easy to access, meaning they would have to compete on price or terms. In other words, much like the open Internet, banks fear that profits will be rapidly transformed into consumer benefit.
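    To see why banks are wary, consider how trivial rate-shopping becomes once every account sits behind a uniform API; a toy sketch with made-up interest rates:

```python
# Toy sketch: once accounts sit behind a uniform API, sweeping idle cash
# to the best rate is a one-liner. All rates here are made up.

savings_rates = {
    "MegaBank":     0.0005,  # the incumbent's minuscule interest
    "Challenger A": 0.0175,
    "Challenger B": 0.0210,
}

def best_destination(rates: dict) -> str:
    """Where a robo-advisor would move a customer's idle cash."""
    return max(rates, key=rates.get)
```

    The moment moving money is a function call, inertia stops protecting the incumbent’s margins.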

    Jobs for Banks

    This is where Visa can potentially make a difference and, by extension, pay for this acquisition. $5.3 billion is very steep — around 50x revenue — and Plaid’s business model is not particularly attractive: the company makes the majority of its money when users connect their bank accounts, which for most applications is a one-time event; contrast that to Visa’s credit card network, which earns a fee on every single transaction.

    UPDATE: Plaid makes money every time a user accesses their account for a transaction; for example, every time you pay someone in Venmo. The company says that revenue from these recurring transactions now exceeds revenue earned from bank verification.

    What Visa needs to do is figure out what jobs it can do for banks that make it worthwhile for them to build out the necessary APIs. The most obvious one is security; as a U.S. Treasury Report on Nonbank Financials, Fintech, and Innovation noted:

    The practice of using login credentials for screen-scraping poses significant security risks, which have been recognized for nearly two decades. Screen-scraping increases cybersecurity and fraud risks as consumers provide their login credentials to access fintech applications. During outreach meetings with Treasury, there was universal agreement among financial services companies, data aggregators, consumer fintech application providers, consumer advocates, and regulators that the sharing of login credentials constitutes a highly risky practice.

    A second job Visa can do is to be the devil banks know; that same Treasury report highlighted the fact that the United Kingdom and European Union have initiatives requiring API access to bank accounts, but recommended a private solution. Visa, after this acquisition, is well-placed to leverage Plaid’s widespread use in fintech applications and its relationships with banks to come up with a standard that will likely be more favorable to the banks than one imposed by the government.

    What is most necessary, though, is selling banks on the idea that they no longer need the equivalent of three people working on bookkeeping machines; hand over those customer service headaches to companies that will specialize in them, over rails Visa will provide.

    Visa’s Optionality

    This hints at the best case scenario for Visa from this acquisition: a new financial network, with Visa at its center, transforming the consumer financial services industry just as the credit card transformed the consumer retail industry. If that happens, it’s not out of the question that such a network will be so superior to today’s means of moving financial information and data that the company will be able to charge an ongoing toll, instead of simply a set-up fee (and, perhaps, share it with the banks).

    The worst case scenario, meanwhile, will see Plaid’s creaky approach deliver barely good enough service to fintech applications in the U.S., with nothing near the reliability or profitability of Visa’s credit card network. Which, from Visa’s perspective, is not a problem either!

    Visa will also be able to help Plaid expand internationally, including to more favorable markets like the U.K. and E.U. At first glance, open banking might seem to be a problem for Plaid, but the truth is that screen-scraping is not a long-term solution, and developers will still prefer to use one well-built API that abstracts away thousands of financial institutions instead of re-inventing the integration wheel.
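    To make the "one well-built API" point concrete, here is a hypothetical sketch — the institution names and method names are invented, not Plaid's actual interface — of why developers prefer an aggregation layer: every institution exposes a different interface, and the aggregator normalizes all of them behind a single call.

    ```python
    # Hypothetical illustration of an aggregation layer that abstracts away
    # institution-specific interfaces. All names are invented for illustration.

    class FirstNationalClient:
        """Toy client for one bank's idiosyncratic interface."""
        def fetch_acct_balances(self):
            return {"checking": 1200}

    class CreditUnionClient:
        """Another institution: different method name, different data shape."""
        def get_balance_report(self):
            return [("savings", 3400)]

    class Aggregator:
        """Normalizes N institution-specific interfaces into one method."""
        def __init__(self):
            # One adapter per institution; in reality this is the hard,
            # ongoing integration work the aggregator does once for everyone.
            self.adapters = {
                "first_national": lambda c: dict(c.fetch_acct_balances()),
                "credit_union": lambda c: dict(c.get_balance_report()),
            }

        def balances(self, institution, client):
            # A fintech app calls this one method regardless of which of the
            # thousands of underlying institutions the user banks with.
            return self.adapters[institution](client)

    agg = Aggregator()
    assert agg.balances("first_national", FirstNationalClient()) == {"checking": 1200}
    assert agg.balances("credit_union", CreditUnionClient()) == {"savings": 3400}
    ```

    Open banking mandates standardize the bank-facing side, but the adapter layer — and the single developer-facing interface — is still valuable, which is why API access arguably strengthens rather than weakens an aggregator's position.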

    And, most importantly from Visa’s perspective, the credit card business is not going anywhere — if anything, it’s getting stronger. Companies like Stripe are making credit cards more useful in more places, while Apple is making it even easier to use credit cards both online and offline. It is tempting to look at how payments work in countries like China, but that ignores the path dependency of one market using cash until recently, and the other receiving unsolicited BankAmericards 61 years ago. Once a job is done — and credit cards do their jobs very well — it takes a 10x improvement to get users to switch, and, in a three-sided network, that 10x is 10^3.

    I wrote a follow-up to this article in this Daily Update.


    1. American Express and Discover, the other major credit card companies, integrate the banking and network components; they still facilitate a network between merchants and consumers 

    2. Well technically Quovo, which Plaid bought last year in a smart move to consolidate its position 


  • The End of the Beginning

    The first American automobile maker, Duryea Motor Wagon Company, was founded in 1895; 34 more automakers would be founded in the U.S. in the following five years.1 Then, an explosion: an incredible 233 additional automakers were founded in the first decade of the 20th century, and a further 168 between 1910 and 1919. From that point on, the pace continued to slow:

    New American Car Companies in the 20th Century

    On a practical level, that “0” in the 1980s could be applied to the entire list: by 1920 automobile manufacturing was already dominated by GM, Ford, and Chrysler. AMC, a combination of several smaller brands, was a brief challenger in the 1950s and 1960s, but the “Big Three” mostly had the market to themselves, at least until imports started showing up in the 1970s.

    Just because the proliferation of new car companies ground to a halt, though, does not mean that the impact of the car slowed in the slightest: indeed, it was primarily the second half of the century where the true impact of the automobile was felt in everything from the development of suburbs to big box retailers and everything in between. Cars were the foundation of society’s transformation, but not necessarily car companies.

    Tech’s Story of Disruption

    The story tech most loves to tell about itself is the story of disruption: sure, companies may appear dominant today, but it is only a matter of time until they are usurped by the next wave of startups. And indeed, that is exactly what happened half a century ago: IBM’s mainframe monopoly was suddenly challenged by minicomputers from companies like DEC, Data General, Wang Laboratories, Apollo Computer, and Prime Computer. And then, scarcely a decade later, minicomputers were disrupted by personal computers from companies like MITS, Apple, Commodore, and Tandy.

    The most important personal computer, though, came from IBM, with an operating system from Microsoft. The former provided a massive distribution channel that immediately established the IBM PC as the most popular personal computer, particularly in the enterprise; the latter provided the APIs that created a durable two-sided network that made Microsoft the most powerful company in the industry for two decades.

    That reality, though, was not permanent: first the Internet shifted the most important application environment from the operating system to the web, and then mobile shifted the most important interaction environment from the desk to the pocket. Suddenly it was Google and Apple that mattered most in the consumer space, while Microsoft refocused on the cloud and a new competitor, Amazon.

    Dominance Epochs

    Any discussion of dominance in tech touches on three epochs: IBM, Microsoft, and the present day. In this telling, companies like Google and Apple may be dominant now, but so were IBM and Microsoft, and, just as IBM’s and Microsoft’s days of dominance passed, so too will today’s companies be eclipsed. Benedict Evans made this argument in a blog post:

    The tech industry loves to talk about ‘moats’ around a business – some mechanic of the product or market that forms a fundamental structural barrier to competition, so that just having a better product isn’t enough to break in. But there are several ways that a moat can stop working. Sometimes the King orders you to fill in the moat and knock down the walls. This is the deus ex machina of state intervention – of anti-trust investigations and trials. But sometimes the river changes course, or the harbour silts up, or someone opens a new pass over the mountains, or the trade routes move, and the castle is still there and still impregnable but slowly stops being important. This is what happened to IBM and Microsoft. The competition isn’t another mainframe company or another PC operating system — it’s something that solves the same underlying user needs in very different ways, or creates new ones that matter more. The web didn’t bridge Microsoft’s moat — it went around, and made it irrelevant. Of course, this isn’t limited to tech — railway and ocean liner companies didn’t make the jump into airlines either. But those companies had a run of a century — IBM and Microsoft each only got 20 years.

    None of this is an argument against regulation per se of any specific issue in tech. If a company is abusing dominance today, it is not an argument against intervention to point out that it will lose that dominance in a decade or two — as Keynes says, ‘in the long term we’re all dead’. The same applies to regulation of issues that have little or nothing to do with market dominance, such as privacy (though people sometime fail to understand this distinction). Rather, the problem comes when people claim that somehow these companies are immortal — to say that is to reject all past evidence, and to claim that somehow there will never be another generational change in tech, which seems unwise.

    In this understanding of tech dominance, the driver of generational change is a paradigm shift: from mainframes to personal computers, from desktop applications to the web, first on personal computers, and then on mobile. Each shift brought a new company to dominance, and when the next shift arrives, so will new companies rise to prominence.

    What, though, is the next shift?

    Paradigm Shifts

    There is an implication in the “generational change is inevitable” argument that paradigm shifts are sui generis. The personal computer was a discrete event, the Internet another, and mobile a third. Now we are simply waiting to see what is next — perhaps augmented reality, or voice assistants.

    In fact, I would argue the opposite: the critical paradigm shifts in technology, which drove the generational changes that Evans wrote about, are part of a larger pattern.

    Start with the mainframe: the primary interaction model was punched cards; to execute a program you had to insert your cards into a card reader and wait for the computer to read the program into memory, execute it, and give you the results. Computing was done in batches, because the I/O layer was directly linked to the application and data layer.

    This explains why personal computers were so revolutionary: instead of one large shared computer for which you had to wait your turn, a user could access their own computer on their own desk whenever they wanted. Still, the personal computer, particularly in a corporate environment, lived alongside not just mainframes but increasingly servers on an intranet. The I/O layer and application and data layers were being pulled apart, but both were destinations: you had to go to your desk and be on the network to compute.

    This last point gets at why the cloud and mobile, which are often thought of as two distinct paradigm shifts, are very much connected: the cloud meant applications and data could be accessed from anywhere; mobile made the I/O layer available anywhere. The combination of the two makes computing continuous.

    The evolution of computing from the mainframe to cloud and mobile

    What is notable is that the current environment appears to be the logical endpoint of all of these changes: from batch-processing to continuous computing, from a terminal in a different room to a phone in your pocket, from a tape drive to data centers all over the globe. In this view the personal computer/on-premises server era was simply a stepping stone between two ends of a clearly defined range.

    The End of the Beginning

    The implication of this view should at this point be obvious, even if it feels a tad heretical: there may not be a significant paradigm shift on the horizon, nor the generational change that goes with it. And, to the extent there are evolutions, it really does seem like the incumbents have insurmountable advantages: the hyperscalers in the cloud are best placed to handle the torrent of data from the Internet of Things, while new I/O devices like augmented reality, wearables, or voice are natural extensions of the phone.

    In other words, today’s cloud and mobile companies — Amazon, Microsoft, Apple, and Google — may very well be the GM, Ford, and Chrysler of the 21st century. The beginning era of technology, when new challengers emerged every year, has come to an end; however, that does not mean the impact of technology is somehow diminished: in fact, the impact is only getting started.

    Indeed, this is exactly what we see in consumer startups in particular: few companies are pure “tech” companies seeking to disrupt the dominant cloud and mobile players; rather, they take their presence as an assumption, and seek to transform society in ways that were previously impossible when computing was a destination, not a given. That is exactly what happened with the automobile: its existence stopped being interesting in its own right, while the implications of its existence changed everything.

    I wrote a follow-up to this article in this Daily Update.


    1. These numbers are from this Wikipedia article, supplemented with this Wikipedia article; I did not count steam-based automobile makers, motorcycle makers, buggies, or tractor makers 


  • The 2019 Stratechery Year in Review

    2019 was a transition year for me personally, and by extension, Stratechery. The nadir was an Article I regret — The WeWork IPO — where, despite not believing in the company, I wrote the contrarian take because it seemed more interesting (my full mea culpa is here). It was not a healthy approach, but, six years after starting Stratechery, and five years of it being my full-time job, it was perhaps an understandable one.

    The turning point was The China Cultural Clash; writing about the crisis surrounding the NBA in China and the implications for the technology industry reminded me of something I had started to forget in my attempt to be even-handed and dispassionate in my analysis: values matter, and there is a freedom that comes from recognizing and articulating those values. Indeed, honesty about values makes analysis better, because underlying assumptions are pushed to the forefront, instead of fading to the background — and, I’d add, it is invigorating! On to 2020!

    A drawing of Teams and The Enterprise Growth Framework

    This year I wrote 139 Daily Updates (including tomorrow) and 36 Weekly Articles, and, as per tradition, today I summarize the most popular and most important posts of the year.

    You can find previous Stratechery Years in Review here: 2018 | 2017 | 2016 | 2015 | 2014 | 2013

    A drawing of The Three-Way Market of a Super-Aggregator

    Here is the 2019 list.

    The Five Most-Viewed Articles

    1. The WeWork IPO — This, to be perfectly frank, is brutal: I not only regret this article for being insufficiently negative (although, for the record, I was clear I would not invest in WeWork), but now also have to face the fact it was my most popular post of the year. Related: Uber Questions, which took the boring skeptical approach to an S-1 I should have repeated.
    2. The Google Squeeze — Google, the real Aggregator, is squeezing OTAs, which acted like Aggregators while depending on Google for demand. It’s easy to say Google is being unfair, but this may be better for consumers.
    3. Shopify and the Power of Platforms — It is all but impossible to beat an Aggregator head-on, as Walmart is trying to do with Amazon. The solution instead is to build a platform like Shopify.
    4. Disney and the Future of TV — TV is moving from a world where distribution dictates business models to one where business models need to fit the jobs consumers want done. That is the best way to understand Disney’s latest announcement.
    5. AWS, MongoDB, and the Economic Realities of Open Source — Amazon’s latest offering highlights the economic challenges facing open source companies — and Amazon should pay attention.
    A drawing of The Shopify Ecosystem

    Lessons Learned

    The good thing about making mistakes is that it is an opportunity to learn; two of these articles are directly connected to WeWork.

    • Neither, and New: Lessons from Uber and Vision Fund — Uber represents something new: a company that is different than incumbents because of technology, yet not itself a tech company — just like the Vision Fund is not a VC.
    • What is a Tech Company? — The question of “What is a tech company” comes down to how much software and its unique characteristics affects the company’s core business.
    • The Value Chain Constraint — Companies succeed or fail not based on technology but rather according to their ability to integrate within their value chains.
    A drawing of Amazon's Value Chain

    Values and Society

    Almost all decisions involve trade-offs; the most difficult are those that require understanding and prioritizing our values.

    • The Internet and the Third Estate — Mark Zuckerberg suggested that social media is a “Fifth Estate”; in fact, social media is a means by which the Third Estate — commoners — can seize political power. Here history matters. Related: Tech and Liberty and the policing of political speech.
    • The China Cultural Clash — The NBA controversy in China highlights a culture clash that both tech companies and the U.S. government need to take to heart. Plus, why TikTok being Chinese is increasingly a problem. Related: China, Leverage, and Values, about the trade war.
    • Privacy Fundamentalism — The current privacy debate is making things worse by not considering trade-offs, the inherent nature of digital, or the far bigger problems that come with digitizing the offline world.
    A drawing of The Synergy Between Tech Companies and Venture Capitalists

    Regulation and Antitrust

    While regulation was also a theme in 2018, this year I tried to get much more specific about how to think about the challenges presented by the Internet.

    • Where Warren’s Wrong — Senator Warren’s proposal about how to regulate tech is wrong about history, the source of tech giants’ power, and the fundamental nature of technology itself. That doesn’t mean there aren’t real problems — and potential solutions — though.
    • Tech and Antitrust — A review of the potential antitrust cases against Google, Apple, Facebook, and Amazon suggests that only Google is vulnerable.
    • Three Frameworks:
    A drawing of The Regulatory Framework for the Internet

    The Big Tech Companies

    Tech’s continued centralization means that the biggest companies — Microsoft, Apple, Google, Amazon, and Facebook — receive the largest scrutiny.

    A drawing of Google's Ambient Computing

    Media and Technology

    The most important development of the year in media was the launch of Disney+; I already linked to Disney and the Future of TV. Also:

    • Netflix Flexes — Netflix is an Aggregator, with a value chain that lets it drive demand, raise prices, and dismiss competition.
    • Spotify’s Podcast Aggregation Play — Spotify is making a major move into podcasts, where it appears to have clear designs to be the sort of Aggregator it cannot be when it comes to music.
    • The BuzzFeed Lesson — The lesson of BuzzFeed is that dominant Aggregators like Facebook have no incentive to act against their self interest and support suppliers. Related: The Cost of Apple News.
    A drawing of The Music Value Chain Versus the Podcast Value Chain

    The Year in Daily Updates

    This year the Daily Update not only continued the trend towards single topics, but often became the place where new ideas and future Weekly Articles were first presented and fleshed out. I’m really proud of this evolution — this was a hard list to cull. Some of my favorites:

    A drawing of Facebook and Amazon's Approach To Take On Apple and Google

    I also conducted six interviews for The Daily Update:

    Netflix’s value chain

    I can’t say it enough: I am so grateful to Stratechery’s readers and especially subscribers for making all of these posts possible. I wish all of you a Merry Christmas and Happy New Year, and I’m looking forward to a great 2020!


  • A Framework for Regulating Competition on the Internet

    A prompt for writing this piece is a conference I will be participating in tomorrow entitled Antitrust in Times of Upheaval — A Global Conversation; you can livestream the conference here.

    I opened 2017’s Defining Aggregators by stating:

    Aggregation Theory describes how platforms (i.e. aggregators) come to dominate the industries in which they compete in a systematic and predictable way. Aggregation Theory should serve as a guidebook for aspiring platform companies, a warning for industries predicated on controlling distribution, and a primer for regulators addressing the inevitable antitrust concerns that are the endgame of Aggregation Theory.

    This Article is explicitly related to that piece: unlike most Stratechery items, this Article is not based on a specific news event that happened recently, but is rather an attempt to collect a number of ideas and thoughts I have expressed in different Articles, Daily Updates, and Podcasts about Aggregators, platforms, and regulation. I will link to several of those Articles throughout.

    Platform And Aggregator Structure

    The most important place to start is by pointing out the excerpt above makes what I now believe is a critical mistake: it conflates platforms and Aggregators. In fact, I believe platforms and Aggregators are fundamentally different entities, and understanding how and why they are different is the single most important task facing would-be regulators.

    I explored these differences in 2018’s Tech’s Two Philosophies, The Moat Map, and The Bill Gates Line. This is how I illustrated platforms:

    A diagram of a platform

    The name “platform” is a descriptive one: it is the foundation on which entire ecosystems are built. The most famous example of a platform — one with which regulators are intimately familiar — is Microsoft Windows. Windows provided an operating system for personal computers, a set of APIs for developers, and a user interface for end users, to the benefit of all three groups: developers could write applications that made personal computers useful to end users, thanks to the Windows platform tying everything together.

    What is critical to note about Windows, though — and this extends to newer platforms like iOS and Android — is that it was essential for the ecosystem to function. Developers could not write applications for another operating system if they wanted to reach users, and users could not use a different operating system if they wanted to use popular applications.

    Aggregators are different. This is how I illustrate them:

    A diagram of an Aggregator

    “Aggregator” is also descriptive: Aggregators collect a critical mass of users and leverage access to those users to extract value from suppliers. The best example of an Aggregator is Google. Google offered a genuine technological breakthrough with Google Search that made the abundance of the Internet accessible to users; as more and more users began their Internet sessions with Google, suppliers — in this case websites — competed to make their pages more attractive and better suited to Google, the better to acquire end users from Google, which made Google that much better and more attractive to end users.

    Notably, unlike platforms, Google is not essential for either end users or 3rd party websites. There is no “Google API” that makes 3rd party websites functional, and there are alternative search engines or simply the URL bar for users to go directly to 3rd party websites. That Google is so influential and profitable is, first and foremost, because end users continue to prefer it.1

    Here is a way to visualize the difference:

    • Platforms facilitate a relationship between users and 3rd-party developers:

      A platform value chain is interdependent

    • Aggregators intermediate the relationship between users and 3rd-party developers:

      An Aggregator intermediates supply and demand

    To be clear, both roles can be beneficial — platforms make the relationship between users and 3rd-parties possible, and Aggregators help users find 3rd-parties in the first place — and both roles can also be abused.

    Platform and Aggregator Abuse

    The potential impacts on competition by platforms and Aggregators are broadly similar, differing mostly by degree:

    Vertical foreclosure: Platforms can make it impossible for 3rd-parties to function on their platform, either through technological means or, in the case of smartphone platforms, by leveraging their gatekeeper role in terms of App Stores.

    Aggregators can also ban 3rd-parties — Google can remove a site from search, or Facebook can remove links from the News Feed — but they cannot force that site to simply not exist. Users can still reach those sites via other search engines, links, or by typing in the URL.

    Rent-Seeking: Instead of blocking third-parties, platforms can simply extract money; this often works in conjunction with the threat of foreclosure. Apple, for example, does not allow apps that have their own payment processor, or that even link to a website with payment functionality; it is, however, happy to offer its own payment processor, complete with a 30% fee for Apple.

    Aggregators, meanwhile, make it increasingly difficult to reach end users without paying for advertising; Google and its expansion into vertical search categories like travel is a perfect example. However, Aggregators do not have as strong a vertical foreclosure stick as platforms do.

    Tying/Bundling: Platforms can include additional functionality or applications that have nothing to do with the platform, but rather leverage the platform to gain market share. The most famous example of this is Windows bundling Internet Explorer, the legality of which was never settled in the United States; the Appeals Court remanded the case to the District Court because of concern a per se application would chill innovation (and in this case Microsoft has been proven right: all operating systems come with browsers now).

    This is arguably the true Aggregator stick when it comes to rent-seeking: to return to the travel example, online travel agents may not be happy about paying to be a part of Google Search’s travel module and the hit on margins that entails, but it is better than Google launching its own OTA.

    Self-Dealing: Platforms can give their own products advantages, often through special APIs that are not available to competitive products. For example, real-time co-authoring of Microsoft Office documents only works in the Office desktop apps if the documents were opened from OneDrive, but not if opened from Dropbox or Box.

    For Aggregators, this advantage is more about putting their competitive product in front of users. Google local results, for example, are not even listed in the Google index, yet they are inserted at the top of the search engine results page (SERP).

    The question for regulators is when these abuses should lead to action.

    Principles of Regulation

    That certain companies have advantages or earn sustainable profits is not a sufficient reason for regulators to act; innovation deserves its reward. Moreover, the presence of abuses like those detailed above is also not necessarily a sufficient reason for regulators to act: regulation can have a chilling effect on innovation, it can be ineffective, and it can impose significant opportunity costs on both companies and regulators. Regulators should focus their attention and resources on abuses for which there is no other recourse than regulation.

    This is where the distinction between platforms and Aggregators is critical. Platforms are the most powerful economic and innovation engines in technology: they create the possibility for products that never existed previously, and are the foundation for huge amounts of innovation. It is in the interest of society that there be more and larger platforms, not fewer and smaller.

    At the same time, the danger of platform abuse is significantly greater, because users and 3rd-party developers have no other alternative. That means that not only are anticompetitive actions unfair to products that already exist, they also foreclose the creation of an untold number of new products. To that end, regulators should simultaneously encourage the formation of new platforms while ensuring those platforms do not abuse their position.

    From a practical standpoint, this means that platforms should have significant latitude in mergers and acquisitions, but significant scrutiny in terms of vertical foreclosure, rent-seeking, bundling, and self-dealing.

    Apple is the pre-eminent example here: the iPhone specifically and the entire iPhone ecosystem generally has benefitted tremendously not only from Apple’s internally-created innovations but also from acquisitions like P.A. Semi, which led to the creation of Apple’s A-series of chips. However, the combination of Apple’s total control over 3rd-party app installation and rent-seeking on in-app payments has, in my estimation, stunted innovation and opportunity in the app ecosystem.

    Aggregators are different. Yes, they provide value to end users and to third-parties, at least for a time, but the incentives are warped from the beginning: 3rd-parties are not actually incentivized to serve users well, but rather to make the Aggregator happy. The implication from a societal perspective is that the economic impact of an Aggregator is much more self-contained than that of a platform, which means there is correspondingly less of a concern about limiting Aggregator growth.

    For the same reason, though, Aggregators are less of a problem. Third parties can — and should! — go around Aggregators to connect to consumers directly; the presence of an Aggregator is just as likely to spur innovation on the part of a third party in an attempt to attract consumers without having to pay an Aggregator for access. Moreover, there is a Sisyphean aspect to regulating power predicated on consumer choice: look no further than the European Union, where regulators are frustrated that remedies for the Google shopping case aren’t working, even though those same regulators were happy with the remedies in theory; the problem was trying to regulate consumer choice in the first place.

    It follows, then, that regulatory priorities should be the opposite of those for platforms: given that Aggregator power comes from controlling demand, regulators should look at the acquisition of other potential Aggregators with extreme skepticism. At the same time, whatever an Aggregator chooses to do on its own site or app is less important, because users and third parties can always go elsewhere, and if they don’t, that is because they are satisfied.

    Here Facebook is a useful example: the company’s competitive position would be considerably shakier — and the consumer ad-supported ecosystem considerably healthier — if it had not acquired Instagram and WhatsApp, two other consumer-facing apps. At the same time, Facebook’s specific policies around what does or does not appear on its apps, or how it organizes its feed, have no reason to be a regulatory concern; I would argue the same thing when it comes to Google’s search results.

    There is one additional area where regulators should focus: advertising. Advertising is a core component of Super-Aggregators like Facebook and Google because the incentives are perfectly aligned: Facebook and Google want to serve everyone, which means they want to be free, while advertisers want to have access to everyone, which means coalescing around the largest Aggregators.

    First, this results in a type of market failure when it comes to problematic content, as I detailed in A Framework for Regulating Content on the Internet. Second, the sheer scale of the core Aggregator confers a massive advantage that can be applied elsewhere; Google in particular is much more of an inescapable platform when it comes to advertising on third-party sites; this is where regulatory scrutiny should be focused.


    There is one final component to this analysis: if platforms and Aggregators are to be treated differently, regulators need a more flexible way of determining the correct time to step in.

    Consider the three regulatory issues that I implicitly suggested deserve more attention in this piece: Apple’s App Store policies, Facebook’s acquisitions, and Google’s third-party advertising offerings. None of them fit under a popular conception of a monopoly: Apple sells a minority of smartphones, Facebook acquired Instagram when it had only 30 million users, and the advertising market is both not consumer-facing and has infinite supply.

    That doesn’t mean harms don’t exist, though: the apps and services that aren’t created, the advertising-based consumer services that are under-monetized (Snapchat and Twitter) or that aren’t even being funded, and the multitude of websites that can’t realistically even try innovations that entail going around Google.

    That, though, is why I am writing this piece. The European Commission’s Google Shopping case is an excellent example of how a lack of clear standards leads to sub-optimal outcomes that don’t actually change anything; at the same time, the European Commission’s investigation of Apple’s App Store will surely benefit from increased flexibility in defining relevant markets.

    Ideally, there would be stricter adherence to better rules, instead of finger-crossing that Brussels gets it right. That, though, likely requires new laws. Indeed, while that seems like a slog, it should, in my estimation, be the focus of those interested in a future where we direct tech’s innovation towards making a larger pie for everyone, as opposed to cutting off slices because it makes us feel better, even if only temporarily.

    I wrote a follow-up to this article in this Daily Update.


    1. Google also, to be clear, pays significant amounts of money to ensure it is the default on iOS, and basically built Android to ensure it is the default everywhere else. The European Commission correctly found Google’s contractual moves to secure its position on Android illegal