Tulips, Myths, and Cryptocurrencies

Everyone knows about the Tulip Bubble, made famous by Charles Mackay in his 1841 book Extraordinary Popular Delusions and the Madness of Crowds:

In 1634, the rage among the Dutch to possess [tulips] was so great that the ordinary industry of the country was neglected, and the population, even to its lowest dregs, embarked in the tulip trade. As the mania increased, prices augmented, until, in the year 1635, many persons were known to invest a fortune of 100,000 florins in the purchase of forty roots. It then became necessary to sell them by their weight in perits, a small weight less than a grain. A tulip of the species called Admiral Liefken, weighing 400 perits, was worth 4400 florins; an Admiral Van der Eyck, weighing 446 perits, was worth 1260 florins; a Childer of 106 perits was worth 1615 florins; a Viceroy of 400 perits, 3000 florins, and, most precious of all, a Semper Augustus, weighing 200 perits, was thought to be very cheap at 5500 florins. The latter was much sought after, and even an inferior bulb might command a price of 2000 florins. It is related that, at one time, early in 1636, there were only two roots of this description to be had in all Holland, and those not of the best. One was in the possession of a dealer in Amsterdam, and the other in Harlaem [sic]. So anxious were the speculators to obtain them, that one person offered the fee-simple of twelve acres of building-ground for the Harlaem tulip. That of Amsterdam was bought for 4600 florins, a new carriage, two grey horses, and a complete suit of harness.

Mackay goes on to recount other tall tales; I’m partial to the sailor who thought a Semper Augustus bulb was an onion, and stole it for his breakfast; “Little did he dream that he had been eating a breakfast whose cost might have regaled a whole ship’s crew for a twelvemonth.”


Anyhow, we all know how it ended:

At first, as in all these gambling mania, confidence was at its height, and everybody gained. The tulip-jobbers speculated in the rise and fall of the tulip stocks, and made large profits by buying when prices fell, and selling out when they rose. Many individuals grew suddenly rich. A golden bait hung temptingly out before the people, and one after the other, they rushed to the tulip-marts, like flies around a honey-pot. Every one imagined that the passion for tulips would last for ever…

At last, however, the more prudent began to see that this folly could not last for ever. Rich people no longer bought the flowers to keep them in their gardens, but to sell them again at cent per cent profit. It was seen that somebody must lose fearfully in the end. As this conviction spread, prices fell, and never rose again. Confidence was destroyed, and a universal panic seized upon the dealers…The cry of distress resounded every where, and each man accused his neighbour. The few who had contrived to enrich themselves hid their wealth from the knowledge of their fellow-citizens, and invested it in the English or other funds. Many who, for a brief season, had emerged from the humbler walks of life, were cast back into their original obscurity. Substantial merchants were reduced almost to beggary, and many a representative of a noble line saw the fortunes of his house ruined beyond redemption.

Thanks to Mackay’s vivid account, tulips are a well-known cautionary tale, applied to asset bubbles of all types. Here’s the problem, though: there’s a decent chance Mackay’s account is completely wrong.

The Truth About Tulips

In 2006, UCLA economist Earl Thompson wrote a paper entitled The Tulipmania: Fact or Artifact?1 that includes a chart that looks just like Mackay’s bubble:

[Chart: tulip prices, seemingly tracing Mackay’s bubble]

However, as Thompson wrote in the paper, “appearances are sometimes quite deceiving.” A much more accurate chart looks like this:

[Chart: a more accurate picture of tulip prices]

Mackay was right that there were insanely high prices: those prices, though, were for options; if the actual price of tulips were lower on the strike date for the options, then the owner of the option only needed to pay a small percentage of the contract price (ultimately 3.5%). Meanwhile, though, actual spot prices and futures (that locked in a price) stayed flat.
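Thompson’s mechanism is easy to make concrete. Below is a minimal sketch, in Python, of the payoff difference between a true futures contract and the retroactively converted options; the 3.5% exercise fee is from Thompson’s paper, while the prices are hypothetical:

```python
# Payoff difference between a futures contract and the retroactively
# converted options Thompson describes. The 3.5% exercise fee is from
# his paper; the contract and spot prices below are hypothetical.

def futures_settlement(contract_price: float) -> float:
    """A futures buyer owes the full contract price, whatever spot does."""
    return contract_price

def option_settlement(contract_price: float, spot_price: float,
                      exercise_fee: float = 0.035) -> float:
    """After the decree, a buyer could walk away from a contract for a
    small fraction of its price if the spot price fell below it."""
    if spot_price >= contract_price:
        return contract_price  # worth exercising at the contract price
    return round(exercise_fee * contract_price, 2)  # abandon; pay only the fee

contract, spot = 5000.0, 500.0  # hypothetical prices in florins
print(futures_settlement(contract))       # 5000.0
print(option_settlement(contract, spot))  # 175.0
```

With downside capped at 3.5% of the contract price, bidders could rationally agree to eye-popping contract prices: a 5,000-florin “price” put only 175 florins at risk. That, in Thompson’s telling, is what Mackay’s chart is actually recording.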

The broader context comes from this chart:

[Chart: tulip prices in broader context]

As Thompson explains, tulips in fact were becoming more popular, particularly in Germany, and, as the first phase of the Thirty Years War wound down, it looked like Germany would be victorious, which would mean a better market for tulips. In early October, 1636, though, Germany suffered an unexpected defeat, and the tulip price crashed, not because it was irrationally high, but because of an external shock.

As Thompson recounts, that October crash was in fact a financial disaster for many, including some public officials who had bought tulip futures on a speculative basis; to get themselves out of trouble, said officials retroactively decreed that futures were in fact options. These deliberations were well-publicized throughout the winter of 1636 and early 1637, but not made official until February 24th; the dramatic rise in options, then, is explained as a longshot bet that the conversion would not actually take place: when it did, the price of the options naturally dropped to the spot price.2

By Thompson’s reckoning, Mackay’s entire account was a myth.

Myths and Humans

Early on in Sapiens: A Brief History of Humankind, Yuval Noah Harari explains the importance of myth:

Once the threshold of 150 individuals is crossed, things can no longer work [on the basis of intimate relations]…How did Homo sapiens manage to cross this critical threshold, eventually founding cities comprising tens of thousands of inhabitants and empires ruling hundreds of millions? The secret was probably the appearance of fiction. Large numbers of strangers can cooperate successfully by believing in common myths.

Any large-scale human cooperation — whether a modern state, a medieval church, an ancient city or an archaic tribe — is rooted in common myths that exist only in people’s collective imagination. Churches are rooted in common religious myths. Two Catholics who have never met can nevertheless go together on crusade or pool funds to build a hospital because they both believe that God was incarnated in human flesh and allowed Himself to be crucified to redeem our sins. States are rooted in common national myths. Two Serbs who have never met might risk their lives to save one another because both believe in the existence of the Serbian nation, the Serbian homeland and the Serbian flag. Judicial systems are rooted in common legal myths. Two lawyers who have never met can nevertheless combine efforts to defend a complete stranger because they both believe in the existence of laws, justice, human rights – and the money paid out in fees. Yet none of these things exists outside the stories that people invent and tell one another. There are no gods in the universe, no nations, no money, no human rights, no laws, and no justice outside the common imagination of human beings.

The implication of Harari’s argument3 is pretty hard to wrap one’s head around.4 Take the term “tulip bubble”: everyone knows it is in reference to a speculative mania that will end in a crash, even those like me — and now you — who have learned what actually happened in the Netherlands in the winter of 1636. Like I said, it’s a myth — and myths matter.

The Rise in Cryptocurrencies

The reason I mention the tulip bubble at all is probably obvious:

[Chart: total market capitalization of all cryptocurrencies]

This is the total market capitalization of all cryptocurrencies. To date that has mostly meant Bitcoin, but over the last two months Bitcoin’s share of cryptocurrency capitalization has actually plummeted to less than 50%, thanks to the sharp rise of Ethereum and Ripple in particular:


As you might expect, the tulip is having a renaissance, or, to be more precise, our shared myth of the tulip bubble is. This tweet summed up the skeptics’ sentiment well:

To be perfectly clear, this random twitterer may very well be correct about an impending crash. And, in the grand scheme of things, it is mostly true today that cryptocurrencies don’t have meaningful “industrial [or] consumer use except as a medium of exchange.” What he is most right about, though, is that cryptocurrencies have no intrinsic value.

Compare cryptocurrencies to, say, the U.S. dollar. The U.S. dollar is worth, well, a dollar because…well, because the United States government says it is.5 And because currency traders have established it as such, relative to other currencies. And the worth of those currencies is based on…well, like the dollar, they are based on a mutual agreement of everyone that they are worth whatever they are worth. The dollar is a myth.

Of course this isn’t a new view: there are still those who believe it was a mistake to move the dollar off of the gold standard: that was a much more concrete definition. After all, you could always exchange one dollar for a fixed amount of gold, and gold, of course, has intrinsic value because…well, because we humans think it looks pretty, I guess. In fact, it turns out gold — at least the idea that it is of intrinsically more worth than another mineral — is another myth.

I would argue that cryptocurrency broadly, and Bitcoin especially, are no different. Bitcoin has been around for eight years now; it has captured the imagination, ingenuity, and investment of a massive number of very smart people, and it is increasingly trivial to convert it to the currency of your choice. Can you use Bitcoin to buy something from the shop down the street? Well, no, but you can’t very well use a piece of gold either, and no one argues that the latter isn’t worth whatever price the gold market is willing to bear. Gold can be converted to dollars which can be converted to goods, and Bitcoin is no different. To put it another way, enough people believe that gold is worth something, and that is enough to make it so, and I suspect we are well past that point with Bitcoin.

The Utility of Blockchains

To be fair, there is an argument that gold is valuable because it does have utility beyond ornamentation (I, of course, would argue that that is a perfectly valuable thing in its own right): for example, gold is used in electronics and dentistry. An argument based on utility, though, applies even more so to cryptocurrencies. I wrote back in 2014:

The defining characteristic of anything digital is its zero marginal cost…Bitcoin and the breakthrough it represents, broadly speaking, changes all that. For the first time something can be both digital and unique, without any real world representation. The particulars of Bitcoin and its hotly-debated value as a currency I think cloud this fact for many observers; the breakthrough I’m talking about in fact has nothing to do with currency, and could in theory be applied to all kinds of objects that can’t be duplicated, from stock certificates to property deeds to wills and more.

One of the big recent risers, Ethereum, is exactly that: Ethereum is based on a blockchain,6 like Bitcoin, which means it has an attached currency (Ether) that incentivizes miners to verify transactions. However, the protocol includes smart contract functionality, which means that two untrusted parties can engage in a contract without a third-party enforcement entity.7
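The “digital but unique” property comes from the chain itself: each block commits to the hash of the block before it, so history cannot be quietly rewritten. Here is a deliberately toy hash chain in Python (no mining, networking, or smart contracts, just the tamper-evidence that makes everything else possible):

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash a block's contents, including the previous block's hash."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain: list, data: str) -> None:
    """Append a block that commits to the hash of the block before it."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "data": data})

def is_valid(chain: list) -> bool:
    """Tampering with any earlier block breaks every later link."""
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True

chain: list = []
add_block(chain, "Alice pays Bob 1 coin")
add_block(chain, "Bob pays Carol 1 coin")
print(is_valid(chain))  # True
chain[0]["data"] = "Alice pays Bob 100 coins"  # rewrite history
print(is_valid(chain))  # False
```

Real blockchains add proof-of-work and a peer-to-peer network on top of this structure, but the linked hashes are what make a digital record unique and un-duplicable in the sense described above.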

One of the biggest applications of this functionality is, unsurprisingly, other cryptocurrencies. The last year in particular has seen an explosion in Initial Coin Offerings (ICOs), usually on Ethereum. In an ICO a new blockchain-based entity is created, with the initial “tokens” — i.e. currency — being sold (for Ether or Bitcoin). These initial offerings are, at least in theory, valuable because the currency will, if the application built on the blockchain is successful, increase in value over time.

This has the potential to be particularly exciting for the creation of decentralized networks. Fred Ehrsam explained on the Coinbase blog:

Historically it has been difficult to incentivize the creation of new protocols as Albert Wenger points out. This has been because 1) there had been no direct way to monetize the creation and maintenance of these protocols and 2) it had been difficult to get a new protocol off the ground because of the chicken and the egg problem. For example, with SMTP, our email protocol, there was no direct monetary incentive to create the protocol — it was only later that businesses like Outlook, Hotmail, and Gmail started using it and made a real business on top of it. As a result we see very successful protocols and they tend to be quite old. (Editor: and created when the Internet was government-supported)

Now someone can create a protocol, create a token that is native to that protocol, and retain some of that token for themselves and for future development. This is a great way to incentivize creators: if the protocol is successful, the token will go up in value…In addition, tokens help solve the classic chicken and the egg problem that many networks have…the value of a network goes up a lot when more people join it. So how do you get people to join a brand new network? You give people partial ownership of the network…


These two incentives are amazing offsets for each other. When the network is less populated and useful you now have a stronger incentive to join it.

This is a huge deal, and probably the most viable way out from the antitrust trap created by Aggregation Theory.

Party Like It’s 1999

The problem, of course, is that while blockchain applications make sense in theory, the road to them becoming a reality is still a long one. That is why I suspect the better analogy for blockchain-based applications and their associated cryptocurrencies is not tulips but rather the Internet itself, specifically the 1990s. Marc Andreessen is fond of observing, most recently on this excellent podcast with Barry Ritholtz, that all of the dot-com failures turned out to be viable businesses: they were just 15 years too early (the most recent example: Chewy.com, the spiritual heir of famed dot-com bust Pets.com, acquired earlier this year for $3.35 billion).

As the aphorism goes, being early (or late) is no different than being wrong, and that’s true in a financial sense. As I noted above, I would not be surprised if the ongoing run-up in cryptocurrency prices proves to be, well, a bubble. However, bubbles of irrationality and bubbles of timing are fundamentally different: one is based on something real (the latter), and one is not. That is to say, one is a myth, and one is merely a fable — and myths can lift an entire species.

Consistent with my ethics policy, I do not own any Bitcoin or any other cryptocurrency; that said, the implication of this article is that comparing Bitcoin or any other cryptocurrency to stock in an individual company probably doesn’t make much sense.

  1. This link requires payment; there is an uploaded version of the paper here
  2. Thompson’s take is not without its critics: see Brad DeLong’s takedown here
  3. If you’re religious, please apply the point about “gods” to other religions — the point still stands!
  4. I first encountered this sort of thinking in an Introduction to Constitutional Law course in university, when my professor contended that the U.S. Constitution was simply a shared myth, dependent on the mutual agreement of Americans and its leaders that it mattered. It’s a lesson that has served me well.
  5. And, as @nosunkcosts notes, said claim, via taxes, is backed by military might.
  6. A useful overview of how cryptocurrencies work is here.
  7. What happened with The Dao will not be covered here!

Boring Google

My favorite part of keynotes is always the opening. That is the moment when the CEO comes on stage, not to introduce new products or features, but rather to create the frame within which new products and features will be introduced.

This is why last week’s Microsoft keynote was so interesting: CEO Satya Nadella spent a good 30 minutes on the framing, explaining a new world where the platform that mattered was not a distinct device or a particular cloud, but rather one that ran on all of them. In this framing Microsoft, freed from a parochial focus on its own devices, could be exactly that; the problem, as I noted earlier this week, is that platforms come from products, and Microsoft is still searching for an on-ramp other than Windows.

The opening to Google I/O couldn’t have been more different. There was no grand statement of vision, no mind-bending re-framing of how to think about the broader tech ecosystem, just an affirmation of the importance of artificial intelligence — the dominant theme of last year’s I/O — and how it fit in with Google’s original vision. CEO Sundar Pichai said in his prepared remarks:

It’s been a very busy year since last year, no different from my 13 years at Google. That’s because we’ve been focused ever more on our core mission of organizing the world’s information. And we are doing it for everyone, and we approach it by applying deep computer science and technical insights to solve problems at scale. That approach has served us very, very well. This is what has allowed us to scale up seven of our most important products and platforms to over a billion users…It’s a privilege to serve users at this scale, and this is all because of the growth of mobile and smartphones.

But computing is evolving again. We spoke last year about this important shift in computing, from a mobile-first, to an AI-first approach. Mobile made us re-imagine every product we were working on. We had to take into account that the user interaction model had fundamentally changed, with multitouch, location, identity, payments, and so on. Similarly, in an AI-first world, we are rethinking all our products and applying machine learning and AI to solve user problems, and we are doing this across every one of our products.

Honestly, it was kind of boring.

Google’s Go-to-Market Problem

After last year’s I/O I wrote Google’s Go-To-Market Problem, and it remains very relevant. No company benefited more from the open web than Google: the web not only created the need for Google search, but the fact that all web pages were on an equal footing meant that Google could win simply by being the best — and they did.

Mobile has been much more of a challenge: while Android remains a brilliant strategic move, its dominance is rooted more in its business model than in its quality (that’s not to denigrate its quality in the slightest, particularly the fact that Android runs on so many different kinds of devices at so many different price points). The point of Android — and the payoff today — is that Google services are the default on the vast majority of phones.

The problem, of course, is iOS: Apple has the most valuable customers (from a monetization perspective, to be clear), who mostly don’t bother to use different services than the default Apple ones, even if they are, in isolation, inferior. I wrote in that piece:

Yes, it is likely Apple, Facebook, and Amazon are all behind Google when it comes to machine learning and artificial intelligence — hugely so, in many cases — but it is not a fair fight. Google’s competitors, by virtue of owning the customer, need only be good enough, and they will get better. Google has a far higher bar to clear — it is asking users and in some cases their networks to not only change their behavior but willingly introduce more friction into their lives — and its technology will have to be special indeed to replicate the company’s original success as a business.

To that end, I thought there were three product announcements yesterday that suggested Google is on the right track:

Google Assistant

Google Assistant was first announced last year, but it was only available through the Allo messenger app, Google’s latest attempt to build a social product; the company also pre-announced Google Home, which would not ship until the fall, alongside the Pixel phone. You could see Google’s thinking with all three products:

  • Given that the most important feature of a messaging app is whether or not your friends or family also use it, Google needed a killer feature to get people to even download Allo. Enter Google Assistant.

  • Thanks to the company’s bad bet on Nest, Google was behind Amazon in the home. Google Assistant being smarter than Alexa was the best way to catch up.

  • A problem for Google with voice computing is that it is not clear what the business model might be; one alternative would be to start monetizing through hardware, and so the high-end Pixel phone was differentiated by Google Assistant.

All three approaches suffered from the same flaw: Google Assistant was the means to a strategic goal, not the end. The problem, though, is that unlike search, Google Assistant was not yet established as something people should jump through hoops to get: driving Google Assistant usage needs to be the goal; only then can it be leveraged for something else.

To that end Google has significantly changed its approach over the last 12 months.

  • Google Assistant is now available as its own app, both on Android and iOS. No unwanted messenger app necessary.

  • The Google Assistant SDK will allow Google Assistant to be built in to just about anything. Scott Huffman, the VP of Google Assistant, said:

    We think the assistant should be available on all kinds of devices where people might want to ask for help. The new Google Assistant SDK allows any device manufacturer to easily build the Google Assistant into whatever they’re building, speakers, toys, drink-mixing robots, whatever crazy device all of you think up now can incorporate the Google Assistant. We’re working with many of the world’s best consumer brands and their suppliers so keep an eye out for the badge that says “Google Assistant Built-in” when you do your holiday shopping this year.

    This is the exact right approach for a services company.

  • That leads to the Pixel phone: earlier this year Google finally added Google Assistant to Android broadly — built-in, not an app — after having insisted just a few months earlier it was a separate product. The shifting strategy was a big mistake (as, arguably, is the entire program), but at least Google has ended up where they should be: everywhere.

Google Photos

Google Assistant has a long way to go, but there is a clear picture of what success will look like: Google Photos. Pichai bragged that Photos, launched only two years ago, now has over 500 million active users who upload 1.2 billion photos a day. This is a spectacular number for one very simple reason: Google Photos is not the default photo app for Android1 or iOS. Rather, Google has earned all of those photos simply by being better than the defaults, and the basis of that superiority is Google’s machine learning.

Moreover, much like search, Photos gets better the more data it gets, creating a virtuous cycle: more photos means more data which means a better experience which means more users which means more photos. It is already hard to see other photo applications catching up.


Yesterday Google continued to push forward, introducing suggested sharing, shared libraries, and photo books. All utilize vision recognition (for example, you can choose to automatically share pictures of your kids with your significant other) and all make Photos an even better app, which will lead to new users, which will lead to more data.

What is particularly exciting from Google’s perspective is that these updates add a social component: suggested sharing, for example, is self-contained within Google Photos, creating ad hoc private networks with you and your friends. Not only does this help spread Google Photos, it is also a much more viable and sustainable approach to social networking than something like Google Plus. Complex entities like social networks are created through evolution, not top-down design, and they must rely on their creator’s strengths, not weaknesses.

Google Lens

Google Lens was announced as a feature of Google Assistant and Google Photos. From Pichai:

We are clearly at an inflection point with vision, and so today, we are announcing a new initiative called Google Lens. Google Lens is a set of vision-based computing capabilities that can understand what you’re looking at and help you take action based on that information. We’ll ship it first in Google Assistant and Photos, and then other products.

How does it work? If you run into something and you want to know what it is, say a flower, you can invoke Google Lens, point your phone at it and we can tell you what flower it is…Or if you’re walking on a street downtown and you see a set of restaurants across you, you can point your phone, because we know where you are, and we have our Knowledge Graph, and we know what you’re looking at, we can give you the right information in a meaningful way.

As you can see, we are beginning to understand images and videos. All of Google was built because we started understanding text and web pages, so the fact that computers can understand images and videos has profound implications for our core mission.

The profundity cannot be overstated: by bringing the power of search into the physical world, Google is effectively increasing the addressable market of searchable data by a massive amount, and all of that data gets added back into that virtuous cycle. The potential upside is about more than data though: being the point of interaction with the physical world opens the door to many more applications, from things like QR codes to payments.

My one concern is that Google is repeating its previous mistake: that is, seeking to use a new product as a means instead of an end. Limiting Google Lens to Google Assistant and Google Photos risks handicapping Lens’ growth; ideally Lens will be its own app — and thus the foundation for other applications — sooner rather than later.

Make no mistake, none of these opportunities are directly analogous to Google search, particularly the openness of their respective markets or the path to monetization. Google Assistant requires you to open an app instead of using what is built-in (although the Android situation should improve going forward), Photos requires a download instead of the default photos app, and Lens sits on top of both. It’s a far cry from simply setting Google as the home page of your browser, and Google making more money the more people used the Internet.

All three apps, though, are leaning into Google’s strengths:

  • Google Assistant is focused on being available everywhere
  • Google Photos is winning by being better through superior data and machine learning
  • Google Lens is expanding Google’s utility into the physical world

There were other examples too: Google’s focus with VR is building a cross-device platform that delivers an immersive experience at multiple price points, as opposed to Facebook’s integrated high-end approach that makes zero sense for a social network. And, just as Apple invests in chips to make its consumer products better, Google is investing in chips to make its machine learning better.

The Beauty of Boring

This is the culmination of a shift that happened two years ago, at the 2015 Google I/O. As I noted at the time,2 the event was two keynotes in one.

[The first hour was] a veritable smorgasbord of features and programs that [lacked a] unifying vision, just a sense that Google should do them. An operating system for the home? Sure! An Internet of Things language? Bring it on! Android Wear? We have apps! Android Pay? Obviously! A vision for Android? Not necessary!

None of these had a unifying vision, just a sense that Google ought to do them because they’re a big company that ought to do big things.

What was so surprising, though, was that the second hour of that keynote was completely different. Pichai gave a lengthy, detailed presentation about machine learning and neural nets, and tied it to Google’s mission, much like he did in yesterday’s introduction. After quoting Pichai’s monologue I wrote:

Note the specificity — it may seem too much for a keynote, but it is absolutely not BS. And no surprise: everything Pichai is talking about is exactly what Google was created to do…The next 30 minutes were awesome: Google Now, particularly Now on Tap, was exceptionally impressive, and Google Photos looks amazing. And, I might add, it has a killer tagline: Gmail for Photos. It’s so easy to be clear when you’re doing exactly what you were meant to do, and what you are the best in the world at.

This is why I think that Pichai’s “boring” opening was a great thing. No, there wasn’t the belligerence of early Google I/Os, insisting that Android could take on the iPhone. And no, there wasn’t the grand vision of Nadella last week, or the excitement of an Apple product unveiling. What there was was a sense of certainty and almost comfort: Google is about organizing the world’s information, and given that Pichai believes the future is about artificial intelligence, specifically the machine learning variant that runs on data, that means that Google will succeed in this new world simply by being itself.

That is the best place to be, for a person and for a company.

  1. Google Photos was default for the Pixel, and for more and more Android phones that have come out in the past few months.
  2. This is a Daily Update but I have made it publicly available.

WannaCry About Business Models

For the users and administrators of the estimated 200,000 computers affected by “WannaCry” — a number that is expected to rise as new variants come online (the original was killed serendipitously by a security researcher) — the answer to the question implied in the name is “Yes.”


WannaCry is a type of malware called “ransomware”: it encrypts a computer’s files and demands payment to decrypt them. Ransomware is not new; what made WannaCry so destructive was that it was built on top of a computer worm — a type of malware that replicates itself onto other computers on the same network (said network, of course, can include the Internet).

Worms have always been the most destructive type of malware — and the most famous: even non-technical readers may recognize names like Conficker (an estimated $9 billion of damage in 2008), ILOVEYOU (an estimated $15 billion of damage in 2000), or MyDoom (an estimated $38 billion of damage in 2004). There have been many more, but not so many the last few years: the 2000s were the sweet spot when it came to hundreds of millions of computers being online with an operating system — Windows XP — that was horrifically insecure, operated by users given to clicking and paying for scams to make the scary popups go away.

Over the ensuing years Microsoft has, from Windows XP Service Pack 2 on, gotten a lot more serious about security, network administrators have gotten a lot smarter about locking down their networks, and users have at least improved at not clicking on things they shouldn’t. Still, as this last weekend shows, worms remain a threat, and as usual, everyone is looking for someone to blame. This time, though, there is a juicy new target: the U.S. government.

The WannaCry Timeline

Microsoft President and Chief Legal Officer Brad Smith didn’t mince any words on The Microsoft Blog (“WannaCrypt” is an alternative name for WannaCry):

Starting first in the United Kingdom and Spain, the malicious “WannaCrypt” software quickly spread globally, blocking customers from their data unless they paid a ransom using Bitcoin. The WannaCrypt exploits used in the attack were drawn from the exploits stolen from the National Security Agency, or NSA, in the United States. That theft was publicly reported earlier this year. A month prior, on March 14, Microsoft had released a security update to patch this vulnerability and protect our customers. While this protected newer Windows systems and computers that had enabled Windows Update to apply this latest update, many computers remained unpatched globally. As a result, hospitals, businesses, governments, and computers at homes were affected.

Smith mentions a number of key dates, but it’s important to get the timeline right, so let me summarize it as best as I understand it:1

  • 2001: The bug in question was first introduced in Windows XP and has hung around in every version of Windows since then
  • 2001–2015: At some point the NSA (likely the Equation Group, allegedly a part of the NSA) discovered the bug and built an exploit named EternalBlue, and may or may not have used it
  • 2012–2015: An NSA contractor allegedly stole more than 75% of the NSA’s library of hacking tools
  • August, 2016: A group called “ShadowBrokers” published hacking tools they claimed were from the NSA; the tools appeared to come from the Equation Group
  • October, 2016: The aforementioned NSA contractor was charged with stealing NSA data
  • January, 2017: ShadowBrokers put a number of Windows exploits up for sale, including an SMB zero-day exploit — likely the “EternalBlue” exploit used in WannaCry — for 250 BTC (around $225,000 at that time)
  • March, 2017: Microsoft, without fanfare, patched a number of bugs without giving credit to whoever discovered them; among them was the bug exploited by EternalBlue, and it seems very possible the NSA warned them
  • April, 2017: ShadowBrokers released a new batch of exploits, including EternalBlue, perhaps because Microsoft had already patched them (dramatically reducing the value of zero-day exploits in particular)
  • May, 2017: WannaCry, based on the EternalBlue exploit, was released and spread to around 200,000 computers before its kill switch was inadvertently triggered; new versions have already begun to spread
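The kill switch mentioned in that last bullet was strikingly simple: before doing anything else, the worm tried to contact a hard-coded, unregistered domain, and halted if the domain answered — which is exactly what happened once a researcher registered it. A minimal sketch of that logic (the domain below is a placeholder, not the actual one from the malware):

```python
import socket

# Placeholder, not the real hard-coded domain from WannaCry.
KILL_SWITCH_DOMAIN = "unregistered-killswitch-example.test"

def kill_switch_triggered(domain: str = KILL_SWITCH_DOMAIN, port: int = 80) -> bool:
    """Return True if the domain resolves and accepts a TCP connection.

    WannaCry halted when this check succeeded, so registering the real
    domain acted as a global off switch for that version of the worm.
    """
    try:
        socket.create_connection((domain, port), timeout=5).close()
        return True   # domain is live: stop spreading
    except OSError:   # DNS failure, refusal, or timeout
        return False  # domain unreachable: keep going
```

Note how fragile this is: the check was trivial to find and remove, which is part of why new versions without it have already begun to spread.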

It is axiomatic to note that the malware authors bear ultimate responsibility for WannaCry; hopefully they will be caught and prosecuted to the full extent of the law.

After that, though, it gets a bit murky.

Spreading Blame

The first thing to observe from this timeline is that, as with all Windows exploits, the initial blame lies with Microsoft. It is Microsoft that developed Windows without a strong security model for networking in particular, and while the company has done a lot of work to fix that, many fundamental flaws still remain.

Not all of those flaws are Microsoft’s fault: the default assumption for personal computers has always been to give applications mostly unfettered access to the entire computer, and all attempts to limit that have been met with howls of protest. iOS created a new model, in which applications were put in a sandbox and limited to carefully defined hooks and extensions into the operating system; that model, though, was only possible because iOS was new. Windows, in contrast, derived all of its market power from the established base of applications already in the market, which meant overly broad permissions couldn’t be removed retroactively without ruining Microsoft’s business model.

Moreover, the reality is that software is hard: bugs are inevitable, particularly in something as complex as an operating system. That is why Microsoft, Apple, and basically any conscientious software developer regularly issues updates and bug fixes; that products can be fixed after the fact is inextricably linked to why they need to be fixed in the first place!

To that end, though, it’s important to note that Microsoft did fix the bug two months ago: any computer that applied the March patch — which, by default, is installed automatically — is protected from WannaCry; Windows XP is an exception, but Microsoft stopped selling that operating system in 20082 and stopped supporting it in 2014 (despite that fact, Microsoft did release a Windows XP patch to fix the bug on Friday night). In other words, end users and the IT organizations that manage their computers bear responsibility as well. Simply staying up-to-date on critical security patches would have kept them safe.

Still, staying up-to-date is expensive, particularly in large organizations, because updates break stuff. That “stuff” might be critical line-of-business software, which may be from 3rd-party vendors, external contractors, or written in-house: that said software is so dependent on one particular version of an OS is itself a problem, so you can blame those developers too. The same goes for hardware and its associated drivers: there are stories from the UK’s National Health Service of MRI and X-ray machines that only run on Windows XP, critical negligence by the manufacturers of those machines.

In short, there is plenty of blame to go around; how much, though, should go into the middle part of that timeline — the government part?

Blame the Government

Smith writes in that blog post:

This attack provides yet another example of why the stockpiling of vulnerabilities by governments is such a problem. This is an emerging pattern in 2017. We have seen vulnerabilities stored by the CIA show up on WikiLeaks, and now this vulnerability stolen from the NSA has affected customers around the world. Repeatedly, exploits in the hands of governments have leaked into the public domain and caused widespread damage. An equivalent scenario with conventional weapons would be the U.S. military having some of its Tomahawk missiles stolen.

This comparison, frankly, is ridiculous, even if you want to stretch and say that the impact of WannaCry on places like hospitals may actually result in physical harm (albeit much less than a weapon of war!).

First, the U.S. government creates Tomahawk missiles, but it is Microsoft that created the bug (even if inadvertently). What the NSA did was discover the bug (and subsequently exploit it), and that difference is critical. Finding bugs is hard work, requiring a lot of money and effort. It’s worth considering why, then, the NSA was willing to do just that, and the answer is right there in the name: national security. And, as we’ve seen through examples like Stuxnet, these exploits can be a powerful weapon.

Here is the fundamental problem: insisting that the NSA hand over exploits immediately is to effectively demand that the NSA not find the bug in the first place. After all, a patched (and thus effectively published) bug isn’t worth nearly as much, both monetarily, as ShadowBrokers found out, and militarily, which means the NSA would have no reason to invest the money and effort to find them. To put it another way, the alternative is not that the NSA would have told Microsoft about EternalBlue years ago, but that the underlying bug would have remained unpatched for even longer than it was (perhaps to be discovered by other entities like China or Russia; the NSA is not the only organization searching for bugs).

In fact, the real lesson to be learned with regard to the government is not that the NSA should be Microsoft’s QA team, but rather that leaks happen: that is why, as I argued last year in the context of Apple and the FBI, government efforts to weaken security by fiat or the insertion of golden keys (as opposed to discovering pre-existing exploits) are wrong. Such an approach is much more in line with Smith’s Tomahawk missile argument, and given the indiscriminate and immediate way in which attacks can spread, the country that would lose the most from such an approach would be the one that has the most to lose (i.e. the United States).

Blame the Business Model

Still, even if the U.S. government is less to blame than Smith insists, nearly two decades of dealing with these security disasters suggests there is a systematic failure happening, and I think it comes back to business models. The fatal flaw of software, beyond the various technical and strategic considerations I outlined above, is that for the first several decades of the industry software was sold for an up-front price, whether that be for a package or a license.

This resulted in problematic incentives and poor decision-making by all sides:

  • Microsoft is forced to support multiple distinct code bases, which is expensive and difficult and not tied to any monetary incentives (thus, for example, the end of support for Windows XP).
  • 3rd-party vendors are inclined to view a particular version of an operating system as a fixed object: after all, Windows 7 is distinct from Windows XP, which means it is possible to specify that only XP is supported. This is compounded by the fact that 3rd-party vendors have no ongoing monetary incentive to update their software; after all, they have already been paid.
  • The most problematic impact is on buyers: computers and their associated software are viewed as capital costs, which are paid for once and then depreciated over time as the value of the purchase is realized. In this view ongoing support and security are an additional cost divorced from ongoing value; the only reason to pay is to avoid a future attack, which is impossible to predict both in terms of timing and potential economic harm.

The truth is that software — and thus security — is never finished; it makes no sense, then, that payment is a one-time event.
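To make the buyers’ incentive problem concrete, here is a toy model (all numbers hypothetical) of how a one-time purchase looks on the books versus a subscription:

```python
def straight_line_depreciation(price: float, years: int) -> list[float]:
    """Book value at the end of each year for a one-time purchase."""
    return [round(price * (years - year) / years, 2) for year in range(1, years + 1)]

# Hypothetical numbers: a $400 perpetual license depreciated over five
# years versus a $120/year subscription over the same period.
license_book_values = straight_line_depreciation(400.0, 5)
assert license_book_values == [320.0, 240.0, 160.0, 80.0, 0.0]

# After year five the license is "free" on the books, so every patch,
# upgrade, or support contract reads as a pure additional cost; the
# subscription carries the same line item every year, with maintenance
# implicitly included in the price.
subscription_total = 120.0 * 5
```

The point of the sketch is the accounting framing, not the dollar amounts: once an asset is fully depreciated, any spending on keeping it secure looks like a cost with no associated value.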

SaaS to the Rescue

Four years ago I wrote about why subscriptions are better for both developers and end users in the context of Adobe’s move away from packaged software:


That article was about the benefit of better matching Adobe’s revenue with the value gained by its users: the price of entry is lower while the revenue Adobe extracts over time is more commensurate with the value it delivers. And, as I noted, “Adobe is well-incentivised to maintain the app to reduce churn, and users always have the most recent version.”

This is exactly what is necessary for good security: vendors need to keep their applications (or in the case of Microsoft, operating systems) updated, and end users need to always be using the latest version. Moreover, pricing software as a service means it is no longer a capital cost with all of the one-time payment assumptions that go with it: rather, it is an ongoing expense that implicitly includes maintenance, whether that be by the vendor or the end user (or, likely, a combination of the two).

I am, of course, describing Software-as-a-Service, and that category’s emergence, along with cloud computing generally (both easier to secure and with massive incentives to be secure), is the single biggest reason to be optimistic that WannaCry is the dying gasp of a bad business model (although it will take a very long time to get out of all the sunk costs and assumptions that fully-depreciated assets are “free”).3 In the long run, there is little reason for the typical enterprise or government to run any software locally, or store any files on individual devices. Everything should be located in a cloud, both files and apps, accessed through a browser that is continually updated, and paid for with a subscription. This puts the incentives in all the right places: users are paying for security and utility simultaneously, and vendors are motivated to earn it.

To Microsoft’s credit the company has been moving in this direction for a long time: not only is the company focused on Azure and Office 365 for growth, but even its traditional software has long been monetized through subscription-like offerings. Still, implicit in this cloud-centric model is a lot less lock-in and a lot more flexibility in terms of both devices and services: the reality is that for as much of a headache as Windows security has been for Microsoft, those headaches are inextricably tied up with the reasons that Microsoft has been one of the most profitable companies of all time.

The big remaining challenge will be hardware: the business model for software-enabled devices will likely continue to be upfront payment, which means no incentives for security; the costs are externalities to be borne by the targets of botnets like Mirai. Expect little progress and lots of blame, the hallmark of the sort of systematic breakdown that results from a mismatched business model.

  1. I will update this section as necessary and note said updates in this footnote
  2. Windows XP was still available for Netbooks until 2010
  3. The answer for individual security is encryption

The Local News Business Model

It’s hardly controversial to note that the traditional business model for most publishers, particularly newspapers, is obsolete. Absent the geographic monopolies formerly imposed by owning distribution, newspapers have nothing to offer advertisers: the sort of advertising that was formerly done in newspapers, both classified and display, is better done online. And, contra this rather fanciful suggestion by New York Times media columnist Jim Rutenberg that advertisers prop up newspapers for the good of democracy, nothing is going to change that.

I already explained the problems with Rutenberg’s idea in yesterday’s Daily Update: advertisers are (rightly) motivated by what is best for their business, plus there is a collective action problem. I added, though, mostly in passing, that the future of “local news” would almost certainly be subscription, not advertising-based.

I think it’s worth expounding on that point. What most, including Rutenberg, fail to understand about newspapers is that it is not simply the business model that is obsolete: rather, everything is obsolete. Most local newspapers are simply not worth saving, not because local news isn’t valuable, but rather because everything else in your typical local newspaper is worthless (from a business perspective). That is why I was careful in my wording: subscriptions will not save newspapers, but they just might save local news, and the sooner that distinction is made the better.

The Unnecessary Newspaper

To be clear, I agree with Rutenberg when he states that “A vibrant free press…keeps government honest and voters informed.” Local government needs oversight, which is another way of saying local news is necessary for a well-functioning democracy. The problem is that assuming oversight must be provided by a newspaper is akin to suggesting that a tank be used to kill a fly: sure, it may get the job done, but there is a lot of equipment, ordnance, and personnel that is really not necessary when a flyswatter would not only be sufficient, but actually more effective.

For newspapers, the analogies to equipment, ordnance, and personnel are physical infrastructure, business operations, and editorial staff; just about none of them (yes, including most of the editorial staff) are actually necessary for covering local news.

Infrastructure

Printing presses are obviously obsolete: while some newspapers have finally closed them down, others hold on because there is still a modicum of print advertising to be earned. It’s the most prominent example of how newspapers are fundamentally incapable of evolving. Naturally, this extends to distribution centers, delivery trucks, newsstands, and all of the administrative infrastructure that goes into moving around pieces of paper that have zero connection to the actual distribution of local news.

The infrastructure overhead, though, does not stop there: without a print edition there is no need for layout, for high-end photography, or a centralized office space to assemble everything on deadline. There is also a drastically reduced need for editors: when text was printed, copy was permanent, raising the cost of a mistake high enough to justify editing workforces nearly as numerous as journalistic ones. Digital stories, though, can be updated after-the-fact. Moreover, digital stories are interactive: readers can submit feedback instantly, and as I noted while writing about Wikitribune, the collective knowledge of readers will always be greater than the most seasoned set of editors.

Moreover, given that local news requires little more than text and images and perhaps some video, there is no need for expensive digital infrastructure either; a basic WordPress site is more than sufficient. In short, the entire infrastructure category, which makes up probably 60~70% of a newspaper’s cost structure (possibly more if you include the editors), has nothing to do with sustainable local news.

Business Operations

Monetizing via print advertisements requires a lot of staff: salespeople to sell the ad, graphic artists to lay it out, account managers to collect the money, plus all the management required to make it work. For large national newspapers like The New York Times, this may all still be necessary, thanks to the ability to sell premium advertising online. However, all of this can be eliminated for most digital-only operations: simply use an ad network. Of course, those come with their own problems: ad networks make web pages suck, and just as importantly, most consumption is shifting to mobile, where ad network monetization is particularly ineffective; to the extent advertising is part of the business model, relying on Facebook is (still) probably the best option. Or better yet, don’t have any ads at all.

A purely subscription-based business model not only drastically cuts costs, it also makes for a better user experience, a particularly attractive point given that users are the paying customers. Even better, thanks to services like Stripe, digital subscriptions not only cost far less to administer than traditional newspaper subscriptions, but are far more user-friendly as well.

The reality is that for local news this entire category probably only needs to be one person: handle customer service for self-service subscriptions, do the books, and that’s about it. The 15~20% of revenue newspapers are paying for business operations has nothing to do with local news.

Editorial

This is the biggest blindspot for those lamenting the travails of local newspapers: it may be obvious that printing presses don’t make much sense with the Internet, and most websites have moved to ad networks for the obvious reasons; in fact, though, nearly all of the content in most newspapers is not just unnecessary but in fact actively harmful to building a sustainable future for local news.

Start with the front page (of a physical newspaper, natch): most newspapers have given up on having international, national, or even regional reporters, instead relying on wire services. Even that, though, is a waste: those wire services have their own websites, and international publications are only a click away. Maintaining the veneer of comprehensive coverage is simply clutter, and a cost to boot.

The same thing applies to the opinion section: any column or editorial that is concerned with non-local affairs is competing with the entire Internet (including social media). It’s the same thing with non-local business coverage. Moreover, the cost is more than clutter and dollars: almost by definition the content is inferior to what is available elsewhere, which reduces the willingness to pay.

It’s the same story in what were traditionally the most valuable parts of newspapers:1 sports and the (variously named) lifestyle sections. There are multiple national entities dedicated to covering sports all the way down to the university level, augmented by a still-thriving sports blogosphere. Granted, there may still be a market for local sports coverage, but that is a different market than local news: there is no reason it has to be bundled together.

As for the lifestyle section, it is everywhere. BuzzFeed has set its sights on cooking, crafts, and the horoscope;2 there are all kinds of sites covering gossip and advice; meanwhile, not only are there web comics, but social media provides far more humor than the funny pages ever did. What’s left, bridge? Why not simply play online?

A lot of this content has long since been standardized across newspapers, but the broader point remains the same: absolutely none of it has anything to do with local news, and it should not exist in the local news publication of the future.

Bundles and Business Models

What is critical to understand is that everything in the preceding section is interconnected: by owning printing presses and delivery trucks (and thanks to the low marginal cost of printing extra pages), newspapers were the primary outlet for advertising that didn’t work (or couldn’t afford) TV or radio — and there was a lot of it. Maximizing advertising, though, meant maximizing the potential audience, which meant offering all kinds of different types of content in volume: thus the mashup of wildly disparate content listed above, all focused on quantity over quality. And then, having achieved the most readership and the ability to expand to fit it all, the biggest newspaper could squeeze out its competitors.

In short, the business model drove the content, just as it drove every other piece of the business. It follows, though, that if the content bundle no longer makes sense — which it doesn’t in the slightest — that the business model probably doesn’t make sense either. This is the problem with newspapers: every aspect of their operations, from costs to content, is optimized for a business model that is obsolete. To put it another way, an obsolete business model means an obsolete business. There is nothing to be saved.

The Subscription Business Model

I’ve already hinted at the general outline of a sustainable local news publication, but the critical point is the one I just made: everything must start with the business model, of which there is only one choice — subscriptions.

It is very important to clearly define what a subscription means. First, it’s not a donation: it is asking a customer to pay money for a product. What, then, is the product? It is not, in fact, any one article (a point that is missed by the misguided focus on micro-transactions). Rather, a subscriber is paying for the regular delivery of well-defined value.

Each of those words is meaningful:

  • Paying: A subscription is an ongoing commitment to the production of content, not a one-off payment for one piece of content that catches the eye.
  • Regular Delivery: A subscriber does not need to depend on the random discovery of content; said content can be delivered to the subscriber directly, whether that be email, a bookmark, or an app.
  • Well-defined Value: A subscriber needs to know what they are paying for, and it needs to be worth it.

This last point is at the crux of why many ad-based newspapers will find it all but impossible to switch to a real subscription business model. When asking people to pay, quality matters far more than quantity, and the ratio matters: a publication with 1 valuable article a day about a well-defined topic will more easily earn subscriptions than one with 3 valuable articles and 20 worthless ones covering a variety of subjects. Yet all too many local newspapers, built for an ad-based business model that calls for daily content to wrap around ads, spend their limited resources churning out daily filler even though those ads no longer exist.

A sustainable local news publication will be fundamentally different: a minimal rundown of the news of the day, plus a small number of articles a week featuring real in-depth reporting, with the occasional feature or investigative report. After all, it’s not like it is hard to find content to read on the Internet: what people will pay for is quality content about things they care about (and the fact that people care about their cities will be these publications’ greatest advantage).

It’s also worth noting what a subscription business model does not — must not — include:

  • Content that is widely available elsewhere. That means no national or international news (except what has a local impact, and even that is questionable), no non-local business content, no lifestyle section.
  • Non-journalistic cost centers. As I noted above, a publication might need one business operations person, and maybe a copy editor; they can probably be the same person. Nearly everything else, including subscription management, hosting, payments, etc., can leverage widely available online services (and you can include social networks: treating all content the same hurts big media companies, but it’s a big opportunity for small ones).
  • Any sort of wall between business and editorial. This is perhaps the easiest change to make, and the hardest for newspaper advocates to accept. A subscription business is just that: a business that must, through its content, earn ongoing revenue from customers. That means understanding what those customers want, and what they don’t. It means focusing on the user experience, and the content mix. And it means selling by every member of the organization.

Notice how different this looks from a newspaper, as it must. After all, the business model is different.

I strongly believe the market for this sort of publication is there. My hometown of Madison, WI has around 250,000 people (500,000 in Dane County), primarily served by The Wisconsin State Journal. To the paper’s credit the website is almost all local news; unfortunately, most of it is uninteresting filler. Worse, producing this filler takes a staff of 52 people, only 10 of whom, by my count, are local reporters (supported by at least 8 editors).

Were a new publication to come along, offering a five minute summary of Madison’s local news of the day, plus an actually relevant story or two a week with the occasional feature or investigative report,3 I’d gladly pay, and I don’t even live there anymore. What I won’t do, though, is bother visiting the Wisconsin State Journal because there simply is too much dreck to wade through, created at ridiculous cost in service of an obsolete business model.4

Indeed, the real problem with local newspapers is more obvious than folks like Rutenberg wish to admit: no one — neither advertisers nor subscribers — wants to pay for them because they’re not worth paying for. If newspapers were actually holding local government accountable I don’t think they would have any problem earning money; that they aren’t is a function of wasting time and money on the past instead of the future.

  1. Other than the classifieds, that is
  2. “Choose these foods and we will tell you your ideal mate!”
  3. With typos
  4. This is where news foundations and benefactors can actually make a difference: stop supporting local newspapers and instead fund new startups until they build a critical mass of subscribers

Apple’s China Problem

Did you hear about the new Microsoft Surface Laptop? The usual suspects are claiming it’s a MacBook competitor, which is true insomuch as it is a laptop. In truth, though, the Surface Laptop isn’t a MacBook competitor at all for the rather obvious reason that it runs Windows, while the MacBook runs MacOS. This has always been the foundation of Apple’s business model: hardware differentiated by software such that said hardware can be sold with a margin much greater than nominal competitors running a commodity operating system.

Moreover, the advantages go beyond margins: the best way to understand both Apple’s profits and many of its choices is to understand that the company has a monopoly on not just MacOS but even more importantly iOS. That means Apple can not only capture consumer surplus on hardware, but developer surplus when it comes to app sales; that some apps are not made is deadweight loss that Apple has chosen to bear to ensure total control.

And yet, as far as regulators are concerned (and rightly so), the iPhone is simply another smartphone, and the MacBook really is competing with the Surface Laptop. The functionality is mostly the same, and if users value a sustainable advantage in user experience, Apple deserves the profits — and power — that follow.

Apple’s Earnings

Apple announced its second quarter earnings yesterday; from Bloomberg:

Apple Inc. reported falling iPhone sales, highlighting the need to deliver blockbuster new features in the next edition of the flagship device if the company is to fend off rivals like Samsung Electronics Co. Investor confidence has been mounting ahead of a major iPhone revamp due later this year. Yet competitors released new high-end smartphones recently, putting pressure on Apple to deliver a device that’s advanced enough to entice existing users to upgrade and lure new customers.

Oh look! It’s an example of what I was just complaining about: yes, Samsung makes smartphones, and yes, they have high-end features. But — and this is the point that was forgotten the last time Samsung was held up as an iPhone threat — a Samsung smartphone does not run iOS. That has always been Apple’s trump card, and it was once again this past quarter. On the earnings call CFO Luca Maestri stated in his prepared remarks:

Revenue for the March quarter was $52.9 billion, and we achieved double-digit growth in the U.S., Canada, Australia, Germany, the Netherlands, Turkey, Russia and Mexico. Our growth rates were even higher, over 20% in many other markets, including Brazil, Scandinavia, the Middle East, Central and Eastern Europe, India, Korea and Thailand.

The numbers back that up:

Apple Revenue (in billions)

Region                 Q2 2017         Q2 2016         Q2 2015
Americas               $21.2 (+11%)    $19.1 (-10%)    $21.3 (+19%)
Europe                 $12.7 (+10%)    $11.5 (-5%)     $12.2 (+12%)
Japan                  $4.5 (+5%)      $4.3 (+24%)     $3.5 (-15%)
Rest of Asia Pacific   $3.8 (+20%)     $3.2 (-25%)     $4.2 (+48%)
Total                  $42.1 (+11%)    $38.1 (-8%)     $41.2 (+15%)

To be clear, these numbers reflect more than just the iPhone; in the most recent quarter Apple’s most important product contributed 63% of revenue, which means some of the change in overall revenue was due to Mac (mostly thanks to increased ASP), Services, and Other Products (primarily Apple Watch and AirPods) growth, counterbalanced by the iPad’s continued slippage. Still, the iPhone is the most important factor, and while it seems quite clear that the iPhone 6 did pull forward upgrades from the iPhone 6S, the iPhone 7 is growing quite nicely.

Moreover, this sort of growth is exactly what you would expect given Apple’s iOS monopoly: iPhone users very rarely switch to Android, while a fair number of Android users switch to iPhone, which means that even in a saturated market Apple’s share should grow over time. Plus, that share increase will result in not only increased iPhones sales but ever-growing Services revenue, and, in the long run, increased sales of other Apple products as well.

This picture, though, is incomplete: it doesn’t include China.

Apple’s China Problem

Here are those revenue numbers once again, this time with the Greater China region:

Apple Revenue (in billions)

Region                 Q2 2017         Q2 2016         Q2 2015
Americas               $21.2 (+11%)    $19.1 (-10%)    $21.3 (+19%)
Europe                 $12.7 (+10%)    $11.5 (-5%)     $12.2 (+12%)
China                  $10.7 (-14%)    $12.5 (-26%)    $16.8 (+71%)
Japan                  $4.5 (+5%)      $4.3 (+24%)     $3.5 (-15%)
Rest of Asia Pacific   $3.8 (+20%)     $3.2 (-25%)     $4.2 (+48%)
Total                  $52.9 (+5%)     $50.6 (-13%)    $58.0 (+27%)
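The year-over-year percentages in both tables are simple ratios, and can be recomputed from the revenue figures themselves (in billions, as reported):

```python
def yoy_growth(current: float, prior: float) -> int:
    """Year-over-year revenue growth, rounded to the nearest whole percent."""
    return round((current / prior - 1) * 100)

# Figures taken from the table above, in billions of dollars.
assert yoy_growth(10.7, 12.5) == -14  # China, Q2 2017 vs. Q2 2016
assert yoy_growth(52.9, 50.6) == 5    # Total, Q2 2017 vs. Q2 2016
```

Note that the totals in the first table simply exclude China, which is what makes the contrast between the two tables so stark.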

I was from the very beginning of Stratechery both a big advocate of large-screen iPhones and a big bull on Apple’s prospects in China. I wrote after the launch of the iPhone 5C:

Is [the iPhone] out-of-reach for the vast majority of consumers? Yep. But it will be aspirational, something you put on the table to show others you can afford it. And, to be clear, there are a lot of people that can afford it. Saying stupid things like “the iPhone 5C is equivalent to the average monthly salary in China” belies a fundamental misunderstanding of China, its inequality, and its sheer size specifically, and all of Asia broadly. Moreover, when you consider a Mercedes is tens of thousands of dollars more than a Toyota (and on down the line in luxury goods, for whom Asia generally and China specifically is the largest market by far), $300 more isn’t that much.

Moreover, in China it’s Apple’s brand that is, by far, the biggest allure of the iPhone. Apps are free (piracy is mainstream), larger screens are preferred, and specs and customization move the needle with the mainstream far more than they do in the US. But no one else is Apple.

Of course those large screens did eventually arrive in the same timeframe that Apple launched on China Mobile: that is why those Q2 2015 numbers are so eye-popping (71% growth!). And you could certainly argue last year that, much like the rest of the world, Apple had pulled forward a huge number of would-be buyers (China was down more than the rest of the world, but about the same as the rest of Asia).1 However, that does not explain the weak results this year: every region in the world — especially the rest of Asia — is up, except for China, which is down 14%. Apple has a China problem.

iOS Versus WeChat

In rather stark contrast to just a couple of years ago, when, in the midst of the iPhone 6 boom, Tim Cook was eager to sell the story of how many iPhone customers had not yet upgraded, this quarter the Apple CEO preferred to move the goalposts, telling analysts to wait for the next iPhone:

We’re seeing what we believe to be a pause in purchases on iPhone, which we believe are due to the earlier and much more frequent reports about future iPhones. And so that part is clearly going on, and it could be what’s behind the data.

But that is not what is going on in most of the world: plenty of folks — more than last year — are happy to buy the iPhone 7, even though it doesn’t look much different than the iPhone 6. After all, if you need a new phone, and you want iOS, you don’t have much choice! Except, again, for China: that is the country where the appearance of the iPhone matters most; Apple’s problem, though, is that in China that is the only thing that matters at all.

The fundamental issue is this: unlike the rest of the world, in China the most important layer of the smartphone stack is not the phone’s operating system. Rather, it is WeChat.2 Connie Chan of Andreessen Horowitz tried to explain in 2015 just how integrated WeChat is into the daily lives of nearly 900 million Chinese, and that integration has only grown since then: every aspect of a typical Chinese person’s life, not just online but also off is conducted through a single app (and, to the extent other apps are used, they are often games promoted through WeChat).

There is nothing in any other country that is comparable: not LINE, not WhatsApp, not Facebook. All of those are about communication or wasting time: WeChat is that, but it is also for reading news, for hailing taxis, for paying for lunch (try and pay with cash for lunch, and you’ll look like a luddite), for accessing government resources, for business. For all intents and purposes WeChat is your phone, and to a far greater extent in China than anywhere else, your phone is everything.

Naturally, WeChat works the same on iOS as it does on Android.3 That, by extension, means that for the day-to-day lives of Chinese there is no penalty to switching away from an iPhone. Unsurprisingly, in stark contrast to the rest of the world, according to a report earlier this year only 50% of iPhone users who bought another phone in 2016 stayed with Apple:

[Chart: iPhone user retention rates by brand in China, 2016]

This is still better than the competition, but compared to the 80%+ retention rate Apple enjoys in the rest of the world,4 it is shockingly low, and the result is that the iPhone has slid down China’s sales rankings: iPhone sales were only 9.6% of the market last year, behind local Chinese brands like Oppo, Huawei and Vivo. All of those companies sold high-end phones of their own; the issue isn’t that Apple was too expensive, it’s that the iPhone 6S and 7 were simply too boring.

Perhaps the most surprising takeaway from this analysis is that Cook is right: there is reason to be optimistic about the iPhone 8. Rumors are that there will be an all-new edge-to-edge design that will stand out in the hand, or on the coffee shop table. And, of course, like any modern smartphone, it will run WeChat. And to be sure, an iPhone is still status-conferring: Apple is by no means doomed, and it’s possible those China numbers will turn positive this fall.

That, though, is a long-term problem for Apple: what makes the iPhone franchise so valuable — and, I’d add, the fundamental factor that was missed by so many for so long — is that monopoly on iOS. For most of the world it is unimaginable for an iPhone user to upgrade to anything but another iPhone: there is too much of the user experience, too many of the apps, and, in some countries like the U.S., too many contacts on iMessage to even countenance another phone.

None of that lock-in exists in China: Apple may be a de facto monopolist for most of the world, but in China the company is simply another smartphone vendor, and being simply another smartphone vendor is a hazardous place to be. To be clear, it’s not all bad: in China Apple still trades on status and luxury; unlike the rest of the world, though, the company has to earn it with every release, and that’s a bar both difficult to clear in the abstract and, given the last two iPhones, difficult to clear in reality.

  1. As an aside, Apple continues to blame Hong Kong for weak China numbers, but as I have explained in the Daily Update, I don’t buy this excuse: I suspect a huge number of Hong Kong sales were for China; once distribution in China increased those sales went down. Plus, Hong Kong’s dollar is pegged to the U.S. dollar, so currency isn’t an excuse either
  2. Or, to put it another way, the operating system of China is WeChat, not iOS/Android
  3. Well, at least until earlier this month when Apple, accustomed to being able to dictate terms to app developers, banned WeChat tipping
  4. This originally said 90%, but retention numbers have slipped globally

Not OK, Google

I was, frankly, amazed when I saw this tweet:

Let me remind you that Washington Post Editor-in-Chief Marty Baron’s industry — newspapers — is one without a business model (Baron’s newspaper is more fortunate than most in its reliance on a billionaire’s largesse). Said lack of business model is leading to a dwindling of local coverage, click-chasing, and, arguably, Donald Trump. That seems like a pretty big problem!

Fake news, on the other hand, tells people who’ve already made up their minds what they want to hear. Certainly it’s not ideal, but the tradeoffs in dealing with the problem, at least in terms of Facebook, are very problematic. I wrote last fall in Fake News:

I get why top-down solutions are tempting: fake news and filter bubbles are in front of our faces, and wouldn’t it be better if Facebook fixed them? The problem is the assumption that whoever wields that top-down power will just so happen to have the same views I do. What, though, if they don’t? Just look at our current political situation: those worried about Trump have to contend with the fact that the power of the executive branch has been dramatically expanded over the decades; we place immense responsibility and capability in the hands of one person, forgetting that said responsibility and capability is not so easily withdrawn if we don’t like the one wielding it.

To that end I would be far more concerned about Facebook were they to begin actively editing the News Feed; as I noted last week I’m increasingly concerned about Zuckerberg’s utopian-esque view of the world, and it is a frighteningly small step from influencing the world to controlling the world. Just as bad would be government regulation: our most critical liberty when it comes to a check on tyranny is the freedom of speech, and it would be directly counter to that liberty to put a bureaucrat — who reports to the President — in charge of what people see.

As if to confirm my worst fears, Zuckerberg, a few months later, came out with a manifesto committing Facebook to political action, leading me to call for checks on the company’s monopoly. What was perhaps the most interesting lesson about that manifesto, though, was that most of the media — which to that point had been resolutely opposed to Facebook — were by and large unified in their approval. It was, I suspect, a useful lesson for tech executives: ensure the established media controls the narrative, and your company’s dominance may proceed without criticism.

Google’s Algorithm Change

Today Google announced its own fake-news motivated changes. From Bloomberg:

The Alphabet Inc. company is making a rare, sweeping change to the algorithm behind its powerful search engine to demote misleading, false and offensive articles online. Google is also setting new rules encouraging its “raters” — the 10,000-plus staff that assess search results — to flag web pages that host hoaxes, conspiracy theories and what the company calls “low-quality” content.

The moves follow months of criticism of Google and Facebook Inc. for hosting misleading information, particularly tied to the 2016 U.S. presidential election. Google executives claimed the type of web pages categorized in this bucket are relatively small, which is a reason why the search giant hadn’t addressed the issue before. “It was not a large fraction of queries — only about a quarter percent of our traffic — but they were important queries,” said Ben Gomes, vice president of engineering for Google.

I noted above that deciding how to respond to fake news is a trade-off; in the case of Facebook, the fact that fake news is largely surfaced to readers already inclined to believe it means I see the harm as being less than Facebook actively taking an editorial position on news stories.

Google, on the other hand, is less in the business of driving engagement via articles you agree with, than it is in being a primary source of truth. The reason to do a Google search is that you want to know the answer to a question, and for that reason I have long been more concerned about fake news in search results, particularly “featured snippets”:

My concern here is quite straightforward: yes, Facebook may be pushing you news, fake, slanted, or whatever bias there may be, but at least it is not stamping said news with its imprimatur or backing it with its reputation (indeed, many critics wish that that is exactly what Facebook would do), and said news is arriving on a rather serendipitous basis. Google, on the other hand, is not only serving up these snippets as if they are the truth, but serving them up as a direct response to someone explicitly searching for answers. In other words, not only is Google effectively putting its reputation behind these snippets, it is serving said snippets to users in a state where they are primed to believe they are true.

To that end I am pleased that Google is making this change, at least at a high level. The way Google is approaching it, though, is very problematic.

Google and Authority

Danny Sullivan, who has been covering Google for years, has one of the best write-ups on Google’s changes, including this frank admission that the change is PR-driven:

Problematic searches aren’t new but typically haven’t been a big issue because of how relatively infrequent they are. In an interview last week, Pandu Nayak — a Google Fellow who works on search quality — spoke to this: “This turns out to be a very small problem, a fraction of our query stream. So it doesn’t actually show up very often or almost ever in our regular evals and so forth. And we see these problems. It feels like a small problem,” Nayak said.

But over the past few months, they’ve grown as a major public relations nightmare for the company…“People [at Google] were really shellshocked, by the whole thing. That, even though it was a small problem [in terms of number of searches], it became clear to us that we really needed to solve it. It was a significant problem, and it’s one that we had I guess not appreciated before,” Nayak said.

Suffice it to say, Google appreciates the problem now. Hence today’s news, to stress that it’s taking real action that it hopes will make significant changes.

Sullivan goes on to explain the changes Google is making to autocomplete search suggestions and featured snippets, particularly the opportunity to provide immediate feedback. What was much more convoluted, though, was a third change: an increased reliance on “authoritative content”.

The other and more impactful way that Google hopes to attack problematic Featured Snippets is by improving its search quality generally to show more authoritative content for obscure and infrequent queries…

How’s Google learning from the data to figure out what’s authoritative? How’s that actually being put into practice? Google wouldn’t comment about these specifics. It wouldn’t say what goes into determining how a page is deemed to be authoritative now or how that is changing with the new algorithm. It did say that there isn’t any one particular signal. Instead, authority is determined by a combination of many factors.

This simply isn’t good enough: Google is going to be making decisions about who is authoritative and who is not, which is another way of saying that Google is going to be making decisions about what is true and what is not, and that demands more transparency, not less.

Again, I tend to agree that fake news is actually more of a problem on Google than it is on Facebook; moreover, I totally understand that Google can’t make its algorithms public because they will be gamed by spammers and fake news purveyors. But even then, the fact remains that the single most important resource for finding the truth, one that is dominant in its space thanks to the fact that being bigger inherently means being better, is making decisions about what is true without a shred of transparency.

More Monopoly Trade-offs

I wrote last week about Facebook and the Cost of Monopolies: Facebook wins because, by virtue of connecting everyone on earth, its apps both provide a better user experience even as they build impregnable moats. The moat is the network is the superior user experience. The cost, though, as I sought to quantify, at least in theory, is the aforementioned decay in our media diet, increasing concentration of advertising, and, in the long run, diminished innovation.

That raises the question, though, of what to do about it; I noted in a follow-up that Facebook hasn’t done anything wrong, and under the current interpretation of the law, isn’t even really a monopoly. The fact of the matter is that people like Facebook1 and that it generates a massive amount of consumer surplus. It follows, then, that any action to break up that monopoly is inherently anti-consumer, at least in the short-run.

The conundrum is even worse with Google, in large part because the company’s core service is even more critical to its users: being able to search the entire Internet is a truly awesome feat, and, thanks to that capability, it is critical that Google get the answer right. That, though, means that Google’s power is even greater, with all of the problems that entails.

Indeed, that is why Google needs to be a whole lot more explicit about how it is ranking news. Perhaps the most unanticipated outcome of the unfettered nature of the Internet is that the sheer volume of information didn’t disperse influence, but rather concentrated it to a far greater degree than ever before, not to those companies that handle distribution (because distribution is free) but to those few that handle discovery. The result is an environment where what is best for the individual in the short-term is potentially at odds with what is best for a free society in the long-term; it would behoove Google to push off the resolution of this paradox by being more open, not less.

Sadly, it seems unlikely that my request for more transparency will get much support; Google’s announcement was widely applauded, and why not? It is the established media that will have a leg up when it comes to authority. That, it seems, is all they ever wanted, even if it means Google and Facebook taking all of the money.

  1. Even if you don’t personally, dear reader

Facebook and the Cost of Monopoly

The shamelessness was breathtaking.

Having told a few jokes, summarized his manifesto, and acknowledged the victim of the so-called “Facebook-killer” in Cleveland, Facebook founder and CEO Mark Zuckerberg opened his keynote presentation at the company’s F8 developer conference like this:

You may have noticed that we rolled out some cameras across our apps recently. That was Act One. Photos and videos are becoming more central to how we share than text. So the camera needs to be more central than the text box in all of our apps. Today we’re going to talk about Act Two, and where we go from here, and it’s tied to this broader technological trend that we’ve talked about before: augmented reality.

If that seems familiar, it’s because it is the Explain Like I’m Five summary of Snap’s S-1:

In the way that the flashing cursor became the starting point for most products on desktop computers, we believe that the camera screen will be the starting point for most products on smartphones. This is because images created by smartphone cameras contain more context and richer information than other forms of input like text entered on a keyboard. This means that we are willing to take risks in an attempt to create innovative and different camera products that are better able to reflect and improve our life experiences.

Snap may have declared itself a camera company; Zuckerberg dismissed it as “Act One”, making it clear that Facebook intended to not simply adopt one of Snapchat’s headline features but its entire vision.

Facebook and Microsoft

Shortly after Snap’s S-1 came out, I wrote in Snap’s Apple Strategy that the company was like Apple; unfortunately, the Apple I was referring to was not the iPhone-making juggernaut we are familiar with today, but rather the Macintosh-creating weakling that was smushed by Microsoft, which is where Facebook comes in.

Today, if Snap is Apple, then Facebook is Microsoft. Just as Microsoft succeeded not because of product superiority but by leveraging the opportunity presented by the IBM PC, riding Big Blue’s coattails to ecosystem dominance, Facebook has succeeded not just on product features but by digitizing offline relationships, leveraging the desire of people everywhere to connect with friends and family. And, much like Microsoft vis-à-vis Apple, Facebook has had The Audacity of Copying Well.

I wrote The Audacity of Copying Well when Instagram launched Instagram stories; what was brilliant about the product is that Facebook didn’t try to re-invent the wheel. Instagram Stories — and now Facebook Stories and WhatsApp Stories and Messenger Day — are straight rip-offs of Snapchat Stories, which is not only not a problem, it’s actually the exact optimal strategy: Instagram’s point of differentiation was not features, but rather its network. By making Instagram Stories identical to Snapchat Stories, Facebook reduced the competition to who had the stronger network, and it worked.

Microsoft and Monopoly

Microsoft, of course, was found to be a monopoly, and, as I wrote a couple of months ago in Manifestos and Monopolies, it is increasingly difficult to not think the same about Facebook. That, though, is exactly what you would expect for an aggregator. From Antitrust and Aggregation:

The first key antitrust implication of Aggregation Theory is that, thanks to these virtuous cycles, the big get bigger; indeed, all things being equal the equilibrium state in a market covered by Aggregation Theory is monopoly: one aggregator that has captured all of the consumers and all of the suppliers. This monopoly, though, is a lot different than the monopolies of yesteryear: aggregators aren’t limiting consumer choice by controlling supply (like oil) or distribution (like railroads) or infrastructure (like telephone wires); rather, consumers are self-selecting onto the Aggregator’s platform because it’s a better experience.

This self-selection, particularly onto a “free” platform, makes it very difficult to calculate what cost, if any, Facebook’s seeming monopoly exacts on society. Consider the Econ 101 explanation of why monopolies are problematic:

  • In a perfectly competitive market the price of a good is set at the intersection of demand and supply, the latter being determined by the marginal cost of producing that good:1

    [Chart: price set at the intersection of the demand and supply curves]

  • The “Consumer Surplus”, what consumers would have paid for a product minus what they actually paid, is the area that is under the demand curve but over the price point; the “Producer Surplus”, what producers sold a product for minus the marginal cost of producing that product, is the area above the marginal cost/supply curve and below the price point:

    [Chart: consumer surplus above the price point, producer surplus below it]

  • In a monopoly situation, there is no competition; therefore, the monopoly provider makes decisions based on profit maximization. That means instead of considering the demand curve, the monopoly provider considers the marginal revenue (price minus marginal cost) that is gained from selling additional items, and sets the price where marginal revenue equals marginal cost. Crucially, though, the price is set according to the demand curve:

    [Chart: monopoly quantity set where marginal revenue equals marginal cost, with price taken from the demand curve]

  • The result of monopoly pricing is that consumer surplus is reduced and producer surplus is increased; the reason we care as a society, though, is the part in brown: that is deadweight loss. Some amount of demand that would be served by a competitive market is being ignored, which means there is no surplus of any kind being generated:

    [Chart: reduced consumer surplus, increased producer surplus, and deadweight loss under monopoly pricing]
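The bullet points above can be made concrete with a small numeric sketch. All numbers here are hypothetical: a linear demand curve P = 100 − Q and an upward-sloping marginal cost MC = 20 + Q, with the surplus "areas" computed by simple numerical integration.

```python
# A numeric sketch of the Econ 101 analysis above. All numbers are
# hypothetical: linear demand P = 100 - Q and marginal cost MC = 20 + Q.

def demand(q):
    return 100 - q          # willingness to pay for the q-th unit

def mc(q):
    return 20 + q           # marginal cost of producing the q-th unit

def area(f, lo, hi, n=10_000):
    """Midpoint-rule integration: the 'areas' in the surplus diagrams."""
    dq = (hi - lo) / n
    return sum(f(lo + (i + 0.5) * dq) for i in range(n)) * dq

# Competitive market: price = marginal cost. 100 - Q = 20 + Q  =>  Q = 40.
q_comp, p_comp = 40, demand(40)                      # Q = 40, P = 60

# Monopoly: marginal revenue = marginal cost. MR = 100 - 2Q, so
# 100 - 2Q = 20 + Q  =>  Q = 80/3; price still comes off the demand curve.
q_mono = 80 / 3
p_mono = demand(q_mono)                              # ~73.3: higher price

def consumer_surplus(q, p):
    return area(lambda x: demand(x) - p, 0, q)       # under demand, above price

def producer_surplus(q, p):
    return area(lambda x: p - mc(x), 0, q)           # above MC, below price

cs_c, ps_c = consumer_surplus(q_comp, p_comp), producer_surplus(q_comp, p_comp)
cs_m, ps_m = consumer_surplus(q_mono, p_mono), producer_surplus(q_mono, p_mono)

# Deadweight loss: total surplus the competitive market creates that the
# monopoly does not.
dwl = (cs_c + ps_c) - (cs_m + ps_m)

print(f"competitive: Q={q_comp}, P={p_comp}, surplus={cs_c + ps_c:.0f}")
print(f"monopoly:    Q={q_mono:.1f}, P={p_mono:.1f}, surplus={cs_m + ps_m:.0f}")
print(f"deadweight loss: {dwl:.0f}")
```

With these particular numbers the monopoly raises the price from 60 to about 73, consumer surplus falls, producer surplus rises, and roughly 178 units of surplus simply vanish: the brown deadweight-loss area.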

The problem with using this sort of analysis for Facebook should be obvious: the marginal cost for Facebook of serving an additional customer is zero! That means the graph looks like this:

[Chart: the demand curve with zero marginal cost and no deadweight loss]

So sure, Facebook may have a monopoly in social networking, and while that may be a problem for Snap or any other would-be networks, Facebook would surely argue that the lack of deadweight loss means that society as a whole shouldn’t be too bothered.
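Running the same kind of hypothetical linear demand curve through Facebook's economics shows why: with both the marginal cost and the user-facing price at zero, everyone who wants the product is served.

```python
# Same hypothetical demand curve, P = 100 - Q, but with Facebook's economics:
# the marginal cost of serving one more user is zero, and so is the price.

def demand(q):
    return 100 - q

price = 0
marginal_cost = 0

# At a price of zero, every user with any positive willingness to "pay"
# (i.e. to use the service at all) is served: the full quantity demanded.
q_served = 100

# Consumer surplus is the entire triangle under the demand curve...
consumer_surplus = 100 * 100 / 2        # 5000: all of it goes to users

# ...and since no demand goes unserved, there is no deadweight-loss area.
deadweight_loss = 0
```

No units of demand are left unmet, so the deadweight loss that makes Econ 101 monopolies objectionable never appears, at least on the user side of the market.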

Facebook and Content Providers

The problem is that Facebook isn’t simply a social network: the service is a three-sided market — users, content providers, and advertisers — and while the basis of Facebook’s dominance is in the network effects that come from connecting all of those users, said dominance has seeped to those other sides.

Content providers are an obvious example: Facebook passed Google as the top traffic driver back in 2015, and as of last fall drove over 40% of traffic for the average news site, even after an algorithm change that reduced publisher reach.

So is that a monopoly when it comes to the content provider market? I would argue yes, thanks to the monopoly framework above.

Note that once again we are in a situation where there is not a clear price: no content provider pays Facebook to post a link (although they can obviously make said link into an advertisement). However, Facebook does, at least indirectly, make money from that content: the more users find said content engaging, the more time they will spend on Facebook, which means the more ads they will see.

This is why Facebook Instant Articles seemed like such a brilliant idea: on the one side, readers would have a better experience reading content, which would keep them on Facebook longer. On the other side, Facebook’s proposal to help publishers monetize — publishers could sell their own ads or, enticingly, Facebook could sell them for a 30% commission — would not only support the content providers that are one side of Facebook’s three-sided market, but also lock them into Facebook with revenue they couldn’t get elsewhere. The market I envisioned would have looked something like this:

[Chart: the envisioned Instant Articles market, with surplus shared between Facebook and publishers]
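The envisioned split is simple arithmetic: Facebook sells the publisher's ads and keeps the 30% commission mentioned above. The impression count and CPM below are made-up numbers purely for illustration.

```python
# Hypothetical Instant Articles economics: Facebook sells a publisher's ads
# and keeps a 30% commission. Impressions and CPM are illustrative numbers.

impressions = 1_000_000
cpm_dollars = 10                               # per thousand impressions (assumed)

gross_revenue = impressions // 1_000 * cpm_dollars       # 10,000
facebook_commission = gross_revenue * 30 // 100          # 3,000: the 30% cut
publisher_revenue = gross_revenue - facebook_commission  # 7,000 to the publisher
```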

However, Instant Articles haven’t turned out the way I expected: the consumer benefits are there, but Facebook has completely dropped the ball when it comes to monetizing the publishers using them. That is not to say that Facebook isn’t monetizing as a whole, thanks in part to that content, but rather that the company wasn’t motivated to share. Or, to put it another way, Facebook kept most of the surplus for itself:

[Chart: Facebook keeping most of the surplus generated by publisher content]

In this case, it’s not that Facebook is setting a higher price to maximize their profits; rather, they are sharing less of their revenue; the outcome, though, is the same — maximized profits. Keep in mind this approach isn’t possible in competitive markets: were there truly competitors for Facebook when it came to placing content, Facebook would have to share more revenue to ensure said content was on its platform. In truth, though, Facebook is so dominant when it comes to attention that it doesn’t have to do anything for publishers at all (and, if said publishers leave Instant Articles, well, they will still place links, and the users aren’t going anywhere regardless).

Facebook and Advertisers

There may be similar evidence — that Facebook is able to reduce supply in a way that increases price and thus profits — emerging in advertising. In a perfectly competitive market the cost of advertising would look like this:

[Chart: advertising price and quantity in a competitive market]

Facebook, though, will soon be limiting quantity, or at least limiting its growth. On last November’s earnings call CFO Dave Wehner said that Facebook would stop increasing ad load in the summer of 2017 (i.e. Facebook has been increasing the number of ads relative to content in the News Feed for a long time, but would stop doing so). What was unclear — and as I noted at the time, Wehner was quite evasive in answering this — was whether or not that would cause the price per ad to rise.

There are two possible reasons for Wehner to have been evasive:

  • Prices will not rise, which would be a bad sign for Facebook: it would mean that despite all of Facebook’s data, their ads are not differentiated, and that money that would have been spent on Facebook will simply be spent elsewhere
  • Prices will rise, which would mean that Facebook’s ads are differentiated such that Facebook can potentially increase profits by restricting supply

To put the second possibility in graph form:

[Chart: restricted ad supply raising prices and monopoly profits]

Note that Facebook has already said that revenue growth will slow because of this change; that, though, is not inconsistent with having monopoly power. Monopolists seek to maximize profit, not revenue. Alternately, it could simply be that Facebook is worried about the user experience; it will be fascinating to see how the company’s bottom line shifts with these changes.
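The profit-versus-revenue distinction can be shown with toy numbers: a hypothetical linear demand curve for ad impressions and an assumed per-ad "user experience" cost, standing in for the engagement Facebook burns by showing one more ad.

```python
# Hypothetical ad-load trade-off: linear demand for ads, P = 100 - Q, plus an
# assumed per-ad "user experience" cost of 20 (both numbers are made up).

def price(q):
    return 100 - q                   # price per ad at ad load q

def revenue(q):
    return price(q) * q

def profit(q):
    return revenue(q) - 20 * q       # cost: the user experience degraded per ad

# Search over integer ad loads for the revenue- and profit-maximizing points.
loads = range(0, 101)
q_rev_max = max(loads, key=revenue)      # 50: the ad load maximizing revenue
q_profit_max = max(loads, key=profit)    # 40: the ad load maximizing profit

# Restricting ad load below the revenue-maximizing point lowers revenue...
assert revenue(q_profit_max) < revenue(q_rev_max)    # 2400 < 2500
# ...but raises profit, which is what a monopolist actually maximizes.
assert profit(q_profit_max) > profit(q_rev_max)      # 1600 > 1500
```

Under these assumptions the profit-maximizing firm deliberately shows fewer ads than the revenue-maximizing one would: slowing revenue growth is entirely consistent with monopoly power.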

Monopolies and Innovation

Still, even if Facebook does have monopoly power when it comes to content discovery and distribution and in digital advertising, is that really a problem for users? Might it even be a good thing?

Facebook board member Peter Thiel certainly thinks so. In Zero to One Thiel not only makes the obvious point that businesses that are monopolies are ideal, but says that models like the ones I used above aren’t useful because they presume a static world.

In a static world, a monopolist is just a rent collector. If you corner the market for something, you can jack up the price; others will have no choice but to buy from you…But the world we live in is dynamic: it’s possible to invent new and better things. Creative monopolists give customers more choices by adding entirely new categories of abundance to the world. Creative monopolies aren’t just good for the rest of society; they’re powerful engines for making it better.

The dynamism of new monopolies itself explains why old monopolies don’t strangle innovation. With Apple’s iOS at the forefront, the rise of mobile computing has dramatically reduced Microsoft’s decades-long operating system dominance. Before that, IBM’s hardware monopoly of the ’60s and ’70s was overtaken by Microsoft’s software monopoly. AT&T had a monopoly on telephone service for most of the 20th century, but now anyone can get a cheap cell phone plan from any number of providers. If the tendency of monopoly businesses were to hold back progress, they would be dangerous and we’d be right to oppose them. But the history of progress is a history of better monopoly businesses replacing incumbents. Monopolies drive progress because the promise of years or even decades of monopoly profits provides a powerful incentive to innovate. Then monopolies can keep innovating because profits enable them to make the long-term plans and to finance the ambitious research projects that firms locked in competition can’t dream of.

The problem is that Thiel’s examples refute his own case: decades-long monopolies like those of AT&T, IBM, and Microsoft sure seem like a bad thing to me! Sure, they were eventually toppled, but not after extracting rents and, more distressingly, stifling innovation for years. Think about Microsoft: the company spent billions of dollars on R&D and gave endless demos of futuristic tech; the most successful product that actually shipped (Kinect) ended up harming the product it was supposed to help.2

Indeed, it’s hard to think of any examples where established monopolies produced technology that wouldn’t have been produced by the free market; Thiel wrongly conflates the drive of new companies to create new monopolies with the right of old monopolies to do as they please.

That is why Facebook’s theft of not just Snapchat features but its entire vision bums me out, even if it makes good business sense. I do think leveraging the company’s network monopoly in this way hurts innovation, and the same monopoly graphs explain why. In a competitive market the return from innovation meets the demand for customers to determine how much innovation happens — and who reaps its benefits:

[Chart: returns to innovation in a competitive market]

A monopoly, though, doesn’t need that drive to innovate — or, more accurately, doesn’t need to derive a profit from innovation, which leads to lazy spending and prioritizing tech demos over shipping products. After all, the monopoly can simply take others’ innovation and earn even more profit than they would otherwise:

[Chart: a monopoly capturing the returns from others’ innovation]

This, ultimately, is why yesterday’s keynote was so disappointing. Last year, before Facebook realized it could just leverage its network to squash Snap, Mark Zuckerberg spent most of his presentation laying out a long-term vision for all the areas in which Facebook wanted to innovate. This year couldn’t have been more different: there was no vision, just the wholesale adoption of Snap’s, plus a whole bunch of tech demos that never bothered to tell a story of why they actually mattered for Facebook’s users. It will work, at least for a while, but make no mistake, Facebook is the only winner.

  1. If any individual firm’s marginal costs are higher, they will go out of business; if they are lower they will temporarily dominate the market until new competitors enter. Yes, this is all theoretical!
  2. I’m referring to the fact the Xbox One had a higher price and lower specs than the PS4, thanks in large part to the bundled Kinect