Apple’s Strengths and Weaknesses

The San Jose location of WWDC, Apple’s annual developer conference, felt a bit odd, but Apple sought to strike a familiar tone: the artwork on and around the San Jose McEnery Convention Center featured a top-down view of humans, and a familiar message:

[Image: WWDC banner artwork]

The idea of Apple existing at the intersection of technology and liberal arts was central to the late Steve Jobs’ conception of Apple and, without question, a critical factor in Apple’s success: at a time when technology was entering consumers’ daily lives, Apple created products — one product, really, the iPhone — that appealed to consumers not only because of what it did but because of how it did it.

That said, it was telling that this artwork and the sentiment it signified were not referenced in the keynote itself; after a humorous skit about a world without apps, Tim Cook delivered platitudes about how Apple and its developers were on a “collective mission to change the world”, and immediately launched into what he said were six important announcements. It was not dissimilar to Sundar Pichai’s opening at Google I/O: when the announcements that matter are grounded in the realities of a company’s core competencies and position in the market, vision can feel extraneous.

Apple’s Announcements

Cook’s first four announcements spoke to those core capabilities and the position they afford Apple (or don’t, as the case may be) in the markets in which it competes:

tvOS: It was generous of Cook to give tvOS top billing: the only announcement of note was the upcoming availability of Amazon Prime Video on Apple TV. That itself is a reminder of Apple’s diminished position in the space: winning in TV is not about hardware or software, much less the integration of the two, but rather content. The brevity of this announcement — there wasn’t even the traditional executive hand-off — spoke to Apple’s status as an also-ran.

watchOS: This garnered more time, and the headline feature was the Siri watch face. The watch face, which implied a broadening of Siri’s brand from voice to context-based general assistant, seeks to anticipate and deliver the information you need when you need it. The model is Google Now; the difference is that Siri is now housed in an attractive and increasingly popular watch that works natively with an iPhone, while the equivalent Google service requires not simply a different watch but a different phone entirely. It is a testament to Apple’s biggest advantage: thanks to the iPhone the company already owns the “best” customers,1 frequently rendering moot Google’s superiority in managing information.

macOS: This actually encompassed two of Cook’s six promised announcements;2 the separation of macOS and Mac computers was, I suspect, born of Apple’s desire to convince developers and other pro users that the company was not abandoning their favorite platform. Moreover, the addition of hardware announcements, after several years in which WWDC was software only, resulted in a very different feel to this keynote: after all, hardware is exciting, even if, in the long run, it is software that actually matters. That feeling, though, goes to the very core of what Apple sells: superior hardware differentiated — and thus sold at a handsome margin — by exclusive software.

As is always the case with the modern incarnation of Apple, though, the announcements that truly mattered centered around iOS.

iOS 11

The iOS-related announcements, despite being only one of Cook’s “Big Six”, could have been their own keynote; given the importance of mobile generally and iOS specifically that would have been more than justified. Taken as a whole the iOS segment in particular highlighted what Apple does best, what it struggles with, and what reasons there are to be both optimistic and pessimistic about the company’s fortunes in the long run.

Strength: Defaults

Controlling one of the two dominant mobile operating systems grants Apple the power of defaults. That means Messages is both an iPhone lock-in and a channel to introduce new services like person-to-person Apple Pay. Siri can be accessed both via voice and the home button and, as with the watchOS update, is increasingly integrated throughout the operating system. Photos and Maps are used by the majority of iPhone customers, even if alternatives offer superior functionality.

Weakness: Limited Reach

At the same time, Messages will never reach the dominance of a service like WeChat because it is limited to Apple’s own platforms — as it should be! Messages is the canonical example of how strengths and weaknesses are two sides of the same coin: it is Messages exclusivity that allows it to be a lock-in, and it is that same exclusivity that limits the standalone value.

Strength: Hardware Integration

Peppered throughout Apple’s presentation were seemingly small features like new compression algorithms that depend on Apple controlling everything from Messages to the camera to the processor that makes it all work. The most impressive example was ARKit: in one fell swoop Apple leaped ahead of the rest of the industry in the race to realize the promise of augmented reality. The contrast to Facebook was striking: while the social network is seeking to leverage its control of content distribution to lure developers to build on Facebook’s “camera”, Apple is not only offering the same opportunity (the results of which can, of course, be shared on Facebook or Instagram), but also delivering a superior set of APIs that, by virtue of being part of that vertical stack, are both more powerful and accessible than anything a 3rd-party application can deliver.

Weakness: Services

While Apple bragged about Siri’s natural language capabilities and alluded to a limited number of new “intents” that can be leveraged by apps, it is not an accident that there were no slides about accuracy, speed, or developer support: Siri is well behind the competition in all three. More fundamentally, all of Apple’s services are intrinsically limited by the fact that they exist to sell Apple hardware: those services, and the teams that work on them, will never be the most important people in the company, and their development will be constrained by the culture of Apple itself.

Strength: Privacy

Apple not only touted its privacy credentials, it also showed off new features to actively limit things like autoplaying videos and advertising networks that follow you across sites. As a user, I find both very welcome; strategically, both features follow from the fact that Apple makes money on its hardware, while companies like Google, Facebook, and other online businesses rely on advertising and the collection of data.

Weakness: Data

Collecting data is useful for more than advertising, though. Here Google is the obvious counter: certainly the search company wants to better target advertisements, but the benefits gained from data go far beyond overt monetization. It is data that drives Google’s superior machine learning capabilities and the customer-friendly features that follow in apps like Google Photos. Interestingly, Apple made moves in this direction, syncing things like facial recognition data and Messages across devices, favoring convenience over a very slight increase in the risk to privacy. To be clear, the data will still be encrypted, both in transit and at rest, but that is my point: encryption means that Apple cannot leverage the data it will now store to make its services better.

Strength: The App Store

The strategic role of 3rd-party apps has shifted over time: once a differentiator for iOS, Android has largely reached parity, and apps are now table stakes. They are also a big moneymaker: Apple has been pushing the narrative on Wall Street that it is a services company, fueled by the $30 billion the company has collected from app sales and especially in-app purchases in free-to-play games; 30% of that total has come in the last year alone. Make no mistake, this is a compelling narrative: iPhone growth may be slowing in the face of saturation and elongated upgrade cycles, but that only means there is an ever-larger installed base from which to earn App Store revenue.

Weakness: Developer Economics

The success of free-to-play games and the associated in-app purchases has come at a cost, specifically, management blindness to the fact that the rest of the developer ecosystem isn’t nearly as healthy, and that the App Store is no longer a differentiator from Android. The fundamental problem remains that for productivity apps in particular it is necessary to monetize your best customers over time; Apple has improved the situation, particularly with the addition of subscription pricing and de facto trials (basically, starting a subscription at $0), but hasn’t made any moves to support trials or upgrade pricing for paid apps, despite the fact that this is the proven model for productivity applications on the Mac. I have long argued that bad developer economics is the fundamental reason the iPad hasn’t fulfilled its potential; yesterday’s iPad software enhancements were welcome and will help, but I suspect letting developers set their own business models would be even more transformative.

Strength and Weakness: Business Model

This point is part and parcel with all of the above: Apple’s strengths derive from the fact it sells software-differentiated hardware for a significant margin, which allows for exclusive apps and services set as defaults, deep integration from chipset to API, a focus on privacy, and total control of the developer ecosystem. And, on the flipside, Apple only reaches a segment of the market, is less incentivized to and capable of delivering superior services, has less data, and can afford to take developers for granted.

HomePod

Apple’s final announcement encapsulated all of these tensions. The long-rumored competitor to Amazon Echo and Google Home was, fascinatingly, framed as anything but. Cook began the unveiling by referencing Apple’s longtime focus on music, and indeed, the first several minutes of the HomePod segment were entirely about its quality as a speaker. It was, in my estimation, an incredibly smart approach: if you are losing the game, as Siri is to Alexa and Google, best to change the rules, and having heard the HomePod, its sound quality is significantly better than the Amazon Echo (and, one can safely assume, Google Home). Moreover, the ability to link multiple HomePods together is bad news for Sonos in particular (the HomePod sounded significantly better than the Sonos Play 3 as well).

Of course, superior sound quality is what you would expect from a significantly more expensive speaker: the HomePod costs $350, while the Sonos Play 3 is $300, and the Amazon Echo is $150. From Apple’s perspective, though, a high price is a feature, not a bug: remember, the company has a hardware-based business model, which means there needs to be room for a meaningful margin. The Echo is the opposite: because it is a hardware means to the service ends that is Amazon, it can be priced with much lower margins and, as has already happened, be augmented with even cheaper devices like Echo Dots (or, in the case of the Echo Show, offer superior functionality for a price that is still more than $100 cheaper than the HomePod).

The result is a product that, beyond being massively late to market (in part because of iPhone-induced myopia), is inferior to the competition on two of three possible vectors: the HomePod is significantly more expensive than an Echo or Google Home, it has an inferior voice assistant, but it has a better speaker. That is not as bad as it sounds: after all, the iPhone is significantly more expensive than most other smartphones, it has inferior built-in services, but it has a superior user experience otherwise. The difference — and this is why the iPhone is so much more dominant than any other Apple product — is that everyone already needs a phone; the only question is which one. It remains to be seen how many people need a truly impressive speaker.

This, broadly speaking, is the challenge for Apple moving forward: in what other categories does its business model (and everything that is tied up into that, including the company’s product development process, culture, etc.) create an advantage instead of a disadvantage? What existing needs can be met with a superior user experience, or what new needs — like the previously unknown need for wireless headphones that are always charged — can be created? To be clear, the iPhone is and will continue to be a juggernaut for a long time to come; indeed, it is so dominant that Apple could not change the underlying business model and resultant strengths and weaknesses even if they tried.

  1. Speaking strictly of which customers generate the most monetary value, either through their purchases or advertising targeting
  2. iPad hardware and optimizations all fell under the iOS umbrella

Faceless Publishers

When I first worked for a (student) newspaper, the job of a publisher seemed odd to me; as far as I and my editorial colleagues were concerned, the publisher was the person the editor-in-chief, whom we viewed as the boss, occasionally griped about after a few too many drinks, usually with the assertion that he (in that case) was a bit of a nuisance.

That attitude, of course, was the luxury of print: whatever happened on the other side of the office didn’t have any impact on the (in our eyes) heroic efforts to produce fresh content every day. We were the ones staying in the office until the wee hours of the night, writing, editing, and laying out the newspaper that would magically appear on newsstands the next morning, all while the publisher and his team were at home in bed.

The moral of this story is obvious: the publisher represented the business side of the newspaper, and the effect of the Internet was to make the job and impact of editorial easier and that of a publisher immeasurably harder, in large part because many of a publisher’s jobs became obsolete; it is the editorial side, though, that has paid the price.

The Jobs a Publisher Did

In the days of print, publishers provided multiple interlocking functions that made newspapers into fabulous businesses:

  • Brand: A publisher had a brand, specifically, the name of the publication; this was the primary touchpoint for readers, whether they were interested in national news, local news, sports, or the funny pages.
  • Revenue Generation: Most publishers drove revenue in two ways: some money was made through subscriptions, the selling, administration, and support of which was handled by dedicated staff; most money was made from advertising, which had its own dedicated team.
  • Human Resources: Editorial staff were free to write and complain about their publishers because everything else in their work life was taken care of, from payroll to travel expenses to office supplies.

What tied these functions together was distribution: a publisher owned printing presses and delivery trucks which, combined with their established readership and advertising relationships, gave most newspapers an effective monopoly (or oligopoly) in their geographic area on readers, advertisers, and writers:

[Diagram: publisher functions tied together by distribution]

Each of these functions supported the other: the brand drove revenue generation which paid for editorial that delivered on the brand promise, all underpinned by owning distribution.

Publishing’s Downward Spiral

It is hardly news, particularly on this blog, that this model has fallen apart. The most obvious culprit is that on the Internet distribution, particularly of text and images, is effectively free, which meant that advertisers had new channels: first ad networks that operated at scale across publishers, and increasingly Facebook and Google, which offer the power to reach individuals directly.

[Diagram: model disintegration]

I wrote about this progression in Popping the Publishing Bubble, and the intertwined functionality of publishers explains the downward spiral that followed: with less revenue there was less money for quality journalism (and a greater impetus to chase clicks), which meant a devaluing of the brand, which meant fewer readers, which led to even less money.

What made this downward spiral particularly devastating is that, as demonstrated by the advertising shift, newspapers did not exist in a vacuum. Readers could read any newspaper, or digital-only publisher, or even individual bloggers. And, just as social media made it possible for advertisers to target individuals, it also made everyone a content creator pushing their own media into the same feed as everyone else: the brand didn’t matter at all, only the content, or, in a few exceptional cases, the individual authors, many of whom amassed massive followings of their own; one prominent example is Bill Simmons, the American sportswriter.

Vox Media + The Ringer

I wrote about Simmons two years ago in Grantland and the (Surprising) Future of Publishing, and noted that media entities needed to think about monetization holistically:

Too much of the debate about monetization and the future of publishing in particular has artificially restricted itself to monetizing text. That constraint made sense in a physical world: a business that invested heavily in printing presses and delivery trucks didn’t really have a choice but to stick the product and the business model together, but now that everything — text, video, audio files, you name it — is 1’s and 0’s, what is the point in limiting one’s thinking to a particular configuration of those 1’s and 0’s?

In fact, it’s more than possible that in the long-run the current state of publishing — massive scale driven by advertising on one hand, and one-person shops with low revenue numbers and even lower costs on the other — will end up being an aberration. Focused, quality-obsessed publications will take advantage of bundle economics to collect “stars” and monetize them through some combination of subscriptions (less likely) or alternate media forms. Said media forms, like podcasts, are tough to grow on their own, but again, that is what makes them such a great match for writing, which is perfect for growth but terrible for monetization.

My back-of-the-envelope calculations estimated that Simmons’ Ringer podcast network was likely generating millions of dollars, and in an interview with Recode earlier this year, Simmons confirmed that is the case, claiming that podcast revenue was more than covering the cost of creating not just podcasts but the website that, at least in theory, created podcast listeners.

Still, given Simmons’ ambitions, it would certainly be better were the site more than a cost center, which makes the company’s most recent announcement particularly interesting. From the New York Times:

The Ringer, a sports and culture website created by Bill Simmons, will soon be hosted on Vox Media’s platform but maintain editorial independence under a partnership announced on Tuesday. Mr. Simmons, a former ESPN personality, will keep ownership of The Ringer, but Vox will sell advertising for the site and share in the revenue. The Ringer will leave its current home on Medium, where it has been hosted since it began in June 2016.

Jim Bankoff, Vox’s chief executive, said in a phone interview that the partnership was the first of its type for the company and would allow it to expand its offerings to advertisers. Mr. Simmons said in a statement: “This partnership allows us to remain independent while leveraging two of the things that Vox Media is great at: sales and technology. We want to devote the next couple of years to creating quality content, innovating as much as we can, building our brand and growing The Ringer as a multimedia business.”

Simmons is exactly right about the benefits he gets from the deal: instead of building duplicative technology and ad sales infrastructure, The Ringer can simply use Vox Media’s. This is less important with regards to the technology (Vox’s insistence that Chorus is a meaningful differentiator notwithstanding) but hugely important when it comes to advertising. It’s not simply the expense of building an infrastructure for ad sales; the top line is even more critical: it is all but impossible to compete with Google and Facebook for advertising dollars without massive scale.

Make no mistake, Simmons is the sort of writer that many advertisers would be happy to advertise next to (his podcast has had an impressive slate of brand names, in addition to the usual mainstays like Squarespace and Casper mattresses); the problem is that when it comes to the return-on-investment of buying ads, the “investment” — particularly time — is just as important as the “return”: a brand looking to advertise directly on premium media is far more likely to deal with Vox Media and its huge stable of sites than it is to do a relatively small deal with a site like The Ringer.

Indeed, the bifurcation in the Internet’s impact on editorial and advertising — the former is becoming atomized, the latter consolidated — explains why the implications for Vox Media are, in my estimation, the more important takeaway from this deal.

Vox Media’s Upside

To date Vox Media has been a relatively traditional publisher, albeit one that has executed better than most: the company has built strong brands that attract audiences which can be monetized through advertising, and that revenue, along with venture capital, has been fed into an impressive editorial product that builds up the company’s brands.

The Ringer, though, is not a Vox Media brand: it is Simmons’ brand, a point he emphasized in his statement, and that’s great news for Vox. The problem with editorial is that while the audience scales, production doesn’t: content still has to be created on an ongoing basis, and that means high variable costs.

Infrastructure, though, does scale: Vox Media uses the same underlying technology for all of its sites, which is exactly what you would expect given that software can be replicated endlessly. Crucially, the same principle applies to advertising: one sales team can sell ads across any number of sites, and the more impressions the better. Presuming The Ringer ends up being not an outlier but rather the first of many similar deals,1 then that means that Vox Media has far more growth potential than it did as long as it was focused only on monetizing its owned-and-operated content.

Publishers of the Future

The new model portended by this deal looks something like this:

[Diagram: the faceless publisher model]

In this model the most effective and scalable publisher is faceless: atomized content creators, fueled by social media, build their own brands and develop their own audiences; the publisher, meanwhile, builds scale on the backside, across infrastructure, monetization, and even human-resource type functions.2 This last point makes a faceless publisher more than an ad network, and crucially, I suspect the greatest impact will not be (just) about ads.

Earlier this month I wrote about the future of local news, which I argued would entail relatively small subscription-based publications. Said publications would be more viable were there a faceless publisher in place to provide technology, including subscription and customer support capabilities, and all of the other repeatable minutiae that comes with running a business. Publishers still matter, but much of what matters can be scaled and offered as a service without being tied to a brand and a specific set of content.

I suspect this is part of the endgame for publishing on the Internet: free distribution blew up the link between editorial and publishing and drove them in opposite directions — atomization on one side and massively greater scale on the other. And now, that same reality makes possible a new model: a huge number of small publications backed by entities more concerned with building viable businesses than having memorable names.

Disclosure: I have previously spoken at the Vox Media-owned Code Media conference and was previously a guest on The Bill Simmons Podcast; I received no monetary compensation for either appearance

  1. There is already a parallel to The Ringer within Vox Media: the company’s vast network of team-specific sites that sit under the SBNation umbrella
  2. This is where Medium went wrong: the company made motions towards this model — which is why The Ringer is hosted there — but has decided to pursue a Medium subscription model instead

Tulips, Myths, and Cryptocurrencies

Everyone knows about the Tulip Bubble, first documented by Charles Mackay in 1841 in his book Extraordinary Popular Delusions and the Madness of Crowds:

In 1634, the rage among the Dutch to possess [tulips] was so great that the ordinary industry of the country was neglected, and the population, even to its lowest dregs, embarked in the tulip trade. As the mania increased, prices augmented, until, in the year 1635, many persons were known to invest a fortune of 100,000 florins in the purchase of forty roots. It then became necessary to sell them by their weight in perits, a small weight less than a grain. A tulip of the species called Admiral Liefken, weighing 400 perits, was worth 4400 florins; an Admiral Van der Eyck, weighing 446 perits, was worth 1260 florins; a Childer of 106 perits was worth 1615 florins; a Viceroy of 400 perits, 3000 florins, and, most precious of all, a Semper Augustus, weighing 200 perits, was thought to be very cheap at 5500 florins. The latter was much sought after, and even an inferior bulb might command a price of 2000 florins. It is related that, at one time, early in 1636, there were only two roots of this description to be had in all Holland, and those not of the best. One was in the possession of a dealer in Amsterdam, and the other in Harlaem [sic]. So anxious were the speculators to obtain them, that one person offered the fee-simple of twelve acres of building-ground for the Harlaem tulip. That of Amsterdam was bought for 4600 florins, a new carriage, two grey horses, and a complete suit of harness.

Mackay goes on to recount other tall tales; I’m partial to the sailor who thought a Semper Augustus bulb was an onion, and stole it for his breakfast; “Little did he dream that he had been eating a breakfast whose cost might have regaled a whole ship’s crew for a twelvemonth.”

[Image: Jean-Léon Gérôme, The Tulip Folly]

Anyhow, we all know how it ended:

At first, as in all these gambling mania, confidence was at its height, and everybody gained. The tulip-jobbers speculated in the rise and fall of the tulip stocks, and made large profits by buying when prices fell, and selling out when they rose. Many individuals grew suddenly rich. A golden bait hung temptingly out before the people, and one after the other, they rushed to the tulip-marts, like flies around a honey-pot. Every one imagined that the passion for tulips would last for ever…

At last, however, the more prudent began to see that this folly could not last for ever. Rich people no longer bought the flowers to keep them in their gardens, but to sell them again at cent per cent profit. It was seen that somebody must lose fearfully in the end. As this conviction spread, prices fell, and never rose again. Confidence was destroyed, and a universal panic seized upon the dealers…The cry of distress resounded every where, and each man accused his neighbour. The few who had contrived to enrich themselves hid their wealth from the knowledge of their fellow-citizens, and invested it in the English or other funds. Many who, for a brief season, had emerged from the humbler walks of life, were cast back into their original obscurity. Substantial merchants were reduced almost to beggary, and many a representative of a noble line saw the fortunes of his house ruined beyond redemption.

Thanks to Mackay’s vivid account, tulips are a well-known cautionary tale, applied to asset bubbles of all types; here’s the problem, though: there’s a decent chance Mackay’s account is completely wrong.

The Truth About Tulips

In 2006, UCLA economist Earl Thompson wrote a paper entitled The Tulipmania: Fact or Artifact?1 that includes this chart, which looks like Mackay’s bubble:

[Chart: tulip price index, resembling Mackay’s bubble]

However, as Thompson wrote in the paper, “appearances are sometimes quite deceiving.” A much more accurate chart looks like this:

[Chart: tulip contract prices, distinguishing options from spot and futures prices]

Mackay was right that there were insanely high prices: those prices, though, were for options; if the actual price of tulips was lower than the contract price on the settlement date, the owner of the option only needed to pay a small percentage of the contract price (ultimately 3.5%). Meanwhile, actual spot prices, and futures that locked in a price, stayed flat.
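Thompson’s point can be made concrete with a toy calculation; the florin amounts below are illustrative, not drawn from his paper:

```python
# Illustrative sketch of Thompson's argument: once contracts were treated as
# options under the 3.5% rule, a buyer's worst case was a small fee, not the
# full contract price. The numbers here are hypothetical.

def option_cost(contract_price: float, spot_at_settlement: float,
                exit_fee: float = 0.035) -> float:
    """Cost to the buyer: exercise at the contract price if the spot price
    is at least as high (the contract is worth honoring); otherwise walk
    away and pay only the 3.5% fee."""
    if spot_at_settlement >= contract_price:
        return contract_price          # exercise: buy at the agreed price
    return exit_fee * contract_price   # abandon: pay 3.5% of the contract

# A contract struck at 1,000 florins when the spot price has fallen to 100:
futures_loss = 1000 - 100              # a true futures buyer is out 900 florins
option_loss = option_cost(1000, 100)   # an option buyer is out only 35 florins

print(futures_loss, option_loss)  # 900 35.0
```

With downside capped at 35 florins, agreeing to eye-popping contract prices was a cheap long-shot bet rather than madness, which is why the quoted “prices” Mackay recorded could soar while spot prices stayed flat.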

The broader context comes from this chart:

[Chart: tulip prices in broader context]

As Thompson explains, tulips were in fact becoming more popular, particularly in Germany, and, as the first phase of the Thirty Years’ War wound down, it looked like Germany would be victorious, which would mean a better market for tulips. In early October 1636, though, Germany suffered an unexpected defeat, and the tulip price crashed, not because it was irrationally high, but because of an external shock.

As Thompson recounts, that October crash was in fact a financial disaster for many, including some public officials who had bought tulip futures on a speculative basis; to get themselves out of trouble, said officials retroactively decreed that futures were in fact options. These deliberations were well-publicized throughout the winter of 1636 and early 1637, but not made official until February 24th; the dramatic rise in options, then, is explained as a longshot bet that the conversion would not actually take place: when it did, the price of the options naturally dropped to the spot price.2

By Thompson’s reckoning, Mackay’s entire account was a myth.

Myths and Humans

Early on in Sapiens: A Brief History of Humankind, Yuval Noah Harari explains the importance of myth:

Once the threshold of 150 individuals is crossed, things can no longer work [on the basis of intimate relations]…How did Homo sapiens manage to cross this critical threshold, eventually founding cities comprising tens of thousands of inhabitants and empires ruling hundreds of millions? The secret was probably the appearance of fiction. Large numbers of strangers can cooperate successfully by believing in common myths.

Any large-scale human cooperation — whether a modern state, a medieval church, an ancient city or an archaic tribe — is rooted in common myths that exist only in people’s collective imagination. Churches are rooted in common religious myths. Two Catholics who have never met can nevertheless go together on crusade or pool funds to build a hospital because they both believe that God was incarnated in human flesh and allowed Himself to be crucified to redeem our sins. States are rooted in common national myths. Two Serbs who have never met might risk their lives to save one another because both believe in the existence of the Serbian nation, the Serbian homeland and the Serbian flag. Judicial systems are rooted in common legal myths. Two lawyers who have never met can nevertheless combine efforts to defend a complete stranger because they both believe in the existence of laws, justice, human rights – and the money paid out in fees. Yet none of these things exists outside the stories that people invent and tell one another. There are no gods in the universe, no nations, no money, no human rights, no laws, and no justice outside the common imagination of human beings.

The implication of Harari’s argument3 is pretty hard to wrap one’s head around.4 Take the term “tulip bubble”: everyone knows it is in reference to a speculative mania that will end in a crash, even those like me — and now you — that have learned about what actually happened in the Netherlands in the winter of 1636. Like I said, it’s a myth — and myths matter.

The Rise in Cryptocurrencies

The reason I mention the tulip bubble at all is probably obvious:

[Chart: total cryptocurrency market capitalization over time]

This is the total market capitalization of all cryptocurrencies. To date that has mostly meant Bitcoin, but over the last two months Bitcoin’s share of cryptocurrency capitalization has actually plummeted to less than 50%, thanks to the sharp rise of Ethereum and Ripple in particular:

[Chart: each cryptocurrency’s share of total market capitalization]

As you might expect, the tulip is having a renaissance, or to be more precise, our shared myth of the tulip bubble. This tweet summed up the skeptics’ sentiment well:

To be perfectly clear, this random twitterer may very well be correct about an impending crash. And, in the grand scheme of things, it is mostly true today that cryptocurrencies don’t have meaningful “industrial [or] consumer use except as a medium of exchange.” What he is most right about, though, is that cryptocurrencies have no intrinsic value.

Compare cryptocurrencies to, say, the U.S. dollar. The U.S. dollar is worth, well, a dollar because…well, because the United States government says it is.5 And because currency traders have established it as such, relative to other currencies. And the worth of those currencies is based on…well, like the dollar, they are based on a mutual agreement of everyone that they are worth whatever they are worth. The dollar is a myth.

Of course this isn’t a new view: there are still those who believe it was a mistake to move the dollar off of the gold standard: that was a much more concrete definition. After all, you could always exchange one dollar for a fixed amount of gold, and gold, of course, has intrinsic value because…well, because we humans think it looks pretty, I guess. In fact, it turns out gold — at least the idea that it is intrinsically worth more than other minerals — is another myth.

I would argue that cryptocurrencies broadly, and Bitcoin especially, are no different. Bitcoin has been around for eight years now; it has captured the imagination, ingenuity, and investment of a massive number of very smart people; and it is increasingly trivial to convert it to the currency of your choice. Can you use Bitcoin to buy something from the shop down the street? Well, no, but you can’t very well use a piece of gold either, and no one argues that the latter isn’t worth whatever price the gold market is willing to bear. Gold can be converted to dollars which can be converted to goods, and Bitcoin is no different. To put it another way, enough people believe that gold is worth something, and that is enough to make it so, and I suspect we are well past that point with Bitcoin.

The Utility of Blockchains

To be fair, there is an argument that gold is valuable because it does have utility beyond ornamentation (I, of course, would argue that ornamentation is a perfectly valuable thing in its own right): for example, gold is used in electronics and dentistry. An argument based on utility, though, applies even more so to cryptocurrencies. I wrote back in 2014:

The defining characteristic of anything digital is its zero marginal cost…Bitcoin and the breakthrough it represents, broadly speaking, changes all that. For the first time something can be both digital and unique, without any real world representation. The particulars of Bitcoin and its hotly-debated value as a currency I think cloud this fact for many observers; the breakthrough I’m talking about in fact has nothing to do with currency, and could in theory be applied to all kinds of objects that can’t be duplicated, from stock certificates to property deeds to wills and more.

One of the big recent risers, Ethereum, is exactly that: Ethereum is based on a blockchain,6 like Bitcoin, which means it has an attached currency (Ether) that incentivizes miners to verify transactions. However, the protocol includes smart contract functionality, which means that two untrusted parties can engage in a contract without a 3rd-party enforcement entity.7
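To make concrete what “digital and unique” means, here is a minimal sketch of the hash-chaining at the heart of the idea (illustrative names; real blockchains add mining, consensus, and digital signatures): every block commits to its predecessor’s hash, so history cannot be rewritten without detection.

```python
import hashlib
import json

def block_hash(block):
    # Hash the block's full contents; because those contents include the
    # previous block's hash, each hash commits to the entire history.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_block(prev_hash, data):
    return {"prev_hash": prev_hash, "data": data}

def verify(chain):
    # Valid only if every block correctly commits to its predecessor.
    return all(
        curr["prev_hash"] == block_hash(prev)
        for prev, curr in zip(chain, chain[1:])
    )

genesis = make_block("0" * 64, {"from": "alice", "to": "bob", "amount": 5})
second = make_block(block_hash(genesis), {"from": "bob", "to": "carol", "amount": 2})
chain = [genesis, second]

assert verify(chain)             # the untampered chain checks out
genesis["data"]["amount"] = 500  # try to rewrite history...
assert not verify(chain)         # ...and the hash commitment breaks
```

It is this tamper-evidence, replicated across many untrusted machines, that lets a purely digital record behave like a unique object.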

One of the biggest applications of this functionality is, unsurprisingly, other cryptocurrencies. The last year in particular has seen an explosion in Initial Coin Offerings (ICOs), usually on Ethereum. In an ICO a new blockchain-based entity is created, with the initial “tokens” — i.e. currency — being sold (for Ether or Bitcoin). These initial offerings are valuable, at least in theory, because if the application built on the blockchain is successful, its currency will increase in value over time.
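The mechanics can be sketched with a toy ledger (hypothetical names and numbers; a real token would live on a blockchain, typically as an Ethereum smart contract): the creator retains a share of the initial supply and sells the rest to early backers.

```python
class Token:
    """Toy ICO ledger (hypothetical names and numbers; a real token would
    live on a blockchain, typically as an Ethereum smart contract)."""

    def __init__(self, creator, supply, founder_share=0.2):
        # The creator retains a share of the initial supply; the rest sits
        # in a treasury to be sold to early backers for Ether or Bitcoin.
        retained = int(supply * founder_share)
        self.balances = {creator: retained, "treasury": supply - retained}

    def buy(self, buyer, amount):
        # An early backer exchanges Ether/Bitcoin for treasury tokens.
        assert self.balances["treasury"] >= amount, "sold out"
        self.balances["treasury"] -= amount
        self.balances[buyer] = self.balances.get(buyer, 0) + amount

token = Token("founder", supply=1_000_000)
token.buy("early_backer", 50_000)
# The founder's retained 200,000 tokens are worth more only if the
# application succeeds, tying the creator's payoff to the network's.
```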

This has the potential to be particularly exciting for the creation of decentralized networks. Fred Ehrsam explained on the Coinbase blog:

Historically it has been difficult to incentivize the creation of new protocols as Albert Wenger points out. This has been because 1) there had been no direct way to monetize the creation and maintenance of these protocols and 2) it had been difficult to get a new protocol off the ground because of the chicken and the egg problem. For example, with SMTP, our email protocol, there was no direct monetary incentive to create the protocol — it was only later that businesses like Outlook, Hotmail, and Gmail started using it and made a real business on top of it. As a result we see very successful protocols and they tend to be quite old. (Editor: and created when the Internet was government-supported)

Now someone can create a protocol, create a token that is native to that protocol, and retain some of that token for themselves and for future development. This is a great way to incentivize creators: if the protocol is successful, the token will go up in value…In addition, tokens help solve the classic chicken and the egg problem that many networks have…the value of a network goes up a lot when more people join it. So how do you get people to join a brand new network? You give people partial ownership of the network…


These two incentives are amazing offsets for each other. When the network is less populated and useful you now have a stronger incentive to join it.

This is a huge deal, and probably the most viable way out from the antitrust trap created by Aggregation Theory.

Party Like It’s 1999

The problem, of course, is that while blockchain applications make sense in theory, the road to them becoming a reality is still a long one. That is why I suspect the better analogy for blockchain-based applications and their associated cryptocurrencies is not tulips but rather the Internet itself, specifically the 1990s. Marc Andreessen is fond of observing, most recently on this excellent podcast with Barry Ritholtz, that all of the dot-com failures turned out to be viable businesses: they were just 15 years too early (the most recent example: Chewy.com, the spiritual heir of famed dot-com bust Pets.com, acquired earlier this year for $3.35 billion).

As the aphorism goes, being early (or late) is no different than being wrong, and that’s true in a financial sense. As I noted above, I would not be surprised if the ongoing run-up in cryptocurrency prices proves to be, well, a bubble. However, bubbles of irrationality and bubbles of timing are fundamentally different: the latter is based on something real, and the former is not. That is to say, a bubble of timing rests on a myth, while a bubble of irrationality is merely a fable — and myths can lift an entire species.

Consistent with my ethics policy, I do not own any Bitcoin or any other cryptocurrency; that said, the implication of this article is that comparing Bitcoin or any other cryptocurrency to stock in an individual company probably doesn’t make much sense.

  1. This link requires payment; there is an uploaded version of the paper here []
  2. Thompson’s take is not without its critics: see Brad DeLong’s takedown here []
  3. If you’re religious, please apply the point about “gods” to other religions — the point still stands! []
  4. I first encountered this sort of thinking in an Introduction to Constitutional Law course in university, when my professor contended that the U.S. Constitution was simply a shared myth, dependent on the mutual agreement of Americans and its leaders that it mattered. It’s a lesson that has served me well []
  5. And, as @nosunkcosts notes, said claim, via taxes, is backed by military might []
  6. A useful overview of how cryptocurrencies work is here []
  7. What happened with The Dao will not be covered here! []

Boring Google

My favorite part of keynotes is always the opening. That is the moment when the CEO comes on stage, not to introduce new products or features, but rather to create the frame within which new products and features will be introduced.

This is why last week’s Microsoft keynote was so interesting: CEO Satya Nadella spent a good 30 minutes on the framing, explaining a new world where the platform that mattered was not a distinct device or a particular cloud, but rather one that ran on all of them. In this framing Microsoft, freed from a parochial focus on its own devices, could be exactly that; the problem, as I noted earlier this week, is that platforms come from products, and Microsoft is still searching for an on-ramp other than Windows.

The opening to Google I/O couldn’t have been more different. There was no grand statement of vision, no mind-bending re-framing of how to think about the broader tech ecosystem, just an affirmation of the importance of artificial intelligence — the dominant theme of last year’s I/O — and how it fit in with Google’s original vision. CEO Sundar Pichai said in his prepared remarks:

It’s been a very busy year since last year, no different from my 13 years at Google. That’s because we’ve been focused ever more on our core mission of organizing the world’s information. And we are doing it for everyone, and we approach it by applying deep computer science and technical insights to solve problems at scale. That approach has served us very, very well. This is what has allowed us to scale up seven of our most important products and platforms to over a billion users…It’s a privilege to serve users at this scale, and this is all because of the growth of mobile and smartphones.

But computing is evolving again. We spoke last year about this important shift in computing, from a mobile-first, to an AI-first approach. Mobile made us re-imagine every product we were working on. We had to take into account that the user interaction model had fundamentally changed, with multitouch, location, identity, payments, and so on. Similarly, in an AI-first world, we are rethinking all our products and applying machine learning and AI to solve user problems, and we are doing this across every one of our products.

Honestly, it was kind of boring.

Google’s Go-to-Market Problem

After last year’s I/O I wrote Google’s Go-To-Market Problem, and it remains very relevant. No company benefited more from the open web than Google: the web not only created the need for Google search, but the fact that all web pages were on an equal footing meant that Google could win simply by being the best — and they did.

Mobile has been much more of a challenge: while Android remains a brilliant strategic move, its dominance is rooted more in its business model than in its quality (that’s not to denigrate its quality in the slightest, particularly the fact that Android runs on so many different kinds of devices at so many different price points). The point of Android — and the payoff today — is that Google services are the default on the vast majority of phones.

The problem, of course, is iOS: Apple has the most valuable customers (from a monetization perspective, to be clear), who mostly don’t bother to use different services than the default Apple ones, even if they are, in isolation, inferior. I wrote in that piece:

Yes, it is likely Apple, Facebook, and Amazon are all behind Google when it comes to machine learning and artificial intelligence — hugely so, in many cases — but it is not a fair fight. Google’s competitors, by virtue of owning the customer, need only be good enough, and they will get better. Google has a far higher bar to clear — it is asking users and in some cases their networks to not only change their behavior but willingly introduce more friction into their lives — and its technology will have to be special indeed to replicate the company’s original success as a business.

To that end, I thought there were three product announcements yesterday that suggested Google is on the right track:

Google Assistant

Google Assistant was first announced last year, but it was only available through the Allo messenger app, Google’s latest attempt to build a social product; the company also pre-announced Google Home, which would not ship until the fall, alongside the Pixel phone. You could see Google’s thinking with all three products:

  • Given that the most important feature of a messaging app is whether or not your friends or family also use it, Google needed a killer feature to get people to even download Allo. Enter Google Assistant.

  • Thanks to the company’s bad bet on Nest, Google was behind Amazon in the home. Google Assistant being smarter than Alexa was the best way to catch up.

  • A problem for Google with voice computing is that it is not clear what the business model might be; one alternative would be to start monetizing through hardware, and so the high-end Pixel phone was differentiated by Google Assistant.

All three approaches suffered from the same flaw: Google Assistant was the means to a strategic goal, not the end. The problem, though, is that unlike search, Google Assistant was not yet established as something people should jump through hoops to get: driving Google Assistant usage needs to be the goal; only then can it be leveraged for something else.

To that end Google has significantly changed its approach over the last 12 months.

  • Google Assistant is now available as its own app, both on Android and iOS. No unwanted messenger app necessary.

  • The Google Assistant SDK will allow Google Assistant to be built into just about anything. Scott Huffman, the VP of Google Assistant, said:

    We think the assistant should be available on all kinds of devices where people might want to ask for help. The new Google Assistant SDK allows any device manufacturer to easily build the Google Assistant into whatever they’re building, speakers, toys, drink-mixing robots, whatever crazy device all of you think up now can incorporate the Google Assistant. We’re working with many of the world’s best consumer brands and their suppliers so keep an eye out for the badge that says “Google Assistant Built-in” when you do your holiday shopping this year.

    This is the exact right approach for a services company.

  • That leads to the Pixel phone: earlier this year Google finally added Google Assistant to Android broadly — built-in, not an app — after having insisted just a few months earlier it was a separate product. The shifting strategy was a big mistake (as, arguably, is the entire program), but at least Google has ended up where they should be: everywhere.

Google Photos

Google Assistant has a long way to go, but there is a clear picture of what success will look like: Google Photos. Photos launched only two years ago, yet Pichai bragged that it now has over 500 million active users who upload 1.2 billion photos a day. This is a spectacular number for one very simple reason: Google Photos is not the default photo app for Android1 or iOS. Rather, Google has earned all of those photos simply by being better than the defaults, and the basis of that superiority is Google’s machine learning.

Moreover, much like search, Photos gets better the more data it gets, creating a virtuous cycle: more photos means more data which means a better experience which means more users which means more photos. It is already hard to see other photo applications catching up.


Yesterday Google continued to push forward, introducing suggested sharing, shared libraries, and photo books. All utilize image recognition (for example, you can choose to automatically share pictures of your kids with your significant other) and all make Photos an even better app, which will lead to new users, which will lead to more data.

What is particularly exciting from Google’s perspective is that these updates add a social component: suggested sharing, for example, is self-contained within Google Photos, creating ad hoc private networks with you and your friends. Not only does this help spread Google Photos, it is also a much more viable and sustainable approach to social networking than something like Google Plus. Complex entities like social networks are created through evolution, not top-down design, and they must rely on their creator’s strengths, not weaknesses.

Google Lens

Google Lens was announced as a feature of Google Assistant and Google Photos. From Pichai:

We are clearly at an inflection point with vision, and so today, we are announcing a new initiative called Google Lens. Google Lens is a set of vision-based computing capabilities that can understand what you’re looking at and help you take action based on that information. We’ll ship it first in Google Assistant and Photos, and then other products.

How does it work? If you run into something and you want to know what it is, say a flower, you can invoke Google Lens, point your phone at it and we can tell you what flower it is…Or if you’re walking on a street downtown and you see a set of restaurants across you, you can point your phone, because we know where you are, and we have our Knowledge Graph, and we know what you’re looking at, we can give you the right information in a meaningful way.

As you can see, we are beginning to understand images and videos. All of Google was built because we started understanding text and web pages, so the fact that computers can understand images and videos has profound implications for our core mission.

The profundity cannot be overstated: by bringing the power of search into the physical world, Google is effectively increasing the addressable market of searchable data by a massive amount, and all of that data gets added back into that virtuous cycle. The potential upside is about more than data though: being the point of interaction with the physical world opens the door to many more applications, from things like QR codes to payments.

My one concern is that Google is repeating its previous mistake: that is, seeking to use a new product as a means instead of an end. Limiting Google Lens to Google Assistant and Google Photos risks handicapping Lens’ growth; ideally Lens will be its own app — and thus the foundation for other applications — sooner rather than later.


Make no mistake, none of these opportunities are directly analogous to Google search, particularly the openness of their respective markets or the path to monetization. Google Assistant requires you to open an app instead of using what is built-in (although the Android situation should improve going forward), Photos requires a download instead of the default photos app, and Lens sits on top of both. It’s a far cry from simply setting Google as the home page of your browser, and Google making more money the more people used the Internet.

All three apps, though, are leaning into Google’s strengths:

  • Google Assistant is focused on being available everywhere
  • Google Photos is winning by being better through superior data and machine learning
  • Google Lens is expanding Google’s utility into the physical world

There were other examples too: Google’s focus with VR is building a cross-device platform that delivers an immersive experience at multiple price points, as opposed to Facebook’s integrated high-end approach that makes zero sense for a social network. And, just as Apple invests in chips to make its consumer products better, Google is investing in chips to make its machine learning better.

The Beauty of Boring

This is the culmination of a shift that happened two years ago, at the 2015 Google I/O. As I noted at the time,2 the event was two keynotes in one.

[The first hour was] a veritable smorgasbord of features and programs that [lacked a] unifying vision, just a sense that Google should do them. An operating system for the home? Sure! An Internet of Things language? Bring it on! Android Wear? We have apps! Android Pay? Obviously! A vision for Android? Not necessary!

None of these had a unifying vision, just a sense that Google ought to do them because they’re a big company that ought to do big things.

What was so surprising, though, was that the second hour of that keynote was completely different. Pichai gave a lengthy, detailed presentation about machine learning and neural nets, and tied it to Google’s mission, much like he did in yesterday’s introduction. After quoting Pichai’s monologue I wrote:

Note the specificity — it may seem too much for a keynote, but it is absolutely not BS. And no surprise: everything Pichai is talking about is exactly what Google was created to do…The next 30 minutes were awesome: Google Now, particularly Now on Tap, was exceptionally impressive, and Google Photos looks amazing. And, I might add, it has a killer tagline: Gmail for Photos. It’s so easy to be clear when you’re doing exactly what you were meant to do, and what you are the best in the world at.

This is why I think that Pichai’s “boring” opening was a great thing. No, there wasn’t the belligerence of early Google I/Os, insisting that Android could take on the iPhone. And no, there wasn’t the grand vision of Nadella last week, or the excitement of an Apple product unveiling. What there was, was a sense of certainty and almost comfort: Google is about organizing the world’s information, and given that Pichai believes the future is about artificial intelligence, specifically the machine learning variant that runs on data, that means that Google will succeed in this new world simply by being itself.

That is the best place to be, for a person and for a company.

  1. Google Photos was default for the Pixel, and for more and more Android phones that have come out in the past few months []
  2. This is a Daily Update but I have made it publicly available []

WannaCry About Business Models

For the users and administrators of the estimated 200,000 computers affected by “WannaCry” — a number that is expected to rise as new variants come online (the original was killed serendipitously by a security researcher) — the answer to the question implied in the name is “Yes.”

[Screenshot: the WannaCry ransom demand]

WannaCry is a type of malware called “ransomware”: it encrypts a computer’s files and demands payment to decrypt them. Ransomware is not new; what made WannaCry so destructive was that it was built on top of a computer worm — a type of malware that replicates itself onto other computers on the same network (and said network, of course, can include the Internet).
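Why a worm is so much more destructive than ordinary malware can be seen with a toy simulation (a plain breadth-first traversal over a hypothetical network; nothing here resembles actual exploit code): one infected machine transitively reaches every unpatched machine it can route to.

```python
from collections import deque

def worm_reach(network, patched, start):
    """Toy simulation of worm-style spread: from one infected host, reach
    every unpatched machine that is transitively connected to it."""
    infected = {start}
    frontier = deque([start])
    while frontier:
        host = frontier.popleft()
        for neighbor in network.get(host, []):
            if neighbor not in infected and neighbor not in patched:
                infected.add(neighbor)
                frontier.append(neighbor)
    return infected

# A hypothetical five-machine office network (adjacency lists).
network = {
    "pc1": ["pc2", "pc3"],
    "pc2": ["pc1", "pc4"],
    "pc3": ["pc1", "server"],
    "pc4": ["pc2"],
    "server": ["pc3"],
}

# Unpatched, a single infection reaches everything...
assert worm_reach(network, patched=set(), start="pc1") == {"pc1", "pc2", "pc3", "pc4", "server"}
# ...while patching one well-placed machine walls off everything behind it.
assert worm_reach(network, patched={"pc3"}, start="pc1") == {"pc1", "pc2", "pc4"}
```

Scale the toy network up to the Internet and the destructiveness of worms — and the leverage of patching — follows directly.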

Worms have always been the most destructive type of malware — and the most famous: even non-technical readers may recognize names like Conficker (an estimated $9 billion of damage in 2008), ILOVEYOU (an estimated $15 billion of damage in 2000), or MyDoom (an estimated $38 billion of damage in 2004). There have been many more, but not so many in the last few years: the 2000s were the sweet spot, when hundreds of millions of computers were online with an operating system — Windows XP — that was horrifically insecure, operated by users given to clicking and paying for scams to make the scary popups go away.

Over the ensuing years Microsoft has, from Windows XP Service Pack 2 on, gotten a lot more serious about security, network administrators have gotten a lot smarter about locking down their networks, and users have at least improved at not clicking on things they shouldn’t. Still, as this last weekend shows, worms remain a threat, and as usual, everyone is looking for someone to blame. This time, though, there is a juicy new target: the U.S. government.

The WannaCry Timeline

Microsoft President and Chief Legal Officer Brad Smith didn’t mince any words on The Microsoft Blog (“WannaCrypt” is an alternative name for WannaCry):

Starting first in the United Kingdom and Spain, the malicious “WannaCrypt” software quickly spread globally, blocking customers from their data unless they paid a ransom using Bitcoin. The WannaCrypt exploits used in the attack were drawn from the exploits stolen from the National Security Agency, or NSA, in the United States. That theft was publicly reported earlier this year. A month prior, on March 14, Microsoft had released a security update to patch this vulnerability and protect our customers. While this protected newer Windows systems and computers that had enabled Windows Update to apply this latest update, many computers remained unpatched globally. As a result, hospitals, businesses, governments, and computers at homes were affected.

Smith mentions a number of key dates, but it’s important to get the timeline right, so let me summarize it as best as I understand it:1

  • 2001: The bug in question was first introduced in Windows XP and has hung around in every version of Windows since then
  • 2001–2015: At some point the NSA (likely the Equation Group, allegedly a part of the NSA) discovered the bug and built an exploit named EternalBlue, and may or may not have used it
  • 2012–2015: An NSA contractor allegedly stole more than 75% of the NSA’s library of hacking tools
  • August, 2016: A group called “ShadowBrokers” published hacking tools they claimed were from the NSA; the tools appeared to come from the Equation Group
  • October, 2016: The aforementioned NSA contractor was charged with stealing NSA data
  • January, 2017: ShadowBrokers put a number of Windows exploits up for sale, including an SMB zero-day exploit — likely the “EternalBlue” exploit used in WannaCry — for 250 BTC (around $225,000 at that time)
  • March, 2017: Microsoft, without fanfare, patched a number of bugs without giving credit to whoever discovered them; among them was the bug underlying the EternalBlue exploit, and it seems very possible the NSA warned them
  • April, 2017: ShadowBrokers released a new batch of exploits, including EternalBlue, perhaps because Microsoft had already patched them (which dramatically reduces the value of zero-day exploits in particular)
  • May, 2017: WannaCry, based on the EternalBlue exploit, was released and spread to around 200,000 computers before its kill switch was inadvertently triggered; new versions have already begun to spread
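The “kill switch” in that last item worked in a surprisingly mundane way: WannaCry tried to resolve a hard-coded, unregistered domain and exited if the lookup succeeded, so the researcher’s act of registering the domain halted new infections. A sketch of that check (the domain below is a placeholder, not the real one, and the resolver is injectable purely to make the sketch testable):

```python
import socket

# Placeholder domain: the actual hard-coded WannaCry domain is elided here.
KILL_DOMAIN = "unregistered-killswitch-domain.example"

def should_halt(resolve=socket.gethostbyname, domain=KILL_DOMAIN):
    """Return True if the kill-switch domain resolves. WannaCry performed
    this check on startup and exited when the lookup succeeded."""
    try:
        resolve(domain)
        return True   # lookup succeeded: the worm stands down
    except OSError:   # NXDOMAIN, no network, etc. (gaierror is an OSError)
        return False  # lookup failed: the worm would proceed
```

Once the real domain was registered, every new infection’s lookup succeeded and the worm halted itself — which is why the original strain stopped spreading, and why new variants without the check are the ongoing concern.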

It is axiomatic to note that the malware authors bear ultimate responsibility for WannaCry; hopefully they will be caught and prosecuted to the full extent of the law.

After that, though, it gets a bit murky.

Spreading Blame

The first thing to observe from this timeline is that, as with all Windows exploits, the initial blame lies with Microsoft. It is Microsoft that developed Windows without a strong security model for networking in particular, and while the company has done a lot of work to fix that, many fundamental flaws still remain.

Not all of those flaws are Microsoft’s fault: the default assumption for personal computers has always been to give applications mostly unfettered access to the entire computer, and all attempts to limit that have been met with howls of protest. iOS created a new model, in which applications were put in a sandbox and limited to carefully defined hooks and extensions into the operating system; that model, though, was only possible because iOS was new. Windows, in contrast, derived all of its market power from the established base of applications already in the market, which meant overly broad permissions couldn’t be removed retroactively without ruining Microsoft’s business model.

Moreover, the reality is that software is hard: bugs are inevitable, particularly in something as complex as an operating system. That is why Microsoft, Apple, and basically any conscientious software developer regularly issues updates and bug fixes; that products can be fixed after the fact is inextricably linked to why they need to be fixed in the first place!

To that end, though, it’s important to note that Microsoft did fix the bug two months ago: any computer that applied the March patch — which, by default, is installed automatically — is protected from WannaCry; Windows XP is an exception, but Microsoft stopped selling that operating system in 20082 and stopped supporting it in 2014 (despite that fact, Microsoft did release a Windows XP patch to fix the bug on Friday night). In other words, end users and the IT organizations that manage their computers bear responsibility as well: simply staying up-to-date on critical security patches would have kept them safe.

Still, staying up-to-date is expensive, particularly in large organizations, because updates break stuff. That “stuff” might be critical line-of-business software, which may be from 3rd-party vendors, external contractors, or written in-house: the fact that said software is so dependent on one particular version of an OS is itself a problem, so you can blame those developers too. The same goes for hardware and its associated drivers: there are stories from the UK’s National Health Service of MRI and X-ray machines that only run on Windows XP, which is critical negligence by the manufacturers of those machines.

In short, there is plenty of blame to go around; how much, though, should go into the middle part of that timeline — the government part?

Blame the Government

Smith writes in that blog post:

This attack provides yet another example of why the stockpiling of vulnerabilities by governments is such a problem. This is an emerging pattern in 2017. We have seen vulnerabilities stored by the CIA show up on WikiLeaks, and now this vulnerability stolen from the NSA has affected customers around the world. Repeatedly, exploits in the hands of governments have leaked into the public domain and caused widespread damage. An equivalent scenario with conventional weapons would be the U.S. military having some of its Tomahawk missiles stolen.

This comparison, frankly, is ridiculous, even if you want to stretch and say that the impact of WannaCry on places like hospitals may actually result in physical harm (albeit much less than a weapon of war!).

First, the U.S. government creates Tomahawk missiles, but it is Microsoft that created the bug (even if inadvertently). What the NSA did was discover the bug (and subsequently exploit it), and that difference is critical. Finding bugs is hard work, requiring a lot of money and effort. It’s worth considering why, then, the NSA was willing to do just that, and the answer is right there in the name: national security. And, as we’ve seen through examples like Stuxnet, these exploits can be a powerful weapon.

Here is the fundamental problem: insisting that the NSA hand over exploits immediately is to effectively demand that the NSA not find the bug in the first place. After all, a patched (and thus effectively published) bug isn’t worth nearly as much, either monetarily, as ShadowBrokers found out, or militarily, which means the NSA would have no reason to invest the money and effort to find them. To put it another way, the alternative is not that the NSA would have told Microsoft about EternalBlue years ago, but that the underlying bug would have remained un-patched for even longer than it was (perhaps to be discovered by other entities like China or Russia; the NSA is not the only organization searching for bugs).

In fact, the real lesson to be learned with regard to the government is not that the NSA should be Microsoft’s QA team, but rather that leaks happen: that is why, as I argued last year in the context of Apple and the FBI, government efforts to weaken security by fiat or the insertion of golden keys (as opposed to discovering pre-existing exploits) are wrong. Such an approach is much more in line with Smith’s Tomahawk missile argument, and given the indiscriminate and immediate way in which attacks can spread, the country that would lose the most from such an approach would be the one that has the most to lose (i.e. the United States).

Blame the Business Model

Still, even if the U.S. government is less to blame than Smith insists, nearly two decades of dealing with these security disasters suggests there is a systematic failure happening, and I think it comes back to business models. The fatal flaw of software, beyond the various technical and strategic considerations I outlined above, is that for the first several decades of the industry software was sold for an up-front price, whether that be for a package or a license.

This resulted in problematic incentives and poor decision-making by all sides:

  • Microsoft is forced to support multiple distinct code bases, which is expensive and difficult and not tied to any monetary incentives (thus, for example, the end of support for Windows XP).
  • 3rd-party vendors are inclined to view a particular version of an operating system as a fixed object: after all, Windows 7 is distinct from Windows XP, which means it is possible to specify that only XP is supported. This is compounded by the fact that 3rd-party vendors have no ongoing monetary incentive to update their software; after all, they have already been paid.
  • The most problematic impact is on buyers: computers and their associated software are viewed as capital costs, which are paid for once and then depreciated over time as the value of the purchase is realized. In this view ongoing support and security are an additional cost divorced from ongoing value; the only reason to pay is to avoid a future attack, which is impossible to predict both in terms of timing and potential economic harm.

The truth is that software — and thus security — is never finished; it makes no sense, then, that payment is a one-time event.
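The incentive mismatch can be sketched with a toy model (every number here is a hypothetical assumption, purely for illustration): under a one-time license, revenue per customer is fixed the moment the sale closes, so every subsequent patch is pure cost; under a subscription, revenue continues only as long as the customer is kept happy and secure.

```python
# Toy model of vendor revenue per customer (all numbers hypothetical).
# One-time license: revenue arrives up front; later patches are pure cost.
# Subscription: revenue continues only while the customer is retained.

def license_revenue(years: int, price: float = 200.0) -> float:
    """Up-front license: revenue is the same regardless of ongoing maintenance."""
    return price  # `years` is irrelevant: that is precisely the problem

def subscription_revenue(years: int, monthly: float = 8.0,
                         churn_if_maintained: float = 0.1,
                         churn_if_unmaintained: float = 0.5) -> dict:
    """Cumulative revenue with and without ongoing maintenance (assumed churn rates)."""
    results = {}
    for label, churn in [("maintained", churn_if_maintained),
                         ("unmaintained", churn_if_unmaintained)]:
        retained, total = 1.0, 0.0
        for _ in range(years):
            total += retained * monthly * 12
            retained *= (1 - churn)
        results[label] = round(total, 2)
    return results

print(license_revenue(5))        # 200.0 whether or not the vendor ever patches
print(subscription_revenue(5))   # maintained ≈ 393, unmaintained ≈ 186
```

Under these (made-up) assumptions the subscription vendor roughly doubles its five-year revenue by maintaining the software; the license vendor earns exactly the same either way, which is the mismatch the surrounding argument describes.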

SaaS to the Rescue

Four years ago I wrote about why subscriptions are better for both developers and end users in the context of Adobe’s move away from packaged software:

[Chart: under subscription pricing, revenue is better matched over time to the consumer surplus users receive than under a one-time up-front price]

That article was about the benefit of better matching Adobe’s revenue with the value gained by its users: the price of entry is lower while the revenue Adobe extracts over time is more commensurate with the value it delivers. And, as I noted, “Adobe is well-incentivised to maintain the app to reduce churn, and users always have the most recent version.”

This is exactly what is necessary for good security: vendors need to keep their applications (or in the case of Microsoft, operating systems) updated, and end users need to always be using the latest version. Moreover, pricing software as a service means it is no longer a capital cost with all of the one-time payment assumptions that go with it: rather, it is an ongoing expense that implicitly includes maintenance, whether that be by the vendor or the end user (or, likely, a combination of the two).

I am, of course, describing Software-as-a-service, and that category’s emergence, along with cloud computing generally (both easier to secure and with massive incentives to be secure), is the single biggest reason to be optimistic that WannaCry is the dying gasp of a bad business model (although it will take a very long time to get out of all the sunk costs and assumptions that fully-depreciated assets are “free”).3 In the long run, there is little reason for the typical enterprise or government to run any software locally, or store any files on individual devices. Everything should be located in a cloud, both files and apps, accessed through a browser that is continually updated, and paid for with a subscription. This puts the incentives in all the right places: users are paying for security and utility simultaneously, and vendors are motivated to earn it.

To Microsoft’s credit the company has been moving in this direction for a long time: not only is the company focused on Azure and Office 365 for growth, but even its traditional software has long been monetized through subscription-like offerings. Still, implicit in this cloud-centric model is a lot less lock-in and a lot more flexibility in terms of both devices and services: the reality is that for as much of a headache as Windows security has been for Microsoft, those headaches are inextricably tied up with the reasons that Microsoft has been one of the most profitable companies of all time.

The big remaining challenge will be hardware: the business model for software-enabled devices will likely continue to be upfront payment, which means no incentives for security; the costs are externalities to be borne by the targets of botnets like Mirai. Expect little progress and lots of blame, the hallmark of the sort of systematic breakdown that results from a mismatched business model.

  1. I will update this section as necessary and note said updates in this footnote []
  2. Windows XP was still available for Netbooks until 2010 []
  3. The answer for individual security is encryption []

The Local News Business Model

It’s hardly controversial to note that the traditional business model for most publishers, particularly newspapers, is obsolete. Absent the geographic monopolies formerly imposed by owning distribution, newspapers have nothing to offer advertisers: the sort of advertising that was formerly done in newspapers, both classified and display, is better done online. And, contra this rather fanciful suggestion by New York Times media columnist Jim Rutenberg that advertisers prop up newspapers for the good of democracy, nothing is going to change that.

I already explained the problems with Rutenberg’s idea in yesterday’s Daily Update: advertisers are (rightly) motivated by what is best for their business, plus there is a collective action problem. I added, though, mostly in passing, that the future of “local news” would almost certainly be subscription, not advertising-based.

I think it’s worth expounding on that point. What most, including Rutenberg, fail to understand about newspapers is that it is not simply the business model that is obsolete: rather, everything is obsolete. Most local newspapers are simply not worth saving, not because local news isn’t valuable, but rather because everything else in your typical local newspaper is worthless (from a business perspective). That is why I was careful in my wording: subscriptions will not save newspapers, but they just might save local news, and the sooner that distinction is made the better.

The Unnecessary Newspaper

To be clear, I agree with Rutenberg when he states that “A vibrant free press…keeps government honest and voters informed.” Local government needs oversight, which is another way of saying local news is necessary for a well-functioning democracy. The problem is that assuming oversight must be provided by a newspaper is akin to suggesting that a tank be used to kill a fly: sure, it may get the job done, but there is a lot of equipment, ordnance, and personnel that is really not necessary when a flyswatter would not only be sufficient, but actually more effective.

For newspapers, the analogies to equipment, ordnance, and personnel are physical infrastructure, business operations, and editorial staff; just about none of them (yes, including most of the editorial staff) are actually necessary for covering local news.

Infrastructure

Printing presses are obviously obsolete: while some newspapers have finally closed them down, others hold on because there is still a modicum of print advertising to be earned. It’s the most prominent example of how newspapers are fundamentally incapable of evolving. Naturally, this extends to distribution centers, delivery trucks, newsstands, and all of the administrative infrastructure that goes into moving around pieces of paper that have zero connection to the actual distribution of local news.

The infrastructure overhead, though, does not stop there: without a print edition there is no need for layout, for high-end photography, or for a centralized office space to assemble everything on deadline. There is also a drastically reduced need for editors: when text was printed, copy was permanent, raising the cost of a mistake high enough to justify editing workforces nearly as numerous as journalistic ones. Digital stories, though, can be updated after-the-fact. Moreover, digital stories are interactive: readers can submit feedback instantly, and as I noted while writing about Wikitribune, the collective knowledge of readers will always be greater than the most seasoned set of editors.

Moreover, given that local news requires little more than text and images and perhaps some video, there is no need for expensive digital infrastructure either; a basic WordPress site is more than sufficient. In short, the entire infrastructure category, which makes up probably 60%~70% of a newspaper’s cost structure (possibly more if you include the editors), has nothing to do with sustainable local news.

Business Operations

Monetizing via print advertisements requires a lot of staff: salespeople to sell the ads, graphic artists to lay them out, account managers to collect the money, plus all the management required to make it work. For large national newspapers like The New York Times, this may all still be necessary, thanks to the ability to sell premium advertising online. However, all of this can be eliminated for most digital-only operations: simply use an ad network. Of course, those come with their own problems: ad networks make web pages suck, and just as importantly, most consumption is shifting to mobile, where ad network monetization is particularly ineffective; to the extent advertising is part of the business model, relying on Facebook is (still) probably the best option. Or better yet, don’t have any ads at all.

A purely subscription-based business model not only drastically cuts costs, it also makes for a better user experience, a particularly attractive point given that users are the paying customers. Even better, thanks to services like Stripe, digital subscriptions not only cost far less to administer than traditional newspaper subscriptions, but are far more user-friendly as well.

The reality is that for local news this entire category probably needs only one person to handle customer service for self-service subscriptions and do the books, and that’s about it. The 15~20% of revenue newspapers are paying for business operations has nothing to do with local news.
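Back-of-the-envelope unit economics illustrate why such a lean operation can work; every figure below is a hypothetical assumption for illustration, not reported data:

```python
# Hypothetical unit economics for a subscription-only local news site.
# All figures are illustrative assumptions, not data from any real publication.

def annual_surplus(subscribers: int,
                   monthly_price: float = 8.0,
                   payment_fee_rate: float = 0.03,      # a Stripe-like processor's cut
                   reporters: int = 4,
                   cost_per_reporter: float = 80_000.0,
                   ops_staff_cost: float = 60_000.0,    # the one business-ops person
                   hosting: float = 2_000.0) -> float:
    """Annual revenue net of payment fees, minus the full (minimal) cost base."""
    revenue = subscribers * monthly_price * 12 * (1 - payment_fee_rate)
    costs = reporters * cost_per_reporter + ops_staff_cost + hosting
    return revenue - costs

# Roughly how many subscribers does this sketch need to break even?
for subs in (2_000, 4_000, 6_000):
    print(subs, round(annual_surplus(subs)))
```

Under these assumptions break-even arrives at roughly 4,100 subscribers; in a metro area of several hundred thousand people, that is around 1% penetration, which is the sense in which a stripped-down subscription operation faces a much lower bar than a full newspaper.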

Editorial

This is the biggest blind spot for those lamenting the travails of local newspapers: it may be obvious that printing presses don’t make much sense with the Internet, and most websites have moved to ad networks for the obvious reasons; the deeper problem, though, is that nearly all of the content in most newspapers is not just unnecessary but actively harmful to building a sustainable future for local news.

Start with the front page (of a physical newspaper, natch): most newspapers have given up on having international, national, or even regional reporters, instead relying on wire services. Even that, though, is a waste: those wire services have their own websites, and international publications are only a click away. Maintaining the veneer of comprehensive coverage is simply clutter, and a cost to boot.

The same thing applies to the opinion section: any column or editorial that is concerned with non-local affairs is competing with the entire Internet (including social media). It’s the same thing with non-local business coverage. Moreover, the cost is more than clutter and dollars: almost by definition the content is inferior to what is available elsewhere, which reduces the willingness to pay.

It’s the same story in what were traditionally the most valuable parts of newspapers:1 sports and the (variously named) lifestyle sections. There are multiple national entities dedicated to covering sports all the way down to the university level, augmented by a still-thriving sports blogosphere. Granted, there may still be a market for local sports coverage, but that is a different market than local news: there is no reason it has to be bundled together.

As for the lifestyle section, it is everywhere. BuzzFeed has set its sights on cooking, crafts, and the horoscope;2 there are all kinds of sites covering gossip and advice; meanwhile, not only are there web comics, but social media provides far more humor than the funny pages ever did. What’s left, bridge? Why not simply play online?

A lot of this content has long since been standardized across newspapers, but the broader point remains the same: absolutely none of it has anything to do with local news, and it should not exist in the local news publication of the future.

Bundles and Business Models

What is critical to understand is that everything in the preceding section is interconnected: by owning printing presses and delivery trucks (and thanks to the low marginal cost of printing extra pages), newspapers were the primary outlet for advertising that didn’t work (or couldn’t afford) TV or radio — and there was a lot of it. Maximizing advertising, though, meant maximizing the potential audience, which meant offering all kinds of different types of content in volume: thus the mashup of wildly disparate content listed above, all focused on quantity over quality. And then, having achieved the most readership and the ability to expand to fit it all, the biggest newspaper could squeeze out its competitors.

In short, the business model drove the content, just as it drove every other piece of the business. It follows, though, that if the content bundle no longer makes sense — which it doesn’t in the slightest — that the business model probably doesn’t make sense either. This is the problem with newspapers: every aspect of their operations, from costs to content, is optimized for a business model that is obsolete. To put it another way, an obsolete business model means an obsolete business. There is nothing to be saved.

The Subscription Business Model

I’ve already hinted at the general outline of a sustainable local news publication, but the critical point is the one I just made: everything must start with the business model, of which there is only one choice — subscriptions.

It is very important to clearly define what a subscription means. First, it’s not a donation: it is asking a customer to pay money for a product. What, then, is the product? It is not, in fact, any one article (a point that is missed by the misguided focus on micro-transactions). Rather, a subscriber is paying for the regular delivery of well-defined value.

Each of those words is meaningful:

  • Paying: A subscription is an ongoing commitment to the production of content, not a one-off payment for one piece of content that catches the eye.
  • Regular Delivery: A subscriber does not need to depend on the random discovery of content; said content can be delivered to the subscriber directly, whether that be via email, a bookmark, or an app.
  • Well-defined Value: A subscriber needs to know what they are paying for, and it needs to be worth it.

This last point is at the crux of why many ad-based newspapers will find it all but impossible to switch to a real subscription business model. When asking people to pay, quality matters far more than quantity, and the ratio matters: a publication with 1 valuable article a day about a well-defined topic will more easily earn subscriptions than one with 3 valuable articles and 20 worthless ones covering a variety of subjects. Yet all too many local newspapers, built for an ad-based business model that calls for daily content to wrap around ads, spend their limited resources churning out daily filler even though those ads no longer exist.

A sustainable local news publication will be fundamentally different: a minimal rundown of the news of the day, plus a small number of articles a week featuring real in-depth reporting, with the occasional feature or investigative report. After all, it’s not like it is hard to find content to read on the Internet: what people will pay for is quality content about things they care about (and the fact that people care about their cities will be these publications’ greatest advantage).

It’s also worth noting what a subscription business model does not — must not — include:

  • Content that is widely available elsewhere. That means no national or international news (except what has a local impact, and even that is questionable), no non-local business content, no lifestyle section.
  • Non-journalistic cost centers. As I noted above, a publication might need one business operations person, and maybe a copy editor; they can probably be the same person. Nearly everything else, including subscription management, hosting, payments, etc. can leverage widely available online services (and you can include social networks: treating all content the same hurts big media companies, but it’s a big opportunity for small ones).
  • Any sort of wall between business and editorial. This is perhaps the easiest change to make, and the hardest for newspaper advocates to accept. A subscription business is just that: a business that must, through its content, earn ongoing revenue from customers. That means understanding what those customers want, and what they don’t. It means focusing on the user experience, and the content mix. And it means selling by every member of the organization.

Notice how different this looks from a newspaper, as it must. After all, the business model is different.


I strongly believe the market for this sort of publication is there. My hometown of Madison, WI has around 250,000 people (500,000 in Dane County), primarily served by The Wisconsin State Journal. To the paper’s credit the website is almost all local news; unfortunately, most of it is uninteresting filler. Worse, producing this filler takes a staff of 52 people, of which only 10 by my count are local reporters (supported by at least 8 editors).

Were a new publication to come along, offering a five minute summary of Madison’s local news of the day, plus an actually relevant story or two a week with the occasional feature or investigative report,3 I’d gladly pay, and I don’t even live there anymore. What I won’t do, though, is bother visiting the Wisconsin State Journal because there simply is too much dreck to wade through, created at ridiculous cost in service of an obsolete business model.4

Indeed, the real problem with local newspapers is more obvious than folks like Rutenberg wish to admit: no one — neither advertisers nor subscribers — wants to pay for them because they’re not worth paying for. If newspapers were actually holding local government accountable I don’t think they would have any problem earning money; that they aren’t is a function of wasting time and money on the past instead of the future.

  1. Other than the classifieds, that is []
  2. “Choose these foods and we will tell you your ideal mate!” []
  3. With typos []
  4. This is where news foundations and benefactors can actually make a difference: stop supporting local newspapers and instead fund new startups until they build a critical mass of subscribers []

Apple’s China Problem

Did you hear about the new Microsoft Surface Laptop? The usual suspects are claiming it’s a MacBook competitor, which is true insomuch as it is a laptop. In truth, though, the Surface Laptop isn’t a MacBook competitor at all for the rather obvious reason that it runs Windows, while the MacBook runs MacOS. This has always been the foundation of Apple’s business model: hardware differentiated by software such that said hardware can be sold at a margin much greater than that of nominal competitors running a commodity operating system.

Moreover, the advantages go beyond margins: the best way to understand both Apple’s profits and many of its choices is to understand that the company has a monopoly on not just MacOS but even more importantly iOS. That means Apple can not only capture consumer surplus on hardware, but developer surplus when it comes to app sales; that some apps are not made is deadweight loss that Apple has chosen to bear to ensure total control.

And yet, as far as regulators are concerned (and rightly so), the iPhone is simply another smartphone, and the MacBook really is competing with the Surface Laptop. The functionality is mostly the same, and if users value a sustainable advantage in the user experience, Apple deserves the profits — and power — that follow.

Apple’s Earnings

Apple announced its second quarter earnings yesterday; from Bloomberg:

Apple Inc. reported falling iPhone sales, highlighting the need to deliver blockbuster new features in the next edition of the flagship device if the company is to fend off rivals like Samsung Electronics Co. Investor confidence has been mounting ahead of a major iPhone revamp due later this year. Yet competitors released new high-end smartphones recently, putting pressure on Apple to deliver a device that’s advanced enough to entice existing users to upgrade and lure new customers.

Oh look! It’s an example of what I was just complaining about: yes, Samsung makes smartphones, and yes, they have high-end features. But — and this is the point that was forgotten the last time Samsung was held up as an iPhone threat — a Samsung smartphone does not run iOS. That has always been Apple’s trump card, and it was once again this past quarter. On the earnings call CFO Luca Maestri stated in his prepared remarks:

Revenue for the March quarter was $52.9 billion, and we achieved double-digit growth in the U.S., Canada, Australia, Germany, the Netherlands, Turkey, Russia and Mexico. Our growth rates were even higher, over 20% in many other markets, including Brazil, Scandinavia, the Middle East, Central and Eastern Europe, India, Korea and Thailand.

The numbers back that up:

Apple Revenue excluding Greater China (in billions)

Region                 Q2 2017        Q2 2016        Q2 2015
Americas               $21.2 (+11%)   $19.1 (-10%)   $21.3 (+19%)
Europe                 $12.7 (+10%)   $11.5 (-5%)    $12.2 (+12%)
Japan                  $4.5 (+5%)     $4.3 (+24%)    $3.5 (-15%)
Rest of Asia Pacific   $3.8 (+20%)    $3.2 (-25%)    $4.2 (+48%)
Total                  $42.1 (+11%)   $38.1 (-8%)    $41.2 (+15%)

To be clear, these numbers reflect more than just the iPhone; in the most recent quarter Apple’s most important product contributed 63% of revenue, which means some of the change in overall revenue was due to Mac (mostly thanks to increased ASP), Services, and Other Products (primarily Apple Watch and AirPods) growth, counterbalanced by the iPad’s continued slippage. Still, the iPhone is the most important factor, and while it seems quite clear that the iPhone 6 did pull forward upgrades from the iPhone 6S, the iPhone 7 is growing quite nicely.

Moreover, this sort of growth is exactly what you would expect given Apple’s iOS monopoly: iPhone users very rarely switch to Android, while a fair number of Android users switch to iPhone, which means that even in a saturated market Apple’s share should grow over time. Plus, that share increase will result in not only increased iPhones sales but ever-growing Services revenue, and, in the long run, increased sales of other Apple products as well.

This picture, though, is incomplete: it doesn’t include China.

Apple’s China Problem

Here are those revenue numbers once again, this time with the Greater China region:

Apple Revenue (in billions)

Region                 Q2 2017        Q2 2016        Q2 2015
Americas               $21.2 (+11%)   $19.1 (-10%)   $21.3 (+19%)
Europe                 $12.7 (+10%)   $11.5 (-5%)    $12.2 (+12%)
Greater China          $10.7 (-14%)   $12.5 (-26%)   $16.8 (+71%)
Japan                  $4.5 (+5%)     $4.3 (+24%)    $3.5 (-15%)
Rest of Asia Pacific   $3.8 (+20%)    $3.2 (-25%)    $4.2 (+48%)
Total                  $52.9 (+5%)    $50.6 (-13%)   $58.0 (+27%)

I was from the very beginning of Stratechery both a big advocate of large-screen iPhones and a big bull on Apple’s prospects in China. I wrote after the launch of the iPhone 5C:

Is [the iPhone] out-of-reach for the vast majority of consumers? Yep. But it will be aspirational, something you put on the table to show others you can afford it. And, to be clear, there are a lot of people that can afford it. Saying stupid things like “the iPhone 5C is equivalent to the average monthly salary in China” belies a fundamental misunderstanding of China, its inequality, and its sheer size specifically, and all of Asia broadly. Moreover, when you consider a Mercedes is tens of thousands of dollars more than a Toyota (and on down the line in luxury goods, for whom Asia generally and China specifically is the largest market by far), $300 more isn’t that much.

Moreover, in China it’s Apple’s brand that is, by far, the biggest allure of the iPhone. Apps are free (piracy is mainstream), larger screens are preferred, and specs and customization move the needle with the mainstream far more than they do in the US. But no one else is Apple.

Of course those large screens did eventually arrive in the same timeframe that Apple launched on China Mobile: that is why those Q2 2015 numbers are so eye-popping (71% growth!). And you could certainly argue last year that, much like the rest of the world, Apple had pulled forward a huge number of would-be buyers (China was down more than the rest of the world, but about the same as the rest of Asia).1 However, that does not explain the weak results this year: every region in the world — especially the rest of Asia — is up, except for China, which is down 14%. Apple has a China problem.

iOS Versus WeChat

In rather stark contrast to just a couple of years ago, when, in the midst of the iPhone 6 boom, Tim Cook was eager to sell the story of how many iPhone customers had not yet upgraded, this quarter the Apple CEO preferred to move the goalposts, telling analysts to wait for the next iPhone:

We’re seeing what we believe to be a pause in purchases on iPhone, which we believe are due to the earlier and much more frequent reports about future iPhones. And so that part is clearly going on, and it could be what’s behind the data.

But that is not what is going on in most of the world: plenty of folks — more than last year — are happy to buy the iPhone 7, even though it doesn’t look much different than the iPhone 6. After all, if you need a new phone, and you want iOS, you don’t have much choice! Except, again, for China: that is the country where the appearance of the iPhone matters most; Apple’s problem, though, is that in China that is the only thing that matters at all.

The fundamental issue is this: unlike the rest of the world, in China the most important layer of the smartphone stack is not the phone’s operating system. Rather, it is WeChat.2 Connie Chan of Andreessen Horowitz tried to explain in 2015 just how integrated WeChat is into the daily lives of nearly 900 million Chinese, and that integration has only grown since then: every aspect of a typical Chinese person’s life, not just online but also off is conducted through a single app (and, to the extent other apps are used, they are often games promoted through WeChat).

There is nothing in any other country that is comparable: not LINE, not WhatsApp, not Facebook. All of those are about communication or wasting time: WeChat is that, but it is also for reading news, for hailing taxis, for paying for lunch (try and pay with cash for lunch, and you’ll look like a luddite), for accessing government resources, for business. For all intents and purposes WeChat is your phone, and to a far greater extent in China than anywhere else, your phone is everything.

Naturally, WeChat works the same on iOS as it does on Android.3 That, by extension, means that for the day-to-day lives of Chinese users there is no penalty to switching away from an iPhone. Unsurprisingly, in stark contrast to the rest of the world, according to a report earlier this year only 50% of iPhone users who bought another phone in 2016 stayed with Apple:

[Chart: 2016 smartphone brand retention rates in China, showing iPhone retention of roughly 50%]

This is still better than the competition, but compared to the 80%+ retention rate Apple enjoys in the rest of the world,4 it is shockingly low, and the result is that the iPhone has slid down China’s sales rankings: iPhone sales were only 9.6% of the market last year, behind local Chinese brands like Oppo, Huawei and Vivo. All of those companies sold high-end phones of their own; the issue isn’t that Apple was too expensive, it’s that the iPhone 6S and 7 were simply too boring.
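The retention gap compounds over time. Treating brand choice as a simple two-state Markov process — where the inflow rate of switchers to iPhone is an illustrative assumption, loosely inspired by the figures above — the long-run share is driven almost entirely by retention:

```python
# Two-state Markov sketch of long-run smartphone share (illustrative only).
# r = probability an iPhone owner buys another iPhone (retention)
# a = probability a non-iPhone owner switches to iPhone (assumed 10% everywhere)

def steady_state_share(r: float, a: float = 0.10) -> float:
    """Fixed point of the update p' = p*r + (1-p)*a."""
    return a / (1 - r + a)

print(round(steady_state_share(0.80), 3))  # rest-of-world retention: ~0.333
print(round(steady_state_share(0.50), 3))  # China-level retention: ~0.167
```

With the same inflow of switchers, dropping retention from 80% to 50% cuts the steady-state share in half in this sketch, which is directionally consistent with the iPhone holding a far smaller share in China than its brand strength would otherwise suggest.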


Perhaps the most surprising takeaway from this analysis is that Cook is right: there is reason to be optimistic about the iPhone 8. Rumors are that there will be an all-new edge-to-edge design that will stand out in the hand, or on the coffee shop table. And, of course, like any modern smartphone, it will run WeChat. And to be sure, an iPhone is still status-conferring: Apple is by no means doomed, and it’s possible those China numbers will turn positive this fall.

That, though, is a long-term problem for Apple: what makes the iPhone franchise so valuable — and, I’d add, the fundamental factor that was missed by so many for so long — is that monopoly on iOS. For most of the world it is unimaginable for an iPhone user to upgrade to anything but another iPhone: there is too much of the user experience, too many of the apps, and, in some countries like the U.S., too many contacts on iMessage to even countenance another phone.

None of that lock-in exists in China: Apple may be a de facto monopolist for most of the world, but in China the company is simply another smartphone vendor, and being simply another smartphone vendor is a hazardous place to be. To be clear, it’s not all bad: in China Apple still trades on status and luxury; unlike the rest of the world, though, the company has to earn it with every release, and that’s a bar both difficult to clear in the abstract and, given the last two iPhones, difficult to clear in reality.

  1. As an aside, Apple continues to blame Hong Kong for weak China numbers, but as I have explained in the Daily Update, I don’t buy this excuse: I suspect a huge number of Hong Kong sales were for China; once distribution in China increased those sales went down. Plus, Hong Kong’s dollar is pegged to the U.S. dollar, so currency isn’t an excuse either []
  2. Or, to put it another way, the operating system of China is WeChat, not iOS/Android []
  3. Well, at least until earlier this month when Apple, accustomed to being able to dictate terms to app developers, banned WeChat tipping []
  4. This originally said 90%, but retention numbers have slipped globally []