Amazon’s New Customer

Back in 2006, when the iPhone was a mere rumor, Palm CEO Ed Colligan was asked if he was worried:

“We’ve learned and struggled for a few years here figuring out how to make a decent phone,” he said. “PC guys are not going to just figure this out. They’re not going to just walk in.” What if Steve Jobs’ company did bring an iPod phone to market? Well, it would probably use WiFi technology and could be distributed through the Apple stores and not the carriers like Verizon or Cingular, Colligan theorized.

I was reminded of this quote after Amazon announced an agreement to buy Whole Foods for $13.7 billion; after all, it was only two years ago that Whole Foods founder and CEO John Mackey predicted that groceries would be Amazon’s Waterloo. And while Colligan’s prediction was far worse — Apple simply left Palm in the dust, unable to compete — it is Mackey who has to call Amazon founder and CEO Jeff Bezos, the Napoleon of this little morality play, boss.

The similarities go deeper, though: both Colligan and Mackey made the same analytical mistake of misunderstanding their opponents’ goals, strategies, and tactics. This is particularly easy to grok in the case of Colligan and the iPhone: Apple’s goal was not to build a phone but to build an even more personal computer; their strategy was not to add functionality to a phone but to reduce the phone to an app; and their tactics were not to duplicate the carriers but to leverage their connection with customers to gain concessions from them.

Mackey’s misunderstanding was more subtle, and more profound: while the iPhone may be the most successful product of all time, Amazon and Jeff Bezos have their sights on being the most dominant company of all time. Start there, and this purchase makes all kinds of sense.

Amazon’s Goal

If you don’t understand a company’s goals, how can you know what the strategies and tactics will be? Unfortunately, many companies, particularly the most ambitious, aren’t as explicit as you might like. In the case of Amazon, the company stated in its 1997 S-1:

Amazon.com’s objective is to be the leading online retailer of information-based products and services, with an initial focus on books.

Even if you picked up on the fact that books were only step one (which most people at the time did not), it was hard to imagine just how all-encompassing Amazon.com would soon become; within a few years Amazon’s updated mission statement reflected the reality of the company’s e-commerce ambitions:

Our vision is to be earth’s most customer centric company; to build a place where people can come to find and discover anything they might want to buy online.

“Anything they might want to buy online” was pretty broad; the advent of Amazon Web Services a few years later showed it wasn’t broad enough, and a few years ago Amazon reduced its stated goal to just that first clause: We seek to be Earth’s most customer-centric company. There are no more bounds, and I don’t think that is an accident. As I put it on a podcast a few months ago, Amazon’s goal is to take a cut of all economic activity.

This, then, is the mistake Mackey made: while he rightly understood that Amazon was going to do everything possible to win in groceries — the category accounts for about 20% of consumer spending — he presumed that the effort would be limited to e-commerce. E-commerce, though, is a tactic; indeed, when it comes to Amazon’s current approach, it doesn’t even rise to strategy.

Amazon’s Strategy

As you might expect, given a goal as audacious as “taking a cut of all economic activity”, Amazon has several different strategies. The key to the enterprise is AWS: if it is better to build an Internet-enabled business on the public cloud, and if all businesses will soon be Internet-enabled businesses, it follows that AWS is well-placed to take a cut of all business activity.

On the consumer side the key is Prime. While Amazon has long pursued a dominant strategy in retail — superior cost and superior selection — it is difficult to build sustainable differentiation on these factors alone. After all, another retailer is only a click away.

This, though, is the brilliance of Prime: thanks to its reliability and convenience (two-day shipping, sometimes faster!), plus human fallibility when it comes to considering sunk costs (you’ve already paid $99!), why even bother looking anywhere else? With Prime Amazon has created a powerful moat around consumer goods that does not depend on simply having the lowest price, because Prime customers don’t even bother to check.

This, though, is why groceries are a strategic hole: not only are they the largest retail category, they are the most persistent opportunity for other retailers to gain access to Prime members and remind them there are alternatives. That is why Amazon has been so determined in the space: AmazonFresh launched a decade ago and, unlike other Amazon experiments, has continued to receive funding along with other rumored initiatives like convenience stores and grocery pick-up. Amazon simply hasn’t been able to figure out the right tactics.

Amazon’s Tactics

To understand why groceries are such a challenge, look at how they differ from books, Amazon’s first product:

  • There are far more books than can ever fit in a physical store, which means an e-commerce site can win on selection; in comparison, there simply aren’t that many grocery items (a typical grocery store will have between 30,000 and 50,000 SKUs)
  • When you order a book, you know exactly what you are getting: a book from Amazon is the same as a book from a local bookstore; groceries, on the other hand, can vary in quality not just store-to-store but, particularly in the case of perishable goods, item-to-item
  • Books can be stored in a centralized warehouse indefinitely; perishable groceries can only be stored for a limited amount of time and degrade in quality during transit

As Mackey surely understood, this meant that AmazonFresh was at a cost disadvantage to physical grocers as well: in order to be competitive AmazonFresh needed to stock a lot of perishable items; however, as long as AmazonFresh was not operating at meaningful scale a huge number of those perishable items would spoil. And, given the inherent local nature of groceries, scale needed to be achieved not on a national basis but a city one.
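To see why sub-scale perishables are so punishing, here is a hedged back-of-the-envelope sketch in Python; every number in it (assortment size, stocking levels, shelf life, demand) is a made-up assumption for illustration, not an Amazon or Whole Foods figure.

    # Illustrative only: hypothetical numbers showing how spoilage scales with demand.
    PERISHABLE_SKUS = 5_000     # assumed assortment needed to be competitive
    UNITS_PER_SKU = 10          # assumed minimum units stocked per SKU
    SHELF_LIFE_DAYS = 4         # assumed average shelf life before spoilage

    def spoilage_fraction(units_sold_per_day: int) -> float:
        """Fraction of stocked units that expire before anyone buys them."""
        stocked = PERISHABLE_SKUS * UNITS_PER_SKU
        sellable_before_expiry = min(stocked, units_sold_per_day * SHELF_LIFE_DAYS)
        return 1 - sellable_before_expiry / stocked

    # A sub-scale delivery service in one city wastes most of what it stocks...
    print(f"Sub-scale (2,000 units/day): {spoilage_fraction(2_000):.0%} spoiled")
    # ...while an operation moving grocery-store volumes wastes very little.
    print(f"At scale (12,000 units/day): {spoilage_fraction(12_000):.0%} spoiled")

The exact numbers are meaningless; the structural point is that the same assortment requirement produces wildly different waste depending on volume, which is why scale had to be achieved city by city.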

Groceries are a fundamentally different problem that need a fundamentally different solution; what is so brilliant about this deal, though, is that it solves the problem in a fundamentally Amazonian way.

The First-And-Best Customer

Last year in The Amazon Tax I explained how the different parts of the company — like AWS and Prime — were on a conceptual level more similar than you might think, and that said concepts were rooted in the very structure of Amazon itself. The best example is AWS, which offered server functionality as “primitives”, giving maximum flexibility for developers to build on top of:1

The “primitives” model modularized Amazon’s infrastructure, effectively transforming raw data center components into storage, computing, databases, etc. which could be used on an ad-hoc basis not only by Amazon’s internal teams but also outside developers:

[Diagram: Amazon’s infrastructure primitives, used by both internal teams and outside developers]

This AWS layer in the middle has several key characteristics:

  • AWS has massive fixed costs but benefits tremendously from economies of scale
  • The cost to build AWS was justified because the first and best customer is Amazon’s e-commerce business
  • AWS’s focus on “primitives” meant it could be sold as-is to developers beyond Amazon, increasing the returns to scale and, by extension, deepening AWS’ moat

This last point was a win-win: developers would have access to enterprise-level computing resources with zero up-front investment; Amazon, meanwhile, would get that much more scale for a set of products for which they would be the first and best customer.
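To make “zero up-front investment” concrete, here is a minimal sketch of consuming those primitives with boto3, AWS’s Python SDK; the bucket name and region are hypothetical placeholders.

    # Minimal sketch of using AWS primitives ad hoc: storage rented by the request.
    # Requires `pip install boto3` and configured AWS credentials; the bucket name
    # and region are hypothetical placeholders.
    import boto3

    s3 = boto3.client("s3", region_name="us-west-2")

    # Storage primitive: create a bucket and store an object, paying only for usage.
    s3.create_bucket(
        Bucket="example-hypothetical-bucket",
        CreateBucketConfiguration={"LocationConstraint": "us-west-2"},
    )
    s3.put_object(
        Bucket="example-hypothetical-bucket",
        Key="hello.txt",
        Body=b"enterprise-grade storage with no up-front investment",
    )

    # Compute, databases, queues, and so on are reached the same way: each
    # primitive is its own client, available on demand.
    ec2 = boto3.client("ec2", region_name="us-west-2")
    print(ec2.describe_instances()["Reservations"])

A developer with a credit card gets the same building blocks Amazon’s retail business runs on, which is precisely what deepens the moat described above.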

As I detailed in that article, this exact same framework applies to Amazon.com:

Prime is a super experience with superior prices and superior selection, and it too feeds into a scale play. The result is a business that looks like this:

[Diagram: the same layered structure applied to Amazon’s e-commerce distribution]

That is, of course, the same structure as AWS — and it shares similar characteristics:

  • E-commerce distribution has massive fixed costs but benefits tremendously from economies of scale
  • The cost to build-out Amazon’s fulfillment centers was justified because the first and best customer is Amazon’s e-commerce business
  • That last bullet point may seem odd, but in fact 40% of Amazon’s sales (on a unit basis) are sold by 3rd-party merchants; most of these merchants leverage Fulfilled-by-Amazon, which means their goods are stored in Amazon’s fulfillment centers and covered by Prime. This increases the return to scale for Amazon’s fulfillment centers, increases the value of Prime, and deepens Amazon’s moat

As I noted in that piece, you can see the outline of similar efforts in logistics: Amazon is building out a delivery network with itself as the first-and-best customer; in the long run it seems obvious said logistics services will be exposed as a platform.

This, though, is what was missing from Amazon’s grocery efforts: there was no first-and-best customer. Absent that, and given all the limitations of groceries, AmazonFresh was doomed to be eternally sub-scale.

Whole Foods: Customer, not Retailer

This is the key to understanding the purchase of Whole Foods: to the outside it may seem that Amazon is buying a retailer. The truth, though, is that Amazon is buying a customer — the first-and-best customer that will instantly bring its grocery efforts to scale.

Today, all of the logistics that go into a Whole Foods store are for the purpose of stocking physical shelves: the entire operation is integrated. What I expect Amazon to do over the next few years is transform the Whole Foods supply chain into a service architecture based on primitives: meat, fruit, vegetables, baked goods, non-perishables (Whole Foods’ outsized reliance on store brands is something that I’m sure was very attractive to Amazon). What will make this massive investment worth it, though, is that there will be a guaranteed customer: Whole Foods Markets.

[Diagram: Amazon’s grocery services platform with Whole Foods as the first-and-best customer]

In the long run, physical grocery stores will be only one of Amazon Grocery Services’ customers: obviously a home delivery service will be another, and it will be far more efficient than a company like Instacart trying to layer on top of Whole Foods’ current integrated model.

I suspect Amazon’s ambitions stretch further, though: Amazon Grocery Services will be well-placed to start supplying restaurants too, giving Amazon access to another big cut of economic activity. It is the AWS model, which is to say it is the Amazon model, but like AWS, the key to profitability is having a first-and-best customer able to utilize the massive investment necessary to build the service out in the first place.


I said at the beginning that Mackey misunderstood Amazon’s goals, strategies, and tactics, and while that is true, the bigger error was in misunderstanding Amazon itself: unlike Whole Foods, Amazon has no desire to be a grocer, and contrary to conventional wisdom, the company is not even a retailer. At its core Amazon is a services provider enabled — and protected — by scale.

Indeed, to the extent Waterloo is a valid analogy, Amazon is much more akin to the British Empire, and there is now one less obstacle to sitting astride all aspects of the economy.

  1. To be clear, AWS was not about selling extra capacity; it was a new capability, and Amazon itself has slowly transitioned over time (as I understand it Amazon.com is still a hybrid)

Podcasts, Analytics, and Centralization

Tucked into the last day of WWDC was a session on podcasting, and it contained some big news for the burgeoning industry. Before getting into the specific announcements, though, the session itself is worth a bit of analysis, particularly the opening from Apple Podcasts Business Manager James Boggs:

First we want to talk for a moment about how we think about modern podcasts. Long-form and audio. We get excited about episodic content that entertains, informs, and inspires. We get excited and many of our users have gotten excited too.

I went on to transcribe the next 500 or so words of Boggs’s presentation, which included various statistics on downloads, catalog size, and reach, as well as a listing of Apple “partners” organized by media and broadcast organizations, public media, and independents; I had even started in on Boggs’s review/promotion of individual podcasts like “Up and Vanished” and “Masters of Scale” before I realized Boggs was never going to actually say “how [Apple] think[s] about modern podcasts.” I won’t make you read the transcript — take my word when I say that there was nothing there.

Still, that itself was telling; Boggs’s presentation perfectly reflects the state of podcasting today: Apple is an essential piece, even as they really don’t have anything to do with what is going on (but naturally, are happy to take credit).

A Brief History of Podcasts

Probably the first modern podcast was created by Dave Winer in 2003, although it wasn’t called a “podcast”: that term was coined by Ben Hammersley in 2004, and the inspiration was Apple’s iPod. Still, while the medium had a name, the “industry”, such as it was, was very much the Wild West: a scattering of podcast creators, podcatchers (software for downloading the podcasts), and podcast listeners, finding each other by word-of-mouth.

[Diagram: the scattered, decentralized podcast landscape before iTunes]

A year later Apple made the move that cemented their current position as the accidental gorilla of the industry: iTunes 4.9 included support for podcasts and, crucially, the iTunes Music Store created a directory (Apple did not — and still does not — host the podcast files themselves). The landscape of podcasting was completely transformed:

[Diagram: the podcast landscape centralized around iTunes]

Centralization occurs in industry after industry for a reason: everyone benefits, at least in the short term. Start with the users: before iTunes 4.9, subscribing and listening to a podcast was a multi-step process, and most of those steps were so obscure as to be effective barriers for all but the most committed of listeners (a rough sketch of what a podcatcher actually automated follows the two lists below):

  • Find a podcast
  • Get a podcatcher
  • Copy the URL of the podcast feed into the podcatcher
  • Copy over the audio file from the podcatcher into iTunes
  • Sync the audio file to an iPod
  • Listen to the podcast
  • Delete the podcast from the iPod the next time you synced

iTunes 4.9 made this far simpler:

  • Find a podcast in the iTunes Store and click ‘Subscribe’
  • Sync your iPod
  • Listen
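For the curious, here is roughly what a podcatcher was automating behind steps two through four of the first list: fetch a podcast’s RSS feed, find the newest episode’s audio enclosure, and download it. It is a hedged sketch using only the Python standard library, and the feed URL is a placeholder.

    # A minimal podcatcher: fetch the RSS feed, locate the newest episode's audio
    # enclosure, and download it. The feed URL is a hypothetical placeholder.
    import urllib.request
    import xml.etree.ElementTree as ET

    FEED_URL = "https://example.com/podcast/feed.xml"

    with urllib.request.urlopen(FEED_URL) as response:
        feed = ET.parse(response)

    # In RSS 2.0 each episode is an <item>; the audio file lives in its <enclosure> tag.
    latest_item = feed.find("./channel/item")
    audio_url = latest_item.find("enclosure").get("url")
    title = latest_item.findtext("title")

    print(f"Downloading '{title}' from {audio_url}")
    urllib.request.urlretrieve(audio_url, "episode.mp3")

Subscribing meant running something like this on a schedule and then shuttling the files to an iPod by hand; iTunes 4.9 collapsed all of it into a single Subscribe button.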

Recounting this simplification may seem pedantic, but there is a point: this was the most important improvement for podcast creators as well. Yes, the iTunes Music Store offered an important new discovery mechanism, but it was the dramatic improvement to the user experience that, for the vast majority of would-be listeners, made podcasts even worth discovering in the first place. Centralized platforms win because they make things easier for the user; producers willingly follow.

Interestingly, though, beyond that initial release, which was clearly geared towards selling more iPods, Apple largely left the market alone, with one important exception: in 2012 the company released a standalone Podcasts app for iOS in the App Store, and in 2014 the app was built into iOS 8. At that point the power of defaults did its job: according to the IAB Podcast Ad Metrics Guidelines released last fall, the Apple Podcast App accounts for around 50% of all podcast players across all operating systems (iTunes is a further ~10%).1

The Business of Podcasting

It’s not clear when the first podcast advertisement was recorded; a decent guess is Episode 67 of This Week in Tech, recorded on September 3, 2006 (Topic: “Does the Google CEO’s place on Apple’s board presage a Sun merger?”). The sponsor was surprisingly familiar — Visa (“Safer, better money. Life takes Visa.”), and Dell joined a week later.

Over the ensuing years, though, the typical podcast sponsor was a bit less of a name brand — unless, of course, you were a regular podcast listener, in which case you quickly knew the brands by heart: Squarespace, Audible, Casper Mattress, Blue Apron, and recent favorite MeUndies (because who doesn’t want to hear a host-read endorsement for underwear!). Companies like Visa or Dell were few and far between: a study by FiveThirtyEight suggested brand advertisers were less than five percent of ad reads.

The reason is quite straightforward: for podcasts there is neither data nor scale. The data part is obvious: while podcasters can (self-)report download numbers, no one knows whether or not a podcast is played, or if the ads are skipped. The scale bit is more subtle: podcasts are both too small and too big. They are too small in that it is difficult to buy ads at scale (and there is virtually no quality control, even with centralized ad sellers like Midroll); they are too large in that the audience, which may be located anywhere in the world listening at any time, is impossible to survey in order to measure ad effectiveness.

That is why the vast majority of podcast advertisers are actually quite similar: nearly all are transaction-initiated subscription-based services. The “transaction-initiated” bit means that there is a discrete point at which the customer can indicate where they heard about the product, usually through a special URL, while the “subscription-based” part means these products are evaluating their marketing spend relative to expected lifetime value. In other words, the only products that find podcast advertising worthwhile are those that expect to convert a listener in a measurable way and make a significant amount of money off of them, justifying the hassle.2
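That lifetime-value arithmetic is simple enough to sketch; all of the numbers below are invented assumptions, chosen only to show the shape of the calculation a subscription-based advertiser runs.

    # Illustrative podcast-ad math for a "transaction-initiated, subscription-based"
    # advertiser. Every number here is a hypothetical assumption.
    ad_read_cost = 5_000        # cost of one host-read ad (assumed)
    downloads = 100_000         # reported downloads for the episode (assumed)
    conversion_rate = 0.001     # listeners who sign up via the special URL (assumed)
    monthly_revenue = 12.0      # revenue per subscriber per month (assumed)
    monthly_churn = 0.06        # fraction of subscribers who cancel each month (assumed)

    # Expected lifetime value of one subscriber: monthly revenue divided by churn.
    lifetime_value = monthly_revenue / monthly_churn       # $200
    new_subscribers = downloads * conversion_rate          # 100
    expected_return = new_subscribers * lifetime_value     # $20,000

    print(f"LTV per subscriber: ${lifetime_value:,.0f}")
    print(f"Expected return on a ${ad_read_cost:,} ad read: ${expected_return:,.0f}")
    print(f"Worth the hassle? {expected_return > ad_read_cost}")

A brand advertiser has no special URL to count and no subscription stream to discount, which is exactly why, absent better data, they have mostly stayed away.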

The result is an industry that, from a monetization perspective, looks a lot like podcasting before iTunes 4.9; there are small businesses to be built, but the industry as a whole is stunted.

Apple Podcast Analytics

This is the context for what Apple actually announced. Jason Snell had a good summary at Six Colors:

New extensions to Apple’s podcast feed specification will allow podcasts to define individual seasons and explain whether an episode is a teaser, a full episode, or bonus content. These extensions will be read by the Podcast app and used to present a podcast in a richer way than the current, more linear, approach…

The other big news out of today’s session is for podcasters (and presumably for podcast advertisers): Apple is opening up in-episode analytics of podcasts. For the most part, podcasters only really know when an episode’s MP3 file is downloaded. Beyond that, we can’t really tell if anyone listens to an episode, or how long they listen—only the apps know for sure. Apple said today that it will be using (anonymized) data from the app to show podcasters how many people are listening and where in the app people are stopping or skipping. This has the potential to dramatically change our perception of how many people really listen to a show, and how many people skip ads, as well as how long a podcast can run before people just give up.
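To make the feed-extension half of that concrete, here is a sketch of reading the new episode-level tags with the Python standard library; the itunes:season and itunes:episodeType names reflect my reading of the updated feed specification, so treat the exact tags as an assumption.

    # Sketch: read season numbers and episode types from a podcast RSS feed.
    # The itunes:season / itunes:episodeType tag names are assumptions based on
    # the feed-specification extensions described above.
    import xml.etree.ElementTree as ET

    ITUNES_NS = "http://www.itunes.com/dtds/podcast-1.0.dtd"  # Apple's podcast namespace

    feed = ET.parse("feed.xml")  # a locally saved copy of a podcast feed
    for item in feed.findall("./channel/item"):
        title = item.findtext("title")
        season = item.findtext(f"{{{ITUNES_NS}}}season", default="?")
        episode_type = item.findtext(f"{{{ITUNES_NS}}}episodeType", default="full")
        print(f"Season {season} [{episode_type}] {title}")

A player that understands these tags can group a show by season and set trailers and bonus episodes apart from full episodes, which is the richer presentation Snell describes.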

The new extensions are a nice addition, and a way in which Apple can enhance the user experience to the benefit of everyone. As you might expect, though, I’m particularly interested in the news about analytics. Problem solved, right? Or is it problem caused? What happens when advertisers realize that everyone is skipping their ads?

Advertisers: Not Idiots

In fact, I expect these analytics to have minimal impact, at least in the short run. For one, every indication is that analytics will only be available to the podcast publishers, although certainly advertisers will push to have them shared.3 More pertinently, though, all of the current podcast advertisers know exactly what they are getting: X amount of podcast ads results in Y number of conversions that result in Z amount of lifetime value.

Indeed, contrary to what many folks seem to believe, advertisers, whether they leverage podcasts, Facebook, Google, or old school formats like radio or TV, are not idiots blindly throwing money over a wall in the vague hopes that it will drive revenue, ever susceptible to being shocked, shocked! that their ads are being ignored. Particularly in the case of digital formats advertisers are quite sophisticated, basing advertising decisions off of well-known ROI calculations. That is certainly the case with podcasts: knowing to a higher degree of precision how many ads are skipped doesn’t change the calculation for the current crop of podcast advertisers in the slightest.

What more data does do is open the door to more varied types of advertisers beyond the subscription services that dominate the space. Brand advertisers, in particular, are more worried about reaching a guaranteed number of potential customers than they are tracking directly to conversion, and Apple’s analytics will help podcasters tell a more convincing story in that regard.

In truth, though, Apple’s proposed analytics aren’t nearly enough: advertisers still won’t know who they are reaching or where they are located, and while brand advertisers may not have the expectation of tracking-to-purchase no one wants to throw money to the wind either. The problem of surveying effectively to measure things like brand lift is as acute as ever, and it simply isn’t worth the trouble to do a bunch of relatively small media buys with zero quality control.

Apple’s Opportunity

This, though, is why Apple’s centralized role is so intriguing. Remember, the web was thought to be a wasteland for advertising until Google provided a centralized point that aggregated users and could be sold to advertisers. Similarly, mobile was thought to monetize even worse than the (desktop) web until Facebook provided a centralized point that aggregated users and could be sold to advertisers. I expect a similar dynamic in podcasts: the industry will remain the province of web hosting and underwear absent centralization and aggregation, and the only entity that can accomplish that is Apple.

One can envision the broad outlines of what the business for a centralized aggregator for podcasts might look like:

  • The centralized aggregator would likely offer hosting to podcast creators, not only to secure the user experience and get better analytics (including on downloads through other apps) but also to dynamically insert advertisements. Those advertisements would also be available to smaller podcasts that are currently not worth the effort to advertisers.
  • Advertisers would get their own dashboard for those analytics and, more importantly, the opportunity to buy ads at far greater scale across a large enough audience to make it worth their while. Ideally, at least from their perspective, they would actually be able to target their advertising buys as well.
  • Users would, at least in theory, benefit from a far broader array of content made possible by the growth in revenue for the industry broadly.

There are already companies trying to do just this: I wrote about E.W. Scripps’ Midroll and their acquisition of podcast player Stitcher last year. The problem is that Stitcher only has around 5% of listeners, and it is the ownership of users/listeners, not producers/podcasts, from which true market power derives. Apple has that ownership, and thus that power; the question is: will they use it?

Surely the safe bet is “no”. iAd, Apple’s previous effort at building an advertising business, failed spectacularly, and Apple’s anti-advertising rhetoric has only deepened since then. That’s a problem not only in terms of image but culture: Apple seems highly unlikely to be willing to put in the effort necessary to build a real advertising business, and given how small such a business might be even in the best-case scenario relative to the rest of the company, that’s understandable.4

To be sure, should Apple decline to seize this opportunity it will be celebrated by many, particularly those doing well in the current ecosystem. Podcasting is definitely more open than not, with no real gatekeepers in terms of either distribution or monetization. That, though, is why the money is so small: gatekeepers are moneymakers, and while podcasts may continue to grow, it is by no means inevitable that, absent a more active Apple, the money will follow.

Disclosure: Exponent, the podcast I host with James Allworth, does have a (single) sponsor; the revenue from this sponsorship makes up a very small percentage of Stratechery’s overall revenue and does not impact the views in this article

  1. For what it’s worth, Exponent has a much different profile: Apple Podcasts has about 13% share, while Overcast leads the way with 26% share, followed by (surprisingly!) Mobile Safari with 23%
  2. This shows why Casper mattresses are the exception that proves the rule: mattresses are not a subscription service, but they are much more expensive than most products bought online, which achieves the same effect as far as lifetime value is concerned
  3. I’m less worried about the fact other podcast players may not offer similar analytics: the Apple Podcast app will be used as a proxy, although this may hurt podcasts that have a smaller share of downloads via the Apple Podcast app (as total listeners may be undercounted absent similar analytics from other apps)
  4. It’s Google’s challenge in building a real hardware business in reverse

Apple’s Strengths and Weaknesses

The San Jose location of WWDC, Apple’s annual developer conference, felt a bit odd, but Apple sought to strike a familiar tone: the artwork on and around the San Jose McEnery Convention Center featured a top-down view of humans, and a familiar message:

[Photo: the WWDC banner artwork]

The idea of Apple existing at the intersection of technology and liberal arts was central to the late Steve Jobs’ conception of Apple and, without question, a critical factor when it came to Apple’s success: at a time when technology was becoming accessible to consumers and their daily lives, Apple created products — one product, really, the iPhone — that appealed to consumers not only because of what it did but how it did it.

That said, it was telling that this artwork and the sentiment it signified were not referenced in the keynote itself; after a humorous skit about a world without apps, Tim Cook delivered platitudes about how Apple and its developers were on a “collective mission to change the world”, and immediately launched into what he said were six important announcements. It was not dissimilar to Sundar Pichai’s opening at Google I/O: when the announcements that matter are grounded on the realities of a company’s core competencies and position in the market, vision can feel extraneous.

Apple’s Announcements

Cook’s first four announcements spoke to those core capabilities and the position they afford Apple (or don’t, as the case may be) in the markets in which it competes:

tvOS: It was generous of Cook to give tvOS top billing: the only announcement of note was the upcoming availability of Amazon Prime Video on Apple TV. That itself is a reminder of Apple’s diminished position in the space: winning in TV is not about hardware or software, much less the integration of the two, but rather content. The brevity of this announcement — there wasn’t even the traditional executive hand-off — spoke to Apple’s status as an also-ran.

watchOS: This garnered more time, and the headline feature was the Siri watch face. The watch face, which implied a broadening of Siri’s brand from voice to context-based general assistant, seeks to anticipate and deliver the information you need when you need it. The model is Google Now; the difference is that Siri is now housed in an attractive and increasingly popular watch that works natively with an iPhone, while the equivalent Google service requires not simply a different watch but a different phone entirely. It is a testament to Apple’s biggest advantage: thanks to the iPhone the company already owns the “best” customers,1 frequently rendering moot Google’s superiority in managing information.

macOS: This actually encompassed two of Cook’s six promised announcements;2 the separation of macOS and Mac computers was, I suspect, born of Apple’s desire to convince developers and other pro users that the company was not abandoning their favorite platform. Moreover, the addition of hardware announcements, after several years in which WWDC was software only, resulted in a very different feel to this keynote: after all, hardware is exciting, even if, in the long run, it is software that actually matters. That feeling, though, goes to the very core of what Apple sells: superior hardware differentiated — and thus sold at a handsome margin — by exclusive software.

As is always the case with the modern incarnation of Apple, though, the announcements that truly mattered centered around iOS.

iOS 11

The iOS-related announcements, despite being only one of Cook’s “Big Six”, could have been their own keynote; given the importance of mobile generally and iOS specifically that would have been more than justified. Taken as a whole the iOS segment in particular highlighted what Apple does best, what it struggles with, and what reasons there are to be both optimistic and pessimistic about the company’s fortunes in the long run.

Strength: Defaults

Controlling one of the two dominant mobile operating systems grants Apple the power of defaults. That means iMessage is both an iPhone lock-in and a channel to introduce new services like person-to-person Apple Pay. Siri can be accessed both via voice and the home button, and, just as with the watchOS update, is increasingly integrated throughout the operating system. Photos and Maps are used by the majority of iPhone customers, even if alternatives offer superior functionality.

Weakness: Limited Reach

At the same time, iMessage will never reach the dominance of a service like WeChat because it is limited to Apple’s own platforms — as it should be! iMessage is the canonical example of how strengths and weaknesses are two sides of the same coin: it is iMessage’s exclusivity that allows it to be a lock-in, and it is that same exclusivity that limits the standalone value.

Strength: Hardware Integration

Peppered throughout Apple’s presentation were seemingly small features like new compression algorithms that depend on Apple controlling everything from Messages to the camera to the processor that makes it all work. The most impressive example was ARKit: in one fell swoop Apple leaped ahead of the rest of the industry in the race to realize the promise of augmented reality. The contrast to Facebook was striking: while the social network is seeking to leverage its control of content distribution to lure developers to build on Facebook’s “camera”, Apple is not only offering the same opportunity (the results of which can, of course, be shared on Facebook or Instagram), but also delivering a superior set of APIs that, by virtue of being part of that vertical stack, are both more powerful and accessible than anything a 3rd-party application can deliver.

Weakness: Services

While Apple bragged about Siri’s natural language capabilities and alluded to a limited number of new “intents” that can be leveraged by apps, it is not an accident that there were no slides about accuracy, speed, or developer support: Siri is well behind the competition in all three. More fundamentally, all of Apple’s services are intrinsically limited by the fact that they exist to sell Apple hardware: those services, and the teams that work on them, will never be the most important people in the company, and their development will be constrained by the culture of Apple itself.

Strength: Privacy

Apple not only touted its privacy credentials, it also showed off new features to actively limit things like autoplaying videos and advertising networks that follow you across sites. As a user both are very welcome; strategically, both features follow from the fact that Apple makes money on its hardware, while companies like Google, Facebook, and other online businesses rely on advertising and the collection of data.

Weakness: Data

Collecting data is useful for more than advertising, though. Here Google is the obvious counter: certainly the search company wants to better target advertisements, but the benefits gained from data go far beyond overt monetization. It is data that drives Google’s superior machine learning capabilities and the customer-friendly features that follow in apps like Google Photos. Interestingly, Apple made moves in this direction, syncing things like facial recognition data and Messages across devices, favoring convenience over a very slight increase in the risk to privacy. To be clear, the data will still be encrypted, both in transit and at rest, but that is my point: encryption means that Apple cannot leverage the data it will now store to make its services better.

Strength: The App Store

The strategic role of 3rd-party apps has shifted over time: once a differentiator for iOS, apps are now table stakes, as Android has largely reached parity. They are also a big moneymaker: Apple has been pushing the narrative on Wall Street that it is a services company, fueled by the $30 billion the company has collected from app sales and especially in-app purchases in free-to-play games; 30% of that total has come in the last year alone. Make no mistake, this is a compelling narrative: iPhone growth may be slowing in the face of saturation and elongated update cycles, but that only means there is an ever-larger base from which to earn App Store revenue.

Weakness: Developer Economics

The success of free-to-play games and the associated in-app purchases has come at a cost: specifically, management blindness to the fact that the rest of the developer ecosystem isn’t nearly as healthy, and that the App Store is no longer a differentiator from Android. The fundamental problem remains that for productivity apps in particular it is necessary to monetize your best customers over time; Apple has improved the situation, particularly with the addition of subscription pricing and de facto trials (basically, starting a subscription at $0), but hasn’t made any moves to support trials or upgrade pricing for paid apps, despite that being the proven successful model for productivity applications on the Mac. I have long argued that bad developer economics is the fundamental reason that the iPad hasn’t fulfilled its potential; yesterday’s iPad software enhancements were welcome and will help, but I suspect letting developers set their own business models would be even more transformative.

Strength and Weakness: Business Model

This point is part and parcel with all of the above: Apple’s strengths derive from the fact it sells software-differentiated hardware for a significant margin, which allows for exclusive apps and services set as defaults, deep integration from chipset to API, a focus on privacy, and total control of the developer ecosystem. And, on the flipside, Apple only reaches a segment of the market, is less incentivized and capable of delivering superior services, has less data, and can afford to take developers for granted.

HomePod

Apple’s final announcement encapsulated all of these tensions. The long-rumored competitor to Amazon Echo and Google Home was, fascinatingly, framed as anything but. Cook began the unveiling by referencing Apple’s longtime focus on music, and indeed, the first several minutes of the HomePod introduction were entirely about its quality as a speaker. It was, in my estimation, an incredibly smart approach: if you are losing the game, as Siri is to Alexa and Google, best to change the rules, and having heard the HomePod, I can say its sound quality is significantly better than the Amazon Echo’s (and, one can safely assume, Google Home’s). Moreover, the ability to link multiple HomePods together is bad news for Sonos in particular (the HomePod sounded significantly better than the Sonos Play 3 as well).

Of course, superior sound quality is what you would expect from a significantly more expensive speaker: the HomePod costs $350, while the Sonos Play 3 is $300, and the Amazon Echo is $150. From Apple’s perspective, though, a high price is a feature, not a bug: remember, the company has a hardware-based business model, which means there needs to be room for a meaningful margin. The Echo is the opposite: because it is a hardware means to the service ends that is Amazon, it can be priced with much lower margins and, as has already happened, be augmented with even cheaper devices like Echo Dots (or, in the case of the Echo Show, offer more functionality for a price that is still more than $100 cheaper than the HomePod).

The result is a product that, beyond being massively late to market (in part because of iPhone-induced myopia), is inferior to the competition on two of three possible vectors: the HomePod is significantly more expensive than an Echo or Google Home, it has an inferior voice assistant, but it has a better speaker. That is not as bad as it sounds: after all, the iPhone is significantly more expensive than most other smartphones, it has inferior built-in services, but it has a superior user experience otherwise. The difference — and this is why the iPhone is so much more dominant than any other Apple product — is that everyone already needs a phone; the only question is which one. It remains to be seen how many people need a truly impressive speaker.

This, broadly speaking, is the challenge for Apple moving forward: in what other categories does its business model (and everything that is tied up into that, including the company’s product development process, culture, etc.) create an advantage instead of a disadvantage? What existing needs can be met with a superior user experience, or what new needs — like the previously unknown need for wireless headphones that are always charged — can be created? To be clear, the iPhone is and will continue to be a juggernaut for a long time to come; indeed, it is so dominant that Apple could not change the underlying business model and resultant strengths and weaknesses even if they tried.

  1. Speaking strictly of which customers generate the most monetary value either through their purchases or advertising targeting
  2. iPad hardware and optimizations all fell under the iOS umbrella

Faceless Publishers

When I first worked for a (student) newspaper, the job of a publisher seemed odd to me; as far as I and my editorial colleagues were concerned, the publisher was the person the editor-in-chief, who we viewed as the boss, occasionally griped about after a few too many drinks, usually with the assertion that he (in that case) was a bit of a nuisance.

That attitude, of course, was the luxury of print: whatever happened on the other side of the office didn’t have any impact on the (in our eyes) heroic efforts to produce fresh content every day. We were the ones staying in the office until the wee hours of the night, writing, editing, and laying out the newspaper that would magically appear on newsstands the next morning, all while the publisher and his team were at home in bed.

The moral of this story is obvious: the publisher represented the business side of the newspaper, and the effect of the Internet was to make the job and impact of editorial easier and that of a publisher immeasurably harder, in large part because many of a publisher’s jobs became obsolete; it is the editorial side, though, that has paid the price.

The Jobs a Publisher Did

In the days of print, publishers provided multiple interlocking functions that made newspapers into fabulous businesses:

  • Brand: A publisher had a brand, specifically, the name of the publication; this was the primary touchpoint for readers, whether they were interested in national news, local news, sports, or the funny pages.
  • Revenue Generation: Most publishers drove revenue in two ways: some money was made through subscriptions, the selling, administration, and support of which was handled by dedicated staff; most money was made from advertising, which had its own dedicated team.
  • Human Resources: Editorial staff were free to write and complain about their publishers because everything else in their work life was taken care of, from payroll to travel expenses to office supplies.

What tied these functions together was distribution: a publisher owned printing presses and delivery trucks which, combined with their established readership and advertising relationships, gave most newspapers an effective monopoly (or oligopoly) in their geographic area on readers and advertisers and writers:

[Diagram: the print publisher model, with brand, revenue generation, and editorial tied together by owned distribution]

Each of these functions supported the other: the brand drove revenue generation which paid for editorial that delivered on the brand promise, all underpinned by owning distribution.

Publishing’s Downward Spiral

It is hardly new news, particularly on this blog, to note that this model has fallen apart. The most obvious culprit is that on the Internet, distribution, particularly of text and images, is effectively free, which meant that advertisers had new channels: first ad networks that operated at scale across publishers, and increasingly Facebook and Google, who offer the power to reach the individual directly.

[Diagram: the disintegration of the publisher model]

I wrote about this progression in Popping the Publishing Bubble, and the intertwined functionality of publishers explains the downward spiral that followed: with less revenue there was less money for quality journalism (and a greater impetus to chase clicks), which meant a devaluing of the brand, which meant fewer readers, which led to even less money.

What made this downward spiral particularly devastating is that, as demonstrated by the advertising shift, newspapers did not exist in a vacuum. Readers could read any newspaper, or digital-only publisher, or even individual bloggers. And, just as social media made it possible for advertisers to target individuals, it also made everyone a content creator pushing their own media into the same feed as everyone else: the brand didn’t matter at all, only the content, or, in a few exceptional cases, the individual authors, many of whom amassed massive followings of their own; one prominent example is Bill Simmons, the American sportswriter.

Vox Media + The Ringer

I wrote about Simmons two years ago in Grantland and the (Surprising) Future of Publishing, and noted that media entities needed to think about monetization holistically:

Too much of the debate about monetization and the future of publishing in particular has artificially restricted itself to monetizing text. That constraint made sense in a physical world: a business that invested heavily in printing presses and delivery trucks didn’t really have a choice but to stick the product and the business model together, but now that everything — text, video, audio files, you name it — is 1’s and 0’s, what is the point in limiting one’s thinking to a particular configuration of those 1’s and 0’s?

In fact, it’s more than possible that in the long-run the current state of publishing — massive scale driven by advertising on one hand, and one-person shops with low revenue numbers and even lower costs on the other — will end up being an aberration. Focused, quality-obsessed publications will take advantage of bundle economics to collect “stars” and monetize them through some combination of subscriptions (less likely) or alternate media forms. Said media forms, like podcasts, are tough to grow on their own, but again, that is what makes them such a great match for writing, which is perfect for growth but terrible for monetization.

My back-of-the-envelope calculations estimated that Simmons’ Ringer podcast network was likely generating millions of dollars, and in an interview with Recode earlier this year, Simmons confirmed that is the case, claiming that podcast revenue was more than covering the cost of creating not just podcasts but the website that, at least in theory, created podcast listeners.

Still, given Simmons’ ambitions, it would certainly be better were the site more than a cost center, which makes the company’s most recent announcement particularly interesting. From the New York Times:

The Ringer, a sports and culture website created by Bill Simmons, will soon be hosted on Vox Media’s platform but maintain editorial independence under a partnership announced on Tuesday. Mr. Simmons, a former ESPN personality, will keep ownership of The Ringer, but Vox will sell advertising for the site and share in the revenue. The Ringer will leave its current home on Medium, where it has been hosted since it began in June 2016.

Jim Bankoff, Vox’s chief executive, said in a phone interview that the partnership was the first of its type for the company and would allow it to expand its offerings to advertisers. Mr. Simmons said in a statement: “This partnership allows us to remain independent while leveraging two of the things that Vox Media is great at: sales and technology. We want to devote the next couple of years to creating quality content, innovating as much as we can, building our brand and growing The Ringer as a multimedia business.”

Simmons is exactly right about the benefits he gets from the deal: instead of building duplicative technology and ad sales infrastructure, The Ringer can simply use Vox Media’s. This is less important with regards to the technology (Vox’s insistence that Chorus is a meaningful differentiator notwithstanding) but hugely important when it comes to advertising. It’s not simply the expense of building an infrastructure for ad sales; the top line is even more critical: it is all but impossible to compete with Google and Facebook for advertising dollars without massive scale.

Make no mistake, Simmons is the sort of writer that many advertisers would be happy to advertise next to (his podcast has had an impressive slate of brand names, in addition to the usual mainstays like Squarespace and Casper mattresses); the problem is that when it comes to the return-on-investment of buying ads, the “investment” — particularly time — is just as important as the “return”: a brand looking to advertise directly on premium media is far more likely to deal with Vox Media and its huge stable of sites than it is to do a relatively small deal with a site like The Ringer.

Indeed, the bifurcation in the Internet’s impact on editorial and advertising — the former is becoming atomized, the latter consolidated — explains why the implications for Vox Media are, in my estimation, the more important takeaway from this deal.

Vox Media’s Upside

To date Vox Media has been a relatively traditional publisher, albeit one that has executed better than most: the company has built strong brands that attract audiences which can be monetized through advertising, and that revenue, along with venture capital, has been fed into an impressive editorial product that builds up the company’s brands.

The Ringer, though, is not a Vox Media brand: it is Simmons’ brand, a point he emphasized in his statement, and that’s great news for Vox. The problem with editorial is that while the audience scales, production doesn’t: content still has to be created on an ongoing basis, and that means high variable costs.

Infrastructure, though, does scale: Vox Media uses the same underlying technology for all of its sites, which is exactly what you would expect given that software can be replicated endlessly. Crucially, the same principle applies to advertising: one sales team can sell ads across any number of sites, and the more impressions the better. Presuming The Ringer ends up being not an outlier but rather the first of many similar deals,1 Vox Media has far more growth potential than it did when it was focused only on monetizing its owned-and-operated content.

Publishers of the Future

The new model portended by this deal looks something like this:

[Diagram: the faceless publisher model, with atomized content creators in front and shared infrastructure behind]

In this model the most effective and scalable publisher is faceless: atomized content creators, fueled by social media, build their own brands and develop their own audiences; the publisher, meanwhile, builds scale on the backside, across infrastructure, monetization, and even human-resource type functions.2 This last point makes a faceless publisher more than an ad network, and crucially, I suspect the greatest impact will not be (just) about ads.

Earlier this month I wrote about the future of local news, which I argued would entail relatively small subscription-based publications. Said publications would be more viable were there a faceless publisher in place to provide technology, including subscription and customer support capabilities, and all of the other repeatable minutiae that comes with running a business. Publishers still matter, but much of what matters can be scaled and offered as a service without being tied to a brand and a specific set of content.

I suspect this is part of the endgame for publishing on the Internet: free distribution blew up the link between editorial and publishing and drove them in opposite directions — atomization on one side and massively greater scale on the other. And now, that same reality makes possible a new model: a huge number of small publications backed by entities more concerned with building viable businesses than having memorable names.

Disclosure: I have previously spoken at the Vox Media-owned Code Media conference and was previously a guest on The Bill Simmons Podcast; I received no monetary compensation for either appearance

  1. There is already a parallel to The Ringer within Vox Media: the company’s vast network of team-specific sites that sit under the SBNation umbrella
  2. This is where Medium went wrong: the company made motions towards this model — which is why The Ringer is hosted there — but has decided to pursue a Medium subscription model instead

Tulips, Myths, and Cryptocurrencies

Everyone knows about the Tulip Bubble, first documented by Charles Mackay in 1841 in his book Extraordinary Popular Delusions and the Madness of Crowds:

In 1634, the rage among the Dutch to possess [tulips] was so great that the ordinary industry of the country was neglected, and the population, even to its lowest dregs, embarked in the tulip trade. As the mania increased, prices augmented, until, in the year 1635, many persons were known to invest a fortune of 100,000 florins in the purchase of forty roots. It then became necessary to sell them by their weight in perits, a small weight less than a grain. A tulip of the species called Admiral Liefken, weighing 400 perits, was worth 4400 florins; an Admiral Van der Eyck, weighing 446 perits, was worth 1260 florins; a Childer of 106 perits was worth 1615 florins; a Viceroy of 400 perits, 3000 florins, and, most precious of all, a Semper Augustus, weighing 200 perits, was thought to be very cheap at 5500 florins. The latter was much sought after, and even an inferior bulb might command a price of 2000 florins. It is related that, at one time, early in 1636, there were only two roots of this description to be had in all Holland, and those not of the best. One was in the possession of a dealer in Amsterdam, and the other in Harlaem [sic]. So anxious were the speculators to obtain them, that one person offered the fee-simple of twelve acres of building-ground for the Harlaem tulip. That of Amsterdam was bought for 4600 florins, a new carriage, two grey horses, and a complete suit of harness.

Mackay goes on to recount other tall tales; I’m partial to the sailor who thought a Semper Augustus bulb was an onion, and stole it for his breakfast; “Little did he dream that he had been eating a breakfast whose cost might have regaled a whole ship’s crew for a twelvemonth.”

[Jean-Léon Gérôme, The Tulip Folly (Walters Art Museum)]

Anyhow, we all know how it ended:

At first, as in all these gambling mania, confidence was at its height, and everybody gained. The tulip-jobbers speculated in the rise and fall of the tulip stocks, and made large profits by buying when prices fell, and selling out when they rose. Many individuals grew suddenly rich. A golden bait hung temptingly out before the people, and one after the other, they rushed to the tulip-marts, like flies around a honey-pot. Every one imagined that the passion for tulips would last for ever…

At last, however, the more prudent began to see that this folly could not last for ever. Rich people no longer bought the flowers to keep them in their gardens, but to sell them again at cent per cent profit. It was seen that somebody must lose fearfully in the end. As this conviction spread, prices fell, and never rose again. Confidence was destroyed, and a universal panic seized upon the dealers…The cry of distress resounded every where, and each man accused his neighbour. The few who had contrived to enrich themselves hid their wealth from the knowledge of their fellow-citizens, and invested it in the English or other funds. Many who, for a brief season, had emerged from the humbler walks of life, were cast back into their original obscurity. Substantial merchants were reduced almost to beggary, and many a representative of a noble line saw the fortunes of his house ruined beyond redemption.

Thanks to Mackay’s vivid account, tulips are a well-known cautionary tale, applied to asset bubbles of all types; here’s the problem, though: there’s a decent chance Mackay’s account is completely wrong.

The Truth About Tulips

In 2006, UCLA economist Earl Thompson wrote a paper entitled The Tulipmania: Fact or Artifact?1 that includes this chart that looks like Mackay’s bubble:

[Chart from Thompson (2006): a tulip price series that looks like Mackay’s bubble]

However, as Thompson wrote in the paper, “appearances are sometimes quite deceiving.” A much more accurate chart looks like this:

[Chart from Thompson (2006): a more accurate picture of tulip contract prices]

Mackay was right that there were insanely high prices: those prices, though, were for options; if the actual price of tulips were lower on the strike date for the options, then the owner of the option only needed to pay a small percentage of the contract price (ultimately 3.5%). Actual spot prices and futures (which locked in a price), meanwhile, stayed flat.

The broader context comes from this chart:

[Chart from Thompson (2006): the broader context of tulip prices]

As Thompson explains, tulips in fact were becoming more popular, particularly in Germany, and, as the first phase of the Thirty Years’ War wound down, it looked like Germany would be victorious, which would mean a better market for tulips. In early October 1636, though, Germany suffered an unexpected defeat, and the tulip price crashed, not because it was irrationally high, but because of an external shock.

As Thompson recounts, that October crash was in fact a financial disaster for many, including some public officials who had bought tulip futures on a speculative basis; to get themselves out of trouble, said officials retroactively decreed that futures were in fact options. These deliberations were well-publicized throughout the winter of 1636 and early 1637, but not made official until February 24th; the dramatic rise in options, then, is explained as a longshot bet that the conversion would not actually take place: when it did, the price of the options naturally dropped to the spot price.2
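A worked example shows why quoted prices could soar while actual payments stayed modest; the contract and spot prices below are hypothetical, while the 3.5% settlement figure is the one Thompson cites.

    # The difference between honoring a futures contract and settling an option
    # at 3.5% of the contract price. Prices are hypothetical florins.
    contract_price = 5_000   # price agreed during the winter of 1636-37
    spot_price = 300         # actual market price of the bulb at settlement

    # A futures buyer owes the full contract price no matter what the bulb is worth.
    futures_loss = contract_price - spot_price        # 4,700 florins

    # Once contracts were retroactively converted to options, a buyer could simply
    # walk away for a fixed fraction of the contract price instead.
    option_loss = 0.035 * contract_price              # 175 florins

    print(f"Futures buyer is out {futures_loss:,} florins")
    print(f"Option buyer is out {option_loss:,.0f} florins")

With the downside capped at 175 florins, bidding contracts up to absurd levels was a cheap longshot bet that the conversion would not happen; when it did, recorded “prices” collapsed back to the spot price.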

By Thompson’s reckoning, Mackay’s entire account was a myth.

Myths and Humans

Early on in Sapiens: A Brief History of Humankind, Yuval Noah Harari explains the importance of myth:

Once the threshold of 150 individuals is crossed, things can no longer work [on the basis of intimate relations]…How did Homo sapiens manage to cross this critical threshold, eventually founding cities comprising tens of thousands of inhabitants and empires ruling hundreds of millions? The secret was probably the appearance of fiction. Large numbers of strangers can cooperate successfully by believing in common myths.

Any large-scale human cooperation — whether a modern state, a medieval church, an ancient city or an archaic tribe — is rooted in common myths that exist only in people’s collective imagination. Churches are rooted in common religious myths. Two Catholics who have never met can nevertheless go together on crusade or pool funds to build a hospital because they both believe that God was incarnated in human flesh and allowed Himself to be crucified to redeem our sins. States are rooted in common national myths. Two Serbs who have never met might risk their lives to save one another because both believe in the existence of the Serbian nation, the Serbian homeland and the Serbian flag. Judicial systems are rooted in common legal myths. Two lawyers who have never met can nevertheless combine efforts to defend a complete stranger because they both believe in the existence of laws, justice, human rights – and the money paid out in fees. Yet none of these things exists outside the stories that people invent and tell one another. There are no gods in the universe, no nations, no money, no human rights, no laws, and no justice outside the common imagination of human beings.

The implication of Harari’s argument3 is pretty hard to wrap one’s head around.4 Take the term “tulip bubble”: everyone knows it is in reference to a speculative mania that will end in a crash, even those like me — and now you — that have learned about what actually happened in the Netherlands in the winter of 1636. Like I said, it’s a myth — and myths matter.

The Rise in Cryptocurrencies

The reason I mention the tulip bubble at all is probably obvious:

[Chart: total market capitalization of all cryptocurrencies]

This is the total market capitalization of all cryptocurrencies. To date that has mostly meant Bitcoin, but over the last two months Bitcoin’s share of cryptocurrency capitalization has actually plummeted to less than 50%, thanks to the sharp rise of Ethereum and Ripple in particular:

[Chart: Bitcoin’s falling share of total cryptocurrency market capitalization relative to Ethereum and Ripple]

As you might expect, the tulip is having a renaissance, or to be more precise, our shared myth of the tulip bubble. This tweet summed up the skeptics’ sentiment well:

To be perfectly clear, this random twitterer may very well be correct about an impending crash. And, in the grand scheme of things, it is mostly true today that cryptocurrencies don’t have meaningful “industrial [or] consumer use except as a medium of exchange.” What he is most right about, though, is that cryptocurrencies have no intrinsic value.

Compare cryptocurrencies to, say, the U.S. dollar. The U.S. dollar is worth, well, a dollar because…well, because the United States government says it is.5 And because currency traders have established it as such, relative to other currencies. And the worth of those currencies is based on…well, like the dollar, they are based on a mutual agreement of everyone that they are worth whatever they are worth. The dollar is a myth.

Of course this isn’t a new view: there are still those who believe it was a mistake to move the dollar off the gold standard: that was a much more concrete definition. After all, you could always exchange one dollar for a fixed amount of gold, and gold, of course, has intrinsic value because…well, because we humans think it looks pretty, I guess. In fact, it turns out gold — at least the idea that it is intrinsically worth more than any other mineral — is another myth.

I would argue that cryptocurrencies broadly, and Bitcoin especially, are no different. Bitcoin has been around for eight years now; it has captured the imagination, ingenuity, and investment of a massive number of very smart people, and it is increasingly trivial to convert it to the currency of your choice. Can you use Bitcoin to buy something from the shop down the street? Well, no, but you can’t very well use a piece of gold either, and no one argues that the latter isn’t worth whatever price the gold market is willing to bear. Gold can be converted to dollars, which can be converted to goods, and Bitcoin is no different. To put it another way, enough people believe that gold is worth something, and that is enough to make it so; I suspect we are well past that point with Bitcoin.

The Utility of Blockchains

To be fair, there is an argument that gold is valuable because it does have utility beyond ornamentation (I, of course, would argue that that is a perfectly valuable thing in its own right): for example, gold is used in electronics and dentistry. An argument based on utility, though, applies even moreso to cryptocurrencies. I wrote back in 2014:

The defining characteristic of anything digital is its zero marginal cost…Bitcoin and the breakthrough it represents, broadly speaking, changes all that. For the first time something can be both digital and unique, without any real world representation. The particulars of Bitcoin and its hotly-debated value as a currency I think cloud this fact for many observers; the breakthrough I’m talking about in fact has nothing to do with currency, and could in theory be applied to all kinds of objects that can’t be duplicated, from stock certificates to property deeds to wills and more.

One of the big recent risers, Ethereum, is exactly that: Ethereum is based on a blockchain,6 like Bitcoin, which means it has an attached currency (Ether) that incentivizes miners to verify transactions. However, the protocol includes smart contract functionality, which means that two untrusted parties can engage in a contract without a 3rd-party enforcement entity.7
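
To make the smart-contract idea concrete, here is a toy sketch in Python; a real contract would be written in a language like Solidity and enforced by the network’s consensus rather than by a single trusted process, and the parties and amounts below are hypothetical:

    class Escrow:
        """A toy 'contract': deterministic rules both parties can inspect."""

        def __init__(self, buyer, seller, price):
            self.buyer, self.seller, self.price = buyer, seller, price
            self.balance = 0
            self.delivered = False

        def deposit(self, sender, amount):
            # Only the buyer may fund the contract, and only with the agreed price.
            assert sender == self.buyer and amount == self.price
            self.balance += amount

        def confirm_delivery(self, sender):
            # Only the buyer may confirm that the goods arrived.
            assert sender == self.buyer
            self.delivered = True

        def withdraw(self, sender):
            # The seller can collect if, and only if, delivery was confirmed.
            assert sender == self.seller and self.delivered
            payout, self.balance = self.balance, 0
            return payout

    contract = Escrow(buyer="alice", seller="bob", price=10)
    contract.deposit("alice", 10)
    contract.confirm_delivery("alice")
    print(contract.withdraw("bob"))  # 10, released by code rather than a middleman

The point is that neither party has to trust the other, or anyone else, to enforce the rules: the rules are the code.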

One of the biggest applications of this functionality is, unsurprisingly, other cryptocurrencies. The last year in particular has seen an explosion in Initial Coin Offerings (ICOs), usually on Ethereum. In an ICO a new blockchain-based entity is created, with the initial “tokens” — i.e. currency — being sold (for Ether or Bitcoin). These initial offerings are, at least in theory, valuable because the currency will, if the application built on the blockchain is successful, increase in value over time.
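
To sketch the mechanics, here is a hypothetical token sale in Python; the total supply, the share retained by the creators, and the price per token are all invented for illustration:

    # A toy model of an ICO's initial allocation: mint a fixed supply, retain a
    # share for the creators and future development, sell the rest for Ether.
    # All numbers here are hypothetical.

    TOTAL_SUPPLY = 1_000_000
    CREATOR_SHARE = 0.20      # fraction retained by the team (hypothetical)
    PRICE_IN_ETH = 0.001      # sale price per token (hypothetical)

    ledger = {"creators": int(TOTAL_SUPPLY * CREATOR_SHARE)}
    unsold = TOTAL_SUPPLY - ledger["creators"]

    def buy(buyer, eth_paid):
        """Credit the buyer with tokens at the fixed sale price."""
        global unsold
        tokens = min(int(eth_paid / PRICE_IN_ETH), unsold)
        ledger[buyer] = ledger.get(buyer, 0) + tokens
        unsold -= tokens

    buy("early_backer_1", eth_paid=100)   # 100 ETH buys 100,000 tokens
    buy("early_backer_2", eth_paid=250)   # 250 ETH buys 250,000 tokens
    print(ledger, "unsold:", unsold)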

This has the potential to be particularly exciting for the creation of decentralized networks. Fred Ehrsam explained on the Coinbase blog:

Historically it has been difficult to incentivize the creation of new protocols as Albert Wenger points out. This has been because 1) there had been no direct way to monetize the creation and maintenance of these protocols and 2) it had been difficult to get a new protocol off the ground because of the chicken and the egg problem. For example, with SMTP, our email protocol, there was no direct monetary incentive to create the protocol — it was only later that businesses like Outlook, Hotmail, and Gmail started using it and made a real business on top of it. As a result we see very successful protocols and they tend to be quite old. (Editor: and created when the Internet was government-supported)

Now someone can create a protocol, create a token that is native to that protocol, and retain some of that token for themselves and for future development. This is a great way to incentivize creators: if the protocol is successful, the token will go up in value…In addition, tokens help solve the classic chicken and the egg problem that many networks have…the value of a network goes up a lot when more people join it. So how do you get people to join a brand new network? You give people partial ownership of the network…

These two incentives are amazing offsets for each other. When the network is less populated and useful you now have a stronger incentive to join it.

This is a huge deal, and probably the most viable way out from the antitrust trap created by Aggregation Theory.

Party Like It’s 1999

The problem, of course, is that while blockchain applications make sense in theory, the road to them becoming a reality is still a long one. That is why I suspect the better analogy for blockchain-based applications and their associated cryptocurrencies is not tulips but rather the Internet itself, specifically the 1990s. Marc Andreessen is fond of observing, most recently on this excellent podcast with Barry Ritholtz, that all of the dot-com failures turned out to be viable businesses: they were just 15 years too early (the most recent example: Chewy.com, the spiritual heir of famed dot-com bust Pets.com, acquired earlier this year for $3.35 billion).

As the aphorism goes, being early (or late) is no different than being wrong, and that’s true in a financial sense. As I noted above, I would not be surprised if the ongoing run-up in cryptocurrency prices proves to be, well, a bubble. However, bubbles of irrationality and bubbles of timing are fundamentally different: the latter is based on something real, and the former is not. That is to say, one is a myth, and one is merely a fable — and myths can lift an entire species.

Consistent with my ethics policy, I do not own any Bitcoin or any other cryptocurrency; that said, the implication of this article is that comparing Bitcoin or any other cryptocurrency to stock in an individual company probably doesn’t make much sense.

  1. This link requires payment; there is an uploaded version of the paper here
  2. Thompson’s take is not without its critics: see Brad DeLong’s takedown here
  3. If you’re religious, please apply the point about “gods” to other religions — the point still stands!
  4. I first encountered this sort of thinking in an Introduction to Constitutional Law course in university, when my professor contended that the U.S. Constitution was simply a shared myth, dependent on the mutual agreement of Americans and their leaders that it mattered. It’s a lesson that has served me well
  5. And, as @nosunkcosts notes, said claim, via taxes, is backed by military might
  6. A useful overview of how cryptocurrencies work is here
  7. What happened with The Dao will not be covered here!

Boring Google

My favorite part of keynotes is always the opening. That is the moment when the CEO comes on stage, not to introduce new products or features, but rather to create the frame within which new products and features will be introduced.

This is why last week’s Microsoft keynote was so interesting: CEO Satya Nadella spent a good 30 minutes on the framing, explaining a new world where the platform that mattered was not a distinct device or a particular cloud, but rather one that ran on all of them. In this framing Microsoft, freed from a parochial focus on its own devices, could be exactly that; the problem, as I noted earlier this week, is that platforms come from products, and Microsoft is still searching for an on-ramp other than Windows.

The opening to Google I/O couldn’t have been more different. There was no grand statement of vision, no mind-bending re-framing of how to think about the broader tech ecosystem, just an affirmation of the importance of artificial intelligence — the dominant theme of last year’s I/O — and how it fit in with Google’s original vision. CEO Sundar Pichai said in his prepared remarks:

It’s been a very busy year since last year, no different from my 13 years at Google. That’s because we’ve been focused ever more on our core mission of organizing the world’s information. And we are doing it for everyone, and we approach it by applying deep computer science and technical insights to solve problems at scale. That approach has served us very, very well. This is what has allowed us to scale up seven of our most important products and platforms to over a billion users…It’s a privilege to serve users at this scale, and this is all because of the growth of mobile and smartphones.

But computing is evolving again. We spoke last year about this important shift in computing, from a mobile-first, to an AI-first approach. Mobile made us re-imagine every product we were working on. We had to take into account that the user interaction model had fundamentally changed, with multitouch, location, identity, payments, and so on. Similarly, in an AI-first world, we are rethinking all our products and applying machine learning and AI to solve user problems, and we are doing this across every one of our products.

Honestly, it was kind of boring.

Google’s Go-to-Market Problem

After last year’s I/O I wrote Google’s Go-To-Market Problem, and it remains very relevant. No company benefited more from the open web than Google: the web not only created the need for Google search, but the fact that all web pages were on an equal footing meant that Google could win simply by being the best — and they did.

Mobile has been much more of a challenge: while Android remains a brilliant strategic move, its dominance is rooted more in its business model than in its quality (that’s not to denigrate its quality in the slightest, particularly the fact that Android runs on so many different kinds of devices at so many different price points). The point of Android — and the payoff today — is that Google services are the default on the vast majority of phones.

The problem, of course, is iOS: Apple has the most valuable customers (from a monetization perspective, to be clear), who mostly don’t bother to use different services than the default Apple ones, even if they are, in isolation, inferior. I wrote in that piece:

Yes, it is likely Apple, Facebook, and Amazon are all behind Google when it comes to machine learning and artificial intelligence — hugely so, in many cases — but it is not a fair fight. Google’s competitors, by virtue of owning the customer, need only be good enough, and they will get better. Google has a far higher bar to clear — it is asking users and in some cases their networks to not only change their behavior but willingly introduce more friction into their lives — and its technology will have to be special indeed to replicate the company’s original success as a business.

To that end, I thought there were three product announcements yesterday that suggested Google is on the right track:

Google Assistant

Google Assistant was first announced last year, but it was only available through the Allo messenger app, Google’s latest attempt to build a social product; the company also pre-announced Google Home, which would not ship until the fall, alongside the Pixel phone. You could see Google’s thinking with all three products:

  • Given that the most important feature of a messaging app is whether or not your friends or family also use it, Google needed a killer feature to get people to even download Allo. Enter Google Assistant.

  • Thanks to the company’s bad bet on Nest, Google was behind Amazon in the home. Google Assistant being smarter than Alexa was the best way to catch up.

  • A problem for Google with voice computing is that it is not clear what the business model might be; one alternative would be to start monetizing through hardware, and so the high-end Pixel phone was differentiated by Google Assistant.

All three approaches suffered from the same flaw: Google Assistant was the means to a strategic goal, not the end. The problem, though, is that unlike search, Google Assistant was not yet established as something people should jump through hoops to get: driving Google Assistant usage needs to be the goal; only then can it be leveraged for something else.

To that end Google has significantly changed its approach over the last 12 months.

  • Google Assistant is now available as its own app, both on Android and iOS. No unwanted messenger app necessary.

  • The Google Assistant SDK will allow Google Assistant to be built into just about anything. Scott Huffman, the VP of Google Assistant, said:

    We think the assistant should be available on all kinds of devices where people might want to ask for help. The new Google Assistant SDK allows any device manufacturer to easily build the Google Assistant into whatever they’re building, speakers, toys, drink-mixing robots, whatever crazy device all of you think up now can incorporate the Google Assistant. We’re working with many of the world’s best consumer brands and their suppliers so keep an eye out for the badge that says “Google Assistant Built-in” when you do your holiday shopping this year.

    This is the exact right approach for a services company.

  • That leads to the Pixel phone: earlier this year Google finally added Google Assistant to Android broadly — built-in, not an app — after having insisted just a few months earlier it was a separate product. The shifting strategy was a big mistake (as, arguably, is the entire program), but at least Google has ended up where they should be: everywhere.

Google Photos

Google Assistant has a long way to go, but there is a clear picture of what success will look like: Google Photos. Launched only two years ago, Photos, Pichai bragged, now has over 500 million active users who upload 1.2 billion photos a day. This is a spectacular number for one very simple reason: Google Photos is not the default photo app for Android1 or iOS. Rather, Google has earned all of those photos simply by being better than the defaults, and the basis of that superiority is Google’s machine learning.

Moreover, much like search, Photos gets better the more data it gets, creating a virtuous cycle: more photos means more data which means a better experience which means more users which means more photos. It is already hard to see other photo applications catching up.

Yesterday Google continued to push forward, introducing suggested sharing, shared libraries, and photo books. All utilize vision recognition (for example, you can choose to automatically share pictures of your kids with your significant other) and all make Photos an even better app, which will lead to new users, which will lead to more data.

What is particularly exciting from Google’s perspective is that these updates add a social component: suggested sharing, for example, is self-contained within Google Photos, creating ad hoc private networks with you and your friends. Not only does this help spread Google Photos, it is also a much more viable and sustainable approach to social networking than something like Google Plus. Complex entities like social networks are created through evolution, not top-down design, and they must rely on their creator’s strengths, not weaknesses.

Google Lens

Google Lens was announced as a feature of Google Assistant and Google Photos. From Pichai:

We are clearly at an inflection point with vision, and so today, we are announcing a new initiative called Google Lens. Google Lens is a set of vision-based computing capabilities that can understand what you’re looking at and help you take action based on that information. We’ll ship it first in Google Assistant and Photos, and then other products.

How does it work? If you run into something and you want to know what it is, say a flower, you can invoke Google Lens, point your phone at it and we can tell you what flower it is…Or if you’re walking on a street downtown and you see a set of restaurants across you, you can point your phone, because we know where you are, and we have our Knowledge Graph, and we know what you’re looking at, we can give you the right information in a meaningful way.

As you can see, we are beginning to understand images and videos. All of Google was built because we started understanding text and web pages, so the fact that computers can understand images and videos has profound implications for our core mission.

The profundity cannot be overstated: by bringing the power of search into the physical world, Google is effectively increasing the addressable market of searchable data by a massive amount, and all of that data gets added back into that virtuous cycle. The potential upside is about more than data though: being the point of interaction with the physical world opens the door to many more applications, from things like QR codes to payments.

My one concern is that Google is repeating its previous mistake: that is, seeking to use a new product as a means instead of an end. Limiting Google Lens to Google Assistant and Google Photos risks handicapping Lens’ growth; ideally Lens will be its own app — and thus the foundation for other applications — sooner rather than later.


Make no mistake, none of these opportunities are directly analogous to Google search, particularly the openness of their respective markets or the path to monetization. Google Assistant requires you to open an app instead of using what is built-in (although the Android situation should improve going forward), Photos requires a download instead of the default photos app, and Lens sits on top of both. It’s a far cry from simply setting Google as the home page of your browser, and Google making more money the more people used the Internet.

All three apps, though, are leaning into Google’s strengths:

  • Google Assistant is focused on being available everywhere
  • Google Photos is winning by being better through superior data and machine learning
  • Google Lens is expanding Google’s utility into the physical world

There were other examples too: Google’s focus with VR is building a cross-device platform that delivers an immersive experience at multiple price points, as opposed to Facebook’s integrated high-end approach that makes zero sense for a social network. And, just as Apple invests in chips to make its consumer products better, Google is investing in chips to make its machine learning better.

The Beauty of Boring

This is the culmination of a shift that happened two years ago, at the 2015 Google I/O. As I noted at the time,2 the event was two keynotes in one.

[The first hour was] a veritable smorgasbord of features and programs that [lacked a] unifying vision, just a sense that Google should do them. An operating system for the home? Sure! An Internet of Things language? Bring it on! Android Wear? We have apps! Android Pay? Obviously! A vision for Android? Not necessary!

None of these had a unifying vision, just a sense that Google ought to do them because they’re a big company that ought to do big things.

What was so surprising, though, was that the second hour of that keynote was completely different. Pichai gave a lengthy, detailed presentation about machine learning and neural nets, and tied it to Google’s mission, much like he did in yesterday’s introduction. After quoting Pichai’s monologue I wrote:

Note the specificity — it may seem too much for a keynote, but it is absolutely not BS. And no surprise: everything Pichai is talking about is exactly what Google was created to do…The next 30 minutes were awesome: Google Now, particularly Now on Tap, was exceptionally impressive, and Google Photos looks amazing. And, I might add, it has a killer tagline: Gmail for Photos. It’s so easy to be clear when you’re doing exactly what you were meant to do, and what you are the best in the world at.

This is why I think that Pichai’s “boring” opening was a great thing. No, there wasn’t the belligerence of early Google I/Os, insisting that Android could take on the iPhone. And no, there wasn’t the grand vision of Nadella last week, or the excitement of an Apple product unveiling. What there was was a sense of certainty and almost comfort: Google is about organizing the world’s information, and given that Pichai believes the future is about artificial intelligence, specifically the machine learning variant that runs on data, that means that Google will succeed in this new world simply by being itself.

That is the best place to be, for a person and for a company.

  1. Google Photos was default for the Pixel, and for more and more Android phones that have come out in the past few months
  2. This is a Daily Update but I have made it publicly available

WannaCry About Business Models

For the users and administrators of the estimated 200,000 computers affected by “WannaCry” — a number that is expected to rise as new variants come online (the original was killed serendipitously by a security researcher) — the answer to the question implied in the name is “Yes.”

[Screenshot: the WannaCry ransom demand]

WannaCry is a type of malware called “ransomware”: it encrypts a computer’s files and demands payment to decrypt them. Ransomware is not new; what made WannaCry so destructive was that it was built on top of a computer worm — a type of malware that replicates itself onto other computers on the same network (said network, of course, can include the Internet).

Worms have always been the most destructive type of malware — and the most famous: even non-technical readers may recognize names like Conficker (an estimated $9 billion of damage in 2008), ILOVEYOU (an estimated $15 billion of damage in 2000), or MyDoom (an estimated $38 billion of damage in 2004). There have been many more, but not so many the last few years: the 2000s were the sweet spot when it came to hundreds of millions of computers being online with an operating system — Windows XP — that was horrifically insecure, operated by users given to clicking and paying for scams to make the scary popups go away.

Over the ensuing years Microsoft has, from Windows XP Service Pack 2 on, gotten a lot more serious about security, network administrators have gotten a lot smarter about locking down their networks, and users have at least improved in not clicking on things they shouldn’t. Still, as this last weekend shows, worms remain a threat, and as usual, everyone is looking for someone to blame. This time, though, there is a juicy new target: the U.S. government.

The WannaCry Timeline

Microsoft President and Chief Legal Officer Brad Smith didn’t mince any words on The Microsoft Blog (“WannaCrypt” is an alternative name for WannaCry):

Starting first in the United Kingdom and Spain, the malicious “WannaCrypt” software quickly spread globally, blocking customers from their data unless they paid a ransom using Bitcoin. The WannaCrypt exploits used in the attack were drawn from the exploits stolen from the National Security Agency, or NSA, in the United States. That theft was publicly reported earlier this year. A month prior, on March 14, Microsoft had released a security update to patch this vulnerability and protect our customers. While this protected newer Windows systems and computers that had enabled Windows Update to apply this latest update, many computers remained unpatched globally. As a result, hospitals, businesses, governments, and computers at homes were affected.

Smith mentions a number of key dates, but it’s important to get the timeline right, so let me summarize it as best as I understand it:1

  • 2001: The bug in question was first introduced in Windows XP and has hung around in every version of Windows since then
  • 2001–2015: At some point the NSA (likely the Equation Group, allegedly a part of the NSA) discovered the bug and built an exploit named EternalBlue, and may or may not have used it
  • 2012–2015: An NSA contractor allegedly stole more than 75% of the NSA’s library of hacking tools
  • August, 2016: A group called “ShadowBrokers” published hacking tools they claimed were from the NSA; the tools appeared to come from the Equation Group
  • October, 2016: The aforementioned NSA contractor was charged with stealing NSA data
  • January, 2017: ShadowBrokers put a number of Windows exploits up for sale, including an SMB zero-day exploit — likely the “EternalBlue” exploit used in WannaCry — for 250 BTC (around $225,000 at that time)
  • March, 2017: Microsoft, without fanfare, patched a number of bugs without giving credit to whoever discovered them; among them was the bug exploited by EternalBlue, and it seems very possible the NSA warned them
  • April, 2017: ShadowBrokers released a new batch of exploits, including EternalBlue, perhaps because Microsoft had already patched them (dramatically reducing the value of zero-day exploits in particular)
  • May, 2017: WannaCry, based on the EternalBlue exploit, was released and spread to around 200,000 computers before its kill switch was inadvertently triggered; new versions have already begun to spread

It is axiomatic to note that the malware authors bear ultimate responsibility for WannaCry; hopefully they will be caught and prosecuted to the full extent of the law.

After that, though, it gets a bit murky.

Spreading Blame

The first thing to observe from this timeline is that, as with all Windows exploits, the initial blame lies with Microsoft. It is Microsoft that developed Windows without a strong security model for networking in particular, and while the company has done a lot of work to fix that, many fundamental flaws still remain.

Not all of those flaws are Microsoft’s fault: the default assumption for personal computers has always been to give applications mostly unfettered access to the entire computer, and all attempts to limit that have been met with howls of protest. iOS created a new model, in which applications were put in a sandbox and limited to carefully defined hooks and extensions into the operating system; that model, though, was only possible because iOS was new. Windows, in contrast, derived all of its market power from the established base of applications already in the market, which meant overly broad permissions couldn’t be removed retroactively without ruining Microsoft’s business model.

Moreover, the reality is that software is hard: bugs are inevitable, particularly in something as complex as an operating system. That is why Microsoft, Apple, and basically any conscientious software developer regularly issues updates and bug fixes; that products can be fixed after the fact is inextricably linked to why they need to be fixed in the first place!

To that end, though, it’s important to note that Microsoft did fix the bug two months ago: any computer that applied the March patch — which, by default, is installed automatically — is protected from WannaCry; Windows XP is an exception, but Microsoft stopped selling that operating system in 20082 and stopped supporting it in 2014 (despite that fact, Microsoft did release a Windows XP patch to fix the bug on Friday night). In other words, end users and the IT organizations that manage their computers bear responsibility as well. Simply staying up-to-date on critical security patches would have kept them safe.

Still, staying up-to-date is expensive, particularly in large organizations, because updates break stuff. That “stuff” might be critical line-of-business software, which may be from 3rd-party vendors, external contractors, or written in-house: that said software is so dependent on one particular version of an OS is itself a problem, so you can blame those developers too. The same goes for hardware and its associated drivers: there are stories from the UK’s National Health Service of MRI and X-ray machines that only run on Windows XP, critical negligence by the manufacturers of those machines.

In short, there is plenty of blame to go around; how much, though, should go into the middle part of that timeline — the government part?

Blame the Government

Smith writes in that blog post:

This attack provides yet another example of why the stockpiling of vulnerabilities by governments is such a problem. This is an emerging pattern in 2017. We have seen vulnerabilities stored by the CIA show up on WikiLeaks, and now this vulnerability stolen from the NSA has affected customers around the world. Repeatedly, exploits in the hands of governments have leaked into the public domain and caused widespread damage. An equivalent scenario with conventional weapons would be the U.S. military having some of its Tomahawk missiles stolen.

This comparison, frankly, is ridiculous, even if you want to stretch and say that the impact of WannaCry on places like hospitals may actually result in physical harm (albeit much less than a weapon of war!).

First, the U.S. government creates Tomahawk missiles, but it is Microsoft that created the bug (even if inadvertently). What the NSA did was discover the bug (and subsequently exploit it), and that difference is critical. Finding bugs is hard work, requiring a lot of money and effort. It’s worth considering why, then, the NSA was willing to do just that, and the answer is right there in the name: national security. And, as we’ve seen through examples like Stuxnet, these exploits can be a powerful weapon.

Here is the fundamental problem: insisting that the NSA hand over exploits immediately is to effectively demand that the NSA not find the bug in the first place. After all, a patched (and thus effectively published) bug isn’t worth nearly as much, either monetarily, as ShadowBrokers found out, or militarily, which means the NSA would have no reason to invest the money and effort to find them. To put it another way, the alternative is not that the NSA would have told Microsoft about EternalBlue years ago, but that the underlying bug would have remained un-patched for even longer than it was (perhaps to be discovered by other entities like China or Russia; the NSA is not the only organization searching for bugs).

In fact, the real lesson to be learned with regard to the government is not that the NSA should be Microsoft’s QA team, but rather that leaks happen: that is why, as I argued last year in the context of Apple and the FBI, government efforts to weaken security by fiat or the insertion of golden keys (as opposed to discovering pre-existing exploits) are wrong. Such an approach is much more in line with Smith’s Tomahawk missile argument, and given the indiscriminate and immediate way in which attacks can spread, the country that would lose the most from such an approach would be the one that has the most to lose (i.e. the United States).

Blame the Business Model

Still, even if the U.S. government is less to blame than Smith insists, nearly two decades of dealing with these security disasters suggests there is a systematic failure happening, and I think it comes back to business models. The fatal flaw of software, beyond the various technical and strategic considerations I outlined above, is that for the first several decades of the industry software was sold for an up-front price, whether that be for a package or a license.

This resulted in problematic incentives and poor decision-making by all sides:

  • Microsoft is forced to support multiple distinct code bases, which is expensive and difficult and not tied to any monetary incentives (thus, for example, the end of support for Windows XP).
  • 3rd-party vendors are inclined to view a particular version of an operating system as a fixed object: after all, Windows 7 is distinct from Windows XP, which means it is possible to specify that only XP is supported. This is compounded by the fact that 3rd-party vendors have no ongoing monetary incentive to update their software; after all, they have already been paid.
  • The most problematic impact is on buyers: computers and their associated software are viewed as capital costs, which are paid for once and then depreciated over time as the value of the purchase is realized. In this view ongoing support and security are an additional cost divorced from ongoing value; the only reason to pay is to avoid a future attack, which is impossible to predict both in terms of timing and potential economic harm.

The truth is that software — and thus security — is never finished; it makes no sense, then, that payment is a one-time event.

SaaS to the Rescue

Four years ago I wrote about why subscriptions are better for both developers and end users in the context of Adobe’s move away from packaged software:

[Chart: surplus under subscription versus up-front pricing]

That article was about the benefit of better matching Adobe’s revenue with the value gained by its users: the price of entry is lower while the revenue Adobe extracts over time is more commensurate with the value it delivers. And, as I noted, “Adobe is well-incentivised to maintain the app to reduce churn, and users always have the most recent version.”
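
A crude numeric sketch makes the point; the figures below are invented and are not Adobe’s actual pricing:

    # Invented figures comparing a one-time license to a subscription over five
    # years; not Adobe's actual pricing.

    years = 5
    upfront_license = 2500           # paid once, then nothing
    subscription_per_year = 600      # lower price of entry, paid every year

    upfront_revenue = [upfront_license] + [0] * (years - 1)
    subscription_revenue = [subscription_per_year] * years

    for year in range(years):
        print(f"Year {year + 1}: license={upfront_revenue[year]:>5}  "
              f"subscription={subscription_revenue[year]:>5}")

    # The vendor's revenue now arrives in step with the value delivered each year.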

This is exactly what is necessary for good security: vendors need to keep their applications (or in the case of Microsoft, operating systems) updated, and end users need to always be using the latest version. Moreover, pricing software as a service means it is no longer a capital cost with all of the one-time payment assumptions that go with it: rather, it is an ongoing expense that implicitly includes maintenance, whether that be by the vendor or the end user (or, likely, a combination of the two).

I am, of course, describing Software-as-a-service, and that category’s emergence, along with cloud computing generally (both easier to secure and with massive incentives to be secure), is the single biggest reason to be optimistic that WannaCry is the dying gasp of a bad business model (although it will take a very long time to get out of all the sunk costs and assumptions that fully-depreciated assets are “free”).3 In the long run, there is little reason for the typical enterprise or government to run any software locally, or store any files on individual devices. Everything should be located in a cloud, both files and apps, accessed through a browser that is continually updated, and paid for with a subscription. This puts the incentives in all the right places: users are paying for security and utility simultaneously, and vendors are motivated to earn it.

To Microsoft’s credit the company has been moving in this direction for a long time: not only is the company focused on Azure and Office 365 for growth, but even its traditional software has long been monetized through subscription-like offerings. Still, implicit in this cloud-centric model is a lot less lock-in and a lot more flexibility in terms of both devices and services: the reality is that for as much of a headache as Windows security has been for Microsoft, those headaches are inextricably tied up with the reasons that Microsoft has been one of the most profitable companies of all time.

The big remaining challenge will be hardware: the business model for software-enabled devices will likely continue to be upfront payment, which means no incentives for security; the costs are externalities to be borne by the targets of botnets like Mirai. Expect little progress and lots of blame, the hallmark of the sort of systematic breakdown that results from a mismatched business model.

  1. I will update this section as necessary and note said updates in this footnote
  2. Windows XP was still available for Netbooks until 2010
  3. The answer for individual security is encryption