Stratechery Plus Update

  • Economic Power in the Age of Abundance

    At first, or second, or even third glance, it’s hard not to shake your head at European publishers’ dysfunctional relationship with Google. Just this week a group of German publishers started legal action against the search giant, demanding 11 percent of all revenue stemming from pages that include listings from their sites. From Danny Sullivan’s excellent Search Engine Land:

    German news publishers are picking up where the Belgians left off, a now not-so-proud tradition of suing Google for being included in its listings rather than choosing to opt-out. This time, the publishers want an 11% cut of Google’s revenue related to them being listed.

    As Sullivan notes, Google offers clear guidelines for publishers who do not want to be listed, or simply do not want content cached. The problem, though, as a group of Belgian newspapers found out, is that not being in Google means a dramatic drop in traffic:

    Back in 2006, Belgian news publishers sued Google over their inclusion in Google News, demanding that Google remove them. They never had to sue; there were mechanisms in place where they could opt-out.

    After winning the initial suit, Google dropped them as demanded. Then the publications, watching their traffic drop dramatically, scrambled to get back in. When they returned, they made use of the exact opt-out mechanisms (mainly just to block page caching) that were in place before their suit, which they could have used at any time.

    In the case of the Belgian publishers in particular, it was difficult to understand what they were trying to accomplish. After all, isn’t the goal more page views (it certainly was in the end!)? The German publishers in this case are being a little more creative: like the Belgians before them, they allege that Google benefits from their content, but instead of risking their traffic by leaving Google, they are demanding that Google give them a cut of the revenue they feel they deserve.

    The obvious reaction to this case, as with the Belgian one, is to marvel at the publishers’ nerve; after all, as we saw with the Belgians, Google is driving traffic from which the publishers profit. “Ganz im Gegenteil!” say the publishers. “Google would not exist without our content.” And, at a very high level, I suppose that’s true, but it’s true in a way that doesn’t matter, and understanding why it doesn’t matter gets at the core reason why traditional journalistic institutions are having so much trouble in the Internet era.


    One of the great paradoxes for newspapers today is that their financial prospects are inversely correlated to their addressable market. Even as advertising revenues have fallen off a cliff – adjusted for inflation, ad revenues are at the same level as the 1950s – newspapers are able to reach audiences not just in their hometowns but literally all over the world.

    Before the Internet, a newspaper like the New York Times was limited in reach; now it can reach anyone on the planet

    The problem for publishers, though, is that the free distribution provided by the Internet is not an exclusive. It’s available to every other newspaper as well. Moreover, it’s also available to publishers of any type, even bloggers like myself.

    The city-by-city view of Stratechery’s readers over the last 30 days.

    To be clear, this is absolutely a boon, particularly for readers, but also for any writer looking to have a broad impact. For your typical newspaper, though, the competitive environment is diametrically opposed to what they are used to: instead of there being a scarce amount of published material, there is an overwhelming abundance. More importantly, this shift in the competitive environment has fundamentally changed just who has economic power.

    In a world defined by scarcity, those who control the scarce resources have the power to set the price for access to those resources. In the case of newspapers, the scarce resource was readers’ attention, and the purchasers were advertisers. The expected response in a well-functioning market would be for competitors to arise to offer more of whatever resource is scarce, but this was always more difficult when it came to newspapers: publishers enjoyed the dual moats of significant up-front capital costs (printing presses are expensive!) as well as a two-sided network (readers and advertisers). The result is that many newspapers enjoyed a monopoly in their area, or an oligopoly at worst.

    The Internet, though, is a world of abundance, and there is a new power that matters: the ability to make sense of that abundance, to index it, to find needles in the proverbial haystack. And that power is held by Google. Thus, while the audiences advertisers crave are now hopelessly fractured amongst an effectively infinite number of publishers, the readers they seek to reach by necessity start at the same place – Google – and thus, that is where the advertising money has gone.

    And so, the German publishers are both right and wrong. Without content generated by others, without the proverbial hay, Google would not exist. But at the same time, as the Belgian publishers learned eight years ago, any one publication is but a single haystalk, easily blown by the wind or trampled underfoot, and no one cries – or worse, even notices – when it is gone. Certainly not Google, and certainly none of the advertisers who provide the money to which the German publishers wrongly feel they are entitled.


  • Amazon’s Whale Strategy

    A week before yesterday’s launch of the Fire Phone, Amazon sent all of the attendees a copy of the children’s book “Mr. Pine’s Purple House” with a note from Jeff Bezos stating:

    I think you’ll agree that the world is a better place when things are a little bit different.

    Beyond the book, the first thing different about the event was that it was open to the general public. Over 60,000 people ended up applying to attend, and Amazon opened the event with a video featuring several of those applicants. The message was clear: this event was about, and for, Amazon’s most loyal customers. And so is the Fire Phone.

    It’s Not the Phone That’s Different; It’s the Strategy

    At first glance, there is very little about the phone that feels different, at least in the ways that matter. Firefly is compelling, while Dynamic Perspective feels like a whole lot more trouble than it’s worth – both for Amazon and for end users. The camera looks impressive, and unlimited cloud storage is indeed a feature all smartphones should have. Still, much of this is at the margins: what we expected from Amazon was along the lines of Jeff Bezos’s promise for the Fire tablet line: “Premium products at non-premium prices.”

    Instead the Fire Phone is right up there with an iPhone 5S when it comes to price,1 and it’s sold with the exact same contract you would get with that iPhone.2 There is no margin compression, no subsidized data. There’s nothing different at all. Unlike the Fire tablets, which most assume are sold at near cost in order to drive usage of Amazon’s services – especially Prime video and Kindle books – the Fire Phone seems unlikely to attract any new customers to the Amazon ecosystem. And perhaps, that is what makes it so different: the Fire Phone seems to be not only a different strategy for Amazon, but a new kind of smartphone strategy, full stop.

    How Will Amazon Ultimately Make Money?

    The chief question observers have about Amazon is when exactly they will start making profits, and, more importantly, how. Last quarter Amazon made a mere $108 million on $19.74 billion in sales, good for a price-to-earnings ratio of over 500. While some have suggested that Amazon is trying to eliminate competitors so it can raise prices, it seems much more likely that significantly raising prices would cost Amazon more customers than it would be worth.

    A more plausible argument is that Amazon’s eventual profitability is predicated on their total revenue and gross margins; while Amazon’s net margin last quarter was only 0.5%, its gross margin was a healthier 28.8%, which comes out to $5.69 billion in gross profit. Were less of that gross profit invested in growth, Amazon could very quickly begin generating significant returns for shareholders. Still, though, this suggestion isn’t entirely satisfactory either: slowing investment works in the short-term, but Jeff Bezos is famously long-term focused, and over time less investment would lessen Amazon’s competitive advantages.
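    Those margin figures are easy to sanity-check with a quick sketch, using only the numbers quoted in the paragraph above:

    ```python
    # Sanity-checking the quarterly figures quoted above.
    revenue = 19.74e9        # quarterly sales
    net_income = 108e6       # quarterly net income
    gross_margin = 0.288     # gross margin percentage

    net_margin = net_income / revenue
    gross_profit = revenue * gross_margin

    print(round(net_margin, 3))           # 0.005 -> the ~0.5% net margin cited
    print(round(gross_profit / 1e9, 2))   # 5.69 -> $5.69 billion in gross profit
    ```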

    The Whale Strategy

    The Fire Phone’s high price highlights what may be a third way: call it the whale strategy, for it entails a disproportionate amount of profit being generated by the same sorts of loyal customers who opened yesterday’s keynote. In this view, instead of the Fire Phone being a means of driving e-commerce, e-commerce is in fact a means of capturing customers and building loyalty; Prime deepens the connections, and the profitability; and at the final stage, the Fire Phone makes more profit in a single purchase than anything that came before, even as it drives an even deeper connection with the customer.

    It helps to look at this strategy as a sort of funnel:

    Amazon Whales are the top one percent: Prime subscribers who buy the Fire Phone

    Looking at the actual numbers involved makes the implications of this strategy even clearer:

    The funnel economics of normal shoppers, Prime subscribers, and Fire Phone buyers

    Various reports suggest that Prime subscribers make up about 10% of all Amazon shoppers. Presuming that is the case, and that 10% of Prime subscribers buy the Fire Phone, Fire Phone buyers could make up 4% of Amazon’s gross profit despite being only 1% of Amazon’s customers.3 Moreover, nearly 40% of that additional profit would be derived solely from the phone itself. In fact, the incremental $150 in annual phone profit (spreading out the expected $300 gross margin over two years) would have the same impact as converting 2.5 normal shoppers to Prime subscribers.
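    The funnel arithmetic is simple enough to sketch. The two 10% shares are the assumptions stated above, and the $300 gross margin over two years comes from footnote 3; the per-customer spend figures (which drive the 4% gross profit claim) come from an external Quartz article and so are omitted here:

    ```python
    # Sketch of the whale funnel described above, using only the article's
    # stated assumptions.
    shoppers = 100                 # normalize to 100 Amazon customers
    prime_share = 0.10             # ~10% of shoppers subscribe to Prime
    fire_share_of_prime = 0.10     # assume 10% of Prime subscribers buy the phone

    fire_buyers = shoppers * prime_share * fire_share_of_prime
    print(round(fire_buyers, 2))   # 1.0 -> Fire Phone buyers are 1% of customers

    annual_phone_profit = 300 / 2  # $300 gross margin over a two-year cycle
    print(annual_phone_profit)     # 150.0 -> the incremental $150/year cited
    ```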

    Ultimately, this may be Amazon’s endgame: unlike Apple, a vertical company which offers services to differentiate its phones, or Google, a horizontal one that offers services to everyone, and phone software for free to ensure access, Amazon is offering a phone to more fully monetize a segment of its broad base of e-commerce customers, and maybe, at long last, turn some sort of profit.

    Will It Work?

    I have admittedly gone out of my way here to paint a strategy that makes sense of Amazon’s announcement yesterday. In truth, I’m a bit more skeptical.

    Amazon has always had a unique relationship to the physical world: from a consumer perspective they’re totally virtual, yet their primary business is things you can actually touch. In that sense something like the Fire Phone is perfect for Amazon: it tightly binds the virtual not only to what is on your computer screen, but to what is in your hand and, with Firefly, to most of the objects you interact with every day.

    The question, though, is whether the Fire Phone is perfect for Amazon’s customers. Just because someone loves Amazon doesn’t mean their entire life is about buying things. And while it’s true that Amazon has gone to great lengths to make the Fire Phone compelling as a phone, it’s still an inferior offering compared to a high-end Android phone or especially an iPhone when it comes to things like apps. In this respect it’s fair to compare the Fire Phone to Facebook Home and the HTC First: just because people love Facebook didn’t mean they wanted Facebook to dominate their phone, and by extension, their lives.

    Moreover, I was troubled by the faint sense of hubris in yesterday’s presentation; it was 45 minutes too long and included far too much self-congratulation and navel-gazing. We get that the design process for Dynamic Perspective was hard, but that doesn’t mean we care. More broadly, Amazon is a horizontal company: they ought to be serving everyone. Having their own phone introduces the wrong sort of incentives when it comes to Amazon’s efforts on Android and the iPhone; it’s the same danger I see in Microsoft focusing on both services and devices.

    Ultimately, I think Amazon would have been better off investing the considerable time and effort it took to bring the Fire Phone to market into better marketing Prime, as well as into a concerted campaign to get users to add and use an Amazon app – with Firefly functionality – on their smartphones. Those Apple-like phone margins may look attractive in the short term, but hasn’t Amazon always been the king of the long term?4


    1. The base model does have double the storage 

    2. Amazon is severely constrained by virtue of the fact they primarily compete in the US; because of subsidies, it is nearly impossible to undercut other phones in price, and mid-range phones are especially handicapped when iPhones are available for $200 out-of-pocket 

    3. Other assumptions and references:

      • The numbers for annual e-commerce spend for normal and Prime subscribers are from this Quartz article
      • I assumed the Fire Phone buyers would further increase their ecommerce spend by the same percentage as Prime subscribers relative to normal shoppers
      • I also assumed the Fire Phone shoppers buy a new phone every two years (and thus divided an estimated $300 gross margin by 2)
      • I did not calculate the profit from any other Fire devices, although Fire Phone buyers are much more likely to have a Fire tablet, Kindle e-reader, and/or Fire TV

       

    4. Yes, you could argue that the Fire Phone is a long term investment 


  • Privacy is Dead

    According to a new study, consumers are very concerned about their privacy. From the New York Times write-up:

    People around the world are thrilled by the ease and convenience of their smartphones and Internet services, but they aren’t willing to trade their privacy to get more of it.

    That is the top-line finding of a new study of 15,000 consumers in 15 countries. The privacy paradox was surfaced most directly in one question: Would you be willing to trade some privacy for greater convenience and ease? Worldwide, 51 percent replied no, and 27 percent said yes. (The remainder had no opinion or didn’t know.) There were country-by-country differences, but there was a consistency to the results, especially in the developed nations…

    When asked to name the leading threats to online privacy in the future, 51 percent of the global panel of consumers picked “businesses using, trading or selling my personal data for financial gain without my knowledge or benefit”…The survey seems to present a grim outlook for data-driven online businesses and marketers.

    Coincidentally, on the same day it was revealed that Facebook will use your web browsing history for ad targeting. From Ad Age:

    Through its ubiquitous “like” buttons on publisher sites across the web, Facebook has long been able to watch the web surfing behavior of its 1.28 billion monthly users. Soon it will begin to use that information for ad targeting on Facebook…

    Facebook is using the passive data — where users go on their PCs and phones — to make its own ads smarter. Advertisers who want to reach Facebook users who are interested in camping, for example, will be able to reach that audience with greater accuracy. “There’s just a more robust set of information that informs that you’re interested in camping,” Mr. Boland said.

    So is Facebook doomed? Hardly. The truth is few consumers would be willing to pay the price for true privacy. To understand that price, it is necessary to understand why privacy is dead.

    Advertising is Lucrative, and Free Is Often Imperative

    The first reason that many sites rely on advertising is that it is simply much more lucrative than charging on a per-user basis. Consider this site: the week I write this, Booking.com is sponsoring Stratechery for $500. If there is a sponsorship every week, that means ~$2,250 in monthly income. I also write the Daily Update for Stratechery members; membership costs $10/month, which means one month of sponsorship has the same income potential as 225 members.
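    A back-of-the-envelope comparison makes the gap concrete, using only the numbers in the paragraph above (and the article’s own ~4.5 weeks-per-month rounding):

    ```python
    # One month of sponsorship vs. $10/month memberships.
    weekly_sponsorship = 500
    monthly_sponsorship = weekly_sponsorship * 4.5   # ~4.5 weeks per month

    membership_price = 10
    equivalent_members = monthly_sponsorship / membership_price

    print(monthly_sponsorship)   # 2250.0 -> the ~$2,250/month cited
    print(equivalent_members)    # 225.0 -> members needed to match one sponsor
    ```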

    To be clear, while I’m very grateful for my sponsors, I greatly enjoy writing the Daily Update and strongly believe in the member-supported model and am determined to make it work. But it shouldn’t be any surprise that for most sites, the advertisement versus member-support decision isn’t really a decision at all: advertising wins.

    There are other services that can’t even realistically choose between advertising and member-supported. Facebook is a great example: the utility of Facebook is directly correlated with how many people you know who are also using Facebook, and the only way to maximize that number is to make the service free, supported by advertising. Google is in a similar boat: the efficacy of search is in many ways tied to how many people are using search. Queries and clicks are the raw grist for Google improving its algorithm, and the more the better, which means making queries free.

    Targeting Information is the New Scarcity

    The explosion of ad-supported content and ads on the Internet has created a glut of inventory; all things being equal, one piece of inventory is worth the same as another, which is to say worth $0. It’s pricing 101: if supply outstrips demand, the price drops; the flipside of infinite inventory is a price of $0.

    When it comes to the price sites or apps can charge for ads, though, the key phrase is “all things being equal.” Some sites, like the New York Times, can charge more based on the perceived quality of their audience and the prestige of the New York Times brand; others, like Yahoo, can offer wide reach with one ad-buy (although this is worth increasingly less because programmatic buying offers the same scale benefits across a wide array of sites and apps).

    The greatest amount of pricing power, though, is commanded by sites that target ads to specific groups. This is why Facebook is such a powerful player in online advertising, and why Google has spent so much time on the Google+ identity system (the social network aspect was only ever frosting on the cake). This means that any site that monetizes through advertising is heavily incentivized to know as much about their customers as possible, and to devise ways to leverage that knowledge into higher rates.1

    Many Ad-Supported Apps and Sites Have Significant Consumer Benefit

    I just noted that Facebook and Google, in order to function, must be free (and thus ad-supported) in order to offer meaningful social networking and better search, respectively. The biggest beneficiaries, though, are the people who actually use Facebook and Google. Facebook doesn’t seem to get much respect in tech circles, but the truth is it has had and continues to have a more meaningful impact on normal consumers’ lives than any of the various companies and products that actually get tech people excited. As for Google, I’ve used the product countless times just to write this article; it’s exceedingly difficult to imagine how any of us could function without it.

    A whole host of other sites don’t have to be free, but as I noted above the economics lead them to the ad model; ultimately, though, consumers benefit by having access to an astonishing amount of information without paying anything. It’s fine to argue that content in particular should depend on subscribers, but no one person can subscribe to everything.

    No Room for Privacy

    The net result is an iron circle in which advertisers pay free apps and sites, who in turn provide significant benefit to consumers, who in exchange surrender targeting info about their demographics and preferences:

    The iron circle of advertisers, free apps and sites, and consumers

    You cannot take away any one of these components without taking away all of them. Unless we as a society are willing to give up all of the benefits provided by search, social networks, and the free dissemination of information, then we will give up our privacy.

    Towards Responsibility

    The first step towards dealing with this new reality responsibly is coming to grips with the circle I just illustrated: demanding that all users have zero data collected about them necessarily means choking off much of the consumer benefit provided by the Internet. Those who truly wish to opt-out can opt-out of Facebook, Google, etc. (any objection that this would be unpleasant actually makes my point).

    Secondly, there needs to be an industry standard for anonymizing and aggregating data. All of the relevant players claim they do this sort of anonymization and aggregation, but the effectiveness of their methods is a black box. It should be impossible to tie any of this lucrative information to an individual, and in return, apps and sites that ensure this is the case should receive some sort of imprimatur attesting to their responsibility.

    Finally, there should be significant resources spent establishing this imprimatur as something worth “paying” for when it comes to consumer attention. Sites and apps should be rewarded for treating information responsibly, with meaningful fines if violations are found.

    The devil is certainly in the details of this sort of verification system, but the danger of not working through these issues now is more government impositions along the lines of the recent “Right to be forgotten” decision handed down in the European Union. While that decision is not directly applicable to the issue of collecting and selling user data, it ought to serve as evidence that governments will effect change with the blunt instruments of regulation and judicial decree if we as an industry decline to wield our own scalpels.


    A postscript: from what I’ve heard Apple is going to be making a significant marketing push around privacy; the rhetoric at WWDC certainly suggested this was the case. While this is primarily a strategy credit (i.e. the opposite of a strategy tax), it will be interesting to see just how much of an impact it makes on consumers.


    1. To be clear, I don’t collect any user information and don’t share any membership information – and never will. My only selling point to sponsors is the obvious sophistication of a Stratechery reader 🙂  


  • How Apple TV Might Disrupt Microsoft and Sony

    Beyond the fact most of us had nothing better to do in the 1980s, a big reason to own a gaming console was that consoles were a phenomenally good deal. In 1985 Nintendo introduced the Famicom to North America as the Nintendo Entertainment System for a mere $199, a remarkably low price considering the average PC cost around $2,400.1 While PC prices soon began to fall, the Playstation/Nintendo 64 generation was still nearly $1,500 cheaper than the average PC.

    Over the last two generations of consoles, however, prices have actually risen, and today a Playstation 4 or Xbox One is nearly the same price as an average PC.

    PC prices have plummeted while console prices have slowly risen

    In some respects, this makes no sense: why hasn’t Moore’s law had the same impact on consoles as it has had on PCs? Moreover, when you consider that consoles now compete with a whole host of new time-wasters like phones, tablets, social networks, dramatically expanded TV offerings, the Internet, etc., it’s downright bizarre.

    I think the answer lies in a specific part of disruption theory. Specifically, incumbents are driven by their best customers to add more and more features that drive up the price, causing the incumbents’ product to move further and further away from the average customer’s needs (needs which have actually been decreasing as more entertainment options become available):

    High-end and low-end customer needs have diverged, and Microsoft and Sony have chased the high-end

    It’s hard to imagine an industry where high-end customers are more vocal and demanding than the console business, and there is no better example of this phenomenon than the Xbox One. Just before last year’s E3, Microsoft formally introduced the Xbox One with a heavy emphasis on its built-in Kinect and entertainment features. While their presentation featured trailers for a few upcoming games, it was clear that Microsoft was finally making a major play for the living room. And, excuse my French, gamers went apeshit.

    Beyond the perceived lack of focus on games, gamers objected to the Xbox One’s always-on capabilities (meant to facilitate Kinect voice commands), its DRM, its price ($100 more expensive than the Playstation 4), and, especially, its perceived sacrifices in performance. Sony pounced on gamers’ disgust, using their E3 presentation and launch runup to position themselves as the anti-Microsoft pro-gamer alternative, and successfully so. The Playstation 4 beat the Xbox One out of the gate, and has only increased its lead since then.

    Microsoft, meanwhile, has been backtracking furiously, completely remaking their DRM, making the essential Kinect optional, lowering the price, and this week at E3 focusing on “nothing but games”. From CNet’s coverage of Microsoft’s press conference:

    “You are shaping the future of Xbox and we are better for it,” said Xbox head Phil Spencer, the first Microsoft executive to take the stage before the conference launched into its first demo, a gameplay showing of Call of Duty: Advanced Warfare. “We are dedicating our entire briefing to games,” he added to strong applause.

    So began a press conference where barely a sliver of time was dedicated to even acknowledging the existence of the Kinect camera and motion sensor, which was unbundled from the Xbox One last month, or any of Microsoft’s original television programming or entertainment features.

    Let me be very clear: this is a perfectly rational response by Microsoft, and a strategic disaster, all at the same time. The reason the Xbox existed in the first place was to give Microsoft a toe-hold in the living room. Over time the expectation was that the entertainment aspects of the console would make it appeal to not just gamers, but normal consumers as well. Instead, Microsoft has (understandably) been captured by gamers, and the only purpose their original strategic intent has served has been to make them less competitive with said gamers (the Xbox was more expensive and made different processing choices in order to accommodate the Kinect-centric entertainment focus). Meanwhile, no rational non-gamer will buy an Xbox One for $399 (down from $499) in the face of sub-$100 alternatives like the Apple TV, Kindle Fire TV, or Roku.

    Here’s the thing though: I’m not sure that Microsoft’s strategy was wrong, broadly speaking. As I wrote in Black Box Strategy, the TV is worth fighting for:

    As I’ve written multiple times, the scarcest resource for consumer tech companies, especially ad-supported ones, is user attention. There are only so many minutes in the day, and their consumption is zero-sum: a moment spent doing activity A is not spent doing activity B, and then that moment is gone.

    Meanwhile, TV continues to monopolize a significant amount of that user attention. Although digital products have overtaken the amount of time spent on TV, primarily due to the accretive time spent on smartphones, the absolute time spent on TV has remained stubbornly persistent at about four-and-a-half hours per day per U.S. adult (source).

    What Microsoft messed up is the exact thing they seem to always mess up: timing.

    Back in 2001, when Xbox launched, console hardware was still not “good enough” for most gamers. The Xbox was big and bulky, with remarkably dated graphics that don’t even stand up to modern smartphones. It made sense to follow the standard console pattern: produce hardware just at the edge of possible, sell it at a loss, and make up the difference through game licensing and bending the cost curve. However, once Microsoft was committed to that pattern, they were stuck with it. And so, it’s 13 years later, and Microsoft (along with Sony and Nintendo) has only launched three consoles.

    Compare that to Apple (or Samsung or Nokia), which has launched 8 new iPhones in the last 7 years (plus 7 different iPads in the last 4). True, for most of that time all of those phones and iPads have cost more than a console, and even if Apple has fewer fragmentation issues, there are still tremendous efficiencies to be gained from developers writing for one specific platform.

    However, the developer environment has changed as well. In particular, the move to HD graphics increased the cost of development significantly, leading most developers to focus on cross-platform engines that let them easily develop games that ran on Xbox, Playstation and PC. These high costs also began to squeeze out smaller developers and increased the focus on blockbusters – often sequels and formula-type games – that were sure to earn back their investment. Prices of games increased as well – $60 is standard for the current generation – further turning off average consumers.

    The net result is that traditional consoles are about as far removed from average consumers as they could be. There is clearly a core gamer market, and Sony and Microsoft are fighting ferociously for it, but no one is growing the pie. I think there is an opening.

    Imagine a new TV product, with two models:

    • $99 with a full set of entertainment options, but no gaming
    • $179 with a full set of entertainment options, plus gaming

    This TV product would be on an annual release cycle; average consumers would only upgrade every few years (the core OS and most games would support 3 generations), while more serious gamers would upgrade every year providing a nice bit of recurring revenue (this would be much more feasible today, as developers have long since developed the expertise to make games available across multiple architectures). Video games would be delivered not as packaged goods, but rather through an app store. Prices would likely be significantly lower than traditional consoles, but the aforementioned serious gamers would support a higher-price tier for AAA titles and ambitious indies. This console would also integrate seamlessly with the devices carried by many of its potential customers: video and photos could easily be transferred wirelessly, and you could even share screens or use the TV for video calling.

    Unlike the current console model, these TV boxes would be sold at a profit. I’ll stop the charade – I’m clearly talking about Apple – so that I can get specific about costs. According to iSuppli, a 16GB iPad Air has a bill of materials of $269. However, that cost includes the following components that would be unnecessary in an Apple TV:

    • $90 Display
    • $43 Touch screen
    • $ 9 Cameras
    • $10 User interface and sensors
    • $ 7 Power management
    • $19 Battery
    • $42 Mechanical/Electro-Mechanical

    That leaves a mere $49 in costs! I do think a console needs a controller (in fact, that’s largely the point),2 so let’s add $15, and mechanical/electro-mechanical is obviously not zero (but it’s likely less given there is less need for miniaturization); let’s put that at $25, plus $10 for a power supply. That comes out to $99; add in another $20 for IP, and you’re still talking about a box with a 33% margin. It’s a new growth driver for a company that could use one; more importantly, it increases the value of iPhones and iPads.
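    The cost arithmetic can be reconstructed directly. The component figures are from the iSuppli teardown as quoted above, while the controller, casing, power supply, and IP numbers are the article’s own estimates; I’ve assumed the 33% margin is measured against the $179 gaming model:

    ```python
    # Reconstructing the bill-of-materials math for a hypothetical gaming Apple TV.
    ipad_air_bom = 269
    removed_components = {
        "display": 90, "touch screen": 43, "cameras": 9,
        "UI and sensors": 10, "power management": 7,
        "battery": 19, "mechanical": 42,
    }

    base_cost = ipad_air_bom - sum(removed_components.values())
    print(base_cost)   # 49 -> "a mere $49 in costs"

    cost = base_cost + 15 + 25 + 10   # controller, casing, power supply
    print(cost)        # 99

    cost += 20                        # IP licensing
    price = 179                       # assumed: the gaming-model price above
    margin = (price - cost) / price
    print(round(margin, 2))           # 0.34 -> roughly the 33% margin cited
    ```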

    I’ve gone back and forth on the Apple TV as a console; there is certainly a strategic incentive to own the TV, and the way to do that is by doing the jobs TV does. Still, though, the timing needs to be right, and now the tech is there, the APIs are there, and more importantly, I believe the market is there:

    There is a big market in meeting the needs of lower-end consumers (of which there are many more)

    Meanwhile, Sony3 and Microsoft will be stuck with increasingly old consoles that are too expensive and, sooner rather than later, less capable than the continually upgraded Apple TV. At that point they will lose the high-end gamers as well, and the textbook disruption will be complete.

    Update: I wrote a big follow-up to this article in Friday’s Daily Update (members-only), including talk about Nintendo, Playstation TV, the archaic console business model, pricing, and whether or not this will actually happen. Check it out or sign up for a membership.

    Update 2: I have changed my mind about the inevitable disruption of consoles. Read Gaming and Good Enough.


    1. PC Prices from 1985 to 1995 are from this paper; prices from 1995 to 2005 were found by searching CNet’s archives; prices from 2005 to present from information provided to me by Charles Arthur. I’d also like to thank @typistX for helping me search for data 

    2. I’ve gone back-and-forth as well as to whether an iPhone will be the default controller; I think that hurts the value prop, so I’m leaning towards no 

    3. To be fair, Sony has released the Playstation TV for $99; although it has some flaws, I think it’s a very smart move that shows some impressive strategic agility. However, I question how much traction they will get: both their traditional retail channels and Sony itself are incentivized to push the PS4, not the Playstation TV 


  • Why Uber is Worth $18.2 Billion

    Christopher Mims’ piece Uber’s $18.2B Valuation Is a Head Scratcher is getting a lot of play, and deservedly so: it’s a good encapsulation of many people’s objections to Uber’s recent round that valued the company at $18.2 billion.

    However, I think Mims is wrong. I would argue he:

    • Dramatically underestimates Uber’s potential market
    • Undervalues Uber’s ability to differentiate as a consumer product
    • Mistakes venture capital for public markets

    I’ll take on each one by one.

    Uber Is More Than a Taxi Replacement

    Last fall I proposed a new term in the business lexicon: Obsoletive

    The context of the article was smartphones, and the point was to distinguish “obsoletive” from “disruptive”:

    Of course the easy answer is to say “The iPhone disrupted cell phones.” Except, at least to my reading, that kind of misses the point of what disruption is…

    Disruption is low-end; a disruptive product is worse than the incumbent technology on the vectors that the incumbent’s customers care about. But, it’s cheaper, and better on other vectors that different customers care about. And, eventually, as the new technology improves, it takes the incumbent’s market. This is not what happened in cell phones.

    In 2006, the Nokia 1600 was the top-selling phone in the world, and the BlackBerry Pearl the best-selling smartphone. Both were only a year away from their doom, but that doom was not a cheaper, less-capable product, but in fact the exact opposite: a far more powerful, and fantastically more expensive product called the iPhone.

    The jobs done by Nokia and BlackBerry were reduced to apps on the iPhone

    The problem for Nokia and BlackBerry was that their specialties – calling, messaging, and email – were simply apps: one function on a general-purpose computer. A dedicated device that only did calls, or messages, or email, was simply obsolete.

    Examples of obsoletive technologies include PCs, the Internet, search, and smartphones. All replaced multiple single-use tools with one (usually more expensive) general-purpose solution; more pertinently to this article, obsoletive technologies are huge opportunities, much greater in fact than disruptive ones (Read the original “Obsoletive” article here).

    This, then, is the first thing that Mims gets wrong in his criticism of Uber. From the conclusion:

    Even the most aggressive estimates of Uber’s value — let’s assume the company captures 50% of the world taxi market in 5 years — mean the company would still be worth less than $18.2 billion…It’s quite possible that investors have wildly overestimated the ultimate size of Uber’s potential revenue.

    This is a relevant comparison today, but to my mind sells short Uber’s potential: the addition of technology to people driving cars does not make Uber a taxi competitor; rather, it makes Uber a taxi obsoleter. 1 Moving passengers around for money is but one small job that could be done by a nearly infinitely scalable logistics company, just as typing documents is one small job for a PC, or making phone calls one small job for a smartphone. In other words, to suggest that Uber’s ceiling is the size of the taxi industry is no different than suggesting typewriters are the ceiling for PCs.2 If you’re considering upside it’s far better to look at the market caps of UPS ($95 billion), Fedex ($42 billion), or even Toyota ($182 billion).3

    Low Barriers to Entry Do Not Mean the End of Differentiation

    This is Mims’ primary objection. From the article:

    Uber’s larger vision, according to CEO Travis Kalanick, is to disrupt transportation and “make car ownership a thing of the past.”

    That’s a worthy mission, but the wrinkle is that Uber is entering what is essentially a frictionless market (for both drivers and riders) in which its services are a commodity. The company’s phenomenal growth so far (we don’t know the actual numbers, but doubling in revenue every 6 months is what Kalanick claims) has been built on the back of low-hanging fruit — expansion into new cities, particularly where taxi availability is low — and levels of dissatisfaction among taxi drivers that may be temporal. (When I asked drivers who had a particular loyalty to Lyft why they liked the company, they said they felt it treated them better in general.)

    In both respects, Uber’s growth is reminiscent of Groupon, and we know what happened to them.

    So what did happen to Groupon? Mims’ contention, as far as I can tell, is that Groupon lost its value because of competitors. In fact, Groupon’s competitors have largely disappeared! (LivingSocial exists, but barely; it’s a surprise it’s even a going concern at this point)

    This was, in fact, what I predicted long before Groupon’s IPO: that while Daily Deals would be commoditized, there would always be an advantage that would accrue to the market leader based on brand and consumers’ limited tolerance for managing multiple options (honestly, how many Daily Deals emails are too many?). After all, there are entire industries – consumer packaged goods, especially – built on the idea that, in the consumer market, commodities can be sustainably differentiated by brand, channel, distribution, etc. (Be right back – I’m going to snack on some Ritz crackers). Similarly, while the truly cost conscious may manage multiple ride-sharing apps, most will default to one, and in that case, the market and brand leader has a clear advantage (just like every branded item in your local grocery store).

    The problem with Groupon was the entire premise of their business; in short, daily deals were a terrible deal for small businesses, which meant the cost of getting more daily deals eventually became prohibitive (Groupon’s sales force costs were through the roof). In this respect, Uber stands in stark contrast: drivers are getting a great deal with Uber (and Lyft and all the other competitors). In other words, I believe Groupon lost most of its valuation because it had a crappy value proposition for its core constituency, not because it faced too much competition. If I’m right, and Uber v Lyft plays out the same way as Groupon v LivingSocial, then only Uber will be left standing (a la Groupon), but standing on top of a much healthier industry (unlike Groupon).

    Venture Round Valuations Do Not Equal Public Market Valuations

    At this point Mims could justifiably argue that Uber as logistical network or differentiated brand are simply pie-in-the-sky fantasies that pale in reality to today’s market. And that would be a fair thing to say if Uber were a public company and $18.2 billion were their market cap.

    But, in fact, $18.2 billion is a valuation used in a venture round, and that has entirely different implications. Venture capitalists are not buying stock per se, but rather mis-priced optionality. In Uber’s case, the $18.2 billion valuation is the result of a $1.2 billion investment; for the investors ponying up the cash, their downside is capped at $1.2 billion (and likely much less, given that Uber’s valuation will never go to zero, and that investors get preferential treatment in a below-valuation exit). The upside, though, is by definition unbounded, because Uber’s valuation has no theoretical ceiling.

    The truth is that whenever Uber goes public, they are not an $18 billion company. They are either a $4 billion company, like Groupon, or a $180 billion (or more) company befitting their status as an obsoletor. While I think the latter is more likely, just for the sake of argument I’ll say the downside position has a 90% chance, and the upside position a 10% chance. That means this investment round has an expected return as follows:

    10% * ($180b * $1.2b/$18.2b) + 90% * ($4b * $1.2b/$18.2b) = $1.4b return

    Again, in an extremely pessimistic scenario in which Uber has only a 10% chance of realizing its potential, investors in this latest round will still make their money back. There isn’t that much downside, and the upside is enormous. Opportunities like this investment round are the entire premise of venture capital, and the valuations that result just aren’t that analogous to public market valuations, at least in the short term.
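The expected-value arithmetic is worth spelling out; here is a minimal sketch in Python using the two illustrative outcomes and probabilities above:

```python
# Expected return on a $1.2b investment at an $18.2b post-money valuation,
# using the article's two illustrative outcomes (all figures in $ billions).
investment = 1.2
valuation = 18.2
stake = investment / valuation  # the round buys ~6.6% of the company

outcomes = [
    (0.10, 180.0),  # 10% chance: Uber as obsoletor, worth ~$180b
    (0.90, 4.0),    # 90% chance: a Groupon-like outcome, worth ~$4b
]

expected = sum(p * value * stake for p, value in outcomes)
print(round(expected, 1))  # 1.4
```

Even with the odds stacked 9-to-1 against the bull case, the expected return roughly matches the amount invested, which is the point: the round is priced for optionality, not for today’s taxi market.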


    I don’t particularly like picking on individual writers or articles, and there’s a non-zero chance that Mims ends up being right – I still have friends giving me grief for being bullish on Groupon. However, even there I think my reasoning was sound: you can differentiate in consumer markets with low barriers to entry. Add that to the obsoletive nature of Uber’s product, along with an understanding of how venture valuations work, and $18.2 billion ends up looking downright reasonable.


    1. The head of a San Francisco cab company believes the entire industry will be gone from the city in 18 months 

    2. Interestingly, the inflation-adjusted value of the typewriter industry in 1975 was about $22 billion, the same as today’s taxi market 

    3. To be fair, Mims acknowledges this viewpoint; the fact he dismisses it matters for point number three 


  • What Steve Jobs Wouldn’t Have Done

    Between a feature-by-feature review (members only) and an analysis of strategic underpinnings, I’ve written nearly three thousand words about Apple’s WWDC announcements. Still, though, it feels like I haven’t written about what is perhaps the most important takeaway.

    It’s a takeaway I’ve resisted, even as writer after blogger after Twitterer has said the same thing: Apple is different, things have changed, they are opening up. My instinctual reaction has been to assume that it is all hyperbole, that nothing has changed, everything is expected. And yet, I have to admit the conclusion from Josh Topolsky’s piece Meet the New Apple rings true:

    But the big story — and the big picture — is that Apple seems to have come out of deep freeze. It feels light, like it’s moving forward. Like the cobwebs have brushed aside, and things are going to get fun again. Everything we saw at WWDC’s keynote points to a very interesting next few months for Apple — a period that will undoubtedly come into deep focus around the fall, when the company tends to roll out its major hardware updates. But unlike previous events, which have felt painfully predictable and iterative in the past couple of years, the next move Apple makes should be surprising. If the software and platform work that we saw at the keynote on Monday is any indication, the kind of apps and hardware that follow it aren’t just going to be business as usual.

    Indeed, there was an undeniable lightness and confidence in both the content and the style of Apple’s presentation. What’s more interesting is to consider why. Topolsky chalks it up to a post-Steve Jobs hangover:

    In recent years — and let’s be honest, probably since just after Steve Jobs’ death in 2011 — there has been a sense of hesitation, of standoffishness, and maybe even a little bit of fear in the tone of Apple events. That tone has carried over to the company’s approach to the outside world, and has left a lot of people wondering just whether there’s been a plan at all. You could feel a palpable sense of Apple being closed off, in a huddle, trying to figure out what kind of company it wants to be (and can be) in a post-Jobs world. Because whether you agreed with his style, decisions, or philosophies, it’s impossible to deny that Jobs was the voice of Apple and the holder of the keys to the company roadmap.

    I don’t think this is quite right. Topolsky is right about the fear, but wrong about the timing. What is critical to understand about Steve Jobs’ Apple was how much it was rooted in fear. Not fear of Jobs, but rather, the abject terror of the company ever finding itself in a similar situation to the one Jobs stepped into in 1997. A company bankrupt technically, and on the verge of being bankrupt financially, deserted by the partner it had made into a powerhouse (Adobe), and forced to accept a loan from its oldest and most bitter rival Microsoft. Jobs, and all of those closest to him, swore never again.

    And so, Apple hoarded cash like a depression-era grandma; every new Apple product was locked down to the fullest extent possible, with limitations removed grudgingly at best. This absolutely extended to developers: not only were apps originally banned from the iPhone, and later on subject to seemingly arbitrary limitations and restrictions, but even today it’s unclear if non-game apps can be the foundation of sustainable businesses because of Apple’s restrictions.


    Last year, in an interview with Eric Jackson at Forbes I detailed my long-term worry for Apple:

    The values Jobs insisted Apple embody – quality, simplicity, the “feel” of something – are actually not that difficult to understand. There is no magic formula, just the incredibly difficult work of making choices and executing in a way that ensures those values come through in the products…

    In other words, I think Apple is fine. My long-term worry is about what happens when all of the old guard leave, and those in charge have only ever known success. But that’s still a ways off.

    In fact, it wasn’t a ways off at all. Just compare the executive pages from 2010 and 2014 – nearly 60% of the 2010 team is gone.

    Apple's executive team in 2010 versus 2014

    Jobs, of course, is the biggest absence, but also gone is his protégé Scott Forstall. Both were ardent proponents of uncompromising Apple control, and, I think it’s safe to say, both would have been concerned with both the tenor and content of this week’s announcements. It was Forstall, remember, who dominated the most depressing post-Jobs keynote at WWDC 2012 (when iOS 6 was announced). That keynote is memorable largely for the introduction of Apple Maps, the moment when Apple’s insistence on control crossed the line from justifiable caution to injuriousness to the end-user experience.

    Interestingly, Craig Federighi, whose enthusiastic joviality is a stark contrast to Forstall’s certitude, was also at Apple in 1997, having been a part of the NeXT acquisition. Crucially, though, he left in 1999 for eight years, meaning he witnessed the first part of Apple’s incredible run from afar. And perhaps that perspective helped him to move on in a way Forstall and others in the old guard could not. Again, by “moving-on” I don’t mean moving-on from Jobs’ death, but rather moving-on from the darkest parts of Apple’s past. Apple is not about to go bankrupt, they hold the power in every partnership they enter, and developers around the world desperately want to work with them. It is not 1997, and to make decisions with a 1997 mindset simply doesn’t make sense.

    In short, perhaps my fears for Apple’s future were precisely backwards: Apple didn’t need to always remember 1997; in fact, they needed to forget. And so they have.


  • Growing Apple at WWDC

    In the end, as with many of Apple’s strategies, much of what transpired in Monday’s WWDC keynote was telegraphed many months ago, at least from a strategic perspective. Consider the thinking behind iOS 7. As Jony Ive told USA Today:

    When we sat down last November (to work on iOS 7), we understood that people had already become comfortable with touching glass, they didn’t need physical buttons, they understood the benefits…So there was an incredible liberty in not having to reference the physical world so literally. We were trying to create an environment that was less specific. It got design out of the way.

    Implicit in that statement – “people had already become comfortable with touching glass” – is the acknowledgment that smartphones have reached the saturation point, especially in the premium segment that Apple has chosen to focus on. However, as I wrote in Why Apple is Buying Beats, this is problematic because Apple needs to grow.

    There are two ways to do so: steal share from competitors and sell more to existing users. While the sheer number of announcements at Monday’s WWDC keynote was almost overwhelming, much of what was announced slotted neatly into one of these two strategies.

    Steal Share: Winning Android Customers

    Steal share is just what it sounds like: get more of your competitors’ customers to switch to you than you lose to your competitor. The first part of that equation is largely a marketing effort. To that end Apple spent several minutes in the keynote talking about Android security problems and how few phones are upgraded, and also emphasized that Apple was very concerned about privacy at several different points in the presentation. It will be very interesting to see if Apple starts to incorporate this messaging into more of their outbound marketing.

    The second way Apple has gone about stealing share is through increasing iPhone distribution. Many premium Android phones have been sold on networks that did not offer the iPhone; beyond the big ones like China Mobile and NTT DoCoMo, there are scores of smaller carriers like US Cellular that did not offer the iPhone until very recently. Simply being available is half the battle, and it’s one Apple is fighting on ever more carriers around the world, increasing their carrier base by 15% in the last 8 months alone.

    Finally, there is a product component to winning Android customers: addressing specific reasons why someone might prefer an Android phone. One such reason is the availability of 3rd-party keyboards. Well, that’s only an advantage for the next three months or so. Much more importantly, many prefer larger screens, which have been Android-only; I strongly suspect that advantage is disappearing in three months as well.

    Steal Share: Retain iPhone Customers

    Many of the new technologies Apple announced Monday, especially HealthKit and HomeKit, have the potential to serve as very powerful iOS lock-ins. It’s one thing to leave behind a few dollars’ worth of apps in order to get a larger screen or a lower price; it’s quite another to abandon expensive devices and home appliances that are designed to work with only iOS.

    Similar lock-in applies to iCloud Storage and everything else related to Apple’s cloud (this applies to developers using CloudKit, too); once customers decide to buy an iPhone, Apple wants to ensure that customer only ever considers an iPhone from that point forward.

    Upsell Existing Users: Sell Macs

    The iPhone’s “bad” news – the fact that only about 20% of the worldwide smartphone market is premium – is great news for Apple’s other platforms, especially the Mac. Ever since the days of the iPod Apple has talked about the “halo” effect – the idea that the experience of owning one Apple product will make you more likely to purchase other Apple products in other categories.

    Many of Monday’s announcements, however, take this concept much further by making it very explicit that your iPhone experience cannot be fully realized unless you also own a Mac (and, to a lesser extent, an iPad). Chief among these were the suite of features that made up Continuity. Answering phone calls on your computer, handing off documents or web pages, seamless SMS across devices, ad hoc hotspots – each of these is obviously useful to end users, yet only possible with a Mac. I would not be surprised to see Apple promote these features heavily in its marketing, with the intent of converting some portion of its non-Mac-owning iPhone base – which is huge – into Mac owners (And yes, it is rather incredible to think about a 30-year-old product being a growth story, but it absolutely is the case).

    Upsell Existing Users: Sell More iPhones

    The other way to think about upselling current iPhone customers is incentivizing them to sell iPhones to their friends and family. Just as many of Monday’s announcements only apply if you also have a Mac, others only apply if those you communicate with regularly also have an iPhone.

    The biggest example here is the raft of updates to Messages. The “blue” versus “green” bubble color has long served as a pro-iPhone status symbol, even though the actual experience of chatting via iMessage versus SMS was identical. Now, though, there are a great many additional functions that are only possible if both users have iPhones. To be sure, almost all of the additional Messaging features Apple announced are also available in 3rd-party messaging apps, but the most critical feature of any messaging service is who else is using it. Apple is betting some percentage of users would rather convince their friends to get an iPhone than understand and convince their network to make a wholesale shift to an alternate messaging service.

    Family Sharing also fits in with this story. Shared calendars, photo albums, etc. are significant pain points, and if Apple is able to successfully enable these features out of the box it is absolutely a reason for parents to not only buy iPhones for themselves, but for their children as well.


    The other important growth story for iOS is the enterprise. Apple went beyond its usual boilerplate about 98% of the Fortune 500 using iOS and actually spent meaningful time detailing new features that had been developed specifically for enterprises, including new security features, enhanced Mail and Calendar, and better device management. Equally meaningful are app extensions, which will not only make power users happy, but also better enable corporations to create and meaningfully use proprietary line-of-business applications.

    All-in-all, it’s hard to imagine how Monday’s announcements could have been more impressive. It’s not just that Apple delivered an incredible number of new features and genuine surprises, but also the strategic clarity with which they did so. The big decisions have been made: Apple (and thus the iPhone) is a premium product company that won’t go downmarket; what is left is to execute, and they are doing just that.


  • WWDC Expectations

    Update: bumping to the top of the page

    Beyond the usual updates of iOS and OS X, there are two significant rumors about what Apple might unveil next week at WWDC.

    The first is Healthbook, as detailed by Mark Gurman here and here. The second is Apple’s plan for a smart home, first reported by The Financial Times. I’m considering them together because at a high level they are actually quite similar: both will likely rely primarily on 3rd-party hardware that is integrated at a software level on iOS. It’s a strategy that makes a lot of sense both at a theoretical level and at a strategic level for Apple specifically, which leads me to believe that both reports are mostly correct.

    Right now the health-monitoring and smart home markets are in their infancy; as Clay Christensen first laid out in The Innovator’s Solution, in immature markets nothing is “good enough,” meaning functionality is prioritized over price, giving the advantage to integrated solutions (in mature markets, everything is good enough, meaning price is prioritized over functionality; I discussed the extent to which this applies to the consumer market here).

    What Apple is rumored to be proposing for both health monitoring and smart homes is, unsurprisingly, an integrated solution: a series of devices and appliances that rely on one central interface for all of their functionality and interaction. It’s a similar model to the old digital hub, where your digital devices were spokes surrounding your Mac; in the future Apple allegedly envisions your home and personal accessories to be spokes around your iPhone:

    The iPhone as a new kind of digital hub

    (See also: Digital Hub 2.0)

    Contrast this to Android@Home and Android Wear. Just as Android was developed from the beginning to be a standalone device that relied on the cloud, not a computer, Android@Home and Android Wear are at their heart meant to enable standalone devices that connect with all types of other devices through the cloud.

    The cloud as a hub connecting many disparate pieces

    This is a modular world, and while it is ideal for a mature market (as noted above), in immature markets it leads to a landscape like this:

    “Making Sense of the Internet of Things” – TechCrunch

    Lots of devices that use Android@Home and Android Wear may be sold, but the majority will be like cheap Android phones: smart in name, but not in actual usage. Only over time, as devices and appliances become more capable and as standards develop, will functionality become “good enough” such that the more integrated solution is seriously challenged.

    All of this is good news for Apple. As I wrote in Why Apple is Buying Beats:

    I believe the iPhone will be Apple’s chief revenue driver for at least the next five years. Something like the iWatch may be interesting, but it’s unrealistic to expect it or any other product category to drive Apple’s growth in a meaningful way, at least in the short term. So Apple needs lots of small revenue drivers in place of one big one. And that means accessories.

    To that end, there are two rumored products in Apple’s pipeline that fit this vision: the iWatch (to go with Healthbook) and an enhanced AppleTV (to go with the smart home). While I don’t expect iWatch news next week, I am hopeful for news about the AppleTV. Regardless, I expect both to be Apple’s next priorities on the hardware front.

    Perhaps more interesting is to consider how Apple might go about integrating 3rd-party devices and appliances into their vision. There will certainly be some sort of licensing program similar to the “Made for iPod” program, and the price of inclusion could be steep, particularly for big ticket items like home appliances or specialized medical devices (when “Made for iPod” launched Apple reportedly charged the greater of $10 or 10% of the retail price, although this dropped over time). This could absolutely be a material revenue stream sooner rather than later.

    More significantly, though, each item purchased that is “Made for iPhone” further locks a consumer into the Apple ecosystem; Apple will likely long struggle to generate significant growth from such a massive base, but making it even harder for people to leave would cheer investors who tend to discount AAPL based on the perceived fragility of their position.


  • Publishers’ Deal with the Devil

    Ay, we must die an everlasting death.
    What doctrine call you this, Che sera, sera,
    What will be, shall be? Divinity, adieu!

    – The Tragical History of Doctor Faustus, Christopher Marlowe

    To evoke Faust as allegory for the ongoing dispute between Amazon and book publishers is appropriate on two levels, the first being the nature of the original story.

    Faust was the protagonist of a German legend, who, dissatisfied with his life as a scholar, sold his soul to the devil in exchange for infinite knowledge and the full array of worldly pleasures. Said legend has been appropriated by multiple authors for variations on the same theme, including Christopher Marlowe’s The Tragical History of Doctor Faustus, Johann Wolfgang von Goethe’s Faust, and a host of other plays, operas, books, and symphonies. Indeed, most of you reading this have likely uttered the phrase “Make a deal with the devil,” and so have adopted the original idea and made it your own, without paying a cent to anyone.

    Ideas have always been free, but their closest cousin, words, have long been a bit more problematic. The publishers would have you believe that their words, written by authors of course, but blessed by them, are worth a premium, and certainly ought not be shared freely. And, for centuries, that was mostly true. Books – and newspapers and magazines, for that matter – were sold for a price.

    The problem, though, as newspapers and magazines have long since discovered to their peril, is that no one was ever paying for the words. Rather, it was the difficulty in distributing words that demanded a premium, whether that be the paper, the printing, the shipping, or the distributing. With the Internet, each of these proved unnecessary, leaving only the writing, editing, and publishing, and the market has dictated exactly what those are worth, all things being equal: $0.

    The issue is that writing, editing, and publishing are all fixed costs; they are accrued before an article or book is published, and increasing the distribution of said article or book is, relative to these costs, completely free. The costs the Internet obviated, on the other hand, such as paper, ink, shipping, and retail space, were all variable costs; to create one additional book (or newspaper or magazine) required money. To put it another way, before the Internet, free was not an option, and once customers were already paying something, it was a whole lot easier to get them to pay just a little bit more. And, with that little bit more, publishers could cover their fixed costs, and perhaps even turn a tidy profit.

    On the Internet, though, words are much more like the mythical story of Faust, available to anyone and everyone for zero marginal cost. Each of you reading this article is creating a new version of this site on your computer, and it’s not costing anyone a cent.1 Unfortunately for those accruing those fixed costs, it’s much more difficult to convince customers to move from $0 to even $0.01 than it is to go move from $1 to $2 (or $10 to $20).

    This reality, unsurprisingly, terrifies the publishers, which is where we return to Faust: just as the doctor made a deal with the devil, so have publishers, but in this case the devil is Amazon, and Mephistophilis, the devil’s agent, is DRM.

    Unlike newspapers, which quickly placed all their content on the Internet in the 90s, massively increasing readership but ultimately hollowing out their revenue base, publishers approached the digital era much more gingerly. It’s not that the idea of ebooks was unknown – the first ebook reader launched in 1998 – but rather that publishers, having seen what happened to music with the release of Napster, were rightly terrified of a world in which books were accessible to anyone, at any time, for free. Over the next several years different publishers dabbled with different ebook readers, but it wasn’t until Amazon, with its longstanding relationship with publishers, launched the Kindle in 2007 that the publishers fully got on board, and key to the publishers’ embrace of the Kindle was its proprietary DRM. Over time the publishers would also launch their titles on other companies’ ebook readers, such as the Nook and iBooks, but always with DRM.

    The problem with DRM, as Nook owners now know all too well, is that it ties your books to a single company. If you start buying Kindle books, you will always buy Kindle books, because your books will only ever work on a Kindle. The result is that anyone who has bought Kindle books is now more loyal to Amazon than they are to any of the publishers. Not that they were ever loyal to publishers, of course; said loyalty is reserved for specific authors. And that right there is the root of the publishers’ Faustian bargain: unloved by consumers, yet unwilling to give up their position as middleperson, publishers traded away infinite distribution and the truly free exchange of ideas for the yoke of another, infinitely more powerful middleperson – Amazon.

    And now, Amazon is demanding its payment. While the specifics are unclear, Amazon is demanding that publishers Hachette and Bonnier give up more control and money when it comes to ebooks, and to help them remember their end of the deal, Amazon is “forgetting” to keep many of their physical books in stock.

    Let me be perfectly clear here: I think what Amazon is doing is ugly and I don’t like it. And, were this 1985, I would absolutely be raising antitrust alarms around Amazon’s monopsonistic position in printed books (i.e. their position as by far the largest buyer of books gives them undue power). However, it’s not 1985; it’s 2014, and a huge percentage of the population has at least one device capable of reading ebooks. In fact, publishers could break the back of the Amazon monopsony today were they to start selling all of their books without DRM. Can’t find the book you want on Amazon? How about you simply visit the publisher’s site and buy it there. Or, as is more likely, visit the site of your favorite author.

    Ah, but that’s the rub. The publishers need Amazon because they need the Kindle’s DRM, because they know without that artificial friction their contribution to a book’s fixed costs would become untenable. As George Packer recounted in his anti-Amazon article Cheap Words:

    Amazon executives considered publishing people “antediluvian losers with rotary phones and inventory systems designed in 1968 and warehouses full of crap.” Publishers kept no data on customers, making their bets on books a matter of instinct rather than metrics. They were full of inefficiencies, starting with overpriced Manhattan offices.

    I’ve worked with publishers, and here’s the thing: Amazon is right. It’s not that publishers don’t add value,2 but rather that their economics are wholly incompatible with the reality of the Internet. If publishers are to have a future free of Amazon, that future will be as a service with upside directly tied to a book’s success. Specifically:

    • Authors will hire publishers from a competitive marketplace based on reputation, quality of service, and price
    • Fees will likely be some sort of fixed price up-front, with a percentage of revenues
    • Books will be published without DRM and marketed primarily by the authors themselves, likely at lower price points but with significant upside for breakthrough works

    Some sort of DRM remains an option in this new world, but it must be controlled by the author (or, if he chooses, his publisher) directly. DRM is artificial scarcity, and whoever controls it controls the entire market (I myself have chosen to not make all my content here on Stratechery available to everyone, but I control the means by which it is distributed). The problem with publishers is that, due to their own incompetence and (understandable) unwillingness to change, they gave the keys to the castle to Amazon, and it’s no surprise they are now paying the price; the devil always gets his due.


    1. OK, fine, the bandwidth and electricity cost something, but you know what I mean 

    2. As the author Charlie Stross notes:

      Forbes seem to think that Hachette is a producer and Amazon is a distributor. This isn’t quite true. I am a producer. From my perspective, Hachette is a value-added wholesale distributor: they supply editorial, production, packaging, marketing, accounting, and sales services and pay me a percentage of the revenue. (I could do this myself, and self-publish, but I don’t want to be a publisher, I want to be a writer: we have this thing called “the division of labour”, and it suits me quite well to out-source that side of the job to specialists at Hachette, or Penguin, or Macmillan.)

      Unfortunately for Stross, a division of labor neatly isolated from the market success of his work is one of those inefficiencies ruthlessly culled by the Internet; of course he thinks this is a loss to consumers, but that ignores the fact that most would-be authors never even had a chance.


  • It’s Time to Kill Surface

    “The question that needs to be asked and answered is why hardware.”

    To Satya Nadella’s credit, he provided not just the answer, but the question as well. And, looked at narrowly, there were good things seen – and not seen – at Microsoft’s Surface event. With Surface having clearly failed as a mass market device, it makes sense to narrow the line’s focus and more clearly define its use case. And, if that use case is productivity, then it also makes sense to kill Surface Mini. That Nadella allegedly did just that is a great sign. Now he just needs to kill the whole line.


    I actually think a useful way to understand the Surface problem is to think about the Xbox.

    It is by focusing on the console world that one arrives at the conclusion that the Xbox, all things considered, is a success and something Microsoft can hang its hat on. In fact, that’s exactly what nearly all Microsoft employees, from executives to rank-and-file, do whenever asked whether Microsoft is innovative. “Look at the Xbox! Look at Kinect!” is their refrain.

    The problem is that winning at consoles is a small goal, one wholly different from the reason Xbox was created in the first place. For many years now Microsoft has been focused on “Three Screens and a Cloud,” the idea that they as a platform provider ought to have a presence on your desk, in your pocket, and in your living room, all tied together by the cloud. While that specific formulation arrived somewhere around 2009, that vein of thinking was central to Xbox’s creation; the console aspects were meant to be a Trojan horse, giving people a reason to buy the Xbox (and not a Playstation); the more computer-type aspects would then be added over time until an Xbox was to living rooms what PCs were to every desk in every office and every home (running Microsoft software).

    It was this original goal that contributed to the current Xbox One disaster; Microsoft’s newest console is not only underpowered relative to the PS4, it’s also $100 more expensive (due to the mandatory Kinect), and launched with a terrible wave of publicity surrounding its always-on nature. The Kinect and connectedness were both included to help the Xbox One fulfill its goal of being the primary box in your entertainment system, controlling not just games but also live TV with your voice. Unfortunately, it doesn’t work that well for entertainment, even as it has hurt Microsoft’s ability to compete for console buyers. Thus, Microsoft has spent the last year walking back many of the console’s main features, including its DRM system and its connectedness, and, last week, began offering the console without the supposedly essential Kinect, making it $100 cheaper. Now it’s the same price as a PS4, but still less powerful and with the same dark cloud.

    Here’s the bigger problem though: even if the Xbox One worked perfectly as an entertainment center, it would still cost $499.1 For those not good at math, that’s $400 more than an AppleTV, and completely unapproachable for anyone who does not care about gaming. In short, it is not enough to consider how the Xbox is doing relative to consoles; the Xbox must be evaluated based on how it is aligning with and contributing to Microsoft’s overall strategy, and in that light, it is an unmitigated disaster.2


    So what about Surface?

    What the Xbox example illustrates is that it is not enough to consider whether or not Surface in isolation is a successful (i.e. profitable) product (although, like the Xbox for much of its existence, it’s not). Rather, we need to consider the overall goals for Surface. As best I can tell there are three:3

    1. Surface was the physical manifestation of Windows 8. Back when I was a category manager for the Windows 8 app store, trying to explain Windows 8 to developers,4 Surface was incredibly useful for explaining Microsoft’s vision of moving seamlessly between work and play with one device – and why you might want two operating systems on one device.5 Not that Surface was made for my personal benefit, of course; rather I believe it was intended to help sell Windows 8 to all of Microsoft’s stakeholders, including OEMs, developers, enterprises, and end customers.

    2. Microsoft did not believe their OEM partners were capable of competing with Apple. Certainly there are no public statements to this effect, but I can tell you that internally most of Microsoft’s OEM partners were viewed with disdain, being resolutely focused on the bottom line and unable or unwilling to make something of Apple-like quality. Correctly understanding that things like fit-and-finish were that much more important in a personal device like a tablet, Microsoft decided that if they wanted hardware done right, they had to do it themselves.

    3. By making their own tablet, Microsoft could reap absolute margins similar to what they enjoyed while providing software for PCs (although the margin percentage would be lower). Microsoft traditionally captured about $115 per PC (through Windows and Office licenses), but in the tablet category, $115 renders OEMs uncompetitive, especially compared to using Android. Instead, Microsoft could sell Surface for $499, with a 25% margin, which comes out to $125, about the same amount Microsoft is accustomed to earning on a device. Were this strategy successful, it would allow Microsoft to maintain its bottom line (and significantly increase its top line) even as PC sales declined in the face of tablets.
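    The arithmetic behind reason 3 is simple enough to check directly; a quick sketch, using only the figures cited above:

```python
# The Surface margin math, using the figures cited in the text.

per_pc_capture = 115.00  # roughly what Microsoft captured per PC (Windows + Office licenses)
surface_price = 499.00
surface_margin = 0.25    # 25% hardware margin

surface_capture = surface_price * surface_margin
print(f"${surface_capture:.2f} per Surface sold")  # $124.75, i.e. about the $125 cited
```

    The strategy only works, of course, if Surface sells in PC-like volumes, which is the crux of point 3 below.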

    These goals must have been important, because Surface came at a significant cost. There was the actual cost of hiring the right employees, the tooling, the factories, but more importantly, there was the cost of competing with Microsoft’s most important partners – the OEMs (and Intel, when it came to the ARM-powered Windows RT). Sure, said partners perhaps hadn’t invested as much in R&D as they could have, but part of it was because Microsoft (and Intel) had long since bled them dry; surely Surface would further demotivate them.

    So then, how has Surface fared?

    1. Windows 8 is a failure, rendering reason 1 moot. In fact, from what I understand the Windows group is laser-focused on simply making Windows acceptable to their enterprise base, which has been very vocal in its displeasure.

    2. Every version of Surface has been a high quality device, on the same general level as Apple. So I guess reason 2 is a win. The problem, though, is that Surface’s sales numbers show that device quality is not the primary sales driver for Microsoft customers. In other words, Surface is the tablet equivalent of the HTC One: it is high end hardware in a market where Apple has already taken the high end. Both Surface and One are thus stuck in the middle, appealing to no one.

    3. Contributing in a meaningful way to the bottom line entails selling at much greater numbers than Surface has to date. Remember, volume was always the goal; that’s why Microsoft made so many Surfaces that they had to eventually take that $900 million writedown on unsold inventory. Clearly this goal has failed as well.

    Meanwhile, Sony is leaving the OEM business; Dell is restructuring; HP can’t decide whether to sell or not; Acer is barely afloat; only Lenovo seems to be prospering (and now Surface Pro is aimed directly at their ThinkPad lineup). When you consider the original goals, none of which have been met, and the original dangers, all of which have come to pass, the only conclusion is that Surface is a failure.

    So then, the question must be asked, as Nadella did, “Why hardware?”

    We are not building hardware for hardware’s sake. We want to build experiences that bring together all the capabilities of our company from our cloud infrastructure to our application services to our hardware capability to build these mobile-first productivity experiences. That’s the mission.

    This is the greatest danger of forgetting your original goal: you start making up new ones that basically amount to “because we need it to exist.” The hardware capability that Nadella claims Surface leverages only exists because of the decision to make Surface. Nadella is basically saying Microsoft needs to make Surface because Microsoft makes Surface. With that sort of reasoning, you can continue on a wrong path forever, just like the Xbox.


    To be fair, from what I understand it was Nadella himself who killed Surface Mini, likely spelling the end of the road for Windows RT.6 Assuming this is true, Nadella should be applauded for making a tough decision based on the world as it is, not the world that Microsoft wishes it were.

    Moreover, Microsoft is doing just that when it comes to the cloud and application side of their business. It actually rather pains me to write something so negative, given the dramatic transformation Microsoft has undergone over the last few months.7 However, when it comes to PCs, Microsoft needs to focus on fixing Windows 8, and leave the devices up to its partners, especially Lenovo. Lenovo knows how to compete in mature markets,8 makes great hardware, and Microsoft should see them as their best partner, not a competitor (which, with the business-focused Surface, they necessarily are).

    It’s time to kill Surface.


    1. Believe it or not, until a couple of weeks ago you also had to pay for an Xbox Live Gold subscription to even use Netflix – which, of course, you also had to pay for

    2. There is no group more oblivious to this reality than the Xbox team. They have long seen themselves as too cool for the rest of the company, limiting access to their buildings and generally treating other divisions like crap 

    3. While I previously worked for Windows, I had no insight into Surface 

    4. A red flag to be sure 

    5. I know that’s not technically correct, but it’s how it is perceived, and fairly so 

    6. It’s actually kind of a bummer; Windows RT was a very clever operating system once you learned it (which, admittedly, was part of the problem), but it never had the app support to make it viable. I suspect that future small tablets will be based on Windows Phone 

    7. While Nadella is getting most of the credit, obviously much of this work had to have begun under Ballmer, who deserves credit for his graciousness 

    8. More on Lenovo tomorrow