Stratechery Plus Update

  • What I Got Wrong About Apple Watch

    While I stand by last week’s opinion that the Watch presentation was poor, I’ve somehow, at least in my little corner of the Internet, become the face of people who don’t believe in Apple Watch at all. The biggest problem with that view is that I’m actually a big believer in the category, having written favorably about watches and the potential for Apple specifically here, here, and here; I even tried to buy a Pebble!1 I’m tired of how the phone pulls me away from my family, and time and notifications seemed like more than enough justification for this watch wearer. I presumed the Apple Watch would be similar, but significantly better executed with superior industrial design, plus a few additional killer features that made you just have to have one. In fact, that’s exactly how I suggested that Tim Cook should have introduced the Watch.

    I must admit, though, even as I posted that article and recorded an episode of Exponent that was probably more critical of the Watch itself than I intended,2 there was a part of me that wondered if I were being Tony Fadell to Tim-Cook-and-company’s Scott Forstall. From a 2011 BusinessWeek profile of the then Senior Vice-President of iOS:

    Around 2005, Jobs faced a crucial decision. Should he give the task of developing the [iPhone’s] software to the team that built the iPod, which wanted to build a Linux-based system? Or should he entrust the project to the engineers who had revitalized the software foundation of the Macintosh? In other words, should he shrink the Mac, which would be an epic feat of engineering, or enlarge the iPod? Jobs preferred the former option, since he would then have a mobile operating system he could customize for the many gizmos then on Apple’s drawing board. Rather than pick an approach right away, however, Jobs pitted [Forstall and Fadell] against each other in a bake-off.

    Forstall, who was head of the OS X project, obviously won, leading to the creation of a device that Blackberry executives didn’t think was possible. As a former Blackberry employee recounted:

    RIM had a complete internal panic when Apple unveiled the iPhone in 2007, a former employee revealed this weekend. The BlackBerry maker is now known to have held multiple all-hands meetings on January 10 that year, a day after the iPhone was on stage, and to have made outlandish claims about its features. Apple was effectively accused of lying as it was supposedly impossible that a device could have such a large touchscreen but still get a usable lifespan away from a power outlet.

    The iPhone “couldn’t do what [Apple was] demonstrating without an insanely power hungry processor, it must have terrible battery life,” Shacknews poster Kentor heard from his former colleagues of the time. “Imagine their surprise [at RIM] when they disassembled an iPhone for the first time and found that the phone was a battery with a tiny logic board strapped to it.”

    For my part, I’ve certainly been operating under the assumption that the wrist is not yet ready for full blown computing, which is why I thought the “iPod” version of a Watch needed to come first. From a piece I wrote in March:

    Imagine a device that initially launches with limited functionality and is dependent on an iPhone (similar to the iPod, or the first iPhone). Perhaps it monitors fitness and health, and slowly, year-by-year, adds additional functionality. More importantly, assume that Moore’s Law continues, batteries make a leap forward, flexible displays improve, etc. Suddenly, instead of a phone that uses surrounding screens, like the iPhone does in the car and the living room, why might not our wrist project to a dumb screen (with a phone form-factor) in our pocket as well? Imagine all of our computing life, on our wrist, ready to project a context-appropriate UI to whichever screen is at hand. Moreover, by being with us, it’s a perfect wallet as well.

    To be clear, this is certainly years off…

    What, though, if it’s not? What if it is, once again, a “battery with a tiny logic board strapped to it”?

    The S1 computer-on-a-chip at the heart of Apple Watch

    And what if that logic board – which Apple calls the S1 – is even more ahead of the industry than last year’s couldn’t-possibly-have-existed 64-bit A7? What if Apple skipped the iPod stage of wearables and went straight to the iPhone stage?

    John Gruber captured this possibility in Apple Watch: Initial Thoughts and Observations:

    Apple Watch’s third-party integration is clearly deeper than just showing notifications from apps on your iPhone. And though it depends upon a tethered connection with your phone for Internet access, it’s far more functional while out of range of your phone than any smartwatch I’ve seen to date. It’s a full iOS computer. If it actually doesn’t do much more, or allow much more, than what they demonstrated on stage last week, I am indeed going to be deeply disappointed, and I’ll be concerned about the entire direction of the company as a whole. But I get the impression that they’ve only shown us the tip of the functional iceberg, simply because they wanted to reveal the hardware — particularly the digital crown — on their own terms. The software they can keep secret longer, because it doesn’t enter the hands of the Asian supply chain.3

    I still believe that Tim Cook missed an important opportunity to explain why the Watch existed, but, after an avalanche of tweets, emails, Gruber’s exceptionally insightful piece, and most of all, Apple’s incredible track record, I’m slowly coming around to the position that maybe, just maybe, I ought not be bullish on the Watch simply because I’m bullish on the category, but rather because it’s actually the exact product necessary to make the category succeed.

    One tweet I found particularly persuasive was this one:


    This makes the Pebble sound a lot like a smartphone circa 2006. The thing is, though, the iPhone was never targeted at 2006-era smartphone users: it was targeted at everyone, and that meant it had to destroy our expectations of what a smartphone was in order to build a new one that happened to look exactly like an iPhone. Similarly, to be the sort of tentpole product Cook promised the Watch would be, it must target more than current watch wearers: it must be a product so good that non-watch wearers will put something on their wrists, put up with nightly charging, spend hundreds or thousands of dollars every few years, and tolerate all the other sorts of behavior that no one thought any rational phone buyer would accept just eight years ago. In other words, it must swing for the fences, just like Apple seems to have done.

    Interestingly, I suspect this reading of the Apple Watch’s capabilities suggests that from Apple’s perspective the true new iPhone is the Plus. Numerous reviews have noted that the Plus is really more of a truly portable computer than it is a phone, the only tradeoff being its reduced pocketability. It is, in other words, the evolutionary iPad, but with guaranteed cellular connectivity and pocketable in a pinch. That leaves room for a device where portability is paramount, and computing only needs to be good enough given those constraints. It leaves room for an Apple Watch.

    One final note: if I am (now) correct, and Apple has created something that most observers – including myself – didn’t think was possible in 2015, well, then this really is a Tim Cook breakthrough. The idea of a watch as a full-blown computer is not novel, but to create the future five years early in three different editions with all kinds of unique bands – and a buying experience to match – is something only Apple and their once-in-a-lifetime operational genius of a CEO could do, if indeed that is what they have done.


    1. Unfortunately I was defeated by their refusal to accept a U.S. credit card for a non-U.S. shipment (a nice example of the tradeoff between security and user experience, I might note)  

    2. I am a passionate person, and that sometimes gets me in trouble on podcasts in particular 

    3. The Wall Street Journal had a piece today about how exactly those leaks happen 


  • Microsoft’s Good (and Potentially Great) Minecraft Acquisition

    It’s difficult to overstate what a big deal Minecraft is. It’s the third best-selling game of all time behind Tetris and Wii Sports, and unlike the latter especially, it is a remarkably sticky experience: the vast majority of customers (over 90 percent on PC, according to Microsoft) sign in every single month. Were Microsoft to change nothing, they claim, their $2.5 billion purchase of Minecraft maker Mojang would pay for itself in less than five years.1

    Minecraft, though, has the potential to make a lot more money; currently, Mojang only makes money off of players once: when they buy the game. All of those additional hours of play are essentially free. Contrast this to a game like the legendary World of Warcraft, which has made somewhere between $10 and $20 billion over its lifetime through a combination of up-front purchases and subscription fees,2 and you realize that Minecraft founder Notch may be gaining his sanity at the cost of a lot of potential earnings.3

    The acquisition, though, isn’t just a great financial decision; it’s a good strategic one as well, one that fits very nicely with Microsoft’s new vision as outlined by Satya Nadella:

    At our core, Microsoft is the productivity and platform company for the mobile-first and cloud-first world. We will reinvent productivity to empower every person and every organization on the planet to do more and achieve more.

    At first glance, this statement seems to reference products like Office and Azure, but it also works very well for Minecraft. Minecraft is more than just a game: it’s a community, with a huge cloud component, developers, and, at its very essence, it’s about making things. What could be more productive than that?4

    Moreover, like Office and Azure, Minecraft is truly cross-platform. It’s the best-selling paid-download game on both iOS and Android, and it also has a very popular PS3 version and a newly-released PS4 version, with a PS Vita port on the way. Xbox head Phil Spencer took care to note that this would remain the case:

    Minecraft adds diversity to our game portfolio and helps us reach new gamers across multiple platforms. Gaming is the top activity across devices and we see great potential to continue to grow the Minecraft community and nurture the franchise. That is why we plan to continue to make Minecraft available across platforms – including iOS, Android and PlayStation, in addition to Xbox and PC.

    Here’s the thing, though: how much better would this acquisition look if Microsoft didn’t own Xbox at all?

    • Microsoft would not need to reassure skittish gamers that the game would remain cross-platform (To be clear, making Minecraft an exclusive would be financially stupid. Sure, Microsoft made the first Xbox a success by buying Bungie and making Halo an exclusive, but that was for a tenth of the cost)
    • Microsoft would have a lot more latitude to capture more value from Minecraft, increasing the value of this purchase. Certainly any effort to make gamers pay more will be resisted, but when said efforts can be couched in “Microsoft is trying to help the Xbox” language it makes it that much more difficult to win the inevitable PR battle
    • Most importantly, Microsoft’s incentives would be much more aligned with the Minecraft community’s: their goal would be the success of Minecraft, full stop, without the complication of needing their own platform to succeed

    As long-time readers of this blog know, I’m a big believer in the power of incentives, and in the case of Microsoft, it’s the foundational reason why I believe the company would be better off split in two. I wrote in It’s Time to Split Up Microsoft:

    In 2000, Windows, Office, and Server were a virtuous cycle. Today, Windows and the entire devices business is nothing but a tax. Microsoft is a company that is meant to serve the entire market, and the way to do that is through services on every device. It’s all fine and well to say that you will treat devices equally, but given Microsoft’s history – and the power of culture – I just don’t believe it’s possible.

    I would create two companies: the devices side, which includes Windows, Windows Phone, and Xbox, and let them do the best they can to grow that 14%. Heck, make Kevin Turner the CEO. Windows profits will keep the company going for quite a while, and who knows, maybe they’ll nail what is next.

    The other company, the interesting company, is the services side – the productivity side, to use Nadella’s descriptor. This company would be built around Office, Azure, and Microsoft’s consumer web services including Bing, Skype and OneDrive. These products don’t need Windows; they need permission to be the best regardless of device.

    Every word here applies to Minecraft, a truly remarkable phenomenon that is not only about gamers but very much about the next generation of builders – including developers. I think it has the potential to continue to grow and, along the way, not only make Microsoft a whole bunch of money, but also enable an entire ecosystem. It really could be the Office of gaming. The danger is that, like Office did for too many years, it withers unnecessarily because Microsoft has Windows consoles to sell.5


    1. According to their press release, “Microsoft expects the acquisition to be break-even in FY15 on a GAAP basis”; “on a GAAP basis” refers to the annual amortization cost. Microsoft won’t make back the entire $2.5 billion in FY15 

    2. For reference, all developers combined have made just over $20 billion on the App Store 

    3. I don’t blame Notch though; I really appreciated his resignation letter and am happy for him 

    4. It’s also a community that needs Microsoft’s help: while Mojang offers Minecraft server software, another popular option is embroiled in a licensing battle that is probably best addressed by Minecraft itself building a superior option. Microsoft can do that 

    5. The same thing applies to Microsoft Studios broadly; what a waste of resources to make Halo: Spartan Assault for touch devices only to limit it to Windows 8 and Windows Phone. Imagine how much revenue Microsoft has forgone by not developing for iPad and Android, and that’s before we even get to the potential of Halo proper and the other Microsoft Studios titles on PlayStation. There’s a lot of latent revenue potential here, although Minecraft would be the crown jewel 


  • Apple Watch: Asking Why and Saying No

    Dan Frommer wrote in Quartz about The Hidden Structure of the Apple Keynote. His analysis covered 27 events since 2007, and included things like average length, laughs per executive, and the timing of iPhone reveals.

    It’s a good read, but in light of the Watch introduction, I am more interested in comparing yesterday’s keynote to only three others: the introductions of the iPod, iPhone, and iPad. Specifically, I’m interested in the exact moment when Apple revealed each device:

    • The iPod was introduced on October 23, 2001; after discussing iLife and Apple’s digital hub strategy, the iPod section begins at 11:30. However, the iPod itself does not actually appear on a slide until 20:48, and Jobs pulls it out of his pocket at 21:07, nearly 10 minutes after he begins his introduction. The intervening 10 minutes were spent explaining the music market, why Apple thought they could succeed in that market, and what was special about the iPod

    • The iPhone was introduced on January 9, 2007. However, the iPhone itself does not actually appear on a slide until 7:03, and only then to introduce multitouch. The rest of the device wasn’t seen until 12:20. Jobs spent all of that time explaining the smartphone market, why Apple thought they could succeed in that market, and what was special about the iPhone

    • The iPad was introduced on January 27, 2010. After a few updates, the iPad section begins at 5:15. However, the iPad itself does not actually appear on a slide until 8:55. Jobs spent the intervening time explaining that Apple saw a market between the iPhone and the Mac, but that any device that played there needed to be better than either device at a few specific use cases

    • The Apple Watch introduction was quite a bit different:

    The Apple Watch section began with the iconic “One more thing…” at 55:44,1 and these were the extent of Tim Cook’s words before we got our first glimpse of the Apple Watch:

    We love to make great products that really enrich people’s lives. We love to integrate hardware, software, and services seamlessly. We love to make technology more personal and allow our users to do things that they could have never imagined. We’ve been working incredibly hard for a long time on an entirely new product. And we believe this product will redefine what people expect from its category. I am so excited and I am so proud to share it with you this morning. It is the next chapter in Apple’s story. And here it is.

    Then came the introductory video, and we never got an explanation of why the Apple Watch existed, or what need it is supposed to fill. What is the market? Why does Apple believe it can succeed there? What makes the Apple Watch unique?2

    Now it’s very fair to note that the biggest difference between the introduction of the iPod, iPhone and iPad as compared to the Apple Watch is that Steve Jobs is no longer with us. Perhaps the long introduction was simply his personal style. But the problem is that the Watch needs that explanation: what exactly is the point?

    To be clear, the hardware looks amazing, and I love the Digital Crown. It’s one of those innovations that seems so blindingly obvious in retrospect, and Cook was spot on when he noted that you can’t just shrink a smartphone UI to the wrist. But that was exactly the problem with too many of the software demos: there were multiple examples of activities that simply make no sense on the wrist. For example:

    • There were sixty-four applications on the demo watch, and the tap targets are quite small3
    • I can definitely see some compelling Siri use cases for the Watch, but scrolling through movies is not one of them. If you’re looking for a movie you’re almost certainly in a state of movement and mind that makes it possible to pull out your phone and use a screen much more suited to the task
    • “We also looked at how you can carry your photos with you.” Here’s an idea: on your phone!

    The Maps demo was the most frustrating: it included panning around and searching for a Whole Foods – including pulling up its phone number! – all activities that by definition mean you are stationary and can use your phone. But that’s when the demo got really good:

    • While you’re actually traveling, the watch will not only show directions, but will actually use the “Taptic Engine” to indicate turns by feel. That is awesome, and an amazing use case for the watch. Who hasn’t been dashing somewhere, running into things while looking at their phone? A watch is far more suited, particularly one that doesn’t even require you to look at the screen
    • I also like that you can use the Watch to control your iPhone or any other AirPlay device. This would be incredibly useful around the house, at a party, etc.
    • The Taptic Engine makes sure only you know about a notification that you have previously agreed to receive. There are smart options for replying, as well as Siri and emoticons, but you can always use “Handoff” to compose a more extensive reply on a more suitable device

    There is a clear pattern to these examples:

    • The bad demos are all activities that are better done on your phone. They are also the activities that make the Watch seem the most like a real computer
    • The good demos are all activities that extend your phone in a way that simply wasn’t possible before. They are also activities that make the Watch seem less capable as a self-contained unit

    This is why I’m worried that the lack of explanation about the Watch’s purpose wasn’t just a keynote oversight, but something that reflects a fundamental question about the product that Apple itself has yet to answer: is Watch an iPhone accessory, or is it valuable in its own right?4

    The question is likely more fraught than it seems: the entry price for Apple Watch is $350, nearly half the price of an iPhone (and $150 more than the up-front cost for a subsidized consumer). Moreover, I suspect Edition models will go for ten times that, if not more. Surely such a price demands a device that is capable of doing more, not less.

    In fact, I would argue the contrary. Swiss watches are less accurate, but the benefits they confer on the user are so much greater. Those benefits are about intangible things like status and fashion, but that doesn’t mean they are worth less than more technical capabilities like telling time accurately. Indeed, they are exponentially more valuable.

    Moreover, it seems clear to me that Apple wants to play in this space: Jony Ive wasn’t joking when he allegedly said that Switzerland was in trouble. I believe Apple’s long-term plan for Apple Watch is to own the wrist and to confer prestige and status with options like premium bands and 18-karat gold. To do that, though, they must compete not on technical merit but on the sort of intangible benefits that they always win with; chief among these is the user experience. A premium smart watch will win by yes, being fashionable, and yes, conferring status, but above all by doing a few things better than any other product on the market, and – this is critical – dispensing with everything else in the pursuit of simplicity.

    To me the instructive Apple product is the iPod. What made the iPod so revolutionary was not just its size and industrial design; it was that Apple’s MP3 player did less than its competitors, thanks to its symbiotic relationship with iTunes. Sure, you couldn’t really make playlists5 or buy music, but that’s what your computer was for. What remained was the very essence of a music player, and it was because of that simplicity that the iPod became such a success.

    It’s worth noting, of course, that the iPhone is in many ways the evolutionary iPod – Steve Jobs even introduced it as such in the above video. Similarly, I’m pretty convinced that one day our primary computing device will be something that we wear on our body. But that is many iterations and technical (and battery) advances down the road. Why is Apple in such a rush to get there by 2015?

    Ultimately, I’m bullish on the Apple Watch. I think the Digital Crown is a big deal, and it’s a perfect companion for the 5.5″ iPhone especially (the device that many fear will cannibalize the iPad itself necessitates another iOS device). I also think the customization and segmentation are really smart and will enable Apple to sell at multiple price points (my piece about Veblen goods is very much applicable to the Watch). Moreover, some of the demos were quite compelling, including the fitness applications and the very personal messaging; it was telling that Apple gave that functionality a dedicated button. I plan on buying one as soon as they are available.

    But I’m already a watch wearer, and a geek to boot, and heck, I can probably expense it. To ensure the Watch’s success broadly Apple needs to really articulate “Why”, not only externally in their advertising but internally to their product managers who ought to remember that Apple’s greatness is built on saying “No.”

    Note: I wrote about the iPhone and Apple Pay introductions in the Daily Update (members only)


    1. I admit, I got chills 

    2. In fact, somewhat bizarrely, Cook’s first words after the reveal were about Apple Watch’s accuracy:

      Apple Watch is the most personal device we’ve ever created. We set out to make the best watch in the world. One that is precise. It’s synchronized with the universal time standard and it’s accurate within plus or minus 50 milliseconds.

      What makes this so strange is that accurate timekeeping was the big selling point for quartz watches. The quartz crisis caused a significant decline of the Swiss watchmaking industry, but the primary reason for the success of the Asian manufacturers that adopted the technology was that they were so much cheaper. Today the watch industry is bifurcated between high-end (relatively inaccurate) mechanical watches and inexpensive Asian offerings; I’m quite confused why Apple would be effectively aligning themselves with the latter, and with their first slide to boot! 

    3. I suspect the demo unit was “on rails”, meaning the watch was programmed to step through the demo step-by-step; it’s telling that Kevin Lynch didn’t have a single mis-tap, and the Maps demo was obviously simulated 

    4. The Watch does require an iPhone for full functionality, especially connectivity 

    5. Yes, I know you could push the middle button in a pinch 


  • Wearables, Payments, Chickens and Eggs

    I feel a bit sheepish that this is the third of what will in all likelihood be four articles about Apple in a two-week span. I figured the scale of what Apple is planning to announce necessitated at least two preview posts: one about the iPhone, and this one about wearables and payments. And then the iCloud theft happened, and well, here we are.

    At the same time, it’s difficult to overstate the broad impact that Apple has on the entire tech ecosystem, whether it be establishing categories it competes in directly or fundamentally changing assumptions in others impacted by its wake (think the rise of mobility as a critical component of enterprise software). Moreover, it’s not as if these articles won’t be read; any blogger with access to their Google Analytics panel knows that anything Apple-related moves the needle in a way few other topics do.

    Apple, of course, knows this as well, and what is so interesting about how they are approaching wearables and payments – together – is the way in which they are leveraging their popularity to make something that could not otherwise come into being.

    The Problem With Payments

    I discussed the challenge of introducing new payment methods in April in a post called The Problem with Payments:

    • You can get more adoption by building on top of credit cards, but that leaves almost no room for any sort of meaningful profit margin. This is Square’s fundamental problem
    • Building outside of credit cards gives you more room to maneuver from a profit perspective, but the delta of improvement between your new solution and credit cards is likely much too small to overcome the double-sided network effect credit cards enjoy (merchants and customers)

    The rumors about Apple using the iPhone 6 for payments seemingly face both of these challenges:

    • Any Apple payment solution will almost certainly be built on top of their current collection of 800 million credit cards, which means there won’t be much if any money to be made
    • The delta of improvement between pulling out your phone and pulling out your credit card may not be big enough to make a difference. It certainly hasn’t moved the needle for Android, even in countries like Japan that have invested significantly in NFC infrastructure. And, if customers won’t bother, neither will merchants

    What is needed is something a lot better than a physical credit card and a critical mass of people willing to give it a try.

    Enter The Wearable

    People often mock any new Apple product by pointing out that, “Sure, the diehards will buy whatever Apple releases” before insisting that said product will never have broad appeal. We saw it with the iPad, the iPhone, and the iPod, and we’ll almost assuredly see it with the wearable. And, when and if people complain that only Apple fans will buy the wearable, Tim Cook and company are liable to nod their heads, knowing that that’s the point.

    There is the age-old question: “What came first, the chicken or the egg?”, and it’s one that is brought up again and again when it comes to building an ecosystem. For example, the biggest problem with Windows Phone is the lack of top-notch applications; the reason developers don’t prioritize the platform is that there aren’t enough valuable users, but there will never be enough valuable users until there are top-notch applications. It’s a chicken-and-egg problem.

    As I noted above, the same problem exists in payments: merchants won’t support a new payment method unless lots of valuable customers insist on it, but said customers won’t insist on a particular payment method unless lots of merchants support it.

    That’s where Apple’s ability to move units simply because they are Apple becomes an incredible weapon: suppose 10% of iPhone customers are willing to buy a wearable with some cool fitness functionality mainly because it’s built by Apple. Boom – suddenly there are 80 million wearables with payment functionality out in the wild.1 Moreover, the customers sporting said wearable are likely to be both vocal about their desire to use said payments and high spenders to boot. That’s a very good way to spur merchants to install what will likely be a free payment device, available at your local Apple Store. Of course it wouldn’t hurt to move the process along by having partnerships already set up with Nordstrom and Target.2

    Moreover, I’d bet the difference between using a wearable for payment and using your phone will be greater than most people expect. I have no particular evidence for this outside of my own experience with keyless ignition systems in cars: the first time we got it, I thought it was a tremendous waste of money (it was part of a package); since then, I cannot imagine buying a car without it. Saving a bit of hassle and a few seconds on a daily basis really adds up; it’s the type of subtle experience improvement that is Apple’s biggest differentiator.

    Not Just Payments

    This idea – that Apple, simply by being popular, can get past the chicken-and-egg problem – applies to HomeKit as well. If we have keyless ignition in cars, why not keyless locks, automated air conditioning, or lighting? There is a whole host of items that could work well with a wearable, but that would only be built if there were a critical mass of wearables out there.3

    More broadly, this is the answer to the question of why build a wearable at all. It’s easy to see a future where a wearable is the center of an entire ecosystem of devices and services; what’s hard is seeing how on earth we get from here to there. Unless, of course, customers give the device critical mass simply because it is made by Apple (even though they’ll tell themselves they buy it for their health).

    Apple has a better chance than anyone of creating a critical mass of users for a new payment system.

    Let me be perfectly clear: while most of those who argue that any Apple product will sell just because of the logo mean that as an insult, I’m saying that’s actually a really unique ability that only Apple has. Remember this line in the Cook Doctrine:

    We participate only in markets where we can make a significant contribution.

    Apple, not only because of its product capability but also because of its incredible customer loyalty, is uniquely suited to solve the payments chicken-and-egg problem and provide the killer use case for a wearable, all at the same time.

    What’s In It for Apple?

    Contrary to the expectations of some Apple bulls, I don’t think Apple has any interest in getting into financial services. I suspect they will be quite happy to sit on top of traditional credit cards, just as the iPhone sits on top of traditional carriers. In fact, when it comes to making money Apple’s strategy is remarkably straightforward: they sell devices for a profit; their services and infrastructure serve primarily to differentiate said devices.

    That is why I expect Apple to offer payment terminals to merchants for free or close to it, and to not charge any surcharge beyond what is necessary to cover credit card fees. Their payoff will be in the creation of a killer wearable use case, something that will ultimately benefit Apple as a wearable seller more than anyone.

    It will be fascinating to watch whether Apple can succeed, and, if they do, what will happen to the companies in their wake. Just as the iPhone ultimately shook up all kinds of industries, including enterprise software, a true payments breakthrough will not only be the end of a whole raft of startups but could also have a significant impact on all kinds of related industries. I suspect, though, all that matters to Apple is making life that much better for that many more people, something that is not only true to their mission but also a formidable strategic tool.


    1. Presuming an install base of 800 million iOS devices, which was the number reported at WWDC 

    2. So says not one but two little birdies 

    3. A wearable could also be the best possible answer to the password problem: a wearable alone is likely more secure than a password, and a wearable plus Touch ID is likely the most consumer-friendly two-factor authentication possible 


  • iCloud and Apple’s Founding Myth

    From a certain perspective, what is happening to Apple this week is unfair. Both OS X and especially iOS are more secure than their competitors, and Apple has regularly prioritized security over features that customers have demanded. For example, Android has long supported custom keyboards, but Apple is only adding them in iOS 8. The difference, though, is that Apple’s approach makes it impossible for a substitute keyboard to be a keylogger sending keystrokes to another server (without explicit user authorization), and ensures third-party keyboards can’t be used in secure password fields at all. No such limitation exists in Android.

    On the other hand, Apple very much deserves all of the terrible publicity they are getting. Even if the vast majority of photos were stolen through means like password resets, phishing, and social engineering, the reality is that Apple’s reservoir of goodwill when it comes to the cloud is deservedly empty. And, having bugs like a missing rate limiter, or recommending that people use two-factor authentication that wouldn’t have actually helped, only makes the distrust worse.

    I think the most useful way to understand Apple’s troubles with the cloud is to think about the A-series chips. Jonathan Goldberg described why making chips is so difficult in a recent blog post:

    Designing hardware is much harder than designing software, and by ‘hard’ I really mean expensive. If you design a web site, the whole product can be built around the idea of iteration. The first version will not work perfectly, so plan for that, and fix it in upgrades. And upgrades on the web or in the cloud are very easy. In hardware, everything has to be right before you ship the product. And not just before you ship, but really before you even start to build it. Returns are prohibitively expensive, and hardware cannot be readily fixed on the fly or in the field. This problem is even more acute in semiconductors. First, those chips have to get designed into other things, and that means even longer preparation times. Secondly, the complexity of the semiconductor manufacturing process are hard for the human mind to grasp. The industry requires incredibly tight specifications, as you can imagine for circuits many times smaller than a human hair.

    It turns out, though, that Apple is really good at designing chips! The A7 is still the only 64-bit ARM processor on the market, and Apple looks likely to extend their lead with the presumed A8 in the iPhone 6. Indeed, the process of making a semiconductor – or hardware generally – is very clearly in Apple’s DNA. Just think back to the original Macintosh. From the invaluable Folklore.org:

    The Mac team had a complicated set of motivations, but the most unique ingredient was a strong dose of artistic values. First and foremost, Steve Jobs thought of himself as an artist, and he encouraged the design team to think of ourselves that way, too. The goal was never to beat the competition, or to make a lot of money; it was to do the greatest thing possible, or even a little greater…

    Since the Macintosh team were artists, it was only appropriate that we sign our work. Steve came up with the awesome idea of having each team member’s signature engraved on the hard tool that molded the plastic case, so our signatures would appear inside the case of every Mac that rolled off the production line. Most customers would never see them, since you needed a special tool to look inside, but we would take pride in knowing that our names were in there, even if no one else knew.

    To me this is one of the defining “founding myths” of Apple,1 and founding myths are really important. They define culture, and what is important and valued, and in the case of Apple, what is valued is creating the “greatest things possible” through an effort that most customers will never see.

    Other companies have founding myths too:

    • Google’s founding myth is the creation of BackRub and PageRank. These were highly iterative projects that got better the more they were used. This myth created a culture that values iteration and data
    • Microsoft’s founding myth is Bill Gates and Paul Allen selling both Altair BASIC and Windows 1.0 without an actual product. They built exactly what their customers needed after making the deal, and Microsoft’s culture has always been very responsive and heavily influenced by customer requirements
    • Facebook’s founding myth is Mark Zuckerberg in his dorm room creating a means of connecting with other students through any means possible. Even today Facebook highly values “hacking” and constantly changing things to engage users ever more deeply

    I’m painting with broad strokes to be sure, but to me the most compelling evidence for these founding myths is to look at the sort of products that these companies have traditionally been bad at:

    • Facebook has been a rough go for developers, because hacking and breaking stuff doesn’t make for a good platform
    • Microsoft has always struggled with consumer software, because building for the mass market requires having an innate understanding of what the majority needs, not simply asking them
    • Google’s products don’t always have the best fit-and-finish, because the tendency is to make something that is “good enough” and ship as soon as possible, the better to start iterating

    And, for Apple, making something perfect behind closed doors is the exact opposite of the approach that is needed for the cloud. Effective cloud services are all about iteration based on data from usage – the Google model – and that’s not the sort of approach that Apple values.

    This also applies to security generally. Apple has a much greater degree of control over the attack vectors on its operating systems; when it comes to Internet services, on the other hand, the number of attack vectors is exponentially larger. You need a lot of people helping you harden your infrastructure, but that requires transparency, and that’s not something Apple values.


    It’s important to be clear about who exactly is to blame for this theft: the thieves. But stepping back, there is a bigger problem with Internet security generally, and that’s our reliance on passwords. Passwords are really difficult to manage even for advanced users; for normal folks it’s a real nightmare.2 That’s why Apple and other services include things like security questions to help you get into your account, even though that makes the account even more insecure. There is a fundamental tradeoff between security and ease-of-use, and that too works against Apple here; their entire differentiation is predicated on the user experience, and not having backup-by-default or enforcing two-factor authentication is counter to that, even if it would be more “secure”.

    It’s here, though, that Apple has gone the farthest down the road towards fixing what seems to be an intractable problem: Touch ID. Touch ID works really well, and it removes the temptation to have a simple password on your phone or for your Apple account. When and if Apple has Touch ID everywhere – on iPads and on Macs3 – it would be much more plausible for them to enforce very strong passwords, encryption everywhere, two-factor authentication, etc., because most people would just use their fingerprint most of the time.

    Moreover, Touch ID is very Apple-esque, for lack of a better term: it is hardware-based, and it requires incredible attention to detail in both the implementation and the experience. It’s not something that lends itself to a minimum viable product + iteration approach; instead it depends on deep integration of hardware and software – Apple’s forte.

    The problem, though, is that Touch ID isn’t there yet. We’re still in the world of passwords and security questions, and while Apple has often let problems fester until they can come up with something perfect – think copy-and-paste or multitasking in iOS – that sort of approach is irresponsible when it comes to the cloud. This break-in may not ultimately be Apple’s direct responsibility, but the lack of trust so many have in anything related to Apple’s cloud is.4

    There’s no question the cloud is critical to Apple’s future and is a major piece of the iPhone value proposition; from a strategic perspective it is core to Apple. But Tim Cook needs to seriously evaluate if Apple has or ever will have the cultural DNA to do the cloud right. And, if not, it may be time to work even more closely with a company whose very survival depends on delivering superior cloud services; I’m quite sure Satya Nadella would answer the call.


    1. Yes, I know the Macintosh was not even close to being the first Apple product, and that the Apple II made most of the money for years after the Macintosh came out. However, to my mind it was the Macintosh that laid the foundation for what Apple stood for 

    2. Anecdotally, multiple Stratechery members request a password reset every week, and that’s for a more technically inclined audience 

    3. I’m generally skeptical of the whole OS X on ARM argument, but Touch ID and its secure enclave are an argument in favor 

    4. Honestly, the rate limiter bug – whether or not it had anything to do with this specific theft – was grossly negligent 


  • The iPhone 6: From Louis Vuitton to Chanel

    Yves Carcelle, who, in the words of the New York Times, “transformed Louis Vuitton from a staid French maker of handbags and travel trunks into one of the world’s most recognizable luxury brands,” died over the weekend. I have a short note in today’s Daily Update (members only) that compared Louis Vuitton to Apple:

    LVMH broadly and Louis Vuitton specifically democratized luxury with a clear focus on the (upper) middle class. It’s the same market Apple targets, with much the same value proposition: luxury quality at accessible prices. To be sure, the comparison isn’t perfect; LV bags are both more expensive than, say, an iPhone, yet also not even close to being the best in their category, and many people believe that LV has overly expanded – and thus diluted – its brand. Of course that is always a concern for Apple as well.

    It’s the second point – how LV’s position in the market differs from Apple’s – that is particularly worth exploring when it comes to next week’s iPhone announcement.


    Last year all of the talk running up to the iPhone announcement was about the iPhone 5C and how much it might cost. In Thinking About iPhone Pricing, I argued that the price would be on the high side of expectations:

    The fact the 5C needs to be sold in both subsidized and unsubsidized markets makes the pricing tricky; in subsidized markets, Apple is currently receiving a subsidy of around $450 on the iPhone 5. It wouldn’t make sense to unilaterally lower that – after all, it’s not like the carriers are going to lower iPhone service bills. This sets a floor of $450 for the unsubsidized 5C ($0 with contract). This also lets Apple dump the 4S, with its 3.5″ screen, 30-pin connector, and lack of LTE.

    $0, though, is problematic from a branding perspective. While a new phone, heavily advertised (unlike the old iPhone 4) and sold for $0 would likely move an incredible number of units, it would also create consumer expectations around $0 and associate Apple with “cheap.” I would imagine Apple is very hesitant to go there. $99 makes more sense. (This point on branding applies to the unsubsidized cost as well; in Asia, in particular, the iPhone’s biggest selling point is brand prestige, not apps or user experience. Apple will be happy to err on the side of more expensive.)

    This final point about the iPhone’s positioning in Asia was a reference to the idea of a Veblen good. From Wikipedia:

    A Veblen good is a member of a group of commodities whose demand is proportional to their price; an apparent contradiction of the law of demand. A Veblen good is often also a positional good.

    The Veblen effect is named after economist Thorstein Veblen, who first identified the concepts of conspicuous consumption and status-seeking in 1899.

    Conspicuous consumption and status-seeking are major drivers of the Asian market in particular, and are why Asians make up over 50% of the luxury market by nationality.1 In the case of handbags, you absolutely are saying something with your selection: a Louis Vuitton bag is many people’s first luxury purchase, and shows you have some means; a Chanel bag, on the other hand, signifies you are at least upper middle class, maybe even rich. At the top of the heap, though, is Hermès: sport a Birkin bag and there is no question as to your status. A whole host of other brands – Prada, Céline, Balenciaga, and many more – say similar things about not just your status but also your taste and the kind of person you wish to be. And, not surprisingly, the most desirable brands are also the most expensive. Handbags are Veblen goods.
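    Stated formally (my sketch, not Veblen’s notation): the law of demand says quantity demanded falls as price rises, while a Veblen good inverts the sign over some price range:

    ```latex
    \underbrace{\frac{dQ}{dP} < 0}_{\text{normal good}}
    \qquad\text{vs.}\qquad
    \underbrace{\frac{dQ}{dP} > 0 \ \text{for } P \in [P_1, P_2]}_{\text{Veblen good}}
    ```

    Within that range, raising the price actually increases the quantity demanded, because the price itself is part of what is being purchased.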

    The question for Apple is whether the iPhone is a Veblen good as well:

    • There is no question that the market for smartphones has bifurcated. There is a high end with a stable average selling price of around $650; the iPhone has a significant majority here, but there are also phones from Samsung, HTC, LG, and others. The rest of the market competes primarily on price, and very good Android smartphones can be had for prices approaching $100. The middle of the market, meanwhile, is supported primarily by $0 subsidized phones and, to a lesser degree, demand for older iPhones.
      The smartphone market has bifurcated between the high and low end. The low end is motivated primarily by price.
    • On the high end (say, $450 and up), the evidence actually suggests that people strongly prefer higher prices. Consider the 5S’s success relative to the 5C, or the fact that Samsung and HTC still launch their flagship phones at $650. Now, to be fair, there are two countervailing factors at play:

      • The more expensive iPhone is better, so this is an apples and oranges comparison
      • The prices of all these phones are influenced by the subsidy structure of the U.S. market in particular

      That aside, just as I argued last year that Apple didn’t really know the price elasticity of the iPhone, they don’t really know just how much people are willing to pay to have the best possible model. It’s at least worth finding out just how much the market can bear.

      For a normal good, the quantity increases as the price decreases. A Veblen good, though, curves in the opposite direction.

    And so, in a very long and roundabout way, we have arrived at what I think is the more interesting of the two rumored iPhone models: the 5.5″ iPhone 6. I believe the phone does exist (some don’t) for no other reason than Apple would have planted a leak if it didn’t; they surely know how devastating its absence would be in certain segments of Asia in particular.

    Make no mistake: there are a good number of people who will buy the 5.5″ iPhone because they truly want a big phone; moreover, these customers are probably more likely than any other group to have switched to Android simply for screen size. I think this phone will steal customers back from Android and really hurt Samsung.

    However, I also think that this phone will cost $100 more ($750 to start), and that it will in some small ways be superior to the 4.7″ iPhone. The reason for that premium is that the 5.5″ phone will be a Veblen good. It will be the phone to have for anyone who cares to demonstrate just how well-off they are, especially in Asia – the Chanel to the 4.7″ Louis Vuitton (Vertu can keep the Hermès folks).

    There is one more fact that makes this strategy compelling: your average Chanel bag is $4,000 (compared to LV’s $2,500 or so). That is more than 5x the price I’m predicting for the 5.5″ iPhone. As I’ve noted previously, absolute numbers matter just as much as, if not more than, percentages, and the truth is paying $750 for the best is incredibly accessible relative to just about any other luxury good in the world. Sure, lots of folks even in China may not be able to afford Chanel, but anyone who can afford even a Longchamp bag can buy the best iPhone. It’s a rather nice trick Apple has pulled: being accessible and the best of breed all at the same time.2

    I expect the rest of the phone lineup to fall into place behind the 5.5″ flagship:

    • The 4.7″ iPhone will take the 5S’s place at the $650 price point ($199 subsidized). I suspect it will have the same processor, RAM, and camera as the big iPhone; Apple will market it as being completely the same except for the screen (I suspect John Gruber is right that the 5.5″ will have a 3x display, while the 4.7″ will have only 2x)
    • The 5S will move down a notch into the current 5C slot and be sold for $550 ($99 subsidized)
    • The 5C will be the low end with a price of $450 ($0 subsidized)
    • The 4S will probably stick around in select markets like India and China for $350, but Apple is probably waiting for the 5C to move down one more notch before they really push this price point

    This is an incredibly compelling lineup from a shareholder perspective:

    • More people are on the new form-factor upgrade cycle than on the ‘S’ upgrade cycle
    • New screen sizes are a very tangible reason to upgrade for those who have kept their phones longer than average
    • Selling the top-end phone for $100 more will be a big boost to both average selling price and margins

    This last point is Apple’s final answer to those still holding out hope for a truly inexpensive iPhone. Apple could:

    • Sell a $300 phone with a $100 margin (and likely cannibalize a higher-end model with bigger margins), or…
    • Substitute a $750 phone for a $650 phone for the same $100 margin addition3

    It’s a bit like how the iPhone cannibalized the iPod: by being more expensive and higher margin. Nice business if you can get it!
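    The trade-off can be sketched with hypothetical unit economics (the $200 and $400 build costs below are my illustrative assumptions, not reported figures):

    ```python
    def margin(price: int, cost: int) -> int:
        """Per-unit gross margin in dollars."""
        return price - cost

    # Option 1: a hypothetical $300 iPhone with an assumed $200 build cost,
    # earning $100 of margin per unit (while risking cannibalization of
    # higher-margin models).
    cheap_phone_margin = margin(300, 200)

    # Option 2: the same buyer pays $750 for the 5.5" model instead of $650
    # for the 4.7" one; assuming similar build costs (footnote 3 notes the
    # bigger phone's components cost somewhat more), the upsell adds the
    # same $100 of margin per unit, with no cannibalization risk.
    upsell_margin_gain = margin(750, 400) - margin(650, 400)

    print(cheap_phone_margin, upsell_margin_gain)  # both 100
    ```

    Either path adds roughly $100 of margin per unit, but the second also raises the average selling price, which is why the flagship strategy wins from a shareholder perspective.
    
    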

    (Later this week: wearables and payments)


    1. Figures are from this presentation by Bain; note that Asia as a geography is only about 33% of the market. Most of the difference is accounted for by Asian tourists traveling to Europe in particular 

    2. Don’t pay any attention to stats about the average wage in China; the country is both huge and has massive income disparity, which means there are a huge number of people who can afford whatever Apple deigns to charge. More here  

    3. Minus the additional cost of the superior components, of course 


  • Amazon: Not an E-commerce Company

    Let’s start with the premise that Twitch, the video-game watching network, is the next ESPN – you know, the jewel in Disney’s crown that, by itself, is worth $50.8 billion. Like ESPN, Twitch is about live competition, and, like ESPN, Twitch does exceptionally well in the highly desirable young male demographic.1 Obviously this is the best possible outcome, far-fetched though it may sound. It is certainly an outcome that would make Amazon’s purchase of Twitch for $970 million an amazing deal. It would not, however, have anything to do with e-commerce.

    Just a few weeks ago I wrote in Losing my Amazon Religion about Amazon’s focus on Prime Video in particular:

    It’s this focus on original and exclusive content – and devices that deliver it – that concerns me, and not because it’s expensive. Rather, what exactly does this have to do with e-commerce?

    Needless to say, the Twitch acquisition hasn’t exactly quelled my concerns. It has, though, led me to question my premise; if Amazon is behaving, shall we say, erratically, the issue is perhaps not with Amazon but with my understanding of the company. So I went back and reread the origin story of Amazon in Brad Stone’s excellent The Everything Store:

    [John] Doerr’s optimism about the Web mixed with Bezos’s own bullish fervor and sparked an explosion of ambitions and expansion plans. Bezos was going to do more than establish an online bookstore; now he was set on building one of the first lasting Internet companies.

    Over the following pages Stone documents how Amazon expanded from books to music and then to DVDs. These categories, along with packaged software (including games) eventually made up the “Media” category in Amazon’s earnings. Today this media category is about 25% of Amazon’s revenue, but, according to my understanding, almost all of Amazon’s “profits.” Said profits are reinvested into all the other parts of Amazon’s business, but, it must be asked, to what ends? Is Amazon really an e-commerce company? Or are they a company bent on dominating the world?


    Returning to Twitch, I can think of three possible reasons for Amazon’s purchase:

    • Amazon is looking to buttress their media business – The Media business that underpins the Amazon machine is not in the best of shape; traditional media forms are going away, and, except for books, Amazon does not have a ready-made replacement from a revenue standpoint. In this view, Twitch offers a new revenue model (ads, primarily, although there are also premium subscriptions) that can help fill this gap.

    • Amazon wants to challenge Valve and/or Sony and Microsoft – I think this is a very underreported aspect of this deal. Steam in particular has taken a significant bite out of Amazon’s packaged software business, and I know that Amazon has at least internally considered building a direct challenger. Amazon has also built gaming capability into the Fire TV, including an optional controller, and has bought their own gaming studio, basically following the script I laid out in How Apple TV Might Disrupt Microsoft and Sony. However, as I insinuated in Games and Good Enough, hardcore gamers are very unlikely to so easily abandon the established players. In this view Twitch is a backdoor way to “get in” with hardcore gamers; imagine a Fire TV built around Twitch and Amazon’s own games.

    • Amazon wants to rule the world – I put it this way only partly in jest, because I’m starting to suspect this is a bigger factor than anyone – including Amazon’s ever-patient investors – fully appreciates. Remember, Bezos sold books not because he was obsessed with being a bookseller, but because he identified a dominant strategy; as Stone’s book suggests, perhaps Bezos’s goal was simply to build a dominant company, and e-commerce has only ever been a means to an end.

    The second reason, that this deal was about gaming, is interesting from a tactical perspective, but the far more intriguing question is the weight one gives to reasons one and three. If you buy reason three – that Bezos wants to rule the world – then there is even more urgency attached to reason one. To be clear: Amazon’s continued expansion is built on the profits from its media category, but it is that category that is the most under threat from the digitalization of said media. In other words, what if Twitch is both offense and defense?

    Regardless, the takeaway for me – and what should be the takeaway for all of Amazon’s investors – is that Amazon is not an e-commerce company. No more pointing at the fact that e-commerce is only 6% of U.S. retail, or that Amazon’s multi-sided network of merchants and customer base are the key factors in determining their future success. No, the company is going for something a whole lot bigger, even as their foundation is being slowly watered down by the same Internet that made Bezos feverish nearly 20 years ago.


    1. Twitch’s video game playing “athletes”, though, peak far earlier than professional athletes according to this fascinating article in The Verge 


  • Games and Good Enough

    Two months ago I wrote How Apple TV Might Disrupt Microsoft and Sony. Then, about a month later, I went and bought a Wii U. And, a month after that, I bought a 3DS. And now I’m writing another article about gaming, and I think I’ve changed my mind.

    Still, it’s always dangerous to write about anything based on little more than your personal experience, so I’ve been trying to get up to speed on what is happening with gaming. And it’s actually pretty darn encouraging. Sony has sold 10 million PS4s, while Microsoft has sold at least 5 million Xbox Ones. Nintendo is still hurting, but Mario Kart 8 has moved 2.82 million copies while the 3DS now has 9 titles that have sold more than 1 million units. Meanwhile, in PC land Nvidia beat expectations largely because of continued growth in demand for their GeForce graphics processors. At the same time, mobile game companies like King are struggling, and the iPad, which so many – including myself – presumed would take a big chunk out of consoles, has seen its sales slow dramatically (last quarter it was down nine percent year-over-year).

    So why did I buy not one but two new consoles? And what, if anything, might that have to do with these rather impressive results?


    Last fall I wrote what is probably still my favorite piece on this site: What Clayton Christensen Got Wrong. In the piece I took the idea of low-end disruption head-on. Basically, the theory states that in an immature market, the integrated solution has the advantage, but as a market matures, modular solutions become “good-enough” and are able to leverage a price advantage – and, over time, a scale advantage – to take over the market.

    My fundamental contention was that this theory primarily applied to business markets where the buyer was not the user and prices and feature lists reigned supreme. In consumer markets, on the other hand, where the buyer and user are the same person, there would always be a significant part of the population that prioritized the user experience only an integrated solution can deliver, making the high end a profitable segment despite higher prices. My prime example was, of course, the continued success of the iPhone in the face of good-enough Android (please do read the whole thing).

    And yet, when I wrote How Apple TV Might Disrupt Microsoft and Sony, I basically built my entire argument on the idea of low-end disruption. My thesis was that a general purpose Apple TV would offer good enough gaming that would appeal to a significant part of the population, and, over time, peel away even those at the high end. That’s what made my 3DS purchase in particular so interesting.


    John Gruber perfectly articulated why the 3DS and any future Nintendo handheld is doomed in More on Nintendo and Handheld Gaming:

    What’s different about the post-iPhone world of mobile computing is that the buying decision is no longer about “or”, it’s about “and”. Pre-iPhone, someone interested in a handheld game device would choose between Nintendo’s offering or someone else’s. Nintendo did well in that world, selling more than enough devices to succeed. Today, though, someone deciding to buy a dedicated handheld game device is, more likely than not, deciding whether to buy something to carry in addition to the mobile device they already carry everywhere. This is an entirely new scenario for Nintendo, and as I see it, they are on course to head right over a cliff.

    It’s actually worse than Gruber likely realized: the 3DS is a pretty atrocious piece of hardware relative to an iPhone. Because of the silly inclusion of 3D, the effective resolution is only 400×240 on the 3DS’s main screen, and it is absolutely brutal to look at. This is not a situation where post-PC devices are on pace to deliver superior graphics: they are already years ahead.

    And yet, screen quality notwithstanding, I have probably put in more gaming hours on the 3DS in the last two weeks than I have in the previous two years on the iPhone. Because here’s the thing: touch sucks for playing games.1 The experience of using a dedicated device with built-in gaming controls and games designed specifically for said device means a great deal to this user and buyer. It means enough that, especially when I’m traveling, I will gladly carry an additional device.


    Again, as I noted at the top, I very much hesitate to read too much into my own personal experience. But I’m beginning to suspect that consoles may be a bit more resilient than many of us in tech may have first believed. And, by extension, I suspect my critique of low-end disruption may have legs: when users are buyers the user experience matters, immensely. And the user experience of a console is, and likely will remain, far ahead of any sort of touch device when it comes to many (but not all) types of games. Moreover, I now suspect that an Apple TV that supports gaming will be less disruptive than I suggested as well; as long as the controller is optional, as I suspect it would be, the immersive experience of a dedicated console will be optional as well.

    That’s not to say the gaming business is going to thrive: in this Nintendo is indeed a cautionary tale. It seems increasingly clear that the Wii’s incredible success was the worst thing that could have happened to the company. What made the Wii such a hit was that it dramatically increased the market for consoles: lots of people who would not have normally been interested in a PS3 or Xbox 360-type device couldn’t resist Wii Sports. The problem, though, is that the Wii market, by virtue of not being people who particularly valued the traditional gaming experience, was the exact same market likely to see touch gaming as good enough. Keep in mind the Wii launched at the end of 2006, just weeks before the iPhone. In retrospect it was the last hurrah of the gaming middle ground, of a piece with the iPod, point-and-shoot cameras, and other dedicated but low-end devices.

    What has happened in all of those markets – indeed, what is happening to smartphones as well – is a bifurcation between the high and low ends. Cameras are a particularly good example: DSLR sales have remained strong2 even as the point-and-shoot category has all but disappeared, replaced by good enough smartphone cameras. That’s the exact same pattern we’re seeing in gaming: the PS4 (and to a lesser degree, the Xbox One) are doing much better than expected, while the lower-priced and lower-specced Wii U is hurting. Nintendo’s mistake was not realizing that the Wii’s market was devoured by touch devices; they should have built a console that was top-of-the-line.

    There is one more fascinating parallel between Android/iOS and touch gaming/console gaming: even though Android has far greater market share, the best apps are generally found on iOS largely because the most money is there. Similarly, while gaming as a whole was worth $93 billion last year, only $13 billion of that was in mobile, and much of that in free-to-play games like Candy Crush Saga that appeal to very different players than traditional gamers. In other words, it’s not at all a given that publishers will abandon consoles simply because the market share of mobile devices is greater.

    In short, I believe there are factors more important than just market share, at least when it comes to smartphones. Why not when it comes to games?


    1. Board games on the iPad being the big exception, at least for me 

    2. They did start to slip last Christmas 


  • Is BuzzFeed a Tech Company?

    It’s telling that Chris Dixon, in a blog post explaining Andreessen Horowitz’s $50 million investment, goes out of his way to explain that BuzzFeed is not really a media company, but a technological one:

    We see BuzzFeed as a prime example of what we call a “full stack startup”. BuzzFeed is a media company in the same sense that Tesla is a car company, Uber is a taxi company, or Netflix is a streaming movie company. We believe we’re in the “deployment” phase of the internet. The foundation has been laid. Tech is now spreading through every industry and every part of the world. The most interesting tech companies aren’t trying to sell software to other companies. They are trying to reshape industries from top to bottom.

    BuzzFeed has technology at its core. Its 100+ person tech team has created world-class systems for analytics, advertising, and content management. Engineers are 1st class citizens. Everything is built for mobile devices from the outset…BuzzFeed takes the internet and computer science seriously.

    The issue is that, generally speaking, media companies don’t make for good venture capital investments. VC firms like Andreessen Horowitz aren’t looking to fund nicely profitable companies; they are searching for home runs, the one or two investments that make a fund profitable despite lots of failures. This means a focus on companies that can scale. Marc Andreessen told Adam Lashinsky in Fortune:

    We describe it is we invest in Silicon Valley style companies. So we invest in the kind of companies that Silicon Valley seems uniquely good at producing at scale, you know, large numbers over time.

    What makes technology companies – software companies, especially – different from media companies is the distribution of costs. Even before the Internet, for a software company, almost all of the costs were up-front fixed costs: you spent money primarily on salaries to develop a piece of software, and you spent that money well before you knew whether or not said software would sell.

    The payoff, though, was that the software itself had minimal marginal costs: it cost basically nothing to produce one more copy (discs and packaging, basically). Thus, the vast majority of revenue for every single copy sold went straight to the bottom line. Moreover, most software is universal: it can be used anywhere (although localization can add to the fixed costs), and it’s useful for a long period of time. That results in the sort of scale that Andreessen was referring to.
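    The divergence between these two cost structures is easy to see with a toy model. The numbers below are purely illustrative assumptions of mine, not figures from the piece: both products share the same price and up-front fixed cost, and only the marginal cost per unit differs.

```python
# Toy model (illustrative numbers only) contrasting a software-style cost
# structure (near-zero marginal cost) with a print-media-style one
# (paper, ink, delivery on every unit).

def profit(units_sold, price, fixed_cost, marginal_cost):
    """Total profit: per-unit margin times volume, minus up-front fixed costs."""
    return units_sold * (price - marginal_cost) - fixed_cost

# Assume $2M in up-front costs and a $20 price for both; the print-style
# product pays $15 per unit in marginal costs, the software pays ~$0.
for units in (100_000, 1_000_000, 10_000_000):
    software = profit(units, price=20.0, fixed_cost=2_000_000, marginal_cost=0.0)
    print_media = profit(units, price=20.0, fixed_cost=2_000_000, marginal_cost=15.0)
    print(f"{units:>10,} units  software: ${software:>13,.0f}  print: ${print_media:>13,.0f}")
```

    At small volumes the two look similar (both are dominated by the fixed cost), but as volume grows the zero-marginal-cost product’s profit scales with every additional copy sold, which is exactly the kind of scaling Andreessen is describing.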

    Media companies, on the other hand, have traditionally differed from technology companies in three ways:

    • Created content had a very short shelf life, which leaves a very small amount of time to recoup the fixed costs that went into its creation
    • Media’s marginal costs (paper, ink, delivery) were higher than the marginal costs for software, at least in relative terms
    • Media was generally limited in its geographic availability

    In this pre-Internet world, media did have an ace-in-the-hole: their significant up-front costs often resulted in geographic monopolies that made them the primary option for advertisers. This made media companies interesting investments for hedge funds, but the limited upside meant they were much less attractive to VCs.

    Fast forward to today, and the Internet has seemingly made the differences between technology and media companies even more stark:

    • Packaging is no longer necessary, reducing the marginal cost of software to zero
    • Multiple new business models have emerged for software, such as attracting massive user bases for free which can then be monetized through advertising or premium services1
    • Media, meanwhile, has lost its local monopoly, and advertisers have fled to platforms that have more scale – there’s that word again – and better targeting

    So why on earth is Andreessen Horowitz investing in a media company? Or is Dixon right – is BuzzFeed really a technology company that can use software to succeed in everything from listicles to hard news to, now, their own movie production company? What has changed since Andreessen wrote in his post introducing Andreessen Horowitz:

    We are almost certainly not an appropriate investor for any of the following domains: “clean”, “green”, energy, transportation, life sciences (biotech, drug design, medical devices), nanotech, movie production companies, consumer retail, electric cars, rocket ships, space elevators. We do not have the first clue about any of these fields.

    I suspect what Andreessen and company have come to realize in the five years since that post was written is that because of the Internet media is more like technology than it might first appear, and that what Andreessen Horowitz cares about is not the software but the potential scale.

    • Like software, media has zero marginal cost
    • Multiple new business models have emerged for media, such as attracting massive user bases for free which can then be monetized through advertising or premium services
    • The addressable market for media is the connected population of the world, and content is itself self-selecting when it comes to effective targeting

    These are all points that are overlooked by those in the media kvetching about the death of journalism: the very things that are hurting traditional media companies – zero marginal costs, “free” expectations, unlimited competition because of global distribution – are opportunities for new media companies unencumbered by traditional thinking.

    So, for example, as Dixon writes about BuzzFeed:

    Internet native formats like lists, tweets, pins, animated GIFs, etc. are treated as equals to older formats like photos, videos, and long form essays.

    And why shouldn’t they be? The only reason to treat a tweet differently than a pull-quote, or an animated GIF differently than a photo, is if you are worried how they will appear in print. Remove those shackles and you realize there is no difference at all. What Dixon didn’t say, though, is that this sort of liberation also applies to monetization, and that includes native advertisements. I’m quite bullish on native advertising, and I think the ethical concerns are overstated. Specifically:

    • “Native” advertisements are how every medium monetizes free content: newspaper ads are stories and pictures, magazine ads are beautiful imagery, radio ads are jingly voice-overs, TV ads are scripted stories, so on and so forth. Still, it took each of these mediums time to figure it out – they all went through their banner advertisement stage, i.e. ineffectually using an advertising format that worked on the old medium.

      In the case of the Internet, content consumption is primarily about either the timeline – think Facebook, Twitter, or even blogs – or the irresistible atomic unit that spreads on social media. We should expect – and applaud – advertising adapting itself to these formats.

    • Newspapers in particular have been the most conscientious about maintaining a “wall” between the business and editorial sides of the business. Newspapers, though, as I noted above, were de facto monopolies. So while it certainly benefited journalists that they need not worry about how the newspaper made money, there was absolutely a political benefit to trumpeting the objectivity and impartiality of the editorial side. Newspapers could declare themselves to be above reproach even as they made money hand over fist.

      The situation is far different on the Internet. Anyone anywhere has access to everything on the web,2 which means there are no monopolies on either the news or on advertising. Quite the contrary, in fact: the Internet is the closest thing in human history to a true marketplace of ideas, and the currency is user attention. Ultimately, well-functioning markets are a much better police of ethical lapses than self-righteous arbiters.3

      Moreover, the truth is that bias lurks in any author, or in any ownership structure, something that is of particular concern when it comes to the consolidation of traditional media. One can absolutely make the case that an organization like BuzzFeed, with clearly labeled native advertising, is a lot more trustworthy than any reporting that may come out of an organization like NBC (which is owned by Comcast). Oh sure, NBC journalists will object to that statement, but how can we ever truly know?4

    This is what makes BuzzFeed so interesting: absent legacy constraints, media absolutely benefits from Internet economics, as long as you can figure out effective monetization. It’s possible BuzzFeed has done just that – and, just as with their product, they have done so by abandoning what primarily mattered in the old medium.

    This raises a deeper question, then: what is a technology company? I actually don’t buy the idea that BuzzFeed has some sort of magic algorithm that makes what they do possible, and if that’s the basis on which Andreessen Horowitz is investing, then I have a bridge they may be interested in as well. However, the entire premise of this blog is that product is only one part of what matters: so do channel, distribution, advertising, business model, and the addressable market. And that is what makes BuzzFeed a “tech” company: the world is their addressable market, and they make money by scaling for free.


    1. Obviously data centers and the like cost money, but again, those are fixed costs, not marginal ones: each additional user is “free” 

    2. Absent government intervention, of course 

    3. Obviously lots of markets are not well-functioning; I’m not an absolutist here. However, when it comes to what is read online, it is much more of a level playing field than almost anything you can compare it to. That this blog is read at all is testament to that; hopefully, the fact I am monetized by my readers is a competitive advantage 

    4. To be fair, the same criticism applies to Andreessen Horowitz’s involvement in BuzzFeed, and this aspect makes me just as uncomfortable as Comcast owning NBC. Moreover, it certainly is convenient that Marc Andreessen sits on the board of Facebook, BuzzFeed’s most important channel 


  • How Technology is Changing the World (P&G Edition)

    I’ve been surprised at the amount of attention my little corner of Twitter has given to the news P&G, the largest CPG company in the world, is making significant cuts to its brand portfolio (a Marc Andreessen tweetstorm certainly helped). From the Wall Street Journal:

    Procter & Gamble Co. will shed more than half its brands, a drastic attempt by the world’s largest consumer-products company to become more nimble and speed up its growth.

    The move is a major strategy shift for a company that expanded aggressively for years. It reflects concerns among investors and top management that P&G has become too bloated to navigate an increasingly competitive market.

    Chief Executive A.G. Lafley, who came out of retirement last year for a second stint at the company’s helm, said P&G will narrow its focus to 70 to 80 of its biggest brands and shed as many as 100 others whose performance has been lagging. The brands the Cincinnati-based company will keep—like Pampers diapers and Tide detergent—generate 90% of its $83 billion in annual sales and over 95% of its profit.

    The obvious way to interpret this news is to assume, as the WSJ did, that the reason for this move is to “become more nimble” and that P&G has “become too bloated.” This has certainly been the take of most of the folks in my Twitter feed, who have long been regaled by tales of Apple’s focus in particular.

    I think, though, there is something much deeper at play here, and it’s far more of a tech story than a superficial Apple comparison might suggest.


    One of the more interesting – and telling – factoids about the consumer packaged goods (CPG) market is that there are no product managers; rather, there is a very similar position called a “brand manager.” The nomenclature is no accident: while tech products have traditionally differentiated themselves by their product attributes, the distinguishing feature of your typical consumer product is its branding and positioning.

    Take something like health and grooming products: on a product level there are not massive differences between, say, Axe and Dove. But their branding could not be more different. Dove has had massive success with their “Real Beauty” campaign that fights against highly sexualized stereotypes that only serve to make most women feel worse about themselves:

    An ad from Dove's 'Real Women' campaign

    Axe, on the other hand, in an attempt to appeal to young men, heavily emphasizes exactly the sort of stereotypes Dove is objecting to:

    Not exactly a 'real' woman

    Here’s the kicker, though: Axe and Dove are both owned by Unilever, the Anglo-Dutch CPG conglomerate.


    When we in tech talk about identity, we’re usually talking about the ability to manage individuals, whether that be for connecting to corporate networks or for effectively running ad networks. In social science, however, the concept of identity is about a person’s own personal conception of who one is and one’s place in the world.1 It is this definition of identity that is at the root of effective branding. What both Dove and Axe are doing in the above ads is appealing to identity: to use Dove products is to reject society’s expectations and to embrace your identity as a woman; to use Axe is to drown insecurity and affirm your manliness, whatever that means.

    In fact, if you squint, you can see that both Dove and Axe are trying to accomplish basically the same thing, just for two totally different audiences; while the ends may be similar, the means are necessarily different. Moreover, identity is not just about demographics: it is also about psychographics – things like personality, opinions, lifestyles, etc. This means that, by definition, one brand cannot fit all. That is why Unilever sells both Dove and Axe, and it’s the primary reason why P&G has nearly 200 brands of its own: when you can’t differentiate hugely on product2, you find growth by winning niche after ever-more-specialized niche.

    There is one more factor that explains P&G’s brand proliferation: shelf space. The most effective way to beat out competition is to have your product in front of the customer – and to ensure your competitors’ products are nowhere to be found. Buying decisions for low-cost, relatively undifferentiated items are not made through extensive research and online price comparisons; rather, you need body wash, so you go to the body wash aisle and pick from what is available. P&G leveraged its size and ownership of dominant must-stock brands like Tide, Pampers, and Gillette to finagle the maximum amount of shelf space possible, and then filled that shelf space with a cornucopia of specialized brands that not only appealed to specific niches, but also kept competitors away from P&G’s real breadwinners.


    So how, then, have changes in technology forced P&G into a different direction?

    The first change has been the massive increase in noise. It is so much more difficult today for a brand to break through, especially as compared to the halcyon days of one local newspaper and three broadcast channels. Today there are not only TV channels galore, but display advertising, search advertising, Facebook, Twitter, and more. While it is true that uber-specialized brands can now more easily home in on specific niches, that takes real money and is much more difficult to pull off across 200 brands. P&G has likely realized that many of its brands were simply getting drowned out, rendering the money spent marketing them effectively worthless. Thus P&G has decided it needs to “go big or go home” – either spend a lot of money to make sure a brand stands out, or simply get rid of the brand.

    This is a phenomenon that is playing out across multiple industries. For example, it is significantly easier today to get a startup off the ground; however, that actually means startups need more venture capital, not less, because the real challenge is marketing and/or sales (and thus, by extension, venture capital is bifurcating between very large and very small). The same thing is playing out in the app store. Similarly, there are a few big winners when it comes to journalism and attention, with many medium-sized players fighting for survival. In music, stars like Beyoncé are richer and more powerful than ever before, while many smaller acts are struggling to survive. The ease with which information flows means we all get a whole lot more of it, which actually makes it more likely we glom onto whatever it is that stands out, which makes it stand out even more.

    The other big technological change that is affecting P&G’s strategy is e-commerce. As I’ve previously noted in the context of Amazon:

    Jeff Bezos’ critical insight when he founded Amazon was that the Internet allowed a retailer to have both (effectively) infinite selection AND lower prices (because you didn’t need to maintain a limited-in-size-yet-expensive-due-to-location retail space).

    That’s great for Amazon, but not so great for P&G: remember, dominating shelf space was a core part of their strategy, and while I’m no mathematician, I’m pretty sure dominating an infinite resource is a losing proposition. What matters now is dominating search. That is the primary way people arrive at product pages like this:

    Most customers arrive at this page via search, not browsing

    There are two big challenges when it comes to winning search:

    • Because search is initiated by the customer, you want that customer to not just recognize your brand (which is all that is necessary in a physical store), but to recall your brand (and enter it in the search box). This is a much stiffer challenge and makes the amount of time and money you need to spend on a brand that much greater
    • If prospective customers do not search for your brand name but instead search for a generic term like “laundry detergent” then you need to be at the top of the search results. And, the best way to be at the top is to be the best-seller. In other words, having lots of products in the same space can work against you because you are diluting your own sales and thus hurting your search results

    The way to deal with both challenges is the same way you break through the noise: you put more focus on fewer brands.


    There is a lot that tech companies can learn from companies like P&G. Probably the biggest one is that brand matters. It is the key to breaking through the noise and a major part of sustainable differentiation. However, it’s also worth noting that even after this cull P&G is still going to have nearly 100 brands: that’s because identifying and serving specific groups matters as well. P&G is trying to figure out the balance between specialization and reach that makes sense for them as a Fortune 50 company, and right now that balance is leaning towards less specialization and more reach.

    However, I think that means the opposite is the case for smaller players: the Internet may be noisy, but it also makes it possible to identify and reach niches that were previously too hard to segment or reach at a scale great enough to support a business. As I wrote last week, independent app developers ought to pursue a niche strategy, but so should writers, musicians, and even CPG startups.

    More broadly, I strongly believe P&G’s changes are yet another example of how technology is touching – and massively changing – every single industry. To be sure, P&G has been at the forefront of using technology in its business practices, but now technology is changing the very foundation of how they approach business itself. And, in a way, it speaks to how impressive P&G is as a company that they are among the first to significantly alter their business in the face of these changes; they won’t be the last.


    1. It really is fascinating how “identity” as used in tech is the total inverse of “identity” as used in social science. The former effectively reduces people to a row in a database; the latter is about expressing uniqueness 

    2. Although, to be clear, P&G spends a lot of money and effort on R&D