Ello and Consumer-Friendly Business Models

Vox introduced Ello this way:

A brand-new social networking startup — Ello — has gone viral. At one point on Thursday, the site was acquiring 31,000 new users an hour — many of whom flocked there because of a disagreement with Facebook over its policy requiring real names, which some say is unfair to LGBTQ and transgender users.

Ello might be the new Facebook or the new Twitter or the new social media flop. It’s too early to tell.

Actually, no, it’s not too early. Ello will fail, deservedly so. It has a consumer-hostile business model.


Discussion of how a company makes money – its business model – is often completely divorced from discussion about the product at hand, but I think that’s a mistake; business models fundamentally impact product, if not now, then assuredly in the future. To their credit, Ello is quite up-front about the fact that many of their decisions are driven by their business model, specifically their opposition to ads. From their WTF document:

Many other social networks (like Twitter, Facebook, Tumblr, Google+, Instagram, etc. etc.) started out ad-free, then suddenly switched gears. They modified their privacy policies, started selling information about their users to data brokers, and bombarded us with ads. Many users of those networks feel betrayed.

Ello’s entire structure is based around a no-ad and no data-mining policy. Quite frankly, were we to break this commitment, we would lose most of the Ello community. Including ourselves, because we dislike ads more than almost anyone else out there. Which is why we built Ello in the first place.

OK, so how exactly will Ello make money?

Very soon we will begin offering special features to our users. If we create a special feature that you like, you can choose to pay a very small amount of money to add it to your Ello account forever. We believe that everyone is unique and that we all want and need different things from a social network. So, we are going to offer all sorts of ways for users to customize their Ello experience.

I have no idea what these features might be – a mobile app and an API would be good places to start – but the gist is clear: to get the optimal Ello experience you had better pay up, but only once, and you’ll have it forever.

This is a terrible idea.

Here’s how this policy will play out in practice:

  • The initial experience of using Ello will be a poor one because you won’t have access to all of the features
  • A poor initial experience will lead to high rates of abandonment among the few friends you manage to convince to try the service
  • You will complain to Ello and they will have exactly zero incentive to make things better

It’s this final point that is critical for me whenever I evaluate a new product or service: does the product’s business model incentivize the developer to be responsive to my needs as a user?

The way this drives my decision-making in hardware is the easiest to understand:

  • Businesses predicated on selling high margin products are highly incentivized to differentiate their products to attract my purchase, and also highly incentivized to ensure quality to guarantee that I stay loyal
  • Businesses predicated on achieving the lowest prices are highly incentivized to drive down costs, and are much more likely to sacrifice quality

Thus, I almost always buy high margin products, especially for products I use regularly. The incentives are better for me as a customer, according to the criteria that I consider important.

Things are a little more complex when it comes to software, but the same guiding principle is still in place: I like companies that are incentivized to make and keep me happy:

  • My favorite business model is a subscription: I pay every month for a piece of software or a service, which means the software or service provider is always under pressure to earn my money

  • Advertising is actually not far off from a subscription-style service: while in a very narrow view the adage “you’re the product that’s being bought and sold” is certainly true, the reality is that the Googles and Facebooks of the world are arguably even more incentivized to make sure the user experience is great. After all, the value they offer has to be sufficient to overcome the negative effects of advertising (and in some cases, particularly Google search, there are times when advertising is actually additive to the user experience)

  • Up-front payments can go either way:

    • I’m a fan of up-front payments if the developer has plans to release new versions of the software that require me to pay to upgrade. This sort of business is similar to high-margin hardware: not only must this developer offer something very compelling to earn my up-front payment, they must also deliver something of quality to ensure I’m willing to pay for versions two, three, and four
    • On the other hand, if the developer will never charge for upgrades, then I think this business model isn’t consumer friendly at all. A developer of such an app is incentivized to garner as many up-front payments as possible with no regard for existing customers
  • “Unlock”-type schemes are the worst. These can be products where you need to pay for features or assistance to accomplish some given task (free-to-play definitely falls in this category). Developers who use these schemes are incentivized to make the experience of their product frustrating so that I might be willing to pay to avoid the frustration. But, once I pay, there is no incentive to keep me happy

That said, my business model preference is impacted by the type of product that is being offered. For example, while I particularly like subscriptions for productivity-focused products that I use on a regular basis, games are more singular experiences that I take in at a particular moment in time; in that case I like paid downloads that let me experience the game on my own schedule. When it comes to social networks, on the other hand, advertising is clearly the best option: after all, a social network is only as good as the number of friends that are on it, and the best way to get my friends on board is to offer a kick-ass product for free. In other words, the exact opposite of the feature-limited product that Ello is proposing.

Make no mistake: I am very much aware that Facebook is tracking everything I do – and that it’s getting worse. As I wrote on Monday in my Daily Update (members-only), the killer feature of the just-relaunched Atlas is not buying ads outside of Facebook. Rather:

What Facebook is proposing with Atlas is that advertisers can connect the dots between online advertising – on Facebook or off – to actual purchases made by customers no matter where those purchases are made. This means that ads served through Atlas will, in the long run, be much more effective for marketers, even as Facebook improves their targeting which will allow them to command ever higher rates across all of their ad offerings.

Not only is that a marketer’s dream, it’s also profoundly creepy.
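
Mechanically, “connecting the dots” between an ad exposure and a later purchase generally comes down to joining two datasets on a shared identifier – typically a hashed email or similar – supplied by the advertiser and by data partners. Below is a minimal, purely illustrative sketch of that general technique; it is not Atlas’s actual pipeline, and the names and data are hypothetical:

```python
# Illustrative only: generic ad-to-purchase attribution via a shared hashed ID.
# This is NOT Facebook's or Atlas's actual implementation.
import hashlib

def hashed_id(email: str) -> str:
    # Normalize and hash an email so datasets can be joined without raw PII
    return hashlib.sha256(email.strip().lower().encode()).hexdigest()

# Ad-impression log (which campaigns a person saw), keyed by hashed email
impressions = {hashed_id("alice@example.com"): ["campaign_42"]}

# Purchase records (e.g. from a retailer's loyalty program), same key
purchases = [{"id": hashed_id("alice@example.com"), "amount": 59.99}]

# "Connect the dots": attribute each purchase to the campaigns the buyer saw
for purchase in purchases:
    campaigns = impressions.get(purchase["id"], [])
    if campaigns:
        print(f"${purchase['amount']:.2f} purchase attributed to {campaigns}")
```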

Here’s the thing though: the reason Facebook can pull that off is because companies like Datalogix, Epsilon, Acxiom, and Bluekai – all Facebook partners since 2013 – have been tracking what I do and buy for years. Privacy died a long time ago; pretending like Facebook killed it is naive (just ask Richard Stallman). If you truly care about privacy then don’t use the Internet, credit cards, a mobile phone – the list goes on and on.

If, on the other hand, you care about making a successful social network that users will find useful over the long run, then actually build something that is as good as you can possibly make it and incentivize yourself to earn and keep as many users as possible.

As for Ello, well, co-founder Paul Budnitz told Mashable:

“The advertisers are the customer and the user is the product that’s being bought and sold,” he told Mashable. “We don’t see ourselves competing with [Facebook], because what we’re doing feels so different.”

I completely agree; it feels like a political statement, not a product that I – and, more importantly, my friends – would want to use, and I’m pretty certain that Mark Zuckerberg doesn’t see them as competition either.

Podcast: Exponent Episode 019 – Me, Myself, and Minecraft

On the newest episode of Exponent, the podcast I co-host with James Allworth:

In this week’s episode Ben and James reflect about last week’s episode and the personal feedback that resulted, then move on to discuss why Minecraft is a big deal as well as a bit about Apple’s new focus on privacy.

Links

  • Ben Thompson: Why Now For Apple Watch – Stratechery
  • Ben Thompson: Microsoft’s Good (and Potentially Great) Minecraft Acquisition – Stratechery
  • Ben Thompson: American Girl, Minecraft, and the Next Generation of Builders – Stratechery (members-only)
  • Ben Thompson: Apple Takes Aim at Google – Stratechery (members-only)

Listen to the episode here

Podcast Information: Feed | iTunes | SoundCloud | Twitter | Feedback

Why Now for Apple Watch

The impression I get is that many people don’t really understand why I changed my mind about the Apple Watch.

  • In Apple Watch: Asking Why and Saying No I criticized the lack of an explicit “why” in the Watch presentation and questioned parts of the demo, particularly those which replicated phone functionality. After all, if you are going to have your phone with you anyways, why not develop the watch accordingly? I doubled down on this position in How Tim Cook Might Have Introduced the Apple Watch

  • Then, a week ago, I wrote What I Got Wrong About Apple Watch that laid out a much more ambitious vision for the watch that in my mind explained many of the problems I originally had. In retrospect, though, I perhaps spent too much time explaining the context for changing my mind, and not enough explaining exactly what I think Apple is up to

This post seeks to rectify that. Here, point-by-point, is why I believe Apple is launching the Watch in 2015.

  • The Watch will eventually be Digital Hub 3.0 – This is perhaps the most controversial assertion I will make, and if you disagree with me here, then the rest of my argument doesn’t really matter. I believe that in the long run – i.e. not this version of the Apple Watch, but the one several iterations down the line – the Watch will have cellular capability and the ability to interface with any number of objects, including accessories that have larger screens and/or superior input methods,1 and will be the center of your computing existence. From Apple’s perspective, that means the Watch category is the very long-term replacement for the iPhone, at least for some segment of the population. Again, I’m not talking about 2015 or probably anytime in the next five years, but rather the very long term

  • The Watch’s competition is the iPhone – This may seem a bit strange at first glance – isn’t the Apple Watch competing against Android Wear devices? – but the truth is that the number of people who will start with the premise they want a smart watch and then decide which one to buy is minuscule. Rather, the Apple Watch is competing with non-consumption: people who don’t wear watches because their smartphone is “good-enough” at telling time. For the Apple Watch to achieve the level of success that would justify it as a tentpole product for Apple, it must appeal to a far wider audience than those who are already interested in smart watches; to put it another way, the Watch must be clearly superior to the iPhone in your pocket in enough ways to justify not only the additional expense of buying it but also the hassle of wearing it and charging it nightly. This means a vibrant app ecosystem that unlocks a wide array of functionality that no one company could ever come up with on its own

  • There is no iPod market anymore – I previously argued that the Watch should be more like the iPod: explicitly dependent on the iPhone for complex functionality, with only simple essentials on the device itself. The iPod, though, arose in a world where those simple essentials were completely unique and clearly useful; you obviously weren’t going to carry a computer with you everywhere to listen to all of your music (an activity that appeals to almost everyone). In contrast (and per my previous point), everyone already carries a phone with them. A pure notifications device and health tracker would only ever be a niche device.

In sum, while I believe there is a long-term market for an even more personal computer on our wrist, I don’t believe that market will grow out of an accessory the way the iPhone grew out of the iPod. Rather, the device that makes this market must be fully formed: it must have as many of the ingredients of Digital Hub 3.0 as possible.

The question, then, is why 2015? After all, there are some key ingredients missing in the Watch, the most obvious being the lack of cellular capability. To my mind Apple had three alternatives:

  1. Release an accessory-like Watch today, then transform it into a standalone device once it had its own cellular stack
  2. Wait until the technology was ready and release a fully functional Watch in two or three years’ time
  3. Release a Watch in 2015 that is designed as if it is a fully functional device, even though for the next few years it needs an iPhone for full functionality

Each of these alternatives has clear tradeoffs:

  • Alternative #1: Release an accessory-like Watch – This approach has the advantage of “making sense” – since it needs an iPhone anyway, it would assume the iPhone’s presence in its design decisions, off-loading things like picture viewing and searching for movie times to the phone, and focusing on Watch-specific activities like maps, health tracking, etc.

    There are two big problems, though:

    • As I noted above, I don’t think the market for this device would be very large
    • Everything about the software – including all 3rd-party applications – would need to be completely re-thought and re-built once the constraint that the phone be present was removed. In fact, what would more likely happen is that the Watch would never fully develop into Digital Hub 3.0 because it would always in some way presume the presence of a phone. This would leave Apple open to disruption from another watch that had no such constraints (see, for example, the compromises Microsoft made with Windows 8 because they needed it to run on traditional PCs)

    Ultimately, this alternative is appealing from a perceived simplicity and elegance angle, but it would be the most detrimental to the long-term potential of the Watch by including a temporary constraint in the fundamental design of the product. I believe I was wrong to so strongly call for this approach originally

  • Alternative #2: Release the Watch when cellular technology is ready – This approach avoids the dangers of designing in temporary constraints that limit the long-term potential of the device, and it ensures that the intended role and capabilities of the Watch are clear from the get-go.

    However, there are again two significant tradeoffs:

    • While Apple is better than most at iterating and fine-tuning a product internally, there are a whole host of things that can only be improved by having a device – and a user interface, especially – out in the open. The iPhone is a perfect example of this: the first several versions of iPhone OS were very limited from an interface perspective; it was only around the iPhone 4 that the user interface was fully realized and perfected. Were Apple to wait to launch the Watch, that time-consuming work would only begin in 2017 or 2018 or whenever the Watch was ready
    • Relatedly, an app ecosystem takes time to build. Sure, there were a decent number of apps when the App Store opened in 2008, but few if any of those apps are still used today. It took a few years for developers to iterate and figure out just how apps ought to work. Again, though, were Apple to wait to launch the Watch that work of building and iterating the ecosystem would also have to wait

    I can very much appreciate the argument for this alternative, but the reality is that a fully realized Watch is not just about being complete from a technical perspective, but also being complete from a UI and app ecosystem perspective. This approach would push out the year when everything is in place to 2019 or 2020 at best

  • Alternative #3: Release a Watch that is fully functional but for cellular connectivity – This approach – the one that Apple chose – allows the hard work of UI iteration and app ecosystem development to begin in 2015. Moreover, that iteration and development will happen with the clear assumption that the Watch is a standalone device, not an accessory. Then, whenever the Watch truly is standalone, it will be a complete package: cellular connectivity, polished UI, and developed app ecosystem. It will be two years closer to Digital Hub 3.0 than Alternative #1 or #2.

    The tradeoff is significant confusion in the short-term: the Watch that will be released next year is not a standalone device. It needs the iPhone for connectivity. To be clear, this is no small matter: the disconnect certainly tripped me up for a week, and if the feedback I’ve gotten is any indication, it continues to befuddle a lot of very smart people. How on earth are normal folks who don’t follow this sort of stuff for a living going to grok the idea of a standalone Watch that actually needs an iPhone?

So why did Apple choose Alternative #3? Confusing people seems so very un-Apple-like.

In fact, I think that this tradeoff is actually a lot less serious than we who approach products from a technological perspective appreciate. Put aside the technology for a second and look at how you actually live your life: how often do you go anywhere without your smartphone? I would bet almost never. Crucially, “normal” people are the exact same: no one goes anywhere without their smartphone (remember, that’s the entire reason an accessory-like device probably wouldn’t have a big market).

What I think Apple realized was that they could, in jujitsu-like fashion, use this reality to their advantage: it’s OK – not ideal, but OK – for the Watch to use the iPhone for connectivity because the iPhone is always present anyways. Apple is not asking anyone to change their behavior in order to get the full functionality of a Watch – it is entirely additive to your day-to-day experience. To put it another way, a standalone Watch that actually needs an iPhone is incongruent only from a technical perspective; from a real-life perspective it is a non-issue.

On the flip-side, in return for making technically-oriented thinkers uncomfortable, Apple gets to reap the UI and ecosystem benefits of launching today, so that when, in a few years, the cellular technology is ready, the Watch will be a fully developed product complete with a polished UI and developed app ecosystem that taken as a whole is far ahead of anything else on the market.

Then, over many years, I believe we will use and carry our smartphones less and less even as they become bigger and more capable (in this regard, the iPhone Plus may have the additional moniker, but I believe it’s the true future iPhone) because we will have an even more portable and personal device with us all of the time. And, in true Apple fashion, they will be OK with that, because we will be replacing their central product with another one from Apple that is potentially even more lucrative.


As an addendum, I am very aware that there are many points in this analysis where I may be wrong:

  • The smartphone may be the perfect device, never to be supplanted by the Watch (just as, for example, many believe that iPads will never fully supplant laptops). Still, even if this is the case, I think Apple would consider iPad-level sales a success
  • The Watch may not be technically capable of being a fully-featured device. However, I highly doubt this is true; given how far ahead of the competition the A8 is, I see no reason to doubt the capabilities of the S1
  • The confusion about a standalone Watch that is technically not standalone may be too much to overcome from a marketing perspective. I definitely think this is why the presentation was so muddled: Apple wanted to convey that this was a standalone device that would one day be the only device we need all of the time, but they couldn’t actually say that

In the end, this all comes back to my first point: I believe the future of computing will always track towards more personal and more portable, and the Watch is really the perfect device. As recounted in Bloomberg Businessweek:

Ive, 47, immersed himself in horological history. Clocks first popped up on top of towers in the center of towns and over time were gradually miniaturized, appearing on belt buckles, as neck pendants, and inside trouser pockets. They eventually migrated to the wrist, first as a way for ship captains to tell time while keeping their hands firmly locked on the wheel. “What was interesting is that it took centuries to find the wrist and then it didn’t go anywhere else,” Ive says. “I would argue the wrist is the right place for the technology.”

Moreover, if this is true, this is the perfect place for Apple; in retrospect, the iPod, an accessory that was always very price-competitive, was an aberration. Apple makes ever more personal general purpose computers at a handsome premium that is justified by their superior user experience. Thinking the watch would not be in that vein was the mistake I have since rectified.


  1. I described this vision in Digital Hub 2.0 

Don’t Blame Uber

At the risk of painting too broad a stroke, it seems to me that much of the opposition to changes wrought by the Internet undervalues the positive impact said changes have on normal people. For example, people despair over newspapers closing without appreciating the explosion in quality content freely available to anyone anywhere in the world, the net result being that those who choose to be can be far more informed about far more things than just a few years ago. Others gripe about Facebook’s frivolity or its and Google’s collection of data without acknowledging that both have fundamentally changed how we relate to both those we know and anything we wish to know. Probably the most charged group of companies, though, are those which most closely touch the real world: the “sharing” companies. And, of those, none is more controversial than Uber.

The benefit of Uber for consumers is really quite remarkable. Everything about an Uber experience is superior to the taxis it is obsoleting: it is easier to get an Uber, it is more pleasant to ride in it, it is easier to pay. In places with heavy coverage it is possible to not use a personal car for days at a time or to completely go without, with all of the financial and environmental advantages such a decision entails. And so, my position, at least to start, is to presume that the existence of such a service is a good thing.

Critiques of Uber, particularly from the left, rather stridently disagree; consider this piece by Avi Asher-Schapiro from Jacobin:

Uber is part of a new wave of corporations that make up what’s called the “sharing economy.” The premise is seductive in its simplicity: people have skills, and customers want services. Silicon Valley plays matchmaker, churning out apps that pair workers with work. Now, anyone can rent out an apartment with AirBnB, become a cabbie through Uber, or clean houses using Homejoy.

But under the guise of innovation and progress, companies are stripping away worker protections, pushing down wages, and flouting government regulations. At its core, the sharing economy is a scheme to shift risk from companies to workers, discourage labor organizing, and ensure that capitalists can reap huge profits with low fixed costs.

There’s nothing innovative or new about this business model. Uber is just capitalism, in its most naked form.

First off, as I noted at the beginning, I’m put off by the lack of acknowledgment of the very real benefit Uber is providing to people who use their service; while I quoted only the conclusion, actual consumers were not mentioned once in the article. The reason this matters for Uber in particular is that if Uber were to actually hire all of its drivers, as I presume Asher-Schapiro would prefer (and something Kevin Roose warned the IRS might make happen), the impact on consumers would be significant:

  • Because Uber’s cost per driver would increase significantly, the geographic reach of Uber would be dramatically curtailed
  • Because Uber would not have the flexibility of drawing more drivers onto the roads through surge pricing, availability during peak demand would likely suffer
  • Were Uber to hire drivers as Uber employees, they could also restrict said employees from driving for any other car service; this would actually increase the advantages Uber gains from being reportedly 12 times bigger than its nearest competitor, Lyft, which would ultimately reduce competition and result in higher prices

Moreover, what exactly would drivers gain from being employed by Uber? Clarity on insurance and liability is a big one, and I absolutely think that Uber should be more proactive here, particularly since their scale should give them an opportunity to demand better rates. The bigger gain though – and the biggest reason for Uber not to straight-up hire their drivers – are benefits, particularly health insurance. As Roose notes in his piece:

For start-ups trying to make it in a competitive tech industry, the benefit of opting for 1099 contractors over W-2 wage-earners is obvious. Doing so lowers your costs dramatically, since you only have to pay contract workers for the time they spend providing services, and not for their lunch breaks, commutes, and vacation time. Contract workers aren’t eligible for health benefits, unemployment insurance, worker’s compensation, or retirement plans.

This is the biggest hangup for me in the Uber spin that they are, as the Uber blog put it, enabling entrepreneurship:

Drivers around the world are seizing Uber’s economic opportunity by building small businesses for community needs long forgotten by the taxi industry: high quality, safe, reliable and affordable transportation options. At its current rate, the Uber platform is generating 20,000 new driver jobs every month. UberX driver partners are small business entrepreneurs demonstrating across the country that being a driver is sustainable and profitable…Our powerful technology platform delivers turnkey entrepreneurship to drivers across the country and around the world.

Entrepreneurship is nice and all – I’m obviously a fan – but the truth is it is a risky proposition in the United States. Estimates for the cost of health care for a family of four range from $16,000 to $22,000, and medical bankruptcy accounts for the majority of personal bankruptcies. If these numbers are shocking to you, it’s likely because your employer is paying the lion’s share of your costs; the cost of such payments is a significant factor in income stagnation, particularly in the lower to middle classes. The real-world implication of having employers provide health care is even more pernicious though: it dramatically increases the stakes when it comes to entrepreneurship, Uber-style or more traditional.1

This is the chief reason why I am so frustrated by the left-wing attacks on Uber. Beyond the lack of regard for consumers, the truth is the venom is misplaced: it’s not that Uber is bad for not hiring workers and giving them attendant benefits, it’s that said benefits shouldn’t be Uber’s – or any employer’s – responsibility at all. It’s employer-based health care that is the problem, and in ways that go beyond the economic benefits of universal health care (the most obvious of which is the broadest possible risk pool, not to mention unmatched buying power). It’s that people are afraid to leave or lose their jobs because they lack the most basic of safety nets.

This is quite personal for me; one of the chief reasons I took the risk of launching Stratechery is that my family lives in Taiwan, home of one of the best health care systems in the world. As someone who is self-employed I pay my fair share, but in return I could take a chance on this site knowing that while I might fail (I haven’t), I at least would not endanger or bankrupt my family in the process. I wish this opportunity on everyone.

And, as for those Uber drivers, the truth is they are Uber’s weak point (members-only); they recently forced Uber to change its policies in New York – how much more might they accomplish if they had the sort of safety net afforded to citizens of every other developed country? Imagine that: the freedom to leave a job you didn’t like because you knew that at least your family would have its most basic needs met. That’s what alleged advocates like Asher-Schapiro should be focused on.

As an addendum, I find it frankly bizarre that I write this article concerned about being characterized as both a right-wing fanatic (It’s not Uber’s problem!) as well as a left-wing socialist (Go universal health care!). And, to be honest, I’d be lying if I said I weren’t concerned about making some of my paying customers unhappy. Politics are fraught like that. But it seems like this issue in particular is one in which alleged right-wingers like Uber CEO Travis Kalanick and left-wingers like Asher-Schapiro could find common ground.

Moreover, I believe that Silicon Valley broadly should make the adoption of a social safety net one of their top political priorities. I do believe that services like Uber and technologies like automation will ultimately progress humanity, but as someone who grew up in the Midwest I know personally the wrenching cost this progress can exact. The political and regulatory challenges that Uber is facing are only the beginning for Silicon Valley, and if we as an industry are not proactive in ensuring that everyone benefits from progress then we will have only ourselves to blame for the inevitable backlash.


  1. Obamacare has, in my opinion, improved the situation, but seeing as how it still relies on employer-based health care it’s not nearly as friendly to entrepreneurship as true universal health care would be 

Podcast: Exponent Episode 018 – Agree to Disagree

On the newest episode of Exponent, the podcast I co-host with James Allworth:

In this week’s episode Ben explains why he has changed his mind about Apple Watch. James is not convinced. We go on for a while.

Links

  • Ben Thompson: What I Got Wrong About Apple Watch – Stratechery
  • John Gruber: Apple Watch: Initial Thoughts and Observations – Daring Fireball

Listen to the episode here

Podcast Information: Feed | iTunes | SoundCloud | Twitter | Feedback

What I Got Wrong About Apple Watch

While I stand by last week’s opinion that the Watch presentation was poor, I’ve somehow, at least in my little corner of the Internet, become the face of people who don’t believe in Apple Watch at all. The biggest problem with that view is that I’m actually a big believer in the category, having written favorably about watches and the potential for Apple specifically here, here, and here; I even tried to buy a Pebble!1 I’m tired of how the phone pulls me away from my family, and time and notifications seemed like more than enough justification for this watch wearer. I presumed the Apple Watch would be similar, but significantly better executed with superior industrial design, plus a few additional killer features that made you just have to have one. In fact, that’s exactly how I suggested that Tim Cook should have introduced the Watch.

I must admit, though, even as I posted that article and recorded an episode of Exponent that was probably more critical of the Watch itself than I intended,2 there was a part of me that wondered if I were being Tony Fadell to Tim-Cook-and-company’s Scott Forstall. From a 2011 BusinessWeek profile of the then Senior Vice-President of iOS:

Around 2005, Jobs faced a crucial decision. Should he give the task of developing the [iPhone’s] software to the team that built the iPod, which wanted to build a Linux-based system? Or should he entrust the project to the engineers who had revitalized the software foundation of the Macintosh? In other words, should he shrink the Mac, which would be an epic feat of engineering, or enlarge the iPod? Jobs preferred the former option, since he would then have a mobile operating system he could customize for the many gizmos then on Apple’s drawing board. Rather than pick an approach right away, however, Jobs pitted [Forstall and Fadell] against each other in a bake-off.

Forstall, who was head of the OS X project, obviously won, leading to the creation of a device that Blackberry executives didn’t think was possible. As a former Blackberry employee recounted:

RIM had a complete internal panic when Apple unveiled the iPhone in 2007, a former employee revealed this weekend. The BlackBerry maker is now known to have held multiple all-hands meetings on January 10 that year, a day after the iPhone was on stage, and to have made outlandish claims about its features. Apple was effectively accused of lying as it was supposedly impossible that a device could have such a large touchscreen but still get a usable lifespan away from a power outlet.

The iPhone “couldn’t do what [Apple was] demonstrating without an insanely power hungry processor, it must have terrible battery life,” Shacknews poster Kentor heard from his former colleagues of the time. “Imagine their surprise [at RIM] when they disassembled an iPhone for the first time and found that the phone was battery with a tiny logic board strapped to it.”

For my part, I’ve certainly been operating under the assumption that the wrist is not yet ready for full blown computing, which is why I thought the “iPod” version of a Watch needed to come first. From a piece I wrote in March:

Imagine a device that initially launches with limited functionality and is dependent on an iPhone (similar to the iPod, or the first iPhone). Perhaps it monitors fitness and health, and slowly, year-by-year, adds additional functionality. More importantly, assume that Moore’s Law continues, batteries make a leap forward, flexible displays improve, etc. Suddenly, instead of a phone that uses surrounding screens, like the iPhone does in the car and the living room, why might not our wrist project to a dumb screen (with a phone form-factor) in our pocket as well? Imagine all of our computing life, on our wrist, ready to project a context-appropriate UI to whichever screen is at hand. Moreover, by being with us, it’s a perfect wallet as well.

To be clear, this is certainly years off…

What, though, if it’s not? What if it is, once again, a “battery with a tiny logic board strapped to it”?

The S1 computer-on-a-chip at the heart of Apple Watch

And what if that logic board – which Apple calls the S1 – is even more ahead of the industry than last year’s couldn’t-possibly-have-existed 64-bit A7? What if Apple skipped the iPod-stage of wearables and went straight to the iPhone stage?

John Gruber captured this possibility in Apple Watch: Initial Thoughts and Observations:

Apple Watch’s third-party integration is clearly deeper than just showing notifications from apps on your iPhone. And though it depends upon a tethered connection with your phone for Internet access, it’s far more functional while out of range of your phone than any smartwatch I’ve seen to date. It’s a full iOS computer. If it actually doesn’t do much more, or allow much more, than what they demonstrated on stage last week, I am indeed going to be deeply disappointed, and I’ll be concerned about the entire direction of the company as a whole. But I get the impression that they’ve only shown us the tip of the functional iceberg, simply because they wanted to reveal the hardware — particularly the digital crown — on their own terms. The software they can keep secret longer, because it doesn’t enter the hands of the Asian supply chain.3

I still believe that Tim Cook missed an important opportunity to explain why the Watch existed, but, after an avalanche of tweets, emails, Gruber’s exceptionally insightful piece, and most of all, Apple’s incredible track record, I’m slowly coming around to the position that maybe, just maybe, I ought not be bullish on the Watch simply because I’m bullish on the category, but rather because it’s actually the exact product necessary to make the category succeed.

One tweet I found particularly persuasive was this one:


This makes the Pebble sound a lot like a smartphone circa 2006. The thing is, though, the iPhone was never targeted at 2006-era smartphone users: it was targeted at everyone, and that meant it had to destroy our expectations of what a smartphone was in order to build a new one that happened to look exactly like an iPhone. Similarly, to be the sort of tentpole product Cook promised the Watch would be it must target more than current watch wearers: it must be a product so good that non watch-wearers will put something on their wrists, put up with nightly charging, spend hundreds or thousands of dollars every few years, and all the other sorts of behavior that no one thought any rational phone buyer would tolerate just eight years ago. In other words, it must swing for the fences, just like Apple seems to have done.

Interestingly, I suspect this reading of the Apple Watch’s capabilities suggests that from Apple’s perspective the true new iPhone is the Plus. Numerous reviews have noted that the Plus is really more of a truly portable computer than it is a phone, the only tradeoff being its reduced portability. It is, in other words, the evolutionary iPad, but with guaranteed cellular connectivity and pocketable in a pinch. That leaves room for a device where portability is paramount, and computing only needs to be good enough given those constraints. It leaves room for an Apple Watch.

One final note: if I am (now) correct, and Apple has created something that most observers – including myself – didn’t think was possible in 2015, well, then this really is a Tim Cook breakthrough. The idea of a watch as a full-blown computer is not novel, but to create the future five years early in three different editions with all kinds of unique bands – and a buying experience to match – is something only Apple and their once-in-a-lifetime operational genius of a CEO could do, if indeed that is what they have done.


  1. Unfortunately I was defeated by their refusal to accept a U.S. credit card for a non-U.S. shipment (a nice example of the tradeoff between security and user experience, I might note)  

  2. I am a passionate person, and that sometimes gets me in trouble on podcasts in particular 

  3. The Wall Street Journal had a piece today about how exactly those leaks happen 

Microsoft’s Good (and Potentially Great) Minecraft Acquisition

It’s difficult to overstate what a big deal Minecraft is. It’s the third best-selling game of all time behind Tetris and Wii Sports, and unlike the latter especially, it is a remarkably sticky experience: the vast majority of customers (over 90 percent on PC, according to Microsoft) sign in every single month. Were Microsoft to change nothing, they claim their $2.5 billion purchase of Minecraft maker Mojang would pay for itself in less than five years.1
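
A quick back-of-the-envelope calculation makes the scale of that claim concrete. This is simply the arithmetic implied by the numbers above – a hypothetical illustration, not a statement about Mojang’s actual financials (and, per footnote 1, Microsoft’s own framing is about GAAP amortization rather than cash payback):

```python
# Illustrative arithmetic only: what "pays for itself in less than five years"
# implies about the required annual profit contribution from Minecraft.
purchase_price = 2.5e9   # reported purchase price of Mojang, in dollars
payback_years = 5        # "less than five years"

required_annual_contribution = purchase_price / payback_years
print(f"Implied contribution: ${required_annual_contribution / 1e9:.1f} billion per year")
# -> Implied contribution: $0.5 billion per year
```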

Minecraft, though, has the potential to make a lot more money; currently, Mojang only makes money off of players once: when they buy the game. All of those additional hours of play are essentially free. Contrast this to a game like the legendary World of Warcraft, which has made somewhere between $10 and $20 billion over its lifetime through a combination of up-front purchases and subscription fees2 and you realize that Minecraft founder Notch may be gaining his sanity at the cost of a lot of potential earnings.3

The Minecraft acquisition, though, isn’t just a great financial decision; it’s a good strategic one as well that fits very nicely with Microsoft’s new vision as outlined by Satya Nadella:

At our core, Microsoft is the productivity and platform company for the mobile-first and cloud-first world. We will reinvent productivity to empower every person and every organization on the planet to do more and achieve more.

At first glance, this statement seems to reference products like Office and Azure, but it also works very well for Minecraft. Minecraft is more than just a game: it’s a community with a huge cloud component and developers, and, at its very essence, it’s about making things. What could be more productive than that?4

Moreover, like Office and Azure, Minecraft is truly cross-platform. It’s the best-selling paid-download game on both iOS and Android, and it also has a very popular PS3 version and a newly-released PS4 version, with a PS Vita port on the way. Xbox head Phil Spencer took care to note that this would remain the case:

Minecraft adds diversity to our game portfolio and helps us reach new gamers across multiple platforms. Gaming is the top activity across devices and we see great potential to continue to grow the Minecraft community and nurture the franchise. That is why we plan to continue to make Minecraft available across platforms – including iOS, Android and PlayStation, in addition to Xbox and PC.

Here’s the thing, though: how much better would this acquisition look if Microsoft didn’t own Xbox at all?

  • Microsoft would not need to reassure skittish gamers that the game would remain cross-platform (To be clear, making Minecraft an exclusive would be financially stupid. Sure, Microsoft made the first Xbox a success by buying Bungie and making Halo an exclusive, but that was for a tenth of the cost)
  • Microsoft would have a lot more latitude to capture more value from Minecraft, increasing the value of this purchase. Certainly any effort to make gamers pay more will be resisted, but when said efforts can be couched in “Microsoft is trying to help the Xbox” language it makes it that much more difficult to win the inevitable PR battle
  • Most importantly, Microsoft’s incentives would be much more aligned with the Minecraft community’s: their goal would be the success of Minecraft, full stop, without the complication of needing their own platform to succeed

As long-time readers of this blog know, I’m a big believer in the power of incentives, and in the case of Microsoft, it’s the foundational reason why I believe the company would be better off split in two. I wrote in It’s Time to Split Up Microsoft:

In 2000, Windows, Office, and Server were a virtuous cycle. Today, Windows and the entire devices business is nothing but a tax. Microsoft is a company that is meant to serve the entire market, and the way to do that is through services on every device. It’s all fine and well to say that you will treat devices equally, but given Microsoft’s history – and the power of culture – I just don’t believe it’s possible.

I would create two companies: the devices side, which includes Windows, Windows Phone, and Xbox, and let them do the best they can to grow that 14%. Heck, make Kevin Turner the CEO. Windows profits will keep the company going for quite a while, and who knows, maybe they’ll nail what is next.

The other company, the interesting company, is the services side – the productivity side, to use Nadella’s descriptor. This company would be built around Office, Azure, and Microsoft’s consumer web services including Bing, Skype and OneDrive. These products don’t need Windows; they need permission to be the best regardless of device.

Every word here applies to Minecraft, a truly remarkable phenomenon that is not only about gamers but very much about the next generation of builders – including developers. I think it has the potential to continue to grow and, along the way, not only make Microsoft a whole bunch of money, but also enable an entire ecosystem. It really could be the Office of gaming. The danger is that, like Office did for too many years, it withers unnecessarily because Microsoft has Windows consoles to sell.5


  1. According to their press release, “Microsoft expects the acquisition to be break-even in FY15 on a GAAP basis”; “on a GAAP basis” refers to the annual amortization cost. Microsoft won’t make back the entire $2.5 billion in FY15

  2. For reference, all developers combined have made just over $20 billion on the App Store 

  3. I don’t blame Notch though; I really appreciated his resignation letter and am happy for him 

  4. It’s also a community that needs Microsoft’s help: while Mojang offers Minecraft server software, another popular option is ensconced in a licensing battle that is probably best addressed by Minecraft itself building a superior option. Microsoft can do that

  5. The same thing applies to Microsoft Studios broadly; what a waste of resources to make Halo: Spartan Assault for touch devices only to limit it to Windows 8 and Windows Phone. Imagine how much revenue Microsoft has foregone by not developing for iPad and Android, and that’s before we even get to the potential of Halo proper and the other Microsoft Studios titles on Playstation. There’s a lot of latent revenue potential here, although Minecraft would be the crown jewel 

How Tim Cook Might Have Introduced Apple Watch, and Exponent Episode 017: Let’s End it There

In 2010, John Gruber wrote an article for Macworld called This is How Apple Rolls:

They take something small, simple, and painstakingly well considered. They ruthlessly cut features to derive the absolute minimum core product they can start with. They polish those features to a shiny intensity. At an anticipated media event, Apple reveals this core product as its Next Big Thing, and explains—no, wait, it simply shows—how painstakingly thoughtful and well designed this core product is. The company releases the product for sale.

Then everyone goes back to Cupertino and rolls. As in, they start with a few tightly packed snowballs and then roll them in more snow to pick up mass until they’ve got a snowman. That’s how Apple builds its platforms. It’s a slow and steady process of continuous iterative improvement—so slow, in fact, that the process is easy to overlook if you’re observing it in real time. Only in hindsight is it obvious just how remarkable Apple’s platform development process is.

I really like the idea of a Wearable generally, and I think the Apple Watch looks fantastic. But when it comes to software I’m concerned that Apple got away from this powerful process. That was at the root of my concern in Apple Watch: Asking Why and Saying No. As a follow-up to that article, I wrote in my subscriber-only Daily Update how I thought Tim Cook should have introduced the Watch:

If you’ll forgive my presumptiveness, this is what I would have liked to hear:

There is one more thing.

We just showed you the best phones ever. They are bigger in every way, allowing you to do more than ever before. But sometimes you want to do less.

For example, suppose you are walking to a place you haven’t been before; you don’t want to look at a phone screen, you simply want to know where to go.

Or maybe for you the iPhone is your primary computer, so you buy the new iPhone 6 Plus, and you keep it in your bag. How, then, do you ensure you don’t miss that important call, or quickly respond to a text? Perhaps you are at the park with your children, or out to eat with your partner. You want to stay in that moment, with those you care about, yet still be reachable.

I just showed you Apple Pay with an iPhone, but even then you still need to get something out of your pocket or purse. What if there were something even more convenient and natural?

For me, fitness is really important, but an iPhone, even with our new M8 chip, is at best a blunt instrument for tracking your fitness and health. Wouldn’t it be better to have something that was always on you, even while exercising?

For our customers, the iPhone is their life: where they work, play, and everything in between. But all of us have just a few people that mean so much more, to whom we are as close as can be even if we are miles apart. What if we could connect with those most important to us in a much more personal way?

We love the iPhone; it’s the best phone on the planet, and it lets you do almost anything. But, for just a few key things, we think there is a better way. A better product, one that is the next chapter in the Apple Story.

Cue video

And then, a demo of these five use cases, and nothing else, with a clear emphasis that the Watch makes the iPhone better by doing just a couple of things really well, and looks absolutely fantastic to boot. No searching for movies, no SDK, just a simple and compelling reason to exist, with the patience to know that all of the other good ideas – and apps – will come to the platform in due course.

(To read the rest of this piece and to receive the Daily Update every morning, you can sign up here)

On the latest episode of Exponent, the podcast I co-host with James Allworth, we go deep on this same point: why is Apple trying to do so much with Watch, and obscuring the parts that are truly remarkable?

Plus, luxury in Asia and console follow-up. You can listen to the episode here.

Podcast Information: Feed | iTunes | SoundCloud | Twitter | Feedback

Apple Watch: Asking Why and Saying No

Dan Frommer wrote in Quartz about The Hidden Structure of the Apple Keynote. His analysis covered 27 events since 2007, and included things like average length, laughs per executive, and the timing of iPhone reveals.

It’s a good read, but in light of the Watch introduction, I am more interested in comparing yesterday’s keynote to only three others: the introductions of the iPod, iPhone, and iPad. Specifically, I’m interested in the exact moment when Apple revealed each device:

  • The iPod was introduced on October 23, 2001; after discussing iLife and Apple’s digital hub strategy, the iPod section begins at 11:30. However, the iPod itself does not actually appear on a slide until 20:48, and Jobs pulls it out of his pocket at 21:07, nearly 10 minutes after he begins his introduction. The intervening 10 minutes were spent explaining the music market, why Apple thought they could succeed in that market, and what was special about the iPod

  • The iPhone was introduced on January 9, 2007. However, the iPhone itself does not actually appear on a slide until 7:03, and only then to introduce multitouch. The rest of the device wasn’t seen until 12:20. Jobs spent all of that time explaining the smartphone market, why Apple thought they could succeed in that market, and what was special about the iPhone

  • The iPad was introduced on January 27, 2010. After a few updates, the iPad section begins at 5:15. However, the iPad itself does not actually appear on a slide until 8:55. Jobs spent the intervening time explaining that Apple saw a market between the iPhone and the Mac, but that any device that played there needed to be better than either device at a few specific use cases

  • The Apple Watch introduction was quite a bit different:

The Apple Watch section began with the iconic “One more thing…” at 55:44,1 and this was the extent of Tim Cook’s words before we got our first glimpse of the Apple Watch:

We love to make great products that really enrich people’s lives. We love to integrate hardware, software, and services seamlessly. We love to make technology more personal and allow our users to do things that they could have never imagined. We’ve been working incredibly hard for a long time on an entirely new product. And we believe this product will redefine what people expect from its category. I am so excited and I am so proud to share it with you this morning. It is the next chapter in Apple’s story. And here it is.

Then came the introductory video, and we never got an explanation of why the Apple Watch existed, or what need it is supposed to fill. What is the market? Why does Apple believe it can succeed there? What makes the Apple Watch unique?2

Now it’s very fair to note that the biggest difference between the introduction of the iPod, iPhone and iPad as compared to the Apple Watch is that Steve Jobs is no longer with us. Perhaps the long introduction was simply his personal style. But the problem is that a smart watch needs that explanation: what exactly is the point?

To be clear, the hardware looks amazing, and I love the Digital Crown. It’s one of those innovations that seems so blindingly obvious in retrospect, and Cook was spot on when he noted that you can’t just shrink a smartphone UI to the wrist. But that was exactly the problem with too many of the software demos: there were multiple examples of activities that simply make no sense on the wrist. For example:

  • There were sixty-four applications on the demo watch, and the tap targets were quite small3
  • I can definitely see some compelling Siri use cases for the Watch, but scrolling through movies is not one of them. If you’re looking for a movie you’re almost certainly in a state of movement and mind that makes it possible to pull out your phone and use a screen much more suited to the task
  • “We also looked at how you can carry your photos with you.” Here’s an idea: on your phone!

The Maps demo was the most frustrating: it included panning around, searching for a Whole Foods – including the phone number! – all activities that by definition mean you are stationary and can use your phone. But that’s when the demo got really good:

  • While you’re actually traveling, the watch will not only show directions, but will actually use the “Taptic Engine” to indicate turns by feel. That is awesome, and an amazing use case for the watch. Who hasn’t been dashing somewhere, running into things while looking at their phone? A watch is far more suited, particularly one that doesn’t even require you to look at the screen
  • I also like that you can use the Watch to control your iPhone or any other AirPlay device. This would be incredibly useful around the house, at a party, etc.
  • The Taptic Engine makes sure only you know about a notification that you have previously agreed to receive. There are smart options for replying, as well as Siri and emoticons, but you can always use “Handoff” to compose a more extensive reply on a more suitable device

There is a clear pattern to these examples:

  • The bad demos are all activities that are better done on your phone. They are also the activities that make the Watch seem the most like a real computer
  • The good demos are all activities that extend your phone in a way that simply wasn’t possible before. They are also activities that make the Watch seem less capable as a self-contained unit

This is why I’m worried that the lack of explanation about the Watch’s purpose wasn’t just a keynote oversight, but something that reflects a fundamental question about the product that Apple itself has yet to answer: is Watch an iPhone accessory, or is it valuable in its own right?4

The question is likely more fraught than it seems: the entry price for Apple Watch is $350, nearly half the price of an iPhone (and $150 more than the up-front cost for a subsidized consumer). Moreover, I suspect Edition models will go for ten times that, if not more. Surely such a price demands a device that is capable of doing more, not less.

In fact, I would argue the contrary. Swiss watches are less accurate, but the benefits they confer on the user are so much greater. Those benefits are about intangible things like status and fashion, but that doesn’t mean they are worth less than more technical capabilities like telling time accurately. Indeed, they are exponentially more valuable.

Moreover, it seems clear to me that Apple wants to play in this space: Jony Ive wasn’t joking when he allegedly said that Switzerland was in trouble. I believe Apple’s long-term plan for Apple Watch is to own the wrist and to confer prestige and status with options like premium bands and 18-karat gold. To do that, though, they must compete not on technical merit but on the sort of intangible benefits that they always win with; chief among these is the user experience. A premium smart watch will win by yes, being fashionable, and yes, conferring status, but above all by doing a few things better than any other product on the market, and – this is critical – dispensing with everything else in the pursuit of simplicity.

To me the instructive Apple product is the iPod. What made the iPod so revolutionary was not just its size and industrial design; it was that Apple’s MP3 player did less than its competitors, thanks to its symbiotic relationship with iTunes. Sure, you couldn’t really make playlists5 or buy music, but that’s what your computer was for. What remained was the very essence of a music player, and it was because of that simplicity that the iPod became such a success.

It’s worth noting, of course, that the iPhone is in many ways the evolutionary iPod – Steve Jobs even introduced it as such in the above video. Similarly, I’m pretty convinced that one day our primary computing device will be something that we wear on our body. But that is many iterations and technical (and battery) advances down the road. Why is Apple in such a rush to get there by 2015?

Ultimately, I’m bullish on the Apple Watch. I think the Digital Crown is a big deal, and it’s a perfect companion for the 5.5″ iPhone especially (the device that many fear will cannibalize the iPad itself necessitates another iOS device). I also think the customization and segmentation is really smart and will enable Apple to sell at multiple price points (my piece about Veblen goods is very much applicable to the Watch). Moreover, some of the demos were quite compelling, including the fitness applications and the very personal messaging; it was telling that Apple gave that functionality a dedicated button. I plan on buying one as soon as they are available.

But I’m already a watch wearer, and a geek to boot, and heck, I can probably expense it. To ensure the Watch’s success broadly Apple needs to really articulate “Why”, not only externally in their advertising but internally to their product managers who ought to remember that Apple’s greatness is built on saying “No.”

Note: I wrote about the iPhone and Apple Pay introductions in the Daily Update (members only)


  1. I admit, I got chills 

  2. In fact, somewhat bizarrely, Cook’s first words after the reveal were about Apple Watch’s accuracy:

    Apple Watch is the most personal device we’ve ever created. We set out to make the best watch in the world. One that is precise. It’s synchronized with the universal time standard and it’s accurate within plus or minus 50 milliseconds.

    What makes this so strange is that accurate timekeeping was the big selling point for Quartz watches. The Quartz crisis caused a significant decline of the Swiss watchmaking industry, but the primary reason for the success of the Asian manufacturers that adopted the technology was that they were so much cheaper. Today the watch industry is bifurcated between high end (relatively inaccurate) mechanical watches and inexpensive Asian offerings; I’m quite confused why Apple would be effectively aligning themselves with the latter, and with their first slide to boot! 

  3. I suspect the demo unit was “on rails”, meaning the watch was programmed to step through the demo step-by-step; it’s telling that Kevin Lynch didn’t have a single mis-tap, and the Maps demo was obviously simulated 

  4. The Watch does require an iPhone for full functionality, especially connectivity 

  5. Yes, I know you could push the middle button in a pinch