Disney and Fox

It’s always a risk writing about a deal before it is official: CNBC reported a month ago that Disney was in talks to acquire many of 21st Century Fox’s assets, including its eponymous movie studio, TV production company, cable channels, and international assets (but not the Fox broadcast network, Fox News, FS1 — Fox’s sports channel — and Fox Business). All was quiet until last week, when CNBC again reported that the deal was on, and now included 21st Century Fox’s Regional Sports Networks.

As I write this it is now widely reported that the deal is imminent; most notably, Comcast has dropped out of the bidding, which means the only question is whether or not Disney can close the deal: they would be crazy not to.

The Logic of Acquisition

The standard reason given for most acquisitions is so-called “synergy”: the idea that the two firms together can generate more revenue with lower costs than they could independently; most managers point towards the second half of that equation, promising investors significant cuts through reducing the number of workers doing the same thing. Certainly that is an argument in Disney’s favor: nearly everything 21st Century Fox does, Disney does as well.

Still, it’s not exactly a convincing argument; acquisitions also incur significant costs: the price of the acquired asset includes a premium that usually more than covers whatever cost savings might result, and there are significant additional costs that come from integrating two different companies. Absent additional justification, the cost-savings argument comes across as justification for management empire-building, not value creation.

That’s not always the reason though: the cost-savings argument is often a fig-leaf for an acquisition that reduces competition; better for management to claim synergies in costs than synergies that result in cornering a market. The result is managers who routinely make weak arguments in public and strong arguments in the boardroom.

The best sort of acquisitions, though, are best described by the famous Wayne Gretzky admonition, “Skate to where the puck is going, not where it has been”; these are acquisitions that don’t necessarily make perfect sense in the present but place the acquirer in a far better position going forward: think Google and YouTube, Facebook and Instagram, or Disney’s own acquisition of Capital Cities (which included ESPN).

What makes this potential acquisition so intriguing is that it is a mixture of all three — and which of the three you pick depends on the time frame within which you view the deal.

The Not-so-distant Past

Go back to that Capital Cities acquisition: the 1995 deal was, at the time, the second largest acquisition ever, and it primed Disney to dominate the burgeoning cable television era.

I sketched out the structure of cable TV earlier this year in The Great Unbundling:

The key takeaway should be a familiar one: economic power came from controlling distribution. The cable line was the only way for consumers to obtain the fullest possible array of in-home entertainment, which meant that distributors were able to charge consumers as much as they could bear.

The real negotiations in this value chain took place between content producers and distributors, which ultimately determined who made the most profit, and here Capital Cities + Disney was a powerful combination:

  • First and foremost, ESPN was well on the way to establishing itself as the most indispensable cable channel amongst consumers, allowing it to command carriage fees that were multiples higher than any other channel.
  • Secondly, Disney was able to feed content to ABC just in time to take advantage of the FCC’s loosened regulations on broadcast networks producing their own content (instead of acquiring it).
  • Third, Disney built a bundle within the bundle: distributors had to not only pay for ESPN, but also for the Disney channel, A&E, Lifetime, and the host of spinoff channels that followed; any proper accounting for ESPN’s ultimate contribution to Disney’s bottom line should include the above-average carriage fees charged by all of Disney’s properties.

This all seems obvious in retrospect, but at the time most of the attention was on ABC, then the most-profitable broadcast channel. The puck, though, was already moving towards cable.

The Fading Present

The dominant story in media over the last few years has been the slow-but-steady breakdown of that cable TV model. In August 2015, now-Disney CEO Bob Iger — who joined the company in that Capital Cities deal — admitted on an earnings call that ESPN and the company’s other networks, which had previously generated 45% of Disney’s revenue and 68% of its profit, were losing customers:

We are realists about the business and about the impact technology has had on how product is distributed, marketed and consumed. We are also quite mindful of potential trends among younger audiences, in particular many of whom consume television in very different ways than the generations before them. Economics have also played a part in change and both cost and value are under a consumer microscope. All of this has and will continue to put pressure on the multichannel ecosystem, which has seen a decline in overall households as well as growth in so-called skinny or cable light packages.

In fact, as I detailed earlier this year, Disney had not been realistic at all about the “impact technology [had] had on how product is distributed, marketed and consumed”:

Back in 2012 the media company signed a deal with Netflix to stream many of the media conglomerate’s most popular titles…Iger’s excitement was straight out of the cable playbook: as long as Disney produced differentiated content, it could depend on distributors to do the hard work of getting customers to pay for it. That there was a new distributor in town with a new delivery method only mattered to Disney insomuch as it was another opportunity to monetize its content.

The problem now is obvious: Netflix wasn’t simply a customer for Disney’s content, the company was also a competitor for Disney’s far more important and lucrative customer — cable TV. And, over the next five years, as more and more cable TV customers either cut the cord or, more critically, never got cable in the first place, happy to let Netflix fulfill their TV needs, Disney was facing declines in a business it assumed would grow forever.

That business was predicated on cable’s monopoly on in-home entertainment; what Netflix offered was an alternative:

Netflix’s path to a full-blown cable TV competitor is one of the canonical examples of a ladder strategy:

  • Netflix started by using content that was freely available (DVDs) to offer a benefit — no due dates and a massive selection — that was orthogonal to the established incumbent (Blockbuster). This built up Netflix’s user base, brand recognition, and pocketbook.
  • Netflix then leveraged their user base and pocketbook to acquire streaming rights in the service of a model that was, again, orthogonal to incumbents (linear television networks). This expanded Netflix’s user base, transformed their brand, and continued to increase their buying power.
  • With an increasingly high-profile brand, large user base, and ever deeper pockets, Netflix moved into original programming that was orthogonal to traditional programming buyers: creators had full control and a guarantee that they could create entire seasons at a time.

Each of these intermediary steps was a necessary prerequisite to everything that followed, culminating in yesterday’s announcement: Netflix can credibly offer a service worth paying for in any country on Earth, thanks to all of the IP it itself owns. This is how a company accomplishes what, at the beginning, may seem impossible: a series of steps from here to there that build on each other. Moreover, it is not only an impressive accomplishment, it is also a powerful moat; whoever wishes to compete has to follow the same time-consuming process.

Another way to characterize Netflix’s increasing power is Aggregation Theory: Netflix started out by delivering a superior user experience of an existing product (DVDs) to a dedicated set of customers, leveraged that customer base to gain new kinds of supply (streaming content), gaining more customers and more supply, and ultimately leveraged those customers to modularize supply such that the streaming service now makes an increasing amount of its content directly.

What Disney is seeking to prove, though, is that it can compete with Netflix directly by following a very different path.

The Onrushing Future

I’ve long argued that the only way to break away from the power of aggregators is through differentiation; it’s why I argued after that Iger earnings call that Disney would be OK — after all, differentiated content is Disney’s core competency, as demonstrated by its ability to extract profits from cable companies.

The implication of Netflix’s shift to original programming, though, isn’t simply the fact that the streaming company is a full-on competitor for cable TV: it is a competitor for differentiated content as well. That gives Netflix far more leverage over content suppliers like Disney than the cable companies ever had.

Consider the comparison in terms of BATNA (Best Alternative to a Negotiated Agreement): for distributors the alternative to carrying ESPN was losing a huge number of customers who cared about seeing live sports; that’s not much of an alternative! Netflix, on the other hand, can go — and is going! — straight to creators for content that viewers can watch instead of whatever Disney may choose to withhold if Netflix’s price is unsatisfactory.1 Clearly it’s working: Netflix isn’t simply adding customers, it is raising prices at the same time, the surest sign of market power.

Therefore, the only way for Disney to avoid commoditization is to itself go vertical and connect directly with customers: thus the upcoming streaming service, the removal of its content from Netflix, and, presuming it is announced, this deal. When the acquisition was rumored last month, I wrote in a Daily Update:

This gets at why this deal makes so much sense for Disney. The company already announced that Star Wars and Marvel content would indeed be a part of the streaming service (that is what was still up in the air when I wrote Disney’s Choice), but the company is absolutely right to not stop there: being a true Netflix competitor means having more content, not less — and that content doesn’t necessarily have to be fresh! Streaming shifted television from a world based on scarcity — there are only 24 hours in the day times however many channels there are, and a channel can only show one thing at a time — to one based on abundance: you can watch anything you want at any time and it can be different from everyone else.

Moreover, not only does 21st Century Fox have a lot of content, it has content that is particularly great for filling out a streaming library: think The Simpsons, or Family Guy; according to estimates I’ve seen, in terms of external content Fox owns eight of Netflix’s most streamed shows — more than Disney’s six. This content is useful not only for driving sign-ups with certain audiences, but especially for reducing churn; the latter requires a different content strategy than the former.

Whereas Netflix laddered-up to its vertical model and used its power as an aggregator of demand to gain power over supply, Disney is seeking to leverage — and augment — its supply to gain demand. The end result, though, would look awfully similar: a vertically integrated streaming offering that attracts and keeps customers with exclusive content, augmented with licensing deals.

If Disney is successful, it will be a truly remarkable shift: away from a horizontal content company predicated on leveraging its investment in content across as many outlets as possible, to a vertical streaming company that uses its content to achieve higher average revenue from a smaller number of customers willing to pay directly — smaller in the United States, that is; as Netflix is demonstrating, owning it all means the ability to extend the model worldwide.2

The Antitrust Question

I suspect the final hangup in Disney and 21st Century Fox’s negotiations is termination fees: who pays whom if the deal falls through. There is an obvious reason for concern — antitrust. That, of course, gets at some (but not all) of the reasons why the deal makes sense in the first place. What is fascinating, though, is that the nature of the concern changes depending on the time frame through which one views this deal.

If one starts with a static view of the world as it is at the end of 2017, then there may be some minor antitrust concerns, but probably nothing that would stop the deal. Disney might have to divest a cable channel or two (the company’s power over distributors would be even stronger; basically the opposite of some of the concerns that halted the Comcast acquisition of Time Warner Cable), and potentially be limited in its ability to make operational decisions about Hulu (Disney would have a controlling stake after the merger; Comcast was similarly restricted after acquiring NBC Universal, but there the concern was more about Comcast’s conflict of interest with regard to its cable TV business competing with Hulu). The Hulu point is interesting in its own right: Disney could choose to focus its streaming efforts there instead of building its own service, but I suspect it would rather own it all.

In addition, Disney and 21st Century Fox combined for 40% of U.S. box office revenue in 2016; that probably isn’t enough to stop the deal, and as silly as it sounds, don’t underestimate the clamoring of fans for the unification of the Marvel Cinematic Universe in swaying popular opinion!

The view changes, though, if you look only a year or two ahead: what I just described above — the “truly remarkable shift” in Disney’s business model — is a shift to vertical foreclosure. The entire point of Disney vastly increasing its content library is to offer that library exclusively on its own streaming service, not competitors’ — especially not on Netflix. Given the current state of antitrust law, which has ignored vertical mergers for years, this would normally be an academic point, except the current state was fundamentally shifted just a few weeks ago, when the Department of Justice sued to block AT&T’s acquisition of Time Warner due to vertical foreclosure concerns.

It’s not a perfect comparison: for one thing, AT&T’s distribution service (DirecTV) already exists, for another, it is impossible to see that acquisition as anything but a vertical one; as I just noted, though, today the Disney-Fox acquisition is a horizontal one. Would the Justice Department sue based on Disney’s potential, as opposed to its reality? And there’s a political angle too: if the AT&T-Time Warner acquisition were indeed blocked as retaliation by the Trump administration against CNN, then it would follow that the administration would be willing to accommodate 21st Century Fox Chairman Rupert Murdoch.

What is most interesting, though, is the long-term view: I have been writing for years that Netflix’s status as an aggregator was positioning the company to dominate entertainment, and it was only eight months ago that I despaired of Disney and the other entertainment companies ever figuring out how to fight back. What has been so impressive over the last few months is the extent and speed with which Disney has seemingly figured it out — and acted accordingly.

Is that a bad thing? Note how much the situation changed once Netflix became a viable competitor for cable TV: competition is a wonderful thing, most of all for consumers. To that end, might it be better for consumers, not-so-much today but ten years from now, if Disney were fully empowered to compete with Netflix? What is preferable? A dominant streaming company and a collection of content companies trying to escape the commoditization trap, or two dominant streaming companies that can at least try to hold each other accountable?

It’s not a great choice, to be honest; certainly Amazon Prime Video is a possible competitor, although the service is both empowered and held back by its business model. Other tech companies are making noises in the area, but more tech company dominance hardly seems like an answer!

Frankly, I’m not sure of the answer: I am both innately suspicious of these huge mergers and also sympathetic because I see so clearly the centralizing power of the Internet. The big are combining because the giants are coming: if anything, they are already here.


  1. The primary reason Netflix doesn’t have sports content is because it is not evergreen and thus doesn’t provide a cumulative advantage in terms of lowering customer acquisition costs over time; however, not being subject to the one-sided negotiations inherent to sports rights is a nice side benefit 

  2. Just as interesting is the prospective acquisition of regional sports networks and what that means for the future of ESPN; I will discuss this on tomorrow’s Daily Update

The Pollyannish Assumption

There was an interesting aside to Apple’s bad week, which I wrote about yesterday. It turns out that a user posted the macOS login-as-root bug to Apple’s support forums back on November 13:

On startup, click on “Other”

Enter username: root and leave the password empty. Press enter. (Try twice)
If you’re able to log in (hurray, you’re the admin now), then head over to System Preferences>Users & Groups and create a new Admin account.

Now restart and login to the new Admin Account (you may need a new Apple Id). Once you’re logged into this new Admin Id, you can again proceed to your System Preferences>Users & Groups. Open the Lock Icon with your new Admin ID/Password. Assign “Allow user to administer this computer” to your original Apple ID. Restart.

Most of the discussion about this tidbit has centered on the fact that this user later noted that they had found this solution on some other forum — they couldn’t remember which (this reply has now been hidden on the original thread, but Daring Fireball quoted it here); observers have largely given Apple a pass on having missed the posting on their own forums because those forums are mostly user-generated content (both questions and answers) and Apple explicitly asks posters to file bug reports with Apple directly. It’s understandable that the company missed this post two weeks ago.

For the record, I agree. Managing user-generated content is really hard.

The User-Generated Content Conundrum

Three recent bits of news bring this point about user-generated content home.

First, Twitter; from Bloomberg:

Twitter Inc. said it allowed anti-Muslim videos that were retweeted by President Donald Trump because they didn’t break rules on forbidden content, backtracking from an earlier rationale that newsworthiness justified the posts. On Thursday, a Twitter spokesperson said “there may be the rare occasion when we allow controversial content or behavior which may otherwise violate our rules to remain on our service because we believe there is a legitimate public interest in its availability.”

Second, Facebook; from The Daily Beast:

In the wake of the #MeToo movement, countless women have taken to Facebook to express their frustration and disappointment with men and have been promptly shut down or silenced, banned from the platform for periods ranging from one to seven days. Women have posted things as bland as “men ain’t shit,” “all men are ugly,” and even “all men are allegedly ugly” and had their posts removed…In late November, after the issue was raised in a private Facebook group of nearly 500 female comedians, women pledged to post some variation of “men are scum” to Facebook on Nov. 24 in order to stage a protest. Nearly every women who carried out the pledge was banned…

When reached for comment a Facebook spokesperson said that the company is working hard to remedy any issues related to harassment on the platform and stipulated that all posts that violate community standards are removed. When asked why a statement such as “men are scum” would violate community standards, a Facebook spokesperson said that the statement was a threat and hate speech toward a protected group and so it would rightfully be taken down.

Third, YouTube. From BuzzFeed:

YouTube is adding more human moderators and increasing its machine learning in an attempt to curb its child exploitation problem, the company’s CEO, Susan Wojcicki, said in a blog post on Monday evening. The company plans to increase its content moderation workforce to more than 10,000 employees in 2018 in order to help screen videos and train the platform’s machine learning algorithms to spot and remove problematic children’s content. Sources familiar with YouTube’s workforce numbers say this represents a 25% increase from where the company is today.

In the last two weeks, YouTube has removed hundreds of thousands of videos featuring children in disturbing and possibly exploitative situations, including being duct-taped to walls, mock-abducted, and even forced into washing machines. The company said it will employ the same approach it used this summer as it worked to eradicate violent extremist content from the platform.

I’m going to be up front with you: I don’t have any clear cut answers here. One of the seminal Stratechery posts is called Friction, and while I’ve linked it many times this line is particularly apt:

Friction makes everything harder, both the good we can do, but also the unimaginably terrible. In our zeal to reduce friction and our eagerness to celebrate the good, we ought not lose sight of the potential bad.

This is exactly the root of the problem: I don’t believe these platforms so much drive this abhorrent content (the YouTube videos are just horrible) as they make it easier than ever before for humans to express themselves, and the reality of what we are is both more amazing and more awful than most anyone ever appreciated.

This is something I have started to come to grips with personally: the exact same lack of friction that results in an unprecedented explosion in culture, music, and art of all kinds, the telling of stories about underrepresented and ignored parts of the population, and yes, the very existence of a business like mine, also results in awful videos being produced and consumed in shocking numbers, abuse being widespread, and even the upheaval of our politics.

The problem is that the genie is out of the bottle: lamenting the loss of friction will not only not bring it back, it makes it harder to figure out what to do next. I think, though, the first place to start — for me anyways — is to acknowledge and fully internalize what I wrote back then: focusing on the upsides without acknowledging the downsides is to misevaluate risk and court disaster. And, for those inclined to see the negatives of the Internet, focusing on the downsides without acknowledging the upsides is to misevaluate reward and endanger massive future opportunities. We have to find a middle way, and neither side can do that without acknowledging and internalizing the inevitable truth of the other.

Content Policing

Go back to the Apple forum anecdote: policing millions of comments posted by hundreds of thousands of posters (I’m guesstimating on numbers here) is really hard, and it’s understandable that Apple missed the post in question; as bad as this bug was, it is still the case that the return on the investment that would have been required to catch this one comment simply doesn’t make sense.

Apple is the easy one, and I started with them on purpose: using a term like “return on investment” gets a whole lot more problematic when dealing with abuse and human exploitation. That doesn’t mean it isn’t a real calculation made by relevant executives though: in the case of Apple, I think most people would agree that whatever investment in forum moderation would have been necessary to catch this post before it surfaced on Twitter a couple of weeks later would be far better spent buttressing the internal quality control teams that missed the bug in the first place.

That the post was surfaced on Twitter is relevant too; the developer who tweeted about the bug wrote a post on Medium explaining his tweet:

A week ago the infrastructure staff at the company I work for stumbled on the issue while trying to help one of my colleagues recover access to his local admin account. The staff noticed the issue and used the flaw to recover my colleague’s account. On Nov 23, the staff members informed Apple about it. They also searched online and saw the issue mentioned in a few places already, even in Apple Developer Forum from Nov 13. It seemed like the issue had been revealed, but Apple had not noticed yet.

The tweet certainly got noticed, and the bug was fixed within the day. Now to be clear, this isn’t the appropriate way to disclose a vulnerability (to that point, Apple should clarify what exactly happened around that November 23rd disclosure), but broadly speaking, the power of social media is what got this bug fixed as quickly as it was.

Outside visibility and public demands for accountability are what drove the YouTube changes as well: BuzzFeed reported on the child exploitation issue last month after being tipped off by an activist named Matan Uziel who had been rebuffed in his own efforts to contact YouTube. That YouTube was allegedly not receptive to his reach-outs is a bad thing; that there are plenty of ways to raise a ruckus such that they must respond is a good one.

It also gives some outline of how YouTube can better approach the problem in the future: yes, the company is building machine learning algorithms, and yes, the company provides an option for viewers to report content — although it is buried in a submenu.

The point of user reports is to leverage the scale of the Internet to police its own unfathomable scale: there are far more YouTube viewers than there could ever be moderators; meanwhile, there are 400 hours of video uploaded to YouTube every minute.
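
To make the scale mismatch concrete, here is a minimal back-of-the-envelope sketch; the 400-hours-per-minute figure is the one cited above, while the review speed and shift length are my own illustrative assumptions, not reported numbers.

```python
# Back-of-the-envelope on moderation scale. The upload rate is the figure cited
# above; the review speed and shift length are assumptions for illustration only.

UPLOAD_HOURS_PER_MINUTE = 400              # cited above
MINUTES_PER_DAY = 60 * 24

uploaded_hours_per_day = UPLOAD_HOURS_PER_MINUTE * MINUTES_PER_DAY   # 576,000

REVIEW_SPEED = 2.0        # assumption: a reviewer screens 2 hours of video per working hour
HOURS_PER_SHIFT = 8       # assumption: 8-hour shifts

reviewers_needed = uploaded_hours_per_day / (REVIEW_SPEED * HOURS_PER_SHIFT)

print(f"Hours uploaded per day: {uploaded_hours_per_day:,}")
print(f"Reviewers needed just to keep up: {reviewers_needed:,.0f}")
# ~36,000 reviewers per day under these assumptions — several times the 10,000-person
# workforce cited earlier, before touching the existing back catalog.
```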

That approach, though, clearly isn’t enough: it is rooted in the pollyannish view of the Internet I described above — the idea that everything is mostly good but for some bad apples. A more realistic view — that humanity is capable of both great beauty and tremendous evil, and that the Internet makes it easier to express both — demands a more proactive approach. And, conveniently, YouTube already has tools in place.

YouTube’s Flawed Approach

On Google’s last earnings call CEO Sundar Pichai said:

YouTube now has over 1.5 billion users. On average, these users spend 60 minutes a day on mobile. But this growth isn’t just happening on desktop and mobile. YouTube now gets over 100 million hours of watch time in the living room every day, and that’s up 70% in the past year alone.

A major factor driving this growth is YouTube’s machine-learning recommendation algorithm, which is designed to keep users watching more videos; as BuzzFeed noted:

Thanks to YouTube’s autoplay feature for recommended videos, when users watch one popular disturbing children’s video, they’re more likely to stumble down an algorithm-powered exploitative video rabbit hole. After BuzzFeed News screened a series of these videos, YouTube began recommending other disturbing videos from popular accounts like ToysToSee.

Recommendations work hand-in-hand with search, which — as you would expect given its parent company — YouTube is very good at. Individuals who want disturbing content can find what they’re looking for, and then, in the name of engagement and pushing up those viewing numbers, YouTube gives them more.

This should expose the obvious flaw in YouTube’s current reporting-based policing strategy: the nature of search and recommendation algorithms is such that most YouTube viewers, who would be rightly concerned and outraged about videos of child exploitation, never even see the videos that need to be reported. In other words, YouTube’s design makes its attempt to leverage the Internet broadly as moderator doomed to fail.

Those exact same search and algorithmic capabilities, though, made it trivial for Uziel and BuzzFeed to find a whole host of exploitative videos. The key difference between Uziel/BuzzFeed and generic YouTube viewers is that the former were looking for them.

Herein lies the fundamental failing of YouTube moderation: to date the video platform has operated under the assumption that 1) YouTube has too much content to review it all and 2) The best way to moderate is to depend on its vast user base. It is a strategy that makes perfect sense with the pollyannish assumption that the Internet by default produces good outcomes with but random exceptions.

A far more realistic view — because again, the Internet is ultimately a reflection of humanity, full of both goodness and its opposite — would assume that of course there will be bad content on YouTube. Of course there will be extremist videos recruiting for terrorism, of course there will be child exploitation, of course there will be all manner of content deemed unacceptable by the vast majority of not just the United States but humanity generally.

Such a view would engender a far different approach to moderation. Consider this paragraph from YouTube CEO Susan Wojcicki about YouTube’s latest changes:

We understand that people want a clearer view of how we’re tackling problematic content. Our Community Guidelines give users notice about what we do not allow on our platforms and we want to share more information about how these are enforced. That’s why in 2018 we will be creating a regular report where we will provide more aggregate data about the flags we receive and the actions we take to remove videos and comments that violate our content policies. We are looking into developing additional tools to help bring even more transparency around flagged content.

Make no mistake, transparency is a very good thing (more on this in a moment). What is striking, though, is the reliance on flags: YouTube’s current moderation approach is inherently reactive, whether it be to viewer reports or, increasingly, to machine learning algorithms flagging content. Machine learning is a Google strength, without question, but ultimately the company is built on giving people what they want — including bad actors.

Understanding Demand

A core precept of Aggregation Theory is that digital markets are driven by demand, not supply. This, by extension, is why Google and Facebook in particular dominate: in a world of effectively infinite web pages, the search engine that can pick out the proverbial needle in a haystack is king. It follows, then, that a content moderation approach that starts with supply is inherently inferior to one that starts with demand.

This is why it is critical that YouTube lose its pollyannish assumptions: were the company’s moderation approach to start with the assumption of bad actors, then child exploitation would be perhaps the most obvious place to look for problematic videos. Moreover, we know it works: that is exactly what Uziel and BuzzFeed did. If you know what you are looking for, you will, thanks to Google/YouTube’s search capabilities and recommendation algorithms, find it.

And then you can delete it.

Moreover, you can delete it efficiently. Despite my lecture about humanity containing both good and evil, I strongly suspect that the vast majority of those 400 hours uploaded every minute contain unobjectionable — even beautiful, or educational, or entertaining — content. What is the point, then, of even trying to view it all, a Sisyphean task if there ever was one? Starting with the assumption of bad actors and actively looking for their output — using YouTube and Google’s capabilities as aggregators — makes far more sense.
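
To make the contrast concrete, here is a hypothetical sketch of what a demand-first sweep might look like: start from the queries a bad actor would use, then follow the recommendation graph outward — the same path Uziel and BuzzFeed followed manually. Every function name here (search_videos, related_videos, classify, takedown) is a placeholder of my own, not YouTube’s actual tooling or API.

```python
# Hypothetical sketch of a demand-first moderation loop. All functions are passed
# in as placeholders; this illustrates the approach, not YouTube's real systems.
from collections import deque

def proactive_sweep(seed_queries, search_videos, related_videos, classify, takedown,
                    max_videos=10_000):
    """Breadth-first sweep over search results and their recommendation graph."""
    queue = deque()
    seen = set()

    # Seed the sweep with the searches a bad actor would run.
    for query in seed_queries:
        for video in search_videos(query):
            if video.id not in seen:
                seen.add(video.id)
                queue.append(video)

    reviewed = 0
    while queue and reviewed < max_videos:
        video = queue.popleft()
        reviewed += 1
        verdict = classify(video)   # e.g. an ML score plus human review for borderline cases
        if verdict.violating:
            takedown(video)
            # Violating videos are the best leads: their recommendations are far more
            # likely to violate too, so expand the frontier from them.
            for rec in related_videos(video):
                if rec.id not in seen:
                    seen.add(rec.id)
                    queue.append(rec)
```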

That, though, means letting go of the convenient innocence inherent to the worldview of most tech executives. I know the feeling: I want to believe that the Internet’s removal of friction and enabling of anyone to publish content is an inherent good thing, because I personally am, like these executives, a massive beneficiary. Reality is far more complicated; accepting reality, though, is always the first step towards policies that actually work.

Facebook, Twitter, and Politics

I would like to end this essay here; alas, most content moderation decisions are not so clean-cut as YouTube and child exploitation. That is why I included the Twitter and Facebook excerpts above. Both demonstrate the potential downside of the approach I am recommending for YouTube: being proactive is a sure recipe for false positives.

I am reminded, though, of the famous Walt Whitman quote:

Do I contradict myself?
Very well then I contradict myself,
(I am large, I contain multitudes.)

It is impossible to navigate the Internet — that is, to navigate humanity — without dealing in shades of gray. And the challenges faced by Twitter and Facebook are perfect examples. I, for one, found President Trump’s retweets disgusting, and Facebook’s bans unreasonable. On the other hand, who is Twitter to define what the President of the United States can or cannot post? And Facebook, at least, is acting consistently with its policies.

Indeed, these two examples are exactly why I have consistently called on these platforms to focus on being neutral. Taking political sides always sounds good to those who presume the platforms will adopt positions consistent with their own views; it turns out, though, that while most of us may agree that child exploitation is wrong, a great many other questions are unsettled.

That is why I think the line is clearer than it might otherwise appear: these platform companies should actively seek out and remove content that is widely considered objectionable, and they should take a strict hands-off policy to everything that isn’t (while — and I’m looking at you, Twitter — making it much easier to avoid unwanted abuse from people you don’t want to hear from). Moreover, this approach should be accompanied by far more transparency than currently exists: YouTube, Facebook, and Twitter should make explicitly clear what sort of content they are actively policing, and what they are not; I know this is complicated, and policies will change, but that is fine — those changes can be transparent too.


The phrase “With great power comes great responsibility” is commonly attributed to Spider-Man, but it in fact stems from the French Revolution:

Ils doivent envisager qu’une grande responsabilité est la suite inséparable d’un grand pouvoir.

English translation: They must consider that great responsibility follows inseparably from great power.

Documenting why and how these platforms have power has, in many respects, been the ultimate theme of Stratechery over the last four-and-a-half years: this is, in part, a call to exercise that power, and, in another part, a request not to. There is a line between what is broadly deemed unacceptable and what is still under dispute; the responsibility of these new powers that be is to actively search out the former, and keep their hands — and algorithms and policies — off the latter. Said French Revolution offers hints at the fates that await if this all goes wrong.


Stitch Fix and the Senate

There was an interesting line of commentary around the news that Stitch Fix, the personalized clothing e-commerce company, was going to IPO: these numbers are incredible! Take this article in TechCrunch as an example (emphasis mine):

Stitch Fix has filed to go public, finally revealing the financial guts of the startup which will be a test of modern e-commerce businesses that are looking to hit the market — and the numbers look pretty great!

Let’s start off really quick with profits: aside from the last two quarters, Stitch Fix posted a six-quarter streak of positive net income. We talk a lot about companies that are planning to go public that show pretty consistent (or even increasing) losses, but Stitch Fix looks like a company that has actually managed to build a healthy business. The company finally lost money in the last two quarters, but even then, its losses decreased quarter-over-quarter — with the company only losing around $4.5 million in the second quarter this year.

Compare this to the TechCrunch article written when Box, a company that ultimately IPO’d at a market cap similar to the one Stitch Fix will (~$2 billion), first filed its IPO:

Box has long been rumored to have quickly growing revenues and large losses, which has proven to be the case. For the full-year period that ended January 2014, Box’s revenues grew to $124 million, up from $58.8 million the year prior. However, the company’s net loss also expanded in the period, with Box posting losses of $168 million for the full-year period that ended January 2014, more than its total top line for the period. In the period prior, Box lost a more modest $112 million.

What is driving Box’s yawning losses? Sales and marketing. The company’s line item for those expenses expanded from $99.2 million for the year ending January 2013, to $171 million for the year ending January 31, 2014. That was the lion’s share of Box’s $100 million increasing in operating costs during the period. Or, put more simply, Box spent more dollars on selling its products in the year than it brought in revenue during the period. This could indicate customer churn, or merely a tough market for cloud products.

In fact, both explanations were completely wrong: Box’s losses were due to the company investing in future growth; a detailed look at cohorts revealed that Box was increasing profitability over time because churn was negative (because existing customers were increasing spend by more than the revenue lost by those leaving), and the share of cloud spending amongst enterprise broadly is only going in one direction — up. To that end, investing more in future growth — even though it made the company unprofitable in the short-term — was an obviously correct decision.

Stitch Fix Concerns

To that end, I find Stitch Fix’s numbers more concerning than I did Box’s:

  • First, the average revenue per client has decreased over time: according to the numbers provided in Stitch Fix’s S-1, the average client in 2016 generated $335 in revenue in the first six months, and $489 in revenue for the first 12 months; there is not a comparable set of numbers for earlier cohorts (itself a red flag), but the average 2015 client generated $718 in revenue over two years. To the extent these cohorts can be compared, that means $335 in the first six months, $154 in the second six months, and an average of $115 in the third and fourth six-month periods (the arithmetic is sketched just below this list).
  • Second, these clients are increasingly expensive to acquire. Stitch Fix increased its ‘Selling, General and Administrative Expenses’ by 56% last year, but revenue increased by only 34%; advertising spend specifically increased from $25.0 million to $70.5 million (182%), vastly outpacing revenue growth.
  • Third, revenue growth is slowing substantially, despite the fact Stitch Fix has expanded its product offerings, both within its core women’s market as well as expansions to Petite, Maternity, Men’s, and Plus apparel. I noted last year’s revenue growth was 34%; the previous year’s growth was 113%, and the year before that 368%.
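
The cohort arithmetic in the first bullet is worth spelling out; this sketch only restates the S-1 figures cited above, and treating the 2015 cohort’s first year as comparable to the 2016 cohort’s is an assumption on my part, not something the S-1 states directly.

```python
# Restating the cohort math from the first bullet above. The inputs are the S-1
# figures cited in the text; the cross-cohort comparison is an assumption, as noted.

rev_6mo_2016  = 335   # avg revenue per 2016 client, first six months
rev_12mo_2016 = 489   # avg revenue per 2016 client, first twelve months
rev_24mo_2015 = 718   # avg revenue per 2015 client, first two years

second_six_months = rev_12mo_2016 - rev_6mo_2016    # $154 in months 7-12
second_year       = rev_24mo_2015 - rev_12mo_2016   # ~$229 in year two (cross-cohort assumption)
per_six_months_y2 = second_year / 2                 # ~$115 per six-month period

print(second_six_months, per_six_months_y2)         # 154 114.5
# Average revenue per client decays from $335 to $154 to roughly $115 per six months.
```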

The problem for Stitch Fix is the same bugaboo encountered by the majority of consumer companies: the lack of a scalable advantage in customer acquisition costs. I wrote about this earlier this year in the context of Uber:

Uber’s strength — and its sky-high valuation — comes from the company’s ability to acquire customers cheaply thanks to a combination of the service’s usefulness and the effects of aggregation theory: as the company acquires users (and as users increase their usage) Uber attracts more drivers, which makes the service better, which makes it easier to acquire marginal users (not by lowering the price but rather by offering a better service for the same price). The single biggest factor that differentiates multi-billion dollar companies is a scalable advantage in customer acquisition costs; Uber has that.

On the other hand, it seems likely Stitch Fix does not, even though the company did argue in its S-1 that it benefited from network effects:

We believe we are the only company that has successfully combined rich client data with detailed merchandise data to provide a personalized shopping experience for consumers. Clients directly provide us with meaningful data about themselves, such as style, size, fit and price preferences, when they complete their initial style profile and provide additional rich data about themselves and the merchandise they receive through the feedback they provide after receiving a Fix. Our clients are motivated to provide us with this detailed information because they recognize that doing so will result in a more personalized and successful experience. This perpetual feedback loop drives important network effects, as our client-provided data informs not only our personalization capabilities for the specific client, but also helps us better serve other clients.

This may be true — it makes sense that it would be — but while it may help Stitch Fix better serve new customers it is not clear how it helps the company acquire said customers in the first place. Instead, like most consumer companies, it seems likely that Stitch Fix leveraged word-of-mouth to own its core market of women who value convenience in shopping for clothing, but struggled to break beyond that segment — and to the extent it did, found consumers who spend less and churn more. In other words, unlike successful aggregators, the improvement generated by the network effect, such as it was, was less than the increase in acquisition cost.

The key for truly break-out consumer companies is having that relationship reversed: the network should generate an improvement in benefits that exceeds the cost of acquiring customers, fueling a virtuous cycle.
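
To illustrate the relationship being described — and these are made-up numbers, not Stitch Fix’s actual economics — here is a stylized comparison of the two dynamics: one where acquisition costs rise faster than the benefit generated by the network as a company pushes past its core market, and one where the network effect outruns acquisition costs.

```python
# Stylized illustration of the two dynamics described above; the numbers are
# invented for illustration. Each list is a value for successive customer cohorts
# as the company expands beyond its core market.

def cohort_margin(benefit_per_customer, acquisition_cost):
    """Contribution each cohort makes after paying to acquire it."""
    return [b - c for b, c in zip(benefit_per_customer, acquisition_cost)]

# Scenario 1: a word-of-mouth business pushing beyond its core market — the data/
# network benefit improves slowly, but each marginal customer costs more to reach.
benefit = [100, 105, 110, 115]
cac     = [ 40,  70, 100, 130]
print(cohort_margin(benefit, cac))   # [60, 35, 10, -15] — economics worsen with scale

# Scenario 2: a true aggregator — the network improves the product faster than
# acquisition costs rise, so each successive cohort is more profitable to win.
benefit = [100, 120, 145, 175]
cac     = [ 40,  45,  50,  55]
print(cohort_margin(benefit, cac))   # [60, 75, 95, 120] — a virtuous cycle
```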

Stitch Fix’s Success

That said, I fully expect Stitch Fix’s IPO to be successful: how could it not be? In stark contrast to many would-be aggregators, Stitch Fix has taken a shockingly small amount of venture capital — only $42.5 million. Instead the company has been profitable — on an absolute basis in 2015-2016, and quite clearly on a unit basis (including acquisition costs) throughout — and cash flow positive. That increased marketing expenditure is being paid for by current customers, not venture capitalists. To that end, a $2 billion IPO would be a massive win for Stitch Fix’s investors and Katrina Lake, the founder.

Moreover, while Stitch Fix’s growth may be slowing, that is by no means fatal: the company is a perfectly valid business to own exactly as it is. Indeed, I am in fact deeply impressed by Stitch Fix: it seems quite clear that early on Lake realized that the company was not an aggregator, which meant building a business, well, normally. That means making real profits, particularly on a unit basis. Even then, though, the company was clearly worthy of venture capital: Baseline Ventures and Benchmark will see a 10x+ return.

To that end, Stitch Fix is a more important company than it may seem at first glance: it proves there is a way to build a venture capital-backed company that is not an aggregator, but still a generator of outsized returns. The keys, though, are positive unit economics from the get-go, and careful attention to profitability. The reason this matters is that these sorts of companies are by far the more likely to be built: Google and Facebook are dominating digital advertising, Amazon is dominating undifferentiated e-commerce, Microsoft and Amazon are dominating enterprise, and Apple is dominating devices. To compete with any of them is an incredibly difficult proposition; better to build a real differentiated business from the get-go, and that is exactly what Stitch Fix did.

RSUs, Options, and Taxes

The other winners in Stitch Fix’s IPO are all of its employees who hold stock options and restricted stock units (RSUs): they too have benefited from the company’s restraint in raising money; those options and RSUs are priced significantly below the company’s IPO price. And, when that IPO happens later this week, said employees will benefit tremendously — rightly alongside the IRS. When the IPO happens, those stock options and RSUs will become taxable, and the majority of Stitch Fix employees will have to sell some portion of their holdings to cover their bill. This is entirely reasonable: they will have earned their reward for building Stitch Fix into the impressive company it has become, and they will pay taxes on that reward.

What would not have been reasonable, though, would have been to pay those taxes before the IPO. After all, when Stitch Fix started it was not at all certain the company would reach this milestone: there are a whole host of companies that raised far more than Stitch Fix’s $42.5 million that ended up going out of business, or being sold off as an acquihire such that employees earned nothing.

Indeed, I suspect that startup employees are, on balance, terribly underpaid: most take on jobs with lower salaries relative to established companies simply for the chance of making an outsized return should the company they work for IPO; the odds of that happening mean the expected value of the options and RSUs they receive is quite low.
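
A toy expected-value calculation makes the point; every probability and payout below is an assumption chosen for illustration, not data about any actual startup.

```python
# Toy expected-value calculation for a startup equity grant. Every number here is
# an illustrative assumption — the point is the shape of the math, not the figures.

salary_discount_per_year = 40_000     # assumed pay cut versus a big-company offer
years_to_exit = 5

# Assumed outcome distribution for the equity grant at exit:
outcomes = [
    (0.70,         0),    # company fails or sells below preferences: equity worth nothing
    (0.20,    50_000),    # modest acquisition
    (0.09,   300_000),    # solid exit
    (0.01, 3_000_000),    # Stitch Fix-style IPO
]

expected_equity = sum(p * payout for p, payout in outcomes)
foregone_salary = salary_discount_per_year * years_to_exit

print(f"Expected equity value: ${expected_equity:,.0f}")    # $67,000
print(f"Foregone salary:       ${foregone_salary:,.0f}")    # $200,000
# Under these assumptions the expected value of the grant is well below the salary
# given up — and a tax bill at vesting, before any of it is liquid, worsens the trade.
```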

To that end, it is tempting to be skeptical of venture capitalists’ protestations about a Senate tax provision that would tax stock options and RSUs at the moment they vest; given the uncertain value, any startup seeking to attract employees would have to significantly up its cash compensation, which would be better for most employees (and thus worse for most venture capitalists) — in the short term, anyways.

The Startup Ecosystem

The problem with this point of view is that the startup employee frame is much too narrow: leaving aside the fact that anyone qualified to work at a startup is already far better off than nearly everyone on earth, the broader issue is that the scope for building successful venture-backed companies is narrowing.

Stitch Fix is a perfect example: I just explained that the company has uncertain growth prospects, but is still a big success thanks in large part to its disciplined approach to the bottom line. That should be a model for more companies: quickly determine if your business can be an aggregator with the scalable acquisition cost advantages that come with it, and if not, build a sustainable business sooner rather than later. That will allow everyone to benefit: founders, venture capitalists, employees, and most importantly, consumers.

A disciplined approach to the bottom line, though, means taking full advantage of a start-up’s number one recruiting tool: stock options and RSUs, in lieu of fully competitive salaries. Had Stitch Fix had to pay its employees in cash the company would have likely had to raise more money, reducing the likelihood of a successful outcome for everyone — including the IRS.

The downside, though, is even more acute for the companies that might seek to become aggregators themselves and so challenge companies like Google, Facebook, or Amazon directly. Any such would-be disruptor — and keep in mind, disruption is the only means by which these companies might ever be threatened — would need to raise huge amounts of capital, likely over an extended period of time. Moreover, the odds of success would be commensurately lower, making it even more likely associated stock options and RSUs might be worthless. To that end, taxing said options and RSUs would make start-up jobs even less attractive, and any sort of alternative — including increased cash compensation — would not only reduce the likelihood of startup success but also deny employees the chance to share in a successful outcome.

Tech’s Constituencies

Like most commentators, I am often guilty of lumping all of technology into one broad bucket; that makes sense when considering the impact of technology on society broadly. This tax bill, though, is a reminder that tech has two distinct constituencies with concerns that don’t always align:

  • Incumbents have successful business models that throw off oodles of cash; their concern is about protecting those models, and they will spend to do so
  • Venture capitalists and founders are seeking to build new businesses that, more often than not, threaten those incumbents; their edge is the opportunity to build businesses perfectly aligned to the problem they are seeking to solve

Tech employees get different benefits from each camp: the former provides high salaries and great perks; you can have a very nice life working for Facebook or Google. Startups, on the other hand, offer a chance to own a (small) piece of something substantial, at the cost of short-term salary — and that is worth preserving. Not only is it important to offer an accessible route up the economic ladder, former startup employees are a key part of the Silicon Valley ecosystem, often providing the initial funding for other new companies.

To that end, what is critical to understand about this proposed tax change is that incumbent companies won’t be hurt at all: sure, they may have to change their compensation to be more cash-rich and RSU-light, but cash isn’t really a constraint on their business. Higher salaries are a small price to pay if it means startups that might challenge them are handicapped; small wonder none of the big companies are lobbying against this provision.

Towards a Startup Lobby

This isn’t the first time the needs of the big tech companies have diverged from those of startups: net neutrality, for example, is much more important if you don’t have the means to pay to play. The same thing applies to tax laws more broadly, including corporate tax reform and offshore holdings.

Perhaps the most clear example, though, is antitrust: the companies that are hurt the most by Google, Facebook, and Amazon dominance are not analog publishers or retailers, but more direct competitors for digital advertising or e-commerce — mostly startups. Nearly all of the lobbying about this issue, though, is funded by the incumbents, for all of the reasons noted above: they have cash to burn, and business models to protect.

To that end it might behoove the startup community — and to be more specific, venture capitalists — to start building a counterweight. I am optimistic this Senate provision will ultimately be stripped from the proposed tax bill, but that the very foundation of startup compensation was so suddenly threatened should serve as a wake-up call that depending on Google or Apple largesse to represent the tech industry is ultimately self-defeating.

Apple at Its Best

The history of Apple being doomed doesn’t necessarily repeat, but it does rhyme.

Take the latest installment, from Professor Mohanbir Sawhney at the Kellogg School of Management (one of my former professors, incidentally):

Have we reached peak phone? That is, does the new iPhone X represent a plateau for hardware innovation in the smartphone product category? I would argue that we are indeed standing on the summit of peak “phone as hardware”: While Apple’s newest iPhone offers some impressive hardware features, it does not represent the beginning of the next 10 years of the smartphone, as Apple claims…

As we have seen, when the vector of differentiation shifts, market leaders tend to fall by the wayside. In the brave new world of AI, Google and Amazon have the clear edge over Apple. Consider Google’s Pixel 2 phone: Driven by AI-based technology, it offers unprecedented photo-enhancement features and deeper hardware-software integration, such as real-time language translation when used with Google’s special headphones…The shifting vector of differentiation to AI and agents does not bode well for Apple…

Sheets of glass are simply no longer the most fertile ground for innovation. That means Apple urgently needs to shift its focus and investment to AI-driven technologies, as part of a broader effort to create the kind of ecosystem Amazon and Google are building quickly. However, Apple is falling behind in the AI race, as it remains a hardware company at its core and it has not embraced the open-source and collaborative approach that Google and Amazon are pioneering in AI.

It is an entirely reasonable argument, particularly that last line: I myself have argued that Apple needs to rethink its organizational structure in order to build more competitive services. If the last ten years have shown us anything, though, it is that discounting truly great hardware — and the sort of company necessary to deliver that — is the surest way to be right in theory and wrong in reality.

The Samsung Doom

When Stratechery started in 2013, Samsung was ascendant, and the doomsayers were out in force. The arguments were, in broad strokes, the same: hardware innovation was over, and Android’s good-enough features, broader hardware base, and lower prices would soon mean that the iPhone would go the way of the Mac relative to Windows.1

At that time the flagship iPhone was the iPhone 5; Apple was still only making one iPhone a year. That phone — the one that, many claimed, was the peak of hardware innovation — featured a larger (relative to previous iPhones) 4-inch LED-backlit LCD screen, an 8MP rear camera and a 1.2MP front camera, and Apple’s A6 32-bit system-on-a-chip, the first from the company that was not simply a variation on a licensed ARM design. To be sure, the relatively small screen size was a readily apparent problem: one of my first articles argued that Samsung’s focus on larger screens was a meaningful advantage that Apple should copy.

Obviously Apple eventually did just that with the iPhones 6 and 6 Plus, but screen size is hardly the only thing that changed: later that year Apple introduced the iPhone 5S, which included the A7 chip that blew away the industry by going 64-bit years ahead of schedule; Apple has enjoyed a massive performance advantage relative to the rest of the industry ever since. The iPhone 5S also included Touch ID, the first biometric authentication method that worked flawlessly at scale (and enabled Apple Pay), the usual camera improvements, as well as a new ‘M7’ motion chip that laid the groundwork for Apple’s fitness focus (and the Apple Watch).

And, even as critics clamored that the pricing of the iPhone 5C, launched alongside the 5S, meant the company was going to be disrupted, the iPhone 5S sold in record numbers — just like every previous iPhone had.

The iPhone X

I’m spoiled, I know: gifted with the rationalization of being a technology analyst, I buy an iPhone every year. Even so, I thought the iPhone 7 was a solid upgrade: it was noticeably faster, had an excellent screen, and the camera was great; small wonder it sold in record numbers everywhere but China.2 What it lacked, though — and I didn’t fully appreciate this until I got an iPhone X — was delight:

Face ID isn’t perfect: there are a lot of edge cases where having Touch ID would be preferable. By its fourth iteration in the iPhone 7, Touch ID was utterly dependable and, like the best sort of technology, barely noticeable.

Face ID takes this a step further: while it takes a bit of time to change ingrained habits, I’m already at the point where I simply pick up the phone and swipe up without much thought;3 authenticating in apps like 1Password is even more of a revelation — you don’t have to actually do anything.

In these instances the iPhone X is reaching the very pinnacle of computing: doing a necessary job, in this case security, better than humans can.4 The fact that this case is security is particularly noteworthy: it has long been taken as a matter of fact that there is an inescapable trade-off between security and ease-of-use; Touch ID made it far easier to have effective security for the vast majority of situations, and Face ID makes it invisible.

The trick Apple pulled, though, was going beyond that: the first time I saw notifications be hidden and then revealed simply through a glance produced the sort of surprise-and-delight that has traditionally characterized Apple’s best products. And, to be sure, surprise-and-delight is particularly important to the iPhone X: so much is new, particularly in terms of the interaction model, that frustrations are inevitable; in that sense, Apple’s attempt to analogize the iPhone X to the original iPhone is more about contrasts than comparisons.

The Original iPhone and Overshooting

While the iPod wheel may be the most memorable hardware interface in modern computing, and the mouse the most important, touch is, for obvious reasons, the most natural. That, though, only elevates the original iPhone’s single button: everything about touch interfaces needed to be invented, discovered, and figured out; it was that button that made it accessible to everyone — when in trouble, hit the button to escape.

Over the years that button became laden with ever more functionality: app-switching, Siri, Touch ID, Reachability. It was the physical manifestation of another one of those seemingly intractable trade-offs: functionality and ease-of-use. Sure, the iPhone 5 I referenced earlier was massively more capable than the original iPhone, and the iPhone X vastly more capable still, but in fact an argument based on specifications makes the critics’ point: the more technology that gets ladled on top, the more inaccessible it is to normal users. Clayton Christensen, in The Innovator’s Dilemma, called this “overshooting”:

Disruptive technologies, though they initially can only be used in small markets remote from the mainstream, are disruptive because they subsequently can become fully performance-competitive within the mainstream market against established products. This happens because the pace of technological progress in products frequently exceeds the rate of performance improvement that mainstream customers demand or can absorb. As a consequence, products whose features and functionality closely match market needs today often follow a trajectory of improvement by which they overshoot mainstream market needs tomorrow. And products that seriously underperform today, relative to customer expectations in mainstream markets, may become directly performance-competitive tomorrow.

This was the reason all of those iPhone critics were so certain that Apple’s days were numbered. “Good-enough” Android phones, sold for far less than an iPhone, would surely result in low-end disruption. Here’s Christensen in an interview with Horace Dediu:

The transition from proprietary architecture to open modular architecture just happens over and over again. It happened in the personal computer. Although it didn’t kill Apple’s computer business, it relegated Apple to the status of a minor player. The iPod is a proprietary integrated product, although that is becoming quite modular. You can download your music from Amazon as easily as you can from iTunes. You also see modularity organized around the Android operating system that is growing much faster than the iPhone. So I worry that modularity will do its work on Apple.

Shortly after the iPhone 5S/5C launch, I made the case that Christensen was wrong:

Modularization incurs costs in the design and experience of using products that cannot be overcome, yet cannot be measured. Business buyers — and the analysts who study them — simply ignore them, but consumers don’t. Some consumers inherently know and value quality, look-and-feel, and attention to detail, and are willing to pay a premium that far exceeds the financial costs of being vertically integrated…

Not all consumers value — or can afford — what Apple has to offer. A large majority, in fact. But the idea that Apple is going to start losing consumers because Android is “good enough” and cheaper to boot flies in the face of consumer behavior in every other market. Moreover, in absolute terms, the iPhone is significantly less expensive relative to a good-enough Android phone than BMW is to Toyota, or a high-end bag to one you’d find in a department store…

Apple is — and, for at least the last 15 years, has been — focused exactly on the blind spot in the theory of low-end disruption: differentiation based on design which, while it can’t be measured, can certainly be felt by consumers who are both buyers and users.

Needless to say, in 2013 we weren’t anywhere close to peak iPhone: in the quarter I wrote that article — 4Q 2013, according to Apple’s fiscal calendar, the weakest quarter of the year — the company sold 34 million iPhones; the next quarter Apple booked $58 billion in revenue. We are now four years on, and last quarter — 4Q 2017, again according to Apple’s fiscal calendar — the company sold 47 million iPhones; next quarter Apple is forecasting between $84 and $87 billion in revenue.

More importantly, the experience of using an iPhone X, at least in these first few days, has that surprise-and-delight feeling: consideration, invention, and yes, as the company is fond of noting, the integration of hardware and software. Look again at that GIF above: not only does Face ID depend on deep integration between the camera system, system-on-a-chip, and operating system, but the small touch of displaying notifications only when the right person is looking at them depends on one company doing everything. That still matters.

Moreover, it’s worth noting that the iPhone X is launching into a far different market than the original iPhone did: touch is not new, but rather the familiar; changing many button paradigms into gestures certainly presents a steeper learning curve for first-time smartphone users, but for how many users will the iPhone X be their first smartphone?

Artificial Intelligence and New Market Disruption

Still, I noted that while Apple doomsayers rhyme, they don’t repeat. The past four years may have thoroughly validated my critique of low-end disruption and the iPhone, but there is another kind of disruption: new market disruption. Christensen explains the difference in The Innovator’s Solution:

Different value networks can emerge at differing distances from the original one along the third dimension of the disruption diagram. In the following discussion, we will refer to disruptions that create a new value network on the third axis as new-market disruptions. In contrast, low-end disruptions are those that attack the least-profitable and most overserved customers at the low end of the original value network.

Christensen ultimately concluded that the iPhone was a new market disruptor of the PC: it was seemingly less capable yet simpler to use, and thus competed against non-consumption, eventually gaining sufficient capabilities to attract PC users as well. This is true as far as it goes;5 indeed, there are an order of magnitude more smartphone users than there ever were PC users.

And, to that end, Sawhney’s argument is different from the doomsayers of old: it’s not that Apple will be disrupted by “good-enough” cheap Android, but rather that a new vector is emerging — artificial intelligence:

The vector of differentiation is shifting yet again, away from hardware altogether. We are on the verge of a major shift in the phone and device space, from hardware as the focus to artificial intelligence (AI) and AI-based software and agents.

This means nothing short of redefinition of the personal electronics that matter most to us. As AI-driven phones like Google’s Pixel 2 and virtual agents like Amazon Echo proliferate, smart devices that understand and interact with us and offer a virtual and/or augmented reality will become a larger part of our environment. Today’s smartphones will likely recede into the background.

Makes perfect sense, but for one critical error: consumer usage is not, at least in this case, a zero-sum game. This is the mistake many make when thinking about the way in which orthogonal businesses compete:

The presumption is that the usage of Technology B necessitates no longer using Technology A; it follows, then, that once Technology B becomes more important, Technology A is doomed.

In fact, though, most paradigm shifts are layered on top of what came before: the Internet was used on PCs; social networks are used alongside search engines. Granted, as I just noted, smartphones are increasingly replacing PCs, but even then most use is additive, not substitutive. In other words, there is no reason to expect that the arrival of artificial intelligence means that people will no longer care about what smartphone they use. Sure, smartphones may “recede into the background” in the minds of pundits, but they will still be in consumers’ pockets for a long time to come.

There’s a second error, though, that flows from this presumption of zero-summedness: it ignores the near-term business imperatives of the various parties. Google is the best example: were the company to restrict its services to its own smartphone platform, it would be financially decimated. The most attractive customers for Google’s advertisers are on the iPhone — just look at how much Google is willing to pay to acquire them6 — and while Google could in theory convince them to switch by keeping its superior services exclusive, in reality such an approach is untenable. In other words, Google is heavily incentivized to preserve the iPhone as a competitive platform for Google’s own services; granted, Android is still better in terms of easy access and defaults, but the advantage is far smaller than it could be.

Apple, meanwhile, is busy building competing services of its own, and while it’s easy — and correct — to argue that they aren’t really competitive with Google’s, that doesn’t much matter, because this competition isn’t happening in a vacuum. Rather, Apple not only enjoys the switching-cost advantage inherent to all incumbents, but is also, as the iPhone X shows, maintaining if not extending the user experience advantage that comes from its integrated model. That, by extension, means that Apple’s services need only be “good enough” — there’s that phrase! — to let the company’s other strengths shine.

This results in a far different picture: the “hurdle rate” for meaningful Android adoption by Apple’s customer base is much higher than the doomsayers would have you think.

Apple’s Durable Advantage

I am no Apple Pollyanna: I first made the argument years ago that the ultimate Apple bear case is the disappearance of hardware that you touch (which remains the case); I also complimented the company for having the courage to push towards that future.

Indeed, Apple’s aggressiveness in areas like wearables and, at least from a software perspective, augmented reality suggests the company will press its hardware advantage to get to the future before its rivals, establishing a beachhead that will be that much more difficult for superior services offerings to dislodge. Moreover, there is evidence that Google sees the value in Apple’s approach: the company’s push into hardware may in part be an attempt to find a new business model, but establishing the capabilities to compete in hardware beyond the smartphone is surely a goal as well.

What is fascinating to consider is just how far Apple might go if it decided to do nothing but hardware and its associated software: if Google Assistant could be the iPhone default, why would any iPhone user even give a second thought to Android? I certainly don’t expect this to happen, but the fact that giving away control of what seems so important might, in fact, secure Apple’s future more strongly than anything else is the most powerful signal possible that the integration of hardware and software — and the organizational knowledge, structure, and incentives that come from that being a company’s primary business model — remains a far more durable competitive advantage than many theorists would have you think.


  1. Which, for the record, is a misreading of history.

  2. Speaking of China, the point of that article was that hardware differentiation mattered more there than anywhere else; I expect the iPhone X to sell very well indeed.

  3. Many of those edge cases arise when you are not picking up the phone, and thus not triggering raise-to-wake: in the car, for example, or at your desk.

  4. To be clear, this is all relative; in fact, Face ID is arguably even less secure than Touch ID. Sure, a 1 in a million chance of a false match is better than a 1 in 50,000 chance if the sample is fully random, but the fact that close siblings, for example, can in theory defeat it is a reminder that relevant samples are not always random. The broader point, though, is that security people actually use is better than superior solutions they do not.

  5. Christensen never did explain why the iPhone defeated Nokia et al., whom he originally expected to overcome the iPhone; I put forward my theory in Obsoletive.

  6. What I wrote in this Daily Update about Google’s acquisition costs almost certainly explains the bump in Apple’s services revenue last quarter; more on this in tomorrow’s Daily Update.