Why Twitter Must Be Saved

It is election day in the United States, and the tech figure who had one of the biggest impacts on the current cycle is perhaps a non-obvious one: Jeff Bezos.

Back in 2013 Bezos bought the Washington Post, whose coverage of the campaign has been exemplary. The august newspaper’s reporting, particularly the work of David Fahrenthold, has uncovered stories that have had a far bigger impact than any number of tweets or blog posts or calls for days off work in Democrat-safe California ever could have had. What Bezos understood is a technology industry truism: impact is made at scale through the construction of repeatable processes. In the case of the Washington Post, facilitating a strong, confident newsroom has reaped far greater returns than anything any of us could accomplish on our own.

When Bezos made his purchase, I wrote an article entitled Rebuilding the World Technology Destroyed. It is, by Stratechery standards, pretty short, so I hope you will forgive my taking the unusual step of quoting it in full:


The Washington Post was headed for bankruptcy, and was finally sold for a pittance. Its buyer began his career on Wall Street, only to move into a burgeoning new industry, where he truly made his wealth. The newspaper he bought has a noble history, but will certainly earn losses for years to come.

I’m talking not about Jeff Bezos, who bought the Washington Post yesterday, but rather Eugene Meyer, who bought the Post in 1933. Meyer left a lucrative career on Wall Street in 1920 to seize the burgeoning opportunity in industrial chemicals and founded Allied Chemical (today’s Honeywell). After making millions, Meyer spent the rest of his life both in public service and building the Post, spending millions of his own money in the process.

Meyer was in many ways following the established playbook for industrial magnates. Families like the Vanderbilts, Rockefellers, and Carnegies, who made their fortunes in railroads, oil, and steel, respectively, plowed money into universities, museums, and a host of other cultural touchstones.

It’s this tradition that makes Bezos’s purchase feel momentous, a crossing of the Rubicon of sorts. The tech industry is now producing its own magnates, who are following the Rockefeller playbook. See Mark Zuckerberg giving $100 million to the Newark school district, or Chris Hughes buying the New Republic. Neither, though, feels as momentous as Jeff Bezos, the preeminent tech magnate, buying the Washington Post, the nation’s third most important newspaper.


The ironic thing, of course, about a tech magnate buying the Washington Post is that technology has destroyed the traditional newspaper business model. Not that newspapers are particularly special in this regard. As I wrote a month ago in a piece called Friction:

If there is a single phrase that describes the effect of the Internet, it is the elimination of friction.

With the loss of friction, there is necessarily the loss of everything built on friction, including value, privacy, and livelihoods. And that’s only three examples! The Internet is pulling out the foundations of nearly every institution and social more that our society is built upon.

While the struggles of the Washington Post and other newspapers fall squarely into the “value” bucket, the particularly devastating effect of our new world order is seen much more strongly in its effect on livelihoods. This piece on the Crumbling American Dream is a must-read:

But just beyond the horizon a national economic, social and cultural whirlwind was gathering force that would radically transform the life chances of the children and grandchildren of the graduates of the P.C.H.S. class of 1959. The change would be jaw dropping and heart wrenching, for Port Clinton turns out to be a poster child for changes that have engulfed America.

Port Clinton’s demise was largely about the demise of manufacturing, but to my mind, the story of manufacturing is the story of technology. The relentless pursuit of productivity has created massive wealth in the aggregate, even as it has destroyed the foundations of many of our institutions.

In this respect, what Bezos is doing feels almost obligatory. Technology — and I’m using the term very broadly here — has torn so much down; surely it’s the responsibility of technologists to build it back up.

And yet, I fear we as an industry are woefully unprepared for this responsibility. We glorify dropouts, endorse endless hours at work, and subscribe to a libertarian ideal that has little to do with reality. We say that ideas don’t matter, and yet, as Chris Dixon wrote in The Idea Maze:

The reality is that ideas do matter, just not in the narrow sense in which startup ideas are popularly defined. Good startup ideas are well developed, multi-year plans that contemplate many possible paths according to how the world changes.

But do we as an industry understand the world?


It’s here this essay turns personal.

My life is just about the exact opposite of what you would expect from a technologist. I studied political science as an undergrad, was an editor of one of the largest student newspapers in the country, and planned to work in politics. After graduating I took off for Taiwan to travel and teach English, and ended up with a family. Six years later I managed to finagle my way into a top-tier MBA program, only to be rejected by every tech company (but one) when it came time for internships. I didn’t have the right background — I hadn’t lived my life in the technology industry.

Yet I had lived life! I had lived life so fully, and gained so much perspective. And it turned out there was one company that valued that: Apple hired me within 24 hours of my first interview.

I think my being hired had something to do with this:

“It’s in Apple’s DNA that technology alone is not enough — it’s technology married with liberal arts, married with the humanities, that yields us the result that makes our heart sing.” — Steve Jobs

It turned out that a life lived outside of technology was my greatest asset, at least at the company most every founder claims to idolize. But how many take this philosophy seriously? It seems most are more closely aligned with Peter Thiel, who suggested that the best way to increase technological progress was to “Discourage people from pursuing humanities majors.”

Thiel may be right about the best way to “increase technological progress,” but progress is an objective fact; whether its effect is positive or negative remains to be determined.


There was a third article I read this weekend, about the social scientist Daniel Kahneman, called The Anatomy of Influence:

Kahneman’s career tells the story of how an idea can germinate, find far-flung disciples, and eventually reshape entire disciplines. Among scholars who do citation analysis, he is an anomaly. “When you look at how many areas of social science he’s put his fingers in, it’s just ridiculous,” says Jevin West, a postdoctoral researcher at the University of Washington, who has helped develop an algorithm for tracing the spread of ideas among disciplines. “Very rarely do you see someone with that amount of influence.”

But intellectual influence is tricky to define. Is it a matter of citations? Awards? Prestigious professorships? Book sales? A seat at Charlie Rose’s table? West suggests something else, something more compelling: “Kahneman’s career shows that intellectual influence is the ability to dissolve disciplinary boundaries.”

Influence lives at intersections. Yet, as an industry, it at times feels as if the boundaries we have built around who makes an effective product manager, or programmer, or designer are stronger than ever, even as the need to cross those boundaries is ever more pressing. It’s not that Thiel was wrong about what types of degrees push progress forward; rather, it’s the blind optimism that technology is an inherent good that is so dangerous.

Technology is destroying the world as it was; do we have the vision and outlook to rebuild it into something better? Do we value what matters?

I’m confident in Jeff Bezos. I’m a little more worried about the rest of us.


To say that this election cycle has only deepened those worries would be a dramatic understatement. This is not a partisan statement, just an observation that technology has made objective truth a casualty of the pursuit of happiness — or engagement, to use the technical term — and now life and liberty hang in the balance.

A few weeks ago, during the keynote of the Oculus Connect 3 developer conference, Facebook founder and CEO Mark Zuckerberg articulated a vision for Facebook that I found chilling:

At Facebook, this is something we’re really committed to. You know, I’m an engineer, and I think a key part of the engineering mindset is this hope and this belief that you can take any system that’s out there and make it much much better than it is today. Anything, whether it’s hardware, or software, a company, a developer ecosystem, you can take anything and make it much, much better. And as I look out today, I see a lot of people who share this engineering mindset. And we all know where we want to improve and where we want virtual reality to eventually get…

The magic of VR software is this feeling of presence. The feeling that you’re really there with another person or in another place. And helping this community build this software and these experiences is the single thing I am most excited about when it comes to virtual reality. Because this is what we do at Facebook. We build software and we build platforms that billions of people use to connect with the people and things that they care about.

Leave aside the parts about virtual reality; what bothers me are the faint hints of utopianism inherent in Zuckerberg’s declaration: engineers can make things better by sheer force of will — and Facebook itself is an example of just that. In fact, Facebook is the premier example of just how efficient tech companies can be, and just how problematic that efficiency is when it is employed in the pursuit of “engagement” with no regard for objective truth specifically, or the impact on society broadly.

Last spring Facebook was caught up in a ginned-up controversy about alleged bias: a solitary member of Facebook’s contracted Trending Topics editorial team claimed that conservative news stories were suppressed thanks to team members’ liberal bias. After an investigation Facebook found no evidence of said suppression, but went ahead and laid off the entire team anyway in favor of an algorithm; within days a fake news story was in Trending Topics, and at least four more followed in the next few weeks.

Granted, Trending Topics has always been a sideshow; much more disturbing are the revelations that fake news is widespread in Facebook’s News Feed; unsurprisingly, given that they are human, many Facebook users wish to connect with people and things that confirm their pre-existing opinions, whether or not they are true.

Make no mistake, this results in a great business: I have written effusively about Facebook’s financial potential and noted that the News Feed algorithm is a big reason why Facebook Squashed Twitter. Giving people what they want to see will always draw more attention than making them work for it, in rather the same way that making up news is cheaper and more profitable than actually reporting the truth.

And yet it is Twitter that has reaffirmed itself as the most powerful antidote to Facebook’s algorithm: misinformation certainly spreads via a tweet, but truth follows unusually quickly; thanks to the power of retweets and quoted tweets, both are far more inescapable than they are on Facebook. Twitter is a far preferable manifestation of Supreme Court Justice Louis Brandeis’ famous concurrence in Whitney v. California (emphasis mine):

Those who won our independence believed that the final end of the State was to make men free to develop their faculties, and that, in its government, the deliberative forces should prevail over the arbitrary. They valued liberty both as an end, and as a means. They believed liberty to be the secret of happiness, and courage to be the secret of liberty. They believed that freedom to think as you will and to speak as you think are means indispensable to the discovery and spread of political truth; that, without free speech and assembly, discussion would be futile; that, with them, discussion affords ordinarily adequate protection against the dissemination of noxious doctrine; that the greatest menace to freedom is an inert people; that public discussion is a political duty, and that this should be a fundamental principle of the American government. They recognized the risks to which all human institutions are subject.

Brandeis’ concurrence was a defense of free speech, a right that applies to government action; private companies are free to police their platforms as they wish. What, though, does free speech mean in an era of abundance? When information was scarce, limiting speech was a real danger; when information is abundant, shielding people from speech they might disagree with has its own perverse effects.

To be clear, Twitter has a real abuse problem that it has been derelict in addressing, a decision that is costly in both human and business terms; there is real harm that comes from the ability to address anyone anonymously, including the suppression of viewpoints by de facto vigilantism. But I increasingly despair about the opposite extreme: the construction of cocoons where speech that intrudes on one’s world view with facts is suppressed for fear of what it does to the bottom line, resulting in an inert people incapable of finding common ground with anyone else.

This is why Twitter must be saved: the combination of network and format is irreplaceable, especially now that everyone knows it might not be a great business. For all the good that the Washington Post has done, it is but one publication among many; the place where those publications disseminate information is where true scale lies, but Facebook has made its priorities clear: engagement and dollars, leavened with the certainty that engineers can make it all better; the externalities that result from a focus on making people feel good are not its concern.

The weakness of Twitter, in contrast, is its unwieldy reliance on humans: to build their own feeds, to find a new network, to broadcast what they think to potentially no one. The payoff, though, is the capability of spreading information more widely and more quickly than has ever before been possible; the societal benefit is an externality that needs to be preserved.

Apple Should Buy Netflix

While much of the focus of last Thursday’s Apple announcement was on the new MacBook Pros (and the Macs that were not updated), the more interesting announcement from a strategic perspective was about Apple TV. Tim Cook stated Apple’s goals plainly:

We want Apple TV to be the one place to access all of your television. A unified TV experience. That’s one place to access all of your TV shows and movies. One place to discover great new content to watch. So today we’re announcing a new app and we simply call it ‘TV’…

After the app demo, Cook concluded:

Apple TV, iPhone and iPad have become the primary ways that many of us enjoy watching television, and now with the TV app there’s really no reason to watch TV anywhere else.

Unless, of course, you want to watch Netflix.

Apple’s Leverage Playbook

There is a bit of a playbook to the way Apple comes to dominate industries, and it is founded on customer loyalty. The best example is, naturally, the iPhone:

  • Back in 2006 Apple sought to release the original iPhone on Verizon; the leading carrier in the U.S., though, was wary of Apple’s demands that there be no Verizon branding, no Verizon control of the user experience, and no Verizon relationship with iPhone users beyond managing their data plan. Therefore, Apple launched the iPhone on the second-place carrier (AT&T née Cingular); AT&T accepted Apple’s demands in full with the hope that Apple’s famously loyal customers would see the iPhone as a reason to switch.
  • That, of course, is exactly what happened: in the five years following the iPhone launch, AT&T went from trailing Verizon by $400 million in wireless revenue to leading by $700 million; that’s a $1.1 billion switch (the arithmetic is sketched just after this list), thanks in large part to Apple loyalists’ willingness to switch carriers to get an iPhone. The effect was even greater on smaller carriers, which had no choice but to accede to Apple’s increasingly demanding terms: not only would Apple own the customers, but carriers had to agree to significant marketing outlays and guaranteed sales to carry the iPhone.1
  • Apple repeated this formula in market after market: in Japan, for example, Softbank leveraged the iPhone into huge increases in market share, forcing NTT Docomo to finally give in to Apple’s terms. Apple’s leverage also played a role in bringing China Mobile to the negotiating table, along with Apple’s ability to drive higher average selling prices for China Mobile’s then-new 4G network.
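
To make the arithmetic behind that carrier “switch” explicit, here is a minimal sketch in Python (the revenue figures are the ones cited above; the variable names and the script itself are mine, purely for illustration):

    # The $1.1 billion "switch" cited above: AT&T went from trailing
    # Verizon in U.S. wireless revenue by $400M to leading by $700M.
    verizon_lead_before = 400_000_000  # Verizon's lead at the iPhone launch
    att_lead_after = 700_000_000       # AT&T's lead five years later

    # The total swing is the deficit erased plus the lead gained.
    swing = verizon_lead_before + att_lead_after
    print(f"Revenue swing: ${swing / 1e9:.1f} billion")  # $1.1 billion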

The iPhone wasn’t the first time Apple used this approach: perhaps the most famous example of Apple coming to dominate its suppliers was the iPod and iTunes Music Store, when Apple was able to leverage its loyal users to dictate terms to the music industry. In some respects this was the more impressive achievement, because while carriers are largely undifferentiated (presuming you live in a location with comparable coverage), music labels have exclusive rights to huge catalogs of music. This should, in theory, have provided strong leverage in negotiations, but when the iTunes Music Store got started the labels were terrified of the effect music piracy was having on their business. Apple offered a better alternative to piracy, and then grew so big the labels couldn’t afford to not have their music on the iTunes Music Store.

The problem Apple has in premium video — and given that the company has been trying and failing to secure video content on its terms for years now, it definitely has a problem — is that its executives seem to have forgotten just how important the piracy leverage was to the iTunes Music Store’s success. This Wall Street Journal story from this past summer is one of many similar stories over the years detailing Apple’s take-it-or-leave-it approach to premium video content:

[Senior Vice President of Internet Software and Services Eddy] Cue is also known for a hard-nosed negotiating style. One cable-industry executive sums up Mr. Cue’s strategy as saying: “We’re Apple”…TV-channel owners “kept looking at the Apple guys like: ‘Do you have any idea how this industry works?’” one former Time Warner Cable executive says…Mr. Cue has said the TV industry overly complicated talks. “Time is on my side,” he has told some media executives.

Time may be on Apple’s side, but the bigger issue for Cue and Apple is that leverage is not; that belongs to the company that is actually threatening premium content makers: Netflix. Netflix is the “piracy” of video content, but unfortunately for Apple it is a real company capable of using the leverage it has acquired.

Netflix’s Rise

Much like Apple vis-à-vis the music industry in the 2000s, Netflix got its start by being a friend to the industry it would eventually threaten: its DVD-rental business simply added to the premium video industry’s bottom line, and when the company made the leap to streaming it was through a deal with Starz that the latter basically viewed as found money. Streaming, though, was transformative to the user experience: while Starz had an 11,000-movie catalog, the effective catalog size was one — whatever was showing on the Starz linear TV channel. On Netflix, though, the effective size of the catalog was 11,000: Netflix customers could watch whichever movie they wished whenever they wished on any device they wished.

That superior user experience drove Netflix’s ever-expanding user base, which gave the company the capability to acquire more content with money that was still pure profit for network owners; by the time said networks woke up to the fact that Netflix was devouring attention they were, like the music industry relative to Apple in the 2000s, increasingly captive to what was one of their biggest buyers. Netflix, meanwhile, was investing in its own original content, making deals with content creators directly; this strengthened Netflix’s value proposition to customers, further weakened the negotiating power of networks, and laid the groundwork for Netflix to leverage the Internet to offer its service to nearly every customer on Earth.

Netflix’s strategy has been a textbook example of Aggregation Theory; Netflix has built leverage and monopsony power over the premium video industry not by controlling distribution, at least not at the beginning, but by delivering a superior customer experience that creates a virtuous cycle: Netflix earns the users, which increases its power over suppliers, which brings in more users, which increases its power even more.

It is this virtuous cycle that drives Netflix’s $54 billion valuation, which implies a sky-high price-to-earnings ratio of 340; the company is spending billions on an ever-increasing amount of original content that threatens to cut out networks entirely, leading Hollywood to fear a content monopoly. The big question is whether Netflix has the financial firepower to pull it off: the company had -$506 million in free cash flow last quarter thanks to its ongoing shift from licensing original content (which is pay on delivery) to self-producing it (which requires investment months or years before the content is actually available); this shift gives Netflix even more leverage — and cuts out traditional networks even more — but it’s expensive, and the company has to keep raising debt on the assumption subscriber numbers will increase enough to pay for it.
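
To put those figures in context, here is a back-of-the-envelope sketch in Python (it uses only the numbers cited above; annualizing a single quarter’s cash flow is my own simplification):

    # Back-of-the-envelope math on the figures cited above.
    market_cap = 54e9        # Netflix's valuation
    pe_ratio = 340           # price-to-earnings ratio
    quarterly_fcf = -506e6   # free cash flow last quarter

    # A P/E of 340 on a $54B valuation implies very modest earnings...
    implied_earnings = market_cap / pe_ratio
    print(f"Implied annual earnings: ${implied_earnings / 1e6:.0f} million")  # ~$159 million

    # ...while the shift to self-produced content burns cash far faster
    # (naively annualizing one quarter, a simplification).
    annualized_fcf = quarterly_fcf * 4
    print(f"Annualized free cash flow: {annualized_fcf / 1e9:.2f} billion")  # -2.02, i.e. roughly -$2B/year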


There were mixed reports as to why Netflix is not in Apple’s new ‘TV’ app: Peter Kafka reported that Netflix was out even before Apple’s event, while Netflix told Wired that the streaming service is “evaluating the opportunity.”

In fact, though, I suspect those reports aren’t so different after all: Apple’s desire to be “the one place to access all of your television” implies the demotion of Netflix to just another content provider, right alongside its rival HBO and the far more desperate networks that lack any sort of customer relationship at all. It is directly counter to the strategy that has gotten Netflix this far — owning the customer relationship by delivering a superior customer experience — and while Apple may wish to pursue the same strategy, the company has no leverage to do so. Not only is the Apple TV just another black box that connects to your TV (and the most expensive one at that), it also, conveniently for Netflix, has a (relatively) open app platform: Netflix can deliver its content on its terms on Apple’s hardware, and there isn’t much Apple can do about it.2

The truth is that Apple’s executives seem stuck in the iPod/iTunes era, when selling 70% of all music players led to leverage over the music labels; with streaming, content is available on any device at any time, which means that selling hardware isn’t a point of leverage. If Apple wants its usual ownership of end users it needs to buy its way in, and that means buying Netflix.

Why Apple Should Buy Netflix

I am, as a rule, skeptical of large acquisitions: they are all too often a byproduct of management empire-building, and value-destructive for shareholders. Moreover, not only do promised synergies often fail to materialize, but both the acquirer and the acquired are deeply distracted for years.

I am even more dubious when an acquisition entails combining horizontal and vertical business models:3 horizontal business models, like Netflix’s, entail reaching the maximum number of customers across all devices in order to better leverage up-front costs; vertical business models, like Apple’s, entail offering exclusive services to increase the differentiation of devices sold at a profit.

So why am I advocating an acquisition that is both large and entails combining two orthogonal business models? Surprisingly, I think the argument for Apple is more compelling:

  • As I argued earlier this year, the iPhone was the pinnacle of the product business model: it leveraged software to sell an incredible number of highly differentiated physical devices with a fabulous profit margin (in both percentage and absolute terms), but the future of high-dollar physical goods is to be offered as a service. I strongly suspect this reality was an important reason for Apple’s reset of the Apple Car project.
  • In an ideal world one could argue that Apple should change its employees’ compensation mix to more strongly favor high salaries over stock, dramatically increase its dividend program, and gracefully ride its hardware business model as long as it could; here on planet Earth Apple needs a growth engine to replace the iPhone, if not in reality then at least in potential.
  • Apple is at its best when it is creating new products that are the best they can possibly be; it is a capability that is rather independent of Apple’s biggest strategic assets: its dedicated user base and massive cash pile.

A Netflix acquisition would:

  • Give Apple one of the strongest entrants when it comes to business models of the future
  • Provide a far more compelling growth narrative than its current hardware business (particularly given the advantages Apple gives Netflix, which I will discuss below)
  • Leverage Apple’s assets in a way that leaves the product company free to focus on what it does best

The payoff for Netflix is more straightforward:

  • As I noted above, Netflix’s valuation is already sky-high, with a stock so volatile that CEO Reed Hastings felt compelled to apologize to investors on Netflix’s recent earnings calls. The issue is that Netflix’s potential is massive for all the reasons I described above, but realizing that potential entails spending money the company hopes to gain from future subscribers. Ergo, any surprises in churn or new user numbers send the stock on a roller coaster. Having Apple’s financial backing would alleviate those concerns.
  • Apple’s bank account would also allow Netflix to accelerate its strategy of complete ownership of original content. As I hinted at above, most original Netflix content to date has been licensed, not owned, which is problematic in two ways: first, Netflix faces some restrictions on said content, whether it be a temporal license or a geographic one. Second, Netflix isn’t realizing the full profit from its original content in perpetuity; given that Netflix’s business model is powerful precisely because content is valuable not only when it is shown the first time but every time thereafter, this is an unfortunate giveaway dictated by Netflix’s meager cash position. With Apple behind it Netflix could pursue the same strategy it used for this summer’s Stranger Things: produce content without any middlemen, and reap the proceeds — and leverage the freedom — forever.
  • While Apple should keep Netflix cross-platform (limiting Netflix to Apple devices would be massively value destructive — Netflix’s value is predicated on being everywhere — and not even that helpful given that Apple’s devices already dominate their price points), making Netflix available by default on every Apple device could still drive Netflix subscriptions. This could be especially effective internationally, where Apple’s brand is much stronger than Netflix’s.

Make no mistake, this would be a massive deal: Apple would probably need to pay a 20% premium at a minimum, which means an acquisition price north of $65 billion (and I’d bet higher). And yet, the biggest reason I’m skeptical it will happen is that I’m not sure Netflix would say yes: the company has made it this far with a ladder-up strategy predicated on delivering a superior customer experience, and provided the company can keep the cash flowing the leverage in video is all theirs. Granted, Amazon Prime Video is a big threat, particularly because its orthogonal business model and big-company backing give it the ability to match Netflix dollar-for-dollar when it comes to acquiring content, but having made it thus far, does Hastings want to take the easy way out now?
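
A quick sketch of that acquisition math in Python (the 20% premium is the floor suggested above, not a prediction):

    # Acquisition math from the estimate above: at least a 20% premium
    # on Netflix's $54 billion valuation.
    market_cap = 54e9
    premium = 0.20  # a floor; the actual premium would likely be higher

    floor_price = market_cap * (1 + premium)
    print(f"Floor price: ${floor_price / 1e9:.1f} billion")  # ~$64.8B, i.e. roughly $65 billion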

As for Apple, Cook has been resolute in following the Steve Jobs playbook, which would seem to rule out a transformative acquisition of this nature. Still, strains are showing: in retrospect the Apple Watch was rushed to market, and the company is raising prices to preserve margins and average selling prices even as it seems to be cutting costs on the margins. Wouldn’t it be a relief to sell a future based on more than squeezing the last drops of blood out of the iPhone rock? Indeed, the iPhone as cash cow and Netflix — run as an independent subsidiary — as growth driver would arguably create the greatest possible freedom to recreate the future once again.


  1. As I’ve noted several times in the Daily Update, these contracts gave Apple remarkable precision in forecasting iPhone growth; that the iPhone is mostly everywhere now is, I suspect, a reason why Apple’s forecasting has been off (in both directions) for the last several quarters 

  2. In theory Apple could mandate that all streaming apps tie in to the TV app, but I think the company would soon find that Netflix could drive the sale of other streaming devices more easily than Apple can drive the surrender of Netflix’s most important strategic advantage 

  3. Hello AT&T and Time Warner

Surface Studio, Nintendo Switch, and Niche Strategies

There are few things that can bring geeks (like me) to the edge of hyperbolic hysteria like compelling new hardware videos, and this last week had not one but two!

First, the Nintendo Switch:

Then, yesterday, the Microsoft Surface Studio:

There’s no question both products are exciting in their own right; what makes them compelling, though, is not simply the technology demonstrated, but the fact that both, unlike their forebears, are clearly designed with the smartphone in mind.

The Wii U Mistake

The Nintendo Wii U and the Surface RT were both launched at the end of 2012; both were miserable disasters, and both for largely the same reasons: they targeted markets that no longer existed.

Back in 2006 the Wii came out of nowhere to win the seventh generation of consoles, outselling the Xbox 360 and PlayStation 3 despite the fact Nintendo’s hardware was significantly underpowered relative to its competition. The key was the Wii’s motion control, best manifested by the seemingly simplistic Wii Sports; it turned out that simplicity was a virtue, attracting casual gamers who had long since abandoned consoles or never considered one in the first place, and Wii Sports went on to become the best-selling video game of all time.1

Nintendo sought to recreate the Wii’s success with the Wii U; the eighth-generation console finally supported high-definition graphics, but it was still significantly underpowered relative to the Xbox One and PS4. Instead, Nintendo relied on another gimmick — a second touch screen on a tablet-like controller — but not only was the gimmick a flop with developers (including Nintendo), it was completely ignored by consumers; the console has been all but discontinued after selling fewer than 15 million units.

There are lots of reasons the Wii U failed — including a name that was easily conflated with its predecessor, a lack of compelling first-party titles at launch, and Nintendo’s usual struggles with 3rd-party developers (many of them self-inflicted through both business practices and product decisions) — but the most important reason is that the market Nintendo exploited with the Wii no longer existed: casual gamers now owned smartphones, and smartphone gaming was good enough (and superior, if you considered convenience and ease-of-use). Meanwhile, Microsoft and especially Sony had continued their focus on dedicated gamers and 3rd-party developers, leaving the Wii U in the middle of nowhere: not good enough for hard-core gamers, and superfluous for everyone else.

The Surface Mistake

The Surface, meanwhile, was designed to be the physical manifestation of Windows 8: a touch-based tablet combined with the power of a traditional PC. Steve Ballmer said at the launch event:

The past several years have seen great change in the industry and great innovations coming from Microsoft. We’ve helped usher in a new era of cloud computing, we’ve embraced mobility, we’re redefining communications, and attempting to transform entertainment. In all that we have done Windows is the heart and soul of Microsoft: from Windows PCs to Windows servers to Windows Phones and Windows Azure, Windows has proven to be the most flexible general purpose software ever created…

With Windows 8 we’ve reimagined…Windows from the chipset to the user experience to power a new generation of PCs that enable new capabilities and new scenarios. We approached the Windows 8 product design in a forward-looking way: we designed Windows 8 for the world we know in which most PCs are mobile and people want access to information and the ability to create content from anywhere, anytime. People want to do all of that without compromising the productivity that PCs are uniquely known for, from personal productivity applications to technical applications, business software and literally millions of other applications that are written for Windows.

The problem is that while Windows may have still been the heart of Microsoft, it was no longer the heart of computing: the smartphone was. Much like the Wii U, the original ARM-based Surface RT failed for lots of product-related reasons — including the fact that it was under-powered and lacked compelling 3rd-party applications,2 and that Windows 8’s looks far outweighed its ease-of-use — but the most important reason is that the market Microsoft exploited with the general-purpose PC no longer existed: casual PC users now owned smartphones, and smartphone computing was good enough (and superior, if you considered convenience and ease-of-use). Meanwhile, traditional Windows OEMs and especially Apple had continued their focus on dedicated PC users, leaving the Surface in the middle of nowhere: not good enough for hard-core PC users, and superfluous for everyone else.

Microsoft’s Transformation

In the intervening years the advice for Nintendo and Microsoft has been strikingly similar: stop making hardware (I haven’t said that about Nintendo, but I certainly did about the Surface). The world has changed; it’s time to move on and adapt.

Microsoft in particular has done exactly that, to a degree I frankly didn’t think was possible as long as Windows was around. There are two divisions of the company — Productivity and Business Processes and Intelligent Cloud — that are building for a future where Windows is one of many computing platforms, and an increasingly unimportant one at that.3 Meanwhile, the company’s third division (More Personal Computing) is basically a collection of businesses, most prominently Windows, that are cash cows but don’t figure into Microsoft’s long-term future as a services company.

I have applauded Microsoft CEO Satya Nadella for pulling this split off, not just organizationally but culturally; Ballmer’s (correct) contention that Windows was the center of Microsoft was holding the entire company back, and it was why he needed to be replaced for Microsoft to survive in a world centered on iOS and Android. What I didn’t expect, though, was that the demotion of Windows would be just as good for Windows — specifically Surface — as it was for Microsoft.

The Surface Studio

Perhaps the most important feature of the Surface Studio is its price: $3,000 for the lowest-end model, and that doesn’t include the innovative “Dial”. This price point immediately attracted a lot of consternation on Twitter amongst Windows fans in particular, while some industry observers argued that most consumers aren’t artists anyway. Both true!

But what is also true is that all PCs are niche devices: for most people, particularly outside the U.S., a smartphone is all they need or care to buy. The world today is the exact opposite of the world a mere decade ago, when we bought dedicated devices to plug into our digital hub PCs; now the smartphone (and cloud) is the hub, and everything else is optional.

Selling a niche device is a fundamentally different proposition than selling a general purpose one: the question of why a consumer would buy a general purpose device (like a PC ten years ago, or a smartphone today) is a solved one; the only question is which. Niche devices, on the other hand, face two hurdles to adoption: before a consumer chooses which device to buy they need to be convinced as to why they need a niche device in the first place. And, because answering “why” is even more difficult than winning “which”, niche device makers ought to focus on being a clearly superior solution to an identified market need; being cheap is not only not a priority, it’s a distraction.

The Surface Studio does this brilliantly; as the launch video shows, an incredible amount of engineering and expensive manufacturing went into the physical product, particularly the screen and hinge. The result, though, is that if you are an artist or graphic designer or architect or musician or do any sort of activity that requires drawing on or touching your display, and need the productivity capabilities of a PC (hello file system!), then there is nothing on the market like the Surface Studio (including the Surface Pro). Microsoft has rightly pulled out all the stops to make the perfect device for you. Does it cost a lot? Of course it does! But as Apple has demonstrated for years, price is less important than differentiation. To insist that Microsoft make the product cheaper is the exact wrong strategy for a niche device maker, and I suspect it is rooted in the false assumption that the PC is a general purpose device that ought to appeal to everyone.

Certainly the Studio’s success is not assured, in large part because Microsoft’s target audience is using OS X. It is a tall order to get people to switch operating systems they are familiar with, which emphasizes the importance of Microsoft focusing on differentiation, not price (if price were all that mattered Windows would never have lost share to OS X in the first place). Just as important, though, is that today more people are even considering the possibility: the single best way to build a brand that attracts (as opposed to being the default) is to build a product that is the best possible one you can build, and only then make it as affordable as possible. For too long Microsoft approached that problem backwards, exacerbating the secular shift from PCs to smartphones.

The Nintendo Switch

As for Nintendo, the beloved gaming company has certainly taken steps in the right direction: the company lent its intellectual property (and investment dollars) to Niantic, driving the phenomenal success of Pokémon Go. Perhaps more significantly for the bottom line, the company has leveraged its partnership with mobile game developer DeNA to develop the upcoming Super Mario Run; encouragingly, like Pokémon Go, Nintendo is taking an established concept (a “runner”, in this case) and applying its intellectual property on top. It’s a great way to leverage Nintendo’s most valuable assets.

There are encouraging signs when it comes to the Switch as well:

  • It looks like (though it is not confirmed) Nintendo is abandoning touch as an input, which is exactly right. Touch on Nintendo products debuted in 2004 on the Nintendo DS, when the iPhone was but a gleam in Steve Jobs’ eye; today touch-focused games are going to be developed for the far larger smartphone market, and keeping the technology would only make Nintendo’s product significantly more expensive and/or far worse with regards to screen quality (the current 3DS and Wii U touch screens are embarrassing).
  • What Nintendo is doubling down on is controllers, another smart move. I argued in 2014 that controllers are so important to the user experience of consoles that they will hold off general purpose devices like Apple TVs when it comes to living room gaming; Nintendo’s bet is that they can attract gamers who want mobility by offering high-fidelity control that smartphones cannot match.4
  • Nintendo also looks set to unleash a flurry of first-party titles: the company clearly gave up on the Wii U quite a while ago, shifting resources to the Switch. If the product launches with titles like Mario, Zelda, etc., it will provide a big boost that may actually attract third-party developers.

However, there remain two big questions: first, while Microsoft is a highly diversified business that can afford to sell Surface Studio to a very narrow niche, Nintendo’s entire business, outside of its nascent smartphone efforts, is its consoles. There is definitely a console niche, but Sony and Microsoft are filling it admirably. Is a portable niche big enough to support Nintendo?

Second, will Nintendo fully embrace a niche strategy? As I noted above, in a niche it is most important to convince consumers that they want your device; only then will they decide if they can afford it. If Nintendo skimps on the quality of its components, the performance of the device, or battery life just to save a few bucks, it may well please the people who were going to buy the device anyway, but plenty of others will stick with their good-enough smartphones or clearly superior consoles. Obviously the Switch should not be absurdly expensive, but it can definitely be too cheap.


Over the last couple of years, as it has become clear that rounded rectangles of glass and aluminum running either iOS or Android “won” the smartphone wars, it has been tempting to fret that hardware innovation would slow; and, arguably, in the case of smartphones, it has. In fact, though, I expect that the reality of the smartphone being the dominant general purpose device will open the doors for more and more devices like Surface Studio and the Nintendo Switch.

What might be created if you start with the assumption that the smartphone exists? Perhaps you would make sunglasses with a camera, or a watch, or an activity tracker, or a drone. I noted in Snapchat Spectacles and the Future of Wearables that the establishment of the PC led to an explosion of dedicated devices like PDAs, digital cameras, GPS devices, and digital music players. Now that those have been subsumed into the smartphone, there are new opportunities, and in a twist of fate it is smartphone also-rans like Microsoft and Nintendo — along with smartphone-native companies like Snapchat — that have more freedom to experiment given they have nothing to protect. It’s never been better to be a geek!


  1. Including bundled versions, breaking the record held for 20 years by Super Mario Bros. 

  2. I am painfully aware of this point, as I was on the Windows 8 team charged with bringing 3rd-party applications on board 

  3. To be fair, the majority of revenue in both divisions — from Office and Windows Server and its related products — still rests on Windows PC being the center of enterprise 

  4. Yes, there are add-on controllers for the iPhone, but games can’t require them, which dramatically reduces their utility

The IT Era and the Internet Revolution

I like to say that I write about media generally and journalism specifically because the industry is a canary in the coal mine when it comes to the impact of the Internet: text shifted from newsprint to the web seamlessly, completely upending the industry’s business model along the way.

Of course I have a vested interest in this shift: for better or worse I, by virtue of making my living on Stratechery, am a member of the media, and it would be disingenuous to pretend that my opinions aren’t shaped by the fact I have a personal stake in the matter. Today, though, and somewhat reluctantly, I am not just acknowledging my interests but explicitly putting Stratechery forward as an example of just how misguided the conventional wisdom is about the Internet’s long-term impact on society, in ways that extend far beyond newspapers (but per my point, let’s start there).

What Killed Newspapers

On Monday Jack Shafer, the current dean of media critics, asked What If the Newspaper Industry Made a Colossal Mistake?:

What if, in the mad dash two decades ago to repurpose and extend editorial content onto the Web, editors and publishers made a colossal business blunder that wasted hundreds of millions of dollars? What if the industry should have stuck with its strengths — the print editions where the vast majority of their readers still reside and where the overwhelming majority of advertising and subscription revenue come from — instead of chasing the online chimera?

That’s the contrarian conclusion I drew from a new paper written by H. Iris Chyi and Ori Tenenboim of the University of Texas and published this summer in Journalism Practice. Buttressed by copious mounds of data and a rigorous, sustained argument, the paper cracks open the watchworks of the newspaper industry to make a convincing case that the tech-heavy Web strategy pursued by most papers has been a bust. The key to the newspaper future might reside in its past and not in smartphones, iPads and VR. “Digital first,” the authors claim, has been a losing proposition for most newspapers.

Shafer’s theory is that the online editions of newspapers are inferior to their print editions; ergo, people read them less. To buttress his point Shafer cites statistics showing that most local residents don’t read their local newspaper online.

The flaw in this reasoning should be obvious to any long-time Stratechery reader: people in the pre-Internet era didn’t read local newspapers because holding an unwieldy, ink-staining piece of flimsy newsprint was particularly enjoyable; people read local newspapers because it was the only option. And, by extension, people don’t avoid local newspapers’ websites because the reading experience sucks — although that is true — they don’t even think to visit them because there are far better ways to occupy their finite attention.

Moreover, while some of those alternatives are distractions like games or social networking, any given newspaper’s real competitors are other newspapers and online-only news sites. When I was growing up in Wisconsin I could get the Wisconsin State Journal in my mailbox or I could go to a bookstore to buy the New York Times; it didn’t matter if the latter was “better”; it was too inconvenient for most. Now, though, the only inconvenience is tapping a different app. Of course most readers don’t even bother to do that: they just click on whatever is in their Facebook feed, interspersed with advertisements that are both more targeted and more measurable than newspaper advertisements ever were.

The truth is there is no one to blame for the demise of newspapers — not Google or Facebook, and not 1990s era publishers. The entire linchpin of the newspaper business model was controlling distribution, and when that linchpin was obliterated by the Internet it was inevitable that the entire apparatus would collapse.

The IT Era

Make no mistake, this sucks for journalists in particular; newsroom employment has plummeted over the last decade:

[Chart: newsroom employment over the past decade]

Still, just for a moment set aside those disappearing jobs and look at what happened from roughly 1985 to 2007: at a time when newspaper revenue continued to grow, jobs didn’t grow at all; naturally, newspaper companies were enjoying record profits.

What had happened was information technology: copy could be written on computers, passed to editors via local area networks, then laid out digitally. It was a massive efficiency improvement over typewriters, halftone negatives, and literal cutting-and-pasting:

[Image: traditional newspaper copy production workflow]

Newspapers obviously weren’t the only industry to benefit from information technology: the rise of ERP systems, databases, and personal computers provided massive gains in productivity for nearly all businesses (although it ended up taking nearly a decade for the improvements to show up). What this first wave of information technology did not do, though, was fundamentally change how those businesses worked, which meant that nine of the ten largest companies in 1980 were still among the 21 largest companies in 1995.1 The biggest change was that more and more of those productivity gains started accruing to company shareholders, not the workers — and newspapers were no exception.

This is why I believe it is critical to draw a clear line between the IT era and the Internet era: the IT era saw the formation of many still formidable technology companies, but their success was based on entrenching deep-pocketed incumbent enterprises that could use technology to increase the productivity of their workers. What makes the Internet era a much bigger deal is that it challenges the very foundations of those enterprises.

The Internet Revolution

I already explained what happened to newspapers when the Internet erased local newspapers’ distribution moats in the blink of an eye; as I suggested at the beginning, though, this was not an isolated incident but a sign of what was to come. Back in July I laid out how the acquisition of Dollar Shave Club suggested the same process was happening to consumer packaged goods companies: leveraging size to secure shelf space supported by TV advertising was no longer the only way to compete. A few weeks before that I pointed out that television was intertwined with its advertisers; the Internet was eroding the business of linear TV, CPG companies, retailers, and even automotive companies simultaneously, leaving the entire post-World War II economic order dependent on sports to hold everything together.

The ways these changes arrive are strikingly similar; I call it the FANG Playbook after Facebook-Amazon-Netflix-Google:

None of the FANG companies created what most considered the most valuable pieces of their respective ecosystems; they simply made those pieces easier for consumers to access, so consumers increasingly discovered said pieces via the FANG home pages. And, given that the Internet made distribution free, that meant the FANG companies were well on their way to having far more power and monetization potential than anyone realized…

By owning the consumer entry point — the primary choke point — in each of their respective industries the FANG companies have been able to modularize and commoditize their suppliers, whether those be publishers, merchants and suppliers, content producers, or basically anyone who needs to be found on the Internet.

This is the critical difference between the IT era and the Internet revolution: the first made existing companies more efficient; the second, primarily by making distribution free, destroyed those same companies’ business models.

The Internet Upside

What is easy to forget in this tale of woe is that all of these upheavals have massively benefited consumers: that I can read any newspaper in the world is a good thing. That we have access to many more products at much lower price points is amazing. That one can search the entire corpus of human knowledge from just about anywhere in the world, or connect to billions of people, or more prosaically, watch what one wants to watch when one wants to watch it, is pretty great.

Beyond that, what are even more difficult to see are the new possibilities that arise from said upheaval, which is where Stratechery comes in. By no means is this site a replacement for newspapers: I’m pretty explicit about the fact I don’t do original reporting. And yet, I certainly wouldn’t classify the time spent reading this site in the same category as the diversions of gaming and social networking I mentioned earlier. Rather, my goal is to deliver something completely new: deep, ongoing analysis into the business and strategy of technology. It is a viewpoint that wasn’t worth cutting-and-pasting into a broadsheet meant to serve a geographically limited market, but when the addressable market is the entire world the economics suddenly work very well indeed.

Oh, and did you catch that? Saying “the addressable market is the whole world” is the exact same thing as saying that newspapers suddenly had to compete with every other news source in the world; it’s not that the Internet is inherently “good” or “bad”, rather it is a new reality, and just because industries predicated on old assumptions must now fail should not obscure the fact that entirely new industries built with new assumptions — including huge new opportunities for small-scale entrepreneurship by individuals or small teams — are now possible. See YouTube or Etsy or yes, journalism, and this is only the beginning.

The Importance of Secondary Effects

I’d certainly like to think the benefits of this change run deeper than simply ensuring I earn a decent living; it is deeply meaningful to me when I have readers say my writing helped them land a job, or that they are applying my frameworks to a particularly difficult decision they are facing, or even that they feel like they are getting a business school education for practically free. To be sure, that last point overstates things, at least from a business school’s perspective: the breadth of material covered, the degree on your resume, and of course the friends you make are big advantages. But it used to be that the choice was a binary one: spend six figures on business school or don’t; if one can pick up a useful bit of business thinking for $10/month while still being a productive worker then isn’t that a win for society?

These secondary effects will be the key to building a prosperous society amidst the ruin of the Internet’s creative destruction: what is so exciting about Uber is not the fact it is wiping out the taxi industry, but rather that transportation as a service has the potential to radically transform our cities. What happens when parking lots go away, commutes in self-driving cars lend themselves to increased productivity, or going out is as easy as tapping an app? What kind of new jobs and services might arise?

As another example, I wrote last month about the dramatic shift in enterprise software that is being enabled by the cloud. The simple ability to pay as you go has already had a big impact on startups and venture capital, but the initial impact on established companies has been to operationalize costs and increase scalability for established processes; the true transformation — building and selling software in completely new ways — is only getting started. Again, there is a difference between making an existing process more efficient and enabling a completely new approach. The gains from the former are easy to measure; the transformation of the latter is only apparent in retrospect, in part because the old way takes time to die and be repurposed.

This two-stage process is going to be the most traumatic when it comes to the already-started-and-accelerating introduction of automation and artificial intelligence. The downsides are obvious to everyone: if computers can do the job of a human, then the human no longer has a job. In the long run, though, what might that human do instead? To presume that displaced workers will only ever sit around collecting a universal basic income2 is, to my mind, to sell short the human drive and ingenuity that has already carried us far from our caveman ancestors.

I know that my perspective is a privileged one: I am a clear beneficiary of this new world order. Moreover, I know that very good and important things will be lost, at least for a time, in these transitions. I strongly agree with Shafer, for example, that newspapers “still publish a disproportionate amount of the accountability journalism available” and that “we stand to lose one of the vital bulwarks that protect and sustain our culture.”

Fixing that and the many other problems wrought by the Internet, though, requires looking forwards, not backwards. The most fundamental assumptions underlying businesses — critical institutions in any society — have changed irrevocably, and to pretend they haven’t is a colossal mistake.


  1. Gulf Oil, which was the 7th largest company in 1980, was the exception 

  2. Which I support  

Chat and the Consumerization of IT

It was in 2001, the same year the iPod was introduced, that Douglas Neal and John Taylor coined the phrase “Consumerization of IT”; they set down a specific definition in this 2004 position paper:

The defining aspect of consumerization is the concept of ‘dual use’. Increasingly, hardware devices, network infrastructure and value-added services will be used by both businesses and consumers. This will require IT organizations to rethink their investments and strategies.

Neal and Taylor’s argument was rooted in math: there were more consumers than there were IT users, which meant that over the long run the rate of improvement in consumer technologies would exceed that of enterprise-focused ones; IT departments needed to grapple with increased demand from their users to use the same technology they used at home. The pinnacle of the trend seemed to be the iPhone: designed unabashedly as a consumer device, Apple’s product was so superior to what was on the market that employees clamored to use it for company business; surprisingly, Apple helped out, adding significant enterprise-focused features to the iPhone.

Meanwhile, a year earlier Google had launched Google Apps for Your Domain, bringing popular Google consumer applications like Gmail, Google Talk, and Google Calendar to enterprises. The company basically repeated Neal and Taylor’s thesis in the blog post announcement:

A hosted service like Google Apps for Your Domain eliminates many of the expenses and hassles of maintaining a communications infrastructure, which is welcome relief for many small business owners and IT staffers. Organizations can let Google be the experts in delivering high quality email, messaging, and other web-based services while they focus on the needs of their users and their day-to-day business.

Former Microsoft CEO Steve Ballmer originally dismissed Google Apps, but Microsoft eventually came to view the cloud service as a mortal threat; by 2012 Ballmer had adopted Consumerization of IT as one of his main talking points, writing in that year’s shareholder letter:

Fantastic devices and services for end users will drive our enterprise businesses forward given the increasing influence employees have in the technology they use at work — a trend commonly referred to as the Consumerization of IT. It’s one more reason Microsoft is committed to delivering devices and services that people love and businesses need.

In Ballmer’s interpretation, Consumerization of IT was about delivering enterprise products that were IT-friendly with a nice consumer-like interface on top, and while outside observers may have considered the latter a particularly significant challenge for Microsoft, there’s no question the company was well-placed when it came to the former; after all, IT managers were the ultimate decision-makers.

Meanwhile, an enterprise startup called Atlassian, founded in 2002, was offering a new definition of Consumerization of IT. Their main product, the developer-focused JIRA project management software, was not a consumer product ported to the enterprise, like Google Apps, nor was it a traditional IT product with a consumer-like interface peddled to IT managers to get users off their backs. What was so innovative about JIRA was how it was sold: to teams, not companies. Thanks to the distributive power of the web, the company never bothered to build a sales force to wine-and-dine CIOs; rather, the product could be downloaded from a website (and later signed up for as a cloud service) and paid for with a credit card; the primary form of marketing was word-of-mouth.


So which definition of the Consumerization of IT is most meaningful? Is it consumer products ported to IT, consumer UI on traditional enterprise products, or a new business model that transforms the relationship between buyers and sellers? Certainly all three factors are important to the rise of software as a service, but the upcoming chat wars will provide an interesting test as to which is the most important.

Approach 1: Workplace by Facebook

Earlier this week Facebook formally introduced Workplace by Facebook. From the company’s blog:

At Facebook, we’ve had an internal version of our app to help run our company for many years. We’ve seen that just as Facebook keeps you connected to friends and family, it can do the same with coworkers…We’ve brought the best of Facebook to the workplace — whether it’s basic infrastructure such as News Feed, or the ability to create and share in Groups or via chat, or useful features such as Live, Reactions, Search and Trending posts. This means you can chat with a colleague across the world in real time, host a virtual brainstorm in a Group, or follow along with your CEO’s presentation on Facebook Live. We’ve also built unique, Workplace-only features that companies can benefit from such as a dashboard with analytics and integrations with single sign-on, in addition to identity providers that allow companies to more easily integrate Workplace with their existing IT systems.

Workplace is a near perfect application of Neal and Taylor’s original thesis: Facebook has already done much of the R&D and all of the backend buildout for the product by virtue of building the Facebook consumer app, meaning it can offer a very robust service at very competitive prices.

Moreover, it’s a service that potential users already know how to use; training and adoption remain significant barriers for all enterprise products, including cloud-based ones, which means Workplace will have a head start over competitors.

Still, there are challenges that arise from Workplace’s consumer roots: for one, while Facebook is promising that Workplace data will be both secure and independent from Facebook itself, the company will still have a reputational problem to overcome. Relatedly, the business model is a complete departure: Facebook has never charged for software before, and as Google learned a decade ago, licensing revenue comes with new obligations around service and responsiveness that require the development of completely new organizational skills.

There’s one more hangup: it’s not as easy to get started as Facebook suggests. You need to apply, at which point you will be contacted by Facebook’s sales team, who “will work with you to understand your needs and help launch Workplace across your organisation.”1 This is understandable given that Workplace’s biggest gains come from connecting the entire organization, but it’s a lot closer to a traditional enterprise selling motion.

Approach 2: Skype Teams by Microsoft

Microsoft already owns Yammer, which will have to give up its mantle as “Facebook for work.” It seems much of the company’s energies, though, have been focused on building Skype Teams, a product first reported by MS Power User in early September:

Skype Teams is going to be Microsoft’s take on messaging apps for teams. Skype Teams will include a lot of similar features which you’ll find on Slack. For example, Skype Teams will allow you to chat in different groups within a team, also known as “channels”. Additionally, users will be able to talk to each other via Direct Messages on Skype Teams.

This isn’t exactly a surprise given reports Microsoft considered (some say tried) buying Slack; at the time it was reported that Bill Gates was a particularly vocal proponent of building a competitor. What is more interesting is the licensing model, which was reported by Petri late last month:

Skype Teams will be part of Office 365 and will be available to anyone who is already subscribed to a business plan, likely starting with E3 SKU. Skype Teams integrates deeply with your Office 365 content as well, with the ability to share your calendar inside the app as well as join meetings too. To no surprise, this application is built on the company’s new cloud platform and very well may be the future of Skype for Business. Make no mistake, Microsoft is going for the jugular on Slack with this product as many corporate customers already use Office 365 and with this product being bundled into that service, there will be no need to pay for Slack.

This is very much in-line with the Ballmer view: build a product explicitly for enterprise that has all the trappings of consumer software. Then, in true Microsoft fashion, leverage its existing position in companies to drive adoption. In the case of Skype Teams, companies that have Office 365 will have the choice of paying extra for Slack or simply using Skype Teams for free; ideally they will never even try to compare the two — and Microsoft will have an entire field organization working to ensure that is exactly what happens.

There are, of course, challenges with this approach. As is their wont, Microsoft seems to be over-indexing on brand awareness (everyone knows the Skype name) as opposed to brand reputation (not everyone loves the Skype experience). More importantly, Microsoft has to prove they can build a competitive user experience while avoiding the temptation of competing on features; when Instagram copied Snapchat’s Stories feature I wrote in The Audacity of Copying Well:

The problem with focusing on features as a means of differentiation is that nothing happens in a vacuum: category-defining products by definition get a lot of the user experience right from the beginning, and the parts that aren’t perfect — like Facebook’s sharing settings or the iPhone’s icon-based UI — become the standard anyways simply because everyone gets used to them.

Perhaps more importantly, though, Microsoft’s Office 365-based model is a double-edged sword: the “you already paid for it” pitch is one that resonates with CIOs, not necessarily users and team leaders who may already be using a competitor — or be tempted to try.

Approach 3: Slack

The only thing that has grown faster than Slack’s user base and valuation is its hype and mindshare among the tech press, so it’s not a surprise that Workplace by Facebook is being characterized as a Slack competitor; undoubtedly Skype Teams will be described the same way.

Slack represents the evolution of the Atlassian model: getting started takes nothing more than an email address. There is no server software to install, no contracts to sign, and you don’t even need a credit card.2 There is a generous free level, which means the only friction entailed in giving the product a try — or in accepting an invitation — is said sign-up. And as we’ve learned in the consumer space, when everything else is equal the quality of the user experience matters most.

Moreover, because this is messaging, the user experience is about more than the user interface: it is first and foremost about who else is using Slack. And, at least amongst developers, that is a whole host of other people and interest groups one might want to be connected to. No, Slack cannot leverage itself into a particular company through another SKU like Microsoft will leverage Office 365 for Skype Teams, but that means the company can willingly (or unwillingly) let its users connect to teams far beyond their own organization, adding on a consumer-like network effect; granted, this won’t necessarily drive CIO decision-making, but it does enhance the end user experience, which matters more the lower down the hierarchy the decision-maker is.

Where Slack is weaker is, well, being a proper enterprise app. While it is used widely in startups, Atlassian’s Hipchat has won contracts at bigger companies like Apple, Twitter, and Uber. In some cases this is because Atlassian offers an on-premise version, in others it comes down to issues like compliance and data storage. Slack will surely catch up, but it’s worth noting that feature weakness is a natural outcome of the model: there are no CIOs demanding features via sales people desperate to close a deal. The feedback loops are a little looser.


Slack’s network effect notwithstanding, the reality of enterprise software is that there will likely be multiple winners. Facebook’s product looks very compelling, but it’s fair to wonder if the company will be truly committed to the category; Google came out of the gate fast a decade ago, and then let its enterprise efforts stagnate until this year. Microsoft’s product hasn’t even launched yet, but presuming it is good enough, the ability to leverage Office 365 is a real advantage. Slack, meanwhile, has all of the advantages of a startup: singular focus, aligned incentives, and, most potent of all, a new kind of business model.

What has to worry Microsoft is that their advantage is waning: being a part of Office 365 is great…as long as a company uses Office 365. For all the good work that CEO Satya Nadella has done to reorient Microsoft away from Windows and towards services, the company is still lacking a new generation of products that sell themselves — and, by extension, sell the rest of Microsoft.

Of course that may be unrealistic: as I wrote in the case of Oracle, the attractiveness of suites is greatly diminished in a world of cloud-based software; decision-making is devolving away from CIOs concerned with up-front costs and having one throat to choke, and toward users and managers concerned with the user experience. And, by extension, in an a la carte world, standard interfaces and easy integrations become paramount: that means ecosystem building and developer support, which are a whole lot easier to accomplish if your product is free for anyone to use (advantage Slack).

Maybe that is the ultimate meaning of Consumerization of IT: it’s not just products and user interfaces moving from consumer to enterprise; rather, the ultimate manifestation is an enterprise product that can be used by consumers. That means scalable infrastructure, it means nailing the user experience, and, most critically, it means an entirely new business model.


  1. Note the spelling: organisation with an ‘s’. You can tell Workplace was developed in Facebook’s London office, not in California 

  2. This is the case for Atlassian’s cloud-based products as well 

Google and the Limits of Strategy

John Gruber is not impressed by the suggestion that Google’s new Pixel phone, which the company introduced at a keynote yesterday, is the first time the company has competed head-to-head with the iPhone:

Google has been going head-to-head against the iPhone ever since the first Android phone debuted. You can’t say the Nexus phones don’t count just because they never succeeded.

Google then-VP of engineering Vic Gundotra devoted his 2010 I/O keynote to ripping into the iPhone and iPad, pedal to the metal on “open beats closed” and how an ecosystem of over 60 different Android devices (a drop in the pond compared to today) was winning, saving the world from a future where “one man, one company, one device” controls mobile. (Gundotra tossed in “one carrier”, which was true at the time, but looks foolish in hindsight.) He even compared the iPhone to Orwell’s 1984. Really.

The only thing Orwellian here is Google’s attempt to flush down the memory hole their previous attempts to go head-to-head against the iPhone. Watch the first 10 minutes of Gundotra’s 2010 keynote — the whole thing is about beating the iPhone.

Gruber is both right and wrong: yes, Gundotra’s rhetoric was stridently anti-Apple, but at the end of that keynote everyone in attendance received an HTC EVO 4G; when it came to the zero-sum game of actually putting phones in people’s pockets, Apple’s competitors (then) were companies like HTC, Motorola, and especially Samsung. Granted, those manufacturers’ phones ran Google’s Android software, but then again Google’s software ran on the iPhone, too; in fact, at the time of Gundotra’s speech, Google Maps, YouTube, and Google Search were all built into iOS.

As we know today that wouldn’t be the case for long: two years later iOS 6 dumped the YouTube app and, more famously, changed the default mapping application from one based on Google to Apple’s own.1 The proximate cause was not Gundotra’s speech, though: in fact, Apple had purchased a mapping company called Placebase in 2009, and a few months later Google had introduced turn-by-turn navigation; it was Android-only.

Google Versus Android

Very few people know for sure who exactly is to blame for the Google-Apple breakup. Yes, Steve Jobs was livid that Android phones looked a lot like iPhones, but remember, Google purchased Android two years before the iPhone came out (and a year before Eric Schmidt joined Apple’s board) as a hedge against Microsoft; once the iPhone came out, why but for pride would you build a phone any differently?

Where Google went wrong was with that maps decision: making turn-by-turn directions an Android-exclusive differentiated Android as a platform, but to what end? So that HTC et al could sell a few more phones, and pay Google nothing for the privilege?

The truth is that when it came to making money Google and Apple were not competitors in the slightest: Apple was a vertical company that expended R&D and capital investment to design and build devices with significant material costs, and then sold those devices in a zero-sum competition against other manufacturers. Yes, marketshare was important, but so was profitability: Apple traded off reaching the entire market in favor of creating a differentiated experience for which customers would pay a premium that far exceeded the (significant) marginal cost of each iPhone.

Google, meanwhile, has always been a completely different kind of company — a horizontal one. Nearly all of Google’s costs are fixed — R&D and data centers — which means profitability goes hand-in-hand with marketshare, which by extension means advertising is the perfect business model. The more people using Google, the more those fixed costs can be spread out, and the more attractive Google is to advertisers.
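
A back-of-the-envelope sketch makes the contrast concrete; every figure below is invented purely to illustrate the structural difference, not an estimate of either company’s actual economics:

```python
# Toy contrast between a vertical (per-device) business and a
# horizontal (fixed-cost, ad-funded) one. All numbers are made up
# solely to illustrate the structural argument above.

def vertical_profit(units, price=650.0, marginal_cost=300.0, fixed=10e9):
    """Each sale carries real material costs, so a premium on a
    modest share of the market can still be very profitable."""
    return units * (price - marginal_cost) - fixed

def horizontal_profit(users, revenue_per_user=20.0, fixed=30e9):
    """Costs (R&D, data centers) are almost entirely fixed, so each
    additional user is nearly pure margin and reach is everything."""
    return users * revenue_per_user - fixed

print(f"Vertical, 0.3B of a 2B market:   ${vertical_profit(0.3e9) / 1e9:,.0f}B")
print(f"Horizontal, all 2B users:        ${horizontal_profit(2.0e9) / 1e9:,.0f}B")
print(f"Horizontal, 1.2B (Android-only): ${horizontal_profit(1.2e9) / 1e9:,.0f}B")
```

The asymmetry is the point: a vertical player can profit handsomely from a slice of the market, while a horizontal player is punished for every user it fails to reach, at almost no savings in cost.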

This is why favoring Android in any way was such a strategic error by Google: everything about the company was predicated on serving all customers, but Android by definition would only ever be on a percentage of smartphones.2 Again, it’s possible Apple would have built its own Maps product regardless, but Google’s short-sighted favoring of Android ensured that for hundreds of millions of potential Google users the default mapping experience and the treasure trove of data that came with it would belong to someone else.

This is where that infamous Gundotra speech matters: I’m not convinced that anyone at Google fully thought through the implication of favoring Android with their services. Rather, the Android team was fully committed to competing with iOS — as they should have been! — and human nature ensured that the rest of Google came along for the ride. Remember, given Google’s business model, winning marketshare was perfectly correlated with reaping outsized profits; it is easy to see how the thinking and culture that developed around Google’s core business failed to adjust to the zero-sum world of physical devices. And so, as that Gundotra speech exemplified, Android winning became synonymous with Google winning, when in fact Android was as much ouroboros as asset.

Google’s Assistant Problem

In yesterday’s keynote, Google CEO Sundar Pichai, after a recounting of tech history that emphasized the PC-Web-Mobile epochs I described in late 2014,3 declared that we are moving from a mobile-first world to an AI-first one; that was the context for the introduction of the Google Assistant.

It was a year prior to the aforementioned iOS 6 that Apple first introduced the idea of an assistant in the guise of Siri; for the first time you could (theoretically) compute by voice. It didn’t work very well at first (arguably it still doesn’t), but the implications for computing generally and Google specifically were profound: voice interaction expanded where computing could be done, from situations in which you could devote your eyes and hands to your device to effectively everywhere, even as it constrained what you could do. An assistant has to be far more proactive than, for example, a search results page; it’s not enough to present possible answers: rather, an assistant needs to give the right answer.

This is a welcome shift for Google the technology; from the beginning the search engine has included an “I’m Feeling Lucky” button, so confident was Google founder Larry Page that the search engine could deliver you the exact result you wanted. And while yesterday’s Google Assistant demos were canned, the results, particularly when it came to contextual awareness, were far more impressive than those of the other assistants on the market. More broadly, few dispute that Google is a clear leader when it comes to the artificial intelligence and machine learning that underlie their assistant.

A business, though, is about more than technology, and Google has two significant shortcomings when it comes to assistants in particular. First, as I explained after this year’s Google I/O, the company has a go-to-market gap: assistants are only useful if they are available, which in the case of hundreds of millions of iOS users means downloading and using a separate app (or building the sort of experience that, like Facebook, users will willingly spend extensive amounts of time in).

Secondly, though, Google has a business-model problem: the “I’m Feeling Lucky” button guaranteed that the search in question would not make Google any money.4 After all, if a user doesn’t have to choose from search results, said user also doesn’t have the opportunity to click an ad, thus choosing the winner of the competition Google created between its advertisers for user attention.5 Google Assistant has the exact same problem: where do the ads go?
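
A simple expected-value sketch shows why; the click-through rates and prices below are hypothetical, not Google’s actual figures:

```python
# Expected revenue per query: a search results page vs. a one-answer
# assistant. CTRs and cost-per-click values are hypothetical.

ad_slots = [        # (click-through rate, cost per click in $)
    (0.030, 2.50),
    (0.015, 1.80),
    (0.008, 1.20),
]

page_revenue = sum(ctr * cpc for ctr, cpc in ad_slots)
assistant_revenue = 0.0  # one right answer, spoken aloud: no ad slots

print(f"Results page: ${page_revenue:.4f} expected per query")
print(f"Assistant:    ${assistant_revenue:.4f} expected per query")
print(f"Over a trillion queries, the gap is "
      f"${(page_revenue - assistant_revenue) * 1e12 / 1e9:.0f}B")
```

Whatever the real numbers, the structure is the same: every query an assistant answers directly is a query where the ad auction never runs.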

Google’s Shift

Yesterday’s announcements from Google seem designed to take on the company’s challenges in an assistant-centric world head-on. For good reason the presentation opened with the Google Assistant itself: pure technology has always been the foundation of Google’s power in the marketplace.

Today’s world, though, is not one of (somewhat) standards-based browsers that treat every web page the same, creating the conditions for Google’s superior technology to become the door to the Internet; it is one of closed ecosystems centered around hardware or social networks, and having failed at the latter, Google is having a go at the former. To put it more generously, Google has adopted Alan Kay’s maxim that “People who are really serious about software should make their own hardware.” To that end the company introduced multiple hardware devices, including a new phone, the previously-announced Google Home device, new Chromecasts, and a new VR headset. Needless to say, all make it far easier to use Google services than any 3rd-party OEM does, much less Apple’s iPhone.

What is even more interesting is that Google has also introduced a new business model: the Pixel phone starts at $649, the same as an iPhone, and while it will take time for Google to achieve the scale and expertise needed to match Apple’s profit margins, the fact that there is unquestionably a big margin built in is a profound new direction for the company.6

The most fascinating point of all, though, is how Google intends to sell the Pixel: the Google Assistant is, at least for now, exclusive to the first true Google phone, delivering a differentiated experience that, at least theoretically, justifies that margin.

It is a strategy that certainly sounds familiar, raising the question of whether this is a replay of the turn-by-turn navigation disaster. Is Google forgetting that they are a horizontal company, one whose business model is designed to maximize reach, not limit it?

I don’t think so. In fact, I think this profound strategy shift springs from a depth of thinking that is the polar opposite of the hot-headedness of the former Android leadership. It is not that Google is artificially constraining its horizontal business model; it is that its business model is being constrained by the reality of a world where, as Pichai noted, artificial intelligence comes first. In that world you must own the interaction point, and there is no room for ads, rendering both Google’s distribution and business model moot. Both must change for the company’s technological advantage to come to the fore.

In this respect Google is like the bizarro-Apple: the iPhone maker has the distribution channel and business model to make Siri the dominant assistant in its users’ lives, but there are open questions about its technology prowess when it comes to artificial intelligence specifically and services generally; moreover, efforts to improve are fundamentally stymied by the company’s device-centric culture and organizational structure.

Google’s culture and organizational structure, meanwhile, are attuned to its old business model, the one that equated marketshare with profitability, and which achieved that market share with a product development approach predicated on iteration and experimentation on top of the positive feedback loop that comes from massive amounts of data.

Phones couldn’t be more different: the Pixel is what it is, which means it has to be great on day one, and it has to be sold. The first means an organizational structure that delivers on the promise of focused integration, not willy-nilly experimentation and iteration; the second means partnerships, and outbound marketing, and a whole bunch of other things that Google has traditionally not valued. Indeed, you could see the disconnect in yesterday’s presentation: while Pichai was extremely clear about the company’s new direction, the actual product demonstrations quickly devolved into droning technical mumbo-jumbo that never bothered to explain why users should care.

This is why, much as with Apple, I can be impressed by both Google’s strategic thinking and, yes, its courage in facing this new epoch, even as I am a bit bearish about the company’s prospects: technology alone is rarely enough, and the only thing more difficult than changing business models is changing cultures.


  1. Google search still ships as the default, and Google pays dearly for the privilege 

  2. One could argue in Google’s defense that Android had the potential to wipe out the iPhone just like Windows wiped out the Mac; that, though, is a complete misunderstanding of history (albeit a commonly held one). Many of us were confident in the iPhone’s market resilience even then 

  3. And further refined in 2015’s The Facebook Epoch  

  4. In fact, thanks to Google’s instant search results, the button doesn’t really exist anymore 

  5. And, by extension, said advertisers don’t have the opportunity to build a customer relationship; the potential value of such a relationship made winning that ad auction far more valuable than any single transaction that may have resulted, which is why the prices advertisers were willing to pay were always much higher than they would be under the sort of affiliate model that may work in some Assistant use cases 

  6. And no, the Nexus devices don’t count; they had neither the business model nor the infrastructure to suggest they were anything but what Google said they were: public reference devices that offered the idealized Android experience for enthusiasts for a relatively low price. Unlike Nexus phones the Pixel is launching with carrier support, top-of-the-line specs, and a price to match. The only thing missing is a multi-million dollar advertising campaign, which I suspect we’ll hear news of shortly.