Opendoor: A Startup Worth Emulating

I suppose I appreciate the efficiency with which TechCrunch expressed its skepticism for tech’s latest unicorn, Opendoor; it’s right there in the headline: Online real estate service Opendoor raises $210M Series D despite risky financing model. And, to be fair, Opendoor’s approach is risky. Here’s a summary from a feature in Forbes magazine:1

Opendoor is betting that there are hundreds of thousands of Americans who value the certainty of a sale over getting the highest price. The company makes money by taking a service fee of 6%, similar to the standard real estate commission, plus an additional fee that varies with its assessment of the riskiness of the transaction and brings the total charge to an average of 8%. It then makes fixes recommended by inspectors and tries to sell the homes for a small premium. Buyers get to shop on their own timetable, using key codes for access to the properties, and they receive a 30-day guarantee that Opendoor will buy it back if they’re not satisfied and a two-year warranty on the electrical system and major appliances…

To succeed, it has to price the homes it buys accurately, without seeing them, and it has to sell them quickly to minimize the costs of carrying them. As interest rates rise or housing prices fall, Opendoor will have to figure out how to respond. When market risk increases, the company may charge a higher fee, its own version of surge pricing.

To paraphrase that TechCrunch headline, what were those investors thinking?

Zillow and Redfin

Real estate has long been an appealing market for investors, and for good reason: there is a lot of money at stake. The United States alone has $25 trillion worth of housing, and $900 billion of that changes hands every year; that means well over $50 billion in real estate fees and even more in mortgage interest. It is a market tailor-made for those ubiquitous “If we get X% of the market” pitches that have characterized many a startup business plan.

And yet, the most successful real estate startup, Zillow (which acquired its largest competitor Trulia a couple of years ago), is little more than a glorified marketing tool: the company makes most of its revenue by getting real estate agents — the ones collecting the 6% fee, split between the buying and selling agents2 — to pay to advertise their houses on the site. Certainly a free tool that makes it easier to find houses in a more intuitive way is valuable — Zillow has acquired the sort of user base that allows it to build an advertising business for a reason — but at the end of the day the company is a tax on a system that hasn’t really changed in decades.

Redfin, meanwhile, which came up with the original houses-on-a-map idea, is taking real estate agents on directly by being a real estate agent itself: their hook is that other tech monetization favorite, being cheap. Redfin keeps half as much as a typical selling agent3 and pays its agents a salary instead of commission. Both choices limit growth: regular agents with a stake in the current system steer home buyers away from Redfin properties, and hiring and training agents who aren’t interested in the upside from commission takes a lot of time and money.

The Opendoor Model

Opendoor is unique in two respects:

  • First, Opendoor is focused on sellers, the party with the least leverage in a typical residential transaction. Real estate agents sell lots of houses, buyers can choose from lots of houses, but sellers only have a single house to sell, which means the cost of any delay or complication falls on their shoulders. Opendoor relieves that burden by making an offer for the house based solely on an address and questionnaire; if the seller accepts, Opendoor will send an inspector to verify the house, agree on any necessary repairs (which are paid by the seller), and close the deal as soon as the seller wishes.
  • Second, Opendoor explicitly charges sellers for having replaced total uncertainty with a bank wire: not just the same 6% that typically goes for buyer and seller agent fees, but also an additional 0-6% for “market risk” — i.e. dealing with the uncertainty of actually showing and selling the house — along with the aforementioned repair costs.

After that the house is on the market like any other, listed in the Multiple Listing Service (MLS), and freely accessible to regular real estate agents to whom Opendoor will pay the customary 3% fee a selling agent pays the buying agent. There are some additional niceties uniquely enabled by technology that both enhance the buying experience and keep prices down, particularly 24-hour access to listed homes with only a smartphone (and cameras to keep a watchful eye), but unlike Redfin, Opendoor isn’t seeking to compete with agents on price, giving them no reason to retaliate; remember, Opendoor actually charges more!
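
To make the economics concrete, here is a minimal back-of-envelope sketch of the seller’s side of an Opendoor transaction, based on the fee structure described above; the specific offer price, risk fee, and repair figures are hypothetical:

```python
def opendoor_seller_proceeds(offer_price, risk_fee_rate, repair_costs):
    """Back-of-envelope seller proceeds under Opendoor's published fee structure.

    offer_price   -- Opendoor's offer, priced sight unseen
    risk_fee_rate -- the additional 0-6% "market risk" fee, which varies by transaction
    repair_costs  -- inspector-recommended repairs, paid by the seller
    """
    service_fee = 0.06 * offer_price        # comparable to the standard agent commission
    risk_fee = risk_fee_rate * offer_price  # Opendoor's "surge pricing" for uncertainty
    return offer_price - service_fee - risk_fee - repair_costs

# Hypothetical example: a $200,000 offer at the 8% average total charge
# (6% service + 2% risk) and $3,000 in repairs.
print(opendoor_seller_proceeds(200_000, 0.02, 3_000))  # 181000.0
```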

The Ingredients of Disruption

This is a dynamic that Redfin never understood: using technology to list houses was a sustaining advantage that was trivially co-opted by Zillow et al. working hand-in-hand with realtors, while competing on price incentivized independent realtors to effectively collude against Redfin without any overt organization.

More broadly, technology alone is rarely sufficient when it comes to entering existing markets: at best a company focused on technology can skim a tax from incumbents, whether through advertising, like Zillow, or through service fees; taxes, though, by definition only capture a slice of the value being generated. To truly disrupt a market requires both a differentiated means of meeting the needs of an underserved market and a new business model.

Given that, look again at Opendoor’s two unique features:

  • Sellers are uniquely disadvantaged under the current system, which is another way of saying they are an underserved market with unmet needs
  • Opendoor has a new business model: taking advantage of a theoretical arbitrage opportunity (earning fees on houses sold at a slight mark-up) by leveraging technology in pursuit of previously impossible scale that should, in theory, ameliorate risk

Unlike Zillow and Redfin, Opendoor has the pieces in place to actually disrupt the market over the long run.

Opendoor Risk

There’s no question Opendoor’s approach is, to use TechCrunch’s term, risky. The company needs to accurately price homes sight unseen, carry them on the balance sheet while waiting for them to sell, and bear the risk of market collapses, and that’s above-and-beyond the usual startup risks: finding customers, scaling the product, dealing with regulations and paperwork that differ in every market it enters. It’s a whole lot less risky to just build a fancy search engine and sell advertising, or try to win on price.

Risk, though, is not only about downside; it’s about upside. More than that, the level of downside risk is correlated with upside risk: Opendoor has many more reasons why it might fail than Zillow or Redfin, but its potential upside is far greater as a result. First is the immediate opportunity: sellers who can’t wait. However, as Opendoor grows its seller base, especially geographically, its risk will start to decrease thanks to diversification and sheer size; that will allow it to lower its “market risk” charge, which will lead to more sellers. More sellers means both less risk and an increasingly compelling product for buyers to access, first with a real estate agent and eventually directly. More buyers will mean lower marketing costs and faster sell-through, which will lower risk further and thus lower prices, pushing the cycle forward. It’s even possible to envision a future where Opendoor actually does uproot the anachronistic real estate agent system that is a relic of the pre-Internet era, and it will have done so with realtors not only not fighting it but, on the buying side, helping it.
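
The diversification half of that argument is just statistics: idiosyncratic pricing errors average out as the number of homes grows, while a market-wide shock does not. Here is a toy simulation that illustrates both halves; all parameters are hypothetical, not Opendoor’s actual numbers:

```python
import random

def portfolio_error(n_homes, idiosyncratic_sd=0.05, market_shock=0.0):
    """Average pricing error across n homes: per-home noise plus a common shock."""
    errors = [random.gauss(0, idiosyncratic_sd) + market_shock for _ in range(n_homes)]
    return sum(errors) / n_homes

random.seed(0)
for n in (10, 100, 1_000):
    trials = [portfolio_error(n) for _ in range(500)]
    rms = (sum(e * e for e in trials) / len(trials)) ** 0.5
    print(f"{n:>5} homes: error spread ~ {rms:.4f}")  # shrinks roughly as 1/sqrt(n)

# A market-wide shock -- a housing downturn -- hits every home at once,
# so no amount of diversification makes it go away:
print(portfolio_error(1_000, market_shock=-0.10))  # ~ -0.10, regardless of scale
```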

Or, in the next downturn, the entire company might go bust.

Opendoor’s Potential Impact

There is a deeper reason why I am excited about Opendoor, and it too is related to how the company’s approach differs from Zillow’s and Redfin’s. While Zillow makes it easier to look for new houses, and Redfin promises to save sellers a few bucks, making it trivial to sell a house has the potential to fundamentally impact our economy at a time when we desperately need exactly that. Many, including myself, have written about how globalization and technology are upending the job market; one particular challenge is that new jobs are often created in different geographic areas than where job seekers are located.

To that end the potential for Opendoor to dramatically increase liquidity in the housing market by buying directly from sellers is not just a business opportunity, but one with the potential to increase dynamism in the job market. Granted, it will take a long time for Opendoor to move into the towns where this sort of service is most needed, but the idea is very much a move in the right direction.

I suspect that this positive impact is related to Opendoor’s business model: by identifying a market need and offering a paid service to alleviate that need, Opendoor is creating value as opposed to taxing a few bucks off the top of an existing market or simply trying to be cheap. While I am hardly anti-advertising, the fact of the matter is that it is a zero-sum market: advertising has been just over 1% of U.S. GDP for a century, which means advertising-based businesses are by-and-large stealing value from other advertising-based businesses, not creating their own, at least from a dollars-and-cents perspective (end users have been the real beneficiaries, as most of the value creation of services like Google and Facebook is consumer surplus). Meanwhile, simply being cheaper is certainly a viable business model, but it is on the whole deflationary, which is to say value-destructive; sure, money saved can be deployed elsewhere, but the long-term benefits are much more difficult to trace.

To that end I hope Opendoor succeeds simply so it can be a role model for tech: taking on big risks for big rewards that create real value by solving real problems is the best possible way our industry can create benefits that extend beyond investors and shareholders; for too long too much money and talent have been poured into low-risk digital-only businesses that aspire to little more than leeching off of value creators that already exist, or at best cutting them off at the knees with low prices, with the assumption that someone else will pick up the pieces. And while I am all for appropriate skepticism, to not consider upside is to be just as oblivious to risk as those who don’t consider what might go wrong.

  1. I can’t quote the TechCrunch article as it fails to explain the business model properly
  2. This is unique to North America; in most countries the selling agent works alone. However, there is little incentive to change the system because the buyer doesn’t pay for their agent, the seller does via the selling agent
  3. Which means they charge the seller 4.5%; the buying agent still gets 3%. If a buyer goes through Redfin they also get a partial rebate of the fee

How Google Is Challenging AWS

Big companies are often criticized for having “missed” the future — from the comfortable perch of a present where said future has come to pass, of course — but while the future is still the future incumbents are first more often than not. Probably the best example is Microsoft: the company didn’t “miss mobile” — Windows Mobile came out in 2000 — but rather was handicapped by its allegiance to its license-based modular business model and inability to envision a world where its core product (Windows) was a planet orbiting mobile’s sun; everything about Windows Mobile’s design presumed the exact opposite.

One could make the same argument about Google and the enterprise; both G Suite (née Google Apps for Your Domain) and Google Docs launched a decade ago and enjoyed modest success, particularly in smaller businesses and education; unsurprisingly, both markets broadly resemble Google’s core consumer user base — limited configurability and a low price were good things. Traction was harder to come by in larger enterprises, though, and in fact over the last few years Office 365 has well outpaced G Suite, not only growing faster but winning back customers.

Still, for all the success Microsoft has had with Office 365, the real giant of cloud computing — which is to say the future of enterprise computing — is, as is so often the case, a company no one saw coming: the same year Google decided to take on Microsoft, Amazon launched Amazon Web Services. What makes AWS so compelling is the way that it reflects Amazon itself: it is built for scale and with clearly-defined and hardened interfaces. Customers — first Amazon itself, but also companies around the world — access “primitives” that can be mixed-and-matched to build a more efficient, scalable, and secure back-end than nearly any company could build on its own.

AWS’ Primitives

Earlier this year in The Amazon Tax I explained how Amazon’s AWS strategy sprang from the same approach that made the company successful in the first place:

The company is organized with multiple relatively independent teams, each with their own P&L, accountabilities, and distributed decision-making. [The Everything Store author Brad] Stone explained an early Bezos initiative (emphasis mine):

The entire company, he said, would restructure itself around what he called “two-pizza teams.” Employees would be organized into autonomous groups of fewer than ten people — small enough that, when working late, the team members could be fed with two pizza pies. These teams would be independently set loose on Amazon’s biggest problems…Bezos was applying a kind of chaos theory to management, acknowledging the complexity of his organization by breaking it down to its most basic parts in the hopes that surprising results might emerge.

Stone later writes that two-pizza teams didn’t ultimately make sense everywhere, but as he noted in a follow-up article the company remains very flat with responsibility widely distributed. And there, in those “most basic parts”, are the primitives that lend themselves to both scale and experimentation. Remember the quote describing how Bezos and his team arrived at the idea for AWS:

If Amazon wanted to stimulate creativity among its developers, it shouldn’t try to guess what kind of services they might want; such guesses would be based on patterns of the past. Instead, it should be creating primitives — the building blocks of computing — and then getting out of the way.

Steven Sinofsky is fond of noting that organizations tend to ship their org chart, and while I began by suggesting Amazon was duplicating the AWS model, it turns out that the AWS model was in many respects a representation of Amazon itself (just as the iPhone in many respects reflects Apple’s unitary organization): create a bunch of primitives, get out of the way, and take a nice skim off the top.

AWS’ offering has certainly expanded far beyond infrastructure like (virtualized) processors, hard drives, and databases, both in terms of further abstraction (e.g. Lambda “serverless” computing) and up the stack into platform and software services, but the foundation of its success continues to be Amazon’s pure platform approach: they provide the pieces for enterprises to build just about anything they want.
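
To make “primitives” concrete, here is a minimal sketch using boto3, the AWS SDK for Python: two independent building blocks, object storage and a queue, composed into the skeleton of a back-end. The bucket and queue names are hypothetical, and a real deployment would add regions, permissions, and error handling:

```python
import boto3

# Each AWS primitive is an independent, hardened building block behind an API.
s3 = boto3.client("s3")
sqs = boto3.client("sqs")

# Object storage primitive: durable storage for uploads, backups, static assets.
s3.create_bucket(Bucket="example-app-assets")
s3.put_object(Bucket="example-app-assets", Key="hello.txt", Body=b"hello")

# Queue primitive: decouples producers from consumers for background work.
queue = sqs.create_queue(QueueName="example-app-jobs")
sqs.send_message(QueueUrl=queue["QueueUrl"], MessageBody="resize-image:hello.txt")
```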

Google is a Product Company

Google, meanwhile, has never really been a platform company; in fact, while Google is often cast as Apple’s opposite — the latter is called a product company, and the former a services one — that only makes sense if you presume that only hardware can be a product. A more expansive definition of “product” — a fully realized solution presented to end users — would show the two companies are in fact quite similar.

Make no mistake: the differences between cloud services and hardware are profound (which I explored at length in Apple’s Organizational Crossroads), but so are the differences between being a product company and being a platform one. The ideal product, whether it be a smartphone or a search box, achieves simplicity and a great user experience through tremendous effort in design and engineering that, ideally, is never seen by the end user. Indeed, this is why integrated products win in consumer markets, and make no mistake, Google’s consumer-focused services have traditionally been as integrated on the back-end as iPhones are.

Note, though, that this is the exact opposite of the model employed by not just Amazon but also Microsoft, the pre-eminent platform company of the IT era: instead of integrating pieces to deliver a product, AWS went in the opposite direction, breaking down all of the pieces that go into building back-end services into fully modular parts; Microsoft did the same with its Win32 API. Yes, this meant that Windows was by design a worse platform in terms of the end user experience than, say, Mac OS, but it was far more powerful and extensible, an approach that paid off with millions of line-of-business apps that even today keep Windows at the center of business. AWS has done the exact same thing for back-end services, and the flexibility and modularity of AWS is the chief reason why it crushed Google’s initial cloud offering, Google App Engine, which launched back in 2008. Using App Engine entailed accepting a lot of decisions that Google made on your behalf; AWS let you build exactly what you needed.

Google’s Platform Antidote

The Windows example is instructive when it comes to thinking about how Google has since changed its approach: the massive ecosystem built around Microsoft’s extensive API ended up being the ultimate lock-in. Most obviously the apps built for Windows were not easily ported to other operating systems, but just as important was the huge network of partners and value-added resellers that made Windows the only viable choice for enterprise. Amazon is hard at work building the exact same sort of ecosystem.

And yet, it has never been more viable to not use Windows, first for consumers but also for enterprise, and the reason is the web: here was a new runtime that sat on top of Windows but did not depend on it,1 and on the consumer side Google was the biggest winner. Indeed, the rise of the browser explains AWS as well: any new business application is built for the web (including apps that run on web-based APIs) and it is accessible on any device.

It turns out that over the last couple of years Google has undertaken a sort of browser approach to enterprise computing. In 2014 Google announced Kubernetes, an open-source container cluster manager based on Google’s internal Borg service that abstracts Google’s massive infrastructure such that any Google service can instantly access all of the computing power it needs without worrying about the details. The central concept is containers, which I wrote about in 2014: engineers build on a standard interface that retains (nearly) full flexibility without needing to know anything about the underlying hardware or operating system (in this it’s an evolutionary step beyond virtual machines).

Where Kubernetes differs from Borg is that it is fully portable: it runs on AWS, it runs on Azure, it runs on the Google Cloud Platform, it runs on on-premise infrastructure, you can even run it in your house. More relevant to this article, it is the perfect antidote to AWS’ ten-year head start in infrastructure-as-a-service: while Google has made great strides in its own infrastructure offerings, the potential impact of Kubernetes specifically and container-based development broadly is to make irrelevant which infrastructure provider you use. No wonder it is one of the fastest growing open-source projects of all time: there is no lock-in.
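
Here is a sketch of what that portability looks like in practice, using the official Kubernetes Python client. The Deployment below is hypothetical, but the point is that it can be submitted unchanged to any cluster, whether on AWS, Azure, Google Cloud Platform, or on-premise; only the kubeconfig context changes:

```python
from kubernetes import client, config

def deploy(context_name):
    """Submit the same Deployment to whichever cluster the context points at."""
    config.load_kube_config(context=context_name)  # aws / azure / gcp / on-prem
    deployment = client.V1Deployment(
        api_version="apps/v1",
        kind="Deployment",
        metadata=client.V1ObjectMeta(name="web"),
        spec=client.V1DeploymentSpec(
            replicas=3,
            selector=client.V1LabelSelector(match_labels={"app": "web"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "web"}),
                spec=client.V1PodSpec(containers=[
                    client.V1Container(name="web", image="example/web:1.0"),
                ]),
            ),
        ),
    )
    client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)

# Hypothetical context names; the Deployment itself never changes.
for ctx in ("aws-cluster", "azure-cluster", "gcp-cluster"):
    deploy(ctx)
```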

But how does that help Google? After all, even if Kubernetes becomes the standard for enterprise clouds Amazon’s broader ecosystem lock-in is still present (and the company has its own container strategy that further locks customers into AWS); Google needs a differentiator.

Costs Versus Experience

Here again the desktop is instructive: the open nature of the web running on platform-agnostic browsers did not make Google successful per se; rather, the openness of the web created the conditions for the best technology to win. And not only did Google have the best search engine, but the reason it was the best — its reliance on links instead of simply page content — meant that as the web got bigger Google, unlike its competitors, got better.

I think this is an idea that can be abstracted to be broadly applicable; indeed, it’s a core piece of Aggregation Theory: as distribution (or switching) costs decrease, the importance of the user experience increases. To put it another way, when you can access any service, whether that be news or car-sharing or hotels or video or search etc., the one that is the best will not only win initially but will see its advantages compound.

This is Google’s bet when it comes to the enterprise cloud: open-sourcing Kubernetes was Google’s attempt to effectively build a browser on top of cloud infrastructure and thus decrease switching costs; the company’s equivalent of Google Search will be machine learning.

Machine Learning and Data

It seems certain that machine learning will be increasingly dominated by cloud services: both are about processing scale and massive amounts of data, and only a select few behemoths will have the wherewithal not only to build out the required infrastructure but also to employ the best machine learning engineers in the world. That, by extension, means that for most enterprises the differentiation arising from machine learning will derive first and foremost from whether or not their data is in the cloud (there will be on-premise solutions, but I expect them to fall more and more behind over time), but secondly from which cloud provider they choose.

That raises the stakes for cloud providers themselves; superior machine learning offerings can not only be a differentiator but a sustainable one: being better will attract more customers and thus more data, and data is the fuel by which machine learning improvement comes about. And it is because of data that Google is AWS’ biggest threat in the cloud.

I described above how Google’s enterprise business was limited by its consumer focus, but the big advantage that Google has is that it has been working with massive amounts of data for nearly two decades, and developing powerful machine learning algorithms for the last several years. Still, it’s the data that matters most of all, and the best evidence that this is the case came last year when Google open-sourced TensorFlow, its machine learning framework: as I noted in TensorFlow and Monetizing Intellectual Property Google’s willingness to share its approach was an implicit admission that its superior data and processing infrastructure was a sustainable advantage.
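
That trade is visible in the code itself. Below is a minimal sketch of the kind of model anyone can define with the Keras API bundled in TensorFlow (the architecture is an arbitrary placeholder); what is not included in the download is precisely what the open-sourcing wagered on, namely Google’s data and training infrastructure:

```python
import tensorflow as tf

# Anyone can download TensorFlow and define a model; a minimal classifier sketch
# (the layer sizes and input shape here are arbitrary placeholders):
model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# What Google did NOT open-source is what makes the results good:
# the training data and the infrastructure to train at scale.
# model.fit(x_train, y_train)  # the data is the sustainable advantage
```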

We’re just now starting to see that advantage applied to Google’s cloud offering. Just before Thanksgiving Google made a series of product announcements that clearly leveraged its data advantage:

  • The Cloud Natural Language API, which uses machine learning to analyze text, graduated to general availability
  • A premium edition of the Cloud Translation API, which uses machine learning to massively improve accuracy in translating eight languages (above-and-beyond the standard edition that supports over 100 languages)
  • A big price reduction for the Cloud Vision API, which uses machine learning to analyze images
  • A new Cloud Jobs API that uses machine learning to match potential employees with jobs

These four join the Cloud Prediction API that uses machine learning to, well, make predictions. It, along with the first three APIs above, is clearly derived from various Google consumer products; the Jobs API likely builds on an internal Google tool, as well as Google’s wealth of data from all over the web. In each case Google has spent years honing its algorithms so that by the time they are applied to a corporate data set the results are very likely superior, or at least far down the training funnel. I expect this advantage to persist and be meaningful.
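
As a concrete illustration, here is a minimal sketch of calling the Cloud Natural Language API’s sentiment endpoint over REST with Python’s requests library. The API key is a placeholder, and while the request shape follows Google’s public documentation, details may have changed:

```python
import requests

API_KEY = "YOUR_API_KEY"  # placeholder credential
URL = f"https://language.googleapis.com/v1/documents:analyzeSentiment?key={API_KEY}"

payload = {
    "document": {"type": "PLAIN_TEXT", "content": "The new data center is fantastic."},
    "encodingType": "UTF8",
}
response = requests.post(URL, json=payload).json()

# Sentiment score runs from -1.0 (negative) to 1.0 (positive); the model
# behind the API was honed for years on Google's consumer products.
print(response["documentSentiment"]["score"])
```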

Still, Google will have to do more, which is why the other big announcement was the creation of the Google Cloud Machine Learning group headed by Fei-Fei Li and Jia Li: this group will be charged with building new machine learning APIs specifically for business; to put it another way, they are tasked with productizing Google’s machine learning capabilities.

That, in a roundabout way, gets to the genius of Google’s strategy: the company was outpaced by Amazon in the first wave of cloud computing because success rested on being the best platform; by open-sourcing Kubernetes in an attempt to shift the industry to vendor-agnostic containers, Google is trying to move the plane of competition to products. After all, it’s often easier to change the rules of competition than to change your fundamental nature as a company.


To be sure, Google’s success is not assured: the company still has to grapple with a new business model — sales versus ads — and build up the sort of organization that is necessary for not just sales but also enterprise support. Both are areas where Amazon has a head start, along with a vastly larger partner ecosystem and a larger feature set generally.

And, of course, AWS has its own machine learning API, as do IBM and Microsoft. Microsoft is likely to prove particularly formidable in this regard: not only has the company engaged in years of research, but it also has experience productizing technology for business specifically; Google’s longstanding consumer focus may at times be a handicap. And as popular as Kubernetes may be broadly, it’s concerning that Google is not yet eating its own dog food.

Still, Google will be a formidable competitor: its strategy is sound and, perhaps more importantly, the urgency to find a new line of business is far more pressing today than it was in 2006. Most importantly, the shift to cloud computing is still in its beginning stages, and while Amazon seems to be living the furthest in the future, the future has not happened yet; it will be fascinating to watch Google’s attempt to change the rules under which said future will operate.

  1. ActiveX notwithstanding

Fake News

Between 2001 and 2003, Judith Miller wrote a number of pieces in the New York Times asserting that Iraq had the capability and the ambition to produce weapons of mass destruction. It was fake news.

Looking back, it’s impossible to say with certainty what role Miller’s stories played in the U.S.’s ill-fated decision to invade Iraq in 2003; the same sources feeding Miller were well-connected with the George W. Bush administration’s foreign policy team. Still, it meant something to have the New York Times backing them up, particularly for Democrats who may have been inclined to push back against Bush more aggressively. After all, the New York Times was not some fly-by-night operation, it was the preeminent newspaper in the country, and one generally thought to lean towards the left. Miller’s stories had a certain resonance by virtue of where they were published.

It’s tempting to make a connection between the Miller fiasco and the current debate about Facebook’s fake news problem; the cautionary tale that “fake news is bad” writes itself. My takeaway, though, is the exact opposite: it matters less what is fake and more who decides what is news in the first place.

Facebook’s Commoditization of Media

In Aggregation Theory I described the process by which the demise of distribution-based economic power has resulted in the rise of powerful intermediaries that own the customer experience and commoditize their suppliers. In the case of Facebook, the social network started with the foundation of pre-existing offline networks that were moved online. Given that humans are inherently social, users started prioritizing time on Facebook over time spent reading, say, the newspaper (or any of the effectively infinite set of alternatives for attention).

It followed, then, that it was in the interest of media companies, businesses, and basically anyone else who wanted to get the attention of users, to be on Facebook as well. This was great for Facebook: the more compelling content it could provide to its users, the more time they would spend on Facebook; the more time they spent on Facebook, the more opportunities Facebook would have to place advertisements in front of them. And, critically, the more time users spent on Facebook, the less time they had to read anything else, further increasing the motivation for media companies (and businesses of all types) to be on Facebook themselves, resulting in a virtuous cycle in Facebook’s favor: by having the users Facebook captured the suppliers, which deepened its hold on the users, which in turn increased its power over suppliers.

This process reduced Facebook’s content suppliers — media companies — into pure commodity providers. All that mattered for everyone was the level of engagement: media companies got ad views, Facebook got shares, and users got the psychic reward of having flipped a bit in a database. Of course not all content was engaging to all users; that’s what the algorithm was for: show people only what they want to see, whether it be baby pictures, engagement announcements, cat pictures, quizzes, or, yes, political news. It was, from Facebook’s perspective — and, frankly, from its users’ perspective — all the same. That includes fake news too, by the way: it’s not that there is anything particularly special about news from Macedonia, it’s that according to the algorithm there isn’t anything particularly special about any content, beyond the level of engagement it drives.
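
To underline what the algorithm does and does not consider, here is a deliberately crude toy sketch of engagement-based ranking. This is emphatically not Facebook’s actual system, just an illustration of how provenance and truth never enter the equation:

```python
def rank_feed(stories, user):
    """Toy engagement ranking: score by predicted engagement only.

    Note what is absent: nothing about whether a story is true, or who
    published it. Baby pictures, quizzes, and fake news are all just
    content with an engagement prediction attached.
    """
    def predicted_engagement(story):
        # Hypothetical affinity model: how much has this user clicked,
        # liked, and shared similar content before?
        return user["affinity"].get(story["topic"], 0.0) * story["share_rate"]

    return sorted(stories, key=predicted_engagement, reverse=True)

user = {"affinity": {"politics": 0.9, "cats": 0.4}}
stories = [
    {"topic": "cats", "source": "a friend", "share_rate": 0.5},
    {"topic": "politics", "source": "a Macedonian content farm", "share_rate": 0.8},
]
for story in rank_feed(stories, user):
    print(story["topic"], "from", story["source"])
```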

The Media and Trump

There has been a lot of discussion — in the media, naturally — about how the media made President-elect Donald Trump. The story is that Trump would have never amounted to anything had the media not given him billions of dollars worth of earned media — basically news coverage (as opposed to paid media, which is advertising) — and that the industry needed to take responsibility. It’s a lovely bit of self-reflection that lets the industry deny the far more discomforting reality: that the media couldn’t have done a damn thing about Trump if they had wanted to.

The reason the media covered Trump so extensively is quite simple: that is what users wanted. And, in a world where media is a commodity, to act as if one has the editorial prerogative to not cover a candidate users want to see is to stare that reality square in the face, absent the clicks that make the medicine easier to take.

Indeed, this is the same reason fake news flourishes: because users want it. These sites get traffic because users click on their articles and share them, because they confirm what they already think to be true. Confirmation bias is a hell of a drug — and, as TechCrunch reporter Kim-Mai Cutler so aptly put it on Twitter, it’s a hell of a business model.

Why Facebook Should Fix Fake News

So now we arrive at the question of what to do about fake news. Perhaps the most common sentiment was laid out by Zeynep Tufekci in the New York Times: Facebook should eliminate fake news and the filter effect — the tendency to see news you already agree with — while they’re at it. Tufekci writes:

Mark Zuckerberg, Facebook’s chief, believes that it is “a pretty crazy idea” that “fake news on Facebook, which is a very small amount of content, influenced the election in any way.” In holding fast to the claim that his company has little effect on how people make up their minds, Mr. Zuckerberg is doing real damage to American democracy — and to the world…

The problem with Facebook’s influence on political discourse is not limited to the dissemination of fake news. It’s also about echo chambers. The company’s algorithm chooses which updates appear higher up in users’ newsfeeds and which are buried. Humans already tend to cluster among like-minded people and seek news that confirms their biases. Facebook’s research shows that the company’s algorithm encourages this by somewhat prioritizing updates that users find comforting…

Tufekci offers up a number of recommendations for Facebook, including sharing data with outside researchers to better understand how misinformation spreads and the extent of filter bubbles,1 acting much more aggressively to eliminate fake news like it does spam and other objectionable content, rehiring human editors, and tweaking its algorithm to favor news balance, not just engagement.

Why Facebook Should Not

All seem reasonable on their face, but in fact Tufekci’s recommendations are radical in their own way.

First, there is no incentive for Facebook to do any of this; while the company denies the Gizmodo report that it shelved a change to the News Feed algorithm that would have eliminated fake news stories because doing so disproportionately affected right-wing sites, the fact remains that the company is heavily incentivized to be perceived as neutral by all sides; anything else would drive away users, a particularly problematic outcome for a social network.2

Moreover, any move away from a focus on engagement would, by definition, decrease the time spent on Facebook, and here Tufekci is wrong to claim that this is acceptable because there is “no competitor in sight.” In fact, Facebook is in its most challenging position in a long time: Snapchat is stealing attention from its most valuable demographics, even as the News Feed is approaching saturation in terms of ad load, and there is a real danger Snapchat will beat the company to the biggest prize in consumer tech: TV-centric brand advertising dollars.

There are even more fundamental problems, though: how do you decide what is fake and what isn’t? Where is the line? And, perhaps most critically, who decides? To argue that the existence of some number of fake news items amongst an ocean of other content ought to result in active editing of Facebook content is not simply a logistical nightmare but, at least when it comes to the potential of bad outcomes, far more fraught than it appears.

That goes double for the filter bubble problem: there is a very large leap from arguing Facebook impacts its users’ flow of information via the second-order effects of driving engagement, to insisting the platform actively influence what users see for political reasons. It doesn’t matter that the goal is a better society, as opposed to picking partisan sides; after all, partisans think their goal is a better society as well. Indeed, if the entire concern is the outsized role that Facebook plays in its users’ news consumption, then the far greater fear should be the potential of someone actively abusing that role for their own ends.

I get why top-down solutions are tempting: fake news and filter bubbles are in front of our faces, and wouldn’t it be better if Facebook fixed them? The problem is the assumption that whoever wields that top-down power will just so happen to have the same views I do. What, though, if they don’t? Just look at our current political situation: those worried about Trump have to contend with the fact that the power of the executive branch has been dramatically expanded over the decades; we place immense responsibility and capability in the hands of one person, forgetting that said responsibility and capability is not so easily withdrawn if we don’t like the one wielding it.

To that end I would be far more concerned about Facebook were they to begin actively editing the News Feed; as I noted last week I’m increasingly concerned about Zuckerberg’s utopian-esque view of the world, and it is a frighteningly small step from influencing the world to controlling the world. Just as bad would be government regulation: our most critical liberty when it comes to a check on tyranny is the freedom of speech, and it would be directly counter to that liberty to put a bureaucrat — who reports to the President — in charge of what people see.

The key thing to remember is that the actual impact of fake news is dependent on who delivers it: sure, those Macedonian news stories aren’t great, but their effect, such as it is, comes from confirming what people already believe. Contrast that to Miller’s stories in the New York Times: because the New York Times was a trusted gatekeeper, many people fundamentally changed their opinions, resulting in a disaster the full effects of which are still being felt. In that light, the potential downside of Facebook coming anywhere close to deciding the news can scarcely be imagined.

Liberty and Laziness

There may be some middle ground here: perhaps some sources are so obviously fake that Facebook can easily exclude them, ideally with full transparency about what they are doing and why. And, to the extent Facebook can share data with outside researchers without compromising its competitive position, it should do so. The company should also provide even more options to users to control their feed if they wish to avoid filter bubbles.

In truth, though, you and I know that few users will bother. And that, seemingly, is what bothers many of Facebook’s critics the most. If users won’t seek out the “right” news sources, well, then someone ought to make them see them. It all sounds great — and, without question, a far more convenient solution to winning elections than actually making the changes necessary to do so — until you remember that that someone you just entrusted with such awesome power could disagree with you, and that the very notion of controlling what people read is the hallmark of totalitarianism.

Let me be clear: I am well aware of the problematic aspects of Facebook’s impact; I am particularly worried about the ease with which we sort ourselves into tribes, in part because of the filter bubble effect noted above (that’s one of the reasons Why Twitter Must Be Saved). But the solution is not the reimposition of gatekeepers done in by the Internet; whatever fixes this problem must spring from the power of the Internet, and the fact that each of us, if we choose, has access to more information and sources of truth than ever before, and more ways to reach out and understand and persuade those with whom we disagree. Yes, that is more work than demanding Zuckerberg change what people see, but giving up liberty for laziness never works out well in the end.


For more about how the Internet has fundamentally changed politics, please see this piece from March, The Voters Decide.

  1. Facebook has done a study about the latter, but as Tufekci and others have documented, the study was full of problems
  2. Indeed, it wasn’t that long ago that I was making this exact argument in response to those who insisted Facebook would alter the News Feed to serve their own political purposes

Why Twitter Must Be Saved

It is election day in the United States, and the tech figure who had one of the biggest impacts on the current cycle is perhaps a non-obvious one: Jeff Bezos.

Back in 2013 Bezos bought the Washington Post, whose coverage of the campaign has been exemplary. The august newspaper’s reporting, particularly the work of David Fahrenthold, has uncovered stories that have had a far bigger impact than any number of tweets or blog posts or calls for days off work in Democrat-safe California ever could have had. What Bezos understood is a technology industry truism: impact is made at scale through the construction of repeatable processes. In the case of the Washington Post, facilitating a strong, confident newsroom has reaped far greater returns than anything any of us could accomplish on our own.

When Bezos made his purchase, I wrote an article entitled Rebuilding the World Technology Destroyed. It is, by Stratechery standards, pretty short, so I hope you will forgive my taking the unusual step of quoting it in full:


The Washington Post was headed for bankruptcy, and was finally sold for a pittance. Its buyer began his career on Wall Street, only to move into a burgeoning new industry, where he truly made his wealth. The newspaper he bought has a noble history, but will certainly earn losses for years to come.

I’m talking not about Jeff Bezos, who bought the Washington Post yesterday, but rather Eugene Meyer, who bought the Post in 1933. Meyer left a lucrative career on Wall Street in 1920 to seize the burgeoning opportunity in industrial chemicals and founded Allied Chemical (today’s Honeywell). After making millions, Meyer spent the rest of his life both in public service and building the Post, spending millions of his own money in the process.

Meyer was in many ways following the established playbook for industrial magnates. Families like the Vanderbilts, Rockefellers, and Carnegies, who made their fortunes in railroads, oil, and steel, respectively, plowed money into universities, museums, and a host of other cultural touchstones.

It’s this tradition that makes Bezos’s purchase feel momentous, a crossing of the Rubicon of sorts. The tech industry is now producing its own magnates, who are following the Rockefeller playbook. See Mark Zuckerberg giving $100 million to the Newark school district, or Chris Hughes buying the New Republic. Neither, though, feels as momentous as Jeff Bezos, the preeminent tech magnate, buying the Washington Post, the nation’s third most important newspaper.


The ironic thing, of course, about a tech magnate buying the Washington Post is that technology has destroyed the traditional newspaper business model. Not that newspapers are particularly special in this regard. As I wrote a month ago in a piece called Friction:

If there is a single phrase that describes the effect of the Internet, it is the elimination of friction.

With the loss of friction, there is necessarily the loss of everything built on friction, including value, privacy, and livelihoods. And that’s only three examples! The Internet is pulling out the foundations of nearly every institution and social more that our society is built upon.

While the struggles of the Washington Post and other newspapers fall squarely into the “value” bucket, the particularly devastating effect of our new world order is seen much more strongly in its effect on livelihoods. This piece on the Crumbling American Dream is a must-read:

But just beyond the horizon a national economic, social and cultural whirlwind was gathering force that would radically transform the life chances of the children and grandchildren of the graduates of the P.C.H.S. class of 1959. The change would be jaw dropping and heart wrenching, for Port Clinton turns out to be a poster child for changes that have engulfed America.

Port Clinton’s demise was largely about the demise of manufacturing, but to my mind, the story of manufacturing is the story of technology. The relentless pursuit of productivity has created massive wealth in the aggregate, even as it has destroyed the foundations of many of our institutions.

In this respect, what Bezos is doing feels almost obligatory. Technology — and I’m using the term very broadly here — has torn so much down; surely it’s the responsibility of technologists to build it back up.

And yet, I fear we as an industry are woefully unprepared for this responsibility. We glorify dropouts, endorse endless hours at work, and subscribe to a libertarian ideal that has little to do with reality. We say that ideas don’t matter, and yet, as Chris Dixon wrote in The Idea Maze:

The reality is that ideas do matter, just not in the narrow sense in which startup ideas are popularly defined. Good startup ideas are well developed, multi-year plans that contemplate many possible paths according to how the world changes.

But do we as an industry understand the world?


It’s here this essay turns personal.

My life is just about the exact opposite of what you would expect from a technologist. I studied political science as an undergrad, was an editor of one of the largest student newspapers in the country, and planned to work in politics. After graduating I took off for Taiwan to travel and teach English, and ended up with a family. Six years later I managed to finagle my way into a top-tier MBA program, only to be rejected by every tech company (but one) when it came time for internships. I didn’t have the right background — I hadn’t lived my life in the technology industry.

Yet I had lived life! I had lived life so fully, and gained so much perspective. And it turned out there was one company that valued that: Apple hired me within 24 hours of my first interview.

I think my being hired had something to do with this:

“It’s in Apple’s DNA that technology alone is not enough — it’s technology married with liberal arts, married with the humanities, that yields us the result that makes our heart sing.” — Steve Jobs

It turned out that a life lived outside of technology was my greatest asset, at least at the company most every founder claims to idolize. But how many take this image and this philosophy seriously? It seems most are more closely aligned with Peter Thiel, who suggested that the best way to increase technological progress was to “Discourage people from pursuing humanities majors.”

Thiel may be right about the best way to “increase technological progress,” but progress is an objective fact; whether its effect is positive or negative remains to be determined.


There was a third article I read this weekend, about the social scientist Daniel Kahneman, called The Anatomy of Influence:

Kahneman’s career tells the story of how an idea can germinate, find far-flung disciples, and eventually reshape entire disciplines. Among scholars who do citation analysis, he is an anomaly. “When you look at how many areas of social science he’s put his fingers in, it’s just ridiculous,” says Jevin West, a postdoctoral researcher at the University of Washington, who has helped develop an algorithm for tracing the spread of ideas among disciplines. “Very rarely do you see someone with that amount of influence.”

But intellectual influence is tricky to define. Is it a matter of citations? Awards? Prestigious professorships? Book sales? A seat at Charlie Rose’s table? West suggests something else, something more compelling: “Kahneman’s career shows that intellectual influence is the ability to dissolve disciplinary boundaries.”

Influence lives at intersections. Yet it at times feels as if the boundaries we as an industry have built around who makes an effective product manager, or programmer, or designer are stronger than ever, even as the need to cross those boundaries is ever more pressing. It’s not that Thiel was wrong about what types of degrees push progress forward; rather, it’s the blind optimism that technology is an inherent good that is so dangerous.

Technology is destroying the world as it was; do we have the vision and outlook to rebuild it into something better? Do we value what matters?

I’m confident in Jeff Bezos. I’m a little more worried about the rest of us.


To say that this election cycle has only deepened those worries would be a dramatic understatement. This is not a partisan statement, just an objective statement that technology has made objective truth a casualty of the pursuit of happiness — or engagement, to use the technical term — and now life and liberty hang in the balance.

A few weeks ago, during the keynote of the Oculus Connect 3 developer conference, Facebook founder and CEO Mark Zuckerberg articulated a vision for Facebook that I found chilling:

At Facebook, this is something we’re really committed to. You know, I’m an engineer, and I think a key part of the engineering mindset is this hope and this belief that you can take any system that’s out there and make it much much better than it is today. Anything, whether it’s hardware, or software, a company, a developer ecosystem, you can take anything and make it much, much better. And as I look out today, I see a lot of people who share this engineering mindset. And we all know where we want to improve and where we want virtual reality to eventually get…

The magic of VR software is this feeling of presence. The feeling that you’re really there with another person or in another place. And helping this community build this software and these experiences is the single thing I am most excited about when it comes to virtual reality. Because this is what we do at Facebook. We build software and we build platforms that billions of people use to connect with the people and things that they care about.

Leave aside the parts about virtual reality; what bothers me is the faint hints of utopianism inherent in Zuckerberg’s declaration: engineers can make things better by sheer force of will — and that Facebook is an example of just that. In fact, Facebook is the premier example of just how efficient tech companies can be, and just how problematic that efficiency is when it is employed in the pursuit of “engagement” with no regard to the objective truth specifically, or the impact on society broadly.

Last spring Facebook was caught up in a ginned-up controversy about alleged bias: a solitary member of Facebook’s contracted Trending Topics editorial team claimed that conservative news stories were suppressed thanks to team members’ liberal bias. After an investigation Facebook found no evidence of said suppression, but went ahead and laid off the entire team anyway in favor of an algorithm; within days a fake news story was in Trending Topics, and at least four more followed in the next few weeks.

Granted, Trending Topics has always been a sideshow; what is much more disturbing are the revelations that fake news is widespread in Facebook’s News Feed; unsurprisingly, given they are human, many Facebook users wish to connect with people and things that confirm their pre-existing opinions, whether or not they are true.

Make no mistake, this results in a great business: I have written effusively about Facebook’s financial potential and noted that the News Feed algorithm is a big reason why Facebook Squashed Twitter. Giving people what they want to see will always draw more attention than making them work for it, in rather the same way that making up news is cheaper and more profitable than actually reporting the truth.

And yet it is Twitter that has reaffirmed itself as the most powerful antidote to Facebook’s algorithm: misinformation certainly spreads via a tweet, but truth follows unusually quickly; thanks to the power of retweets and quoted tweets, both are far more inescapable than they are on Facebook. Twitter is a far preferable manifestation of Supreme Court Justice Louis Brandeis’ famous concurrence in Whitney v. California (emphasis mine):

Those who won our independence believed that the final end of the State was to make men free to develop their faculties, and that, in its government, the deliberative forces should prevail over the arbitrary. They valued liberty both as an end, and as a means. They believed liberty to be the secret of happiness, and courage to be the secret of liberty. They believed that freedom to think as you will and to speak as you think are means indispensable to the discovery and spread of political truth; that, without free speech and assembly, discussion would be futile; that, with them, discussion affords ordinarily adequate protection against the dissemination of noxious doctrine; that the greatest menace to freedom is an inert people; that public discussion is a political duty, and that this should be a fundamental principle of the American government. They recognized the risks to which all human institutions are subject.

Brandeis’ concurrence was a defense of free speech, the right of which applies to government action; private companies are free to police their platforms as they wish. What, though, does free speech mean in an era of abundance? When information was scarce, limiting speech was a real danger; when information is abundant, shielding people from speech they might disagree with has its own perverse effects.

To be clear, Twitter has a real abuse problem that it has been derelict in addressing, a decision that is costly in both human and business terms; there is real harm that comes from the ability to address anyone anonymously, including the suppression of viewpoints by de facto vigilantism. But I increasingly despair about the opposite extreme: the construction of cocoons where speech that intrudes on one’s world view with facts is suppressed for fear of what it does to the bottom line, resulting in an inert people incapable of finding common ground with anyone else.

This is why Twitter must be saved: the combination of network and format is irreplaceable, especially now that everyone knows it might not be a great business. For all the good that the Washington Post has done it is but one publication among many; the place where those publications disseminate information is where true scale lies, but Facebook has made its priorities clear: engagement and dollars, leavened with the certainty that engineers can make it all better; the externalities that result from a focus on making people feel good are not their concern.

The weakness of Twitter, in contrast, is its unwieldy reliance on humans, to build their own feeds, to find a new network, to broadcast to potentially no one what they think. The payoff, though, is the capability of spreading information more widely and more quickly than has ever before been possible; the societal benefit is an externality that needs to be preserved.

Apple Should Buy Netflix

While much of the focus of last Thursday’s Apple announcement was on the new MacBook Pros (and the Macs that were not updated), the more interesting announcement from a strategic perspective was about Apple TV. Tim Cook stated Apple’s goals plainly:

We want Apple TV to be the one place to access all of your television. A unified TV experience. That’s one place to access all of your TV shows and movies. One place to discover great new content to watch. So today we’re announcing a new app and we simply call it ‘TV’…

After the app demo, Cook concluded:

Apple TV, iPhone and iPad have become the primary ways that many of us enjoy watching television, and now with the TV app there’s really no reason to watch TV anywhere else.

Unless, of course, you want to watch Netflix.

Apple’s Leverage Playbook

There is a bit of a playbook to the way Apple comes to dominate industries, and it is founded on customer loyalty. The best example is, naturally, the iPhone:

  • Back in 2006 Apple sought to release the original iPhone on Verizon; the leading carrier in the U.S., though, was wary of Apple’s demands that there be no Verizon branding, no Verizon control of the user experience, and no Verizon relationship with iPhone users beyond managing their data plan. Therefore, Apple launched the iPhone on the second-place carrier (AT&T née Cingular); AT&T accepted Apple’s demands in full with the hope that Apple’s famously loyal customers would see the iPhone as a reason to switch.
  • That, of course, is exactly what happened: in the five years following the iPhone launch, AT&T went from trailing Verizon by $400 million in wireless revenue to leading by $700 million; that’s a $1.1 billion switch thanks in large part to Apple loyalists’ willingness to switch carriers to get an iPhone. The effect was even greater on smaller carriers, which had no choice but to accede to Apple’s increasingly demanding terms: not only would Apple own the customers, but carriers had to agree to significant marketing outlays and guaranteed sales to carry the iPhone.1
  • Apple repeated this formula in market after market: in Japan, for example, Softbank leveraged the iPhone into huge increases in market share, forcing NTT Docomo to finally give in to Apple’s terms. Apple’s leverage also played a role in bringing China Mobile to the negotiating table, along with Apple’s ability to drive higher average selling prices for China Mobile’s then-new 4G network.

The iPhone wasn’t the first time Apple used this approach: perhaps the most famous example of Apple coming to dominate its suppliers was the iPod and iTunes Music Store, when Apple was able to leverage its loyal users to dictate terms to the music industry. In some respects this was the more impressive achievement, because while carriers are largely undifferentiated (presuming you live in a location with comparable coverage), music labels have exclusive rights to huge catalogs of music. This should, in theory, provide strong leverage in negotiations, but especially when the iTunes Music Store got started, the labels were terrified of the effect music piracy was having on their business. Apple offered a better alternative to piracy, and then grew so big the labels couldn’t afford to not have their music on the iTunes Music Store.

The problem Apple has in premium video — and given that the company has been trying and failing to secure video content on its terms for years now, it definitely has a problem — is that its executives seem to have forgotten just how important the piracy leverage was to the iTunes Music Store’s success. This Wall Street Journal story from this past summer is one of many similar stories over the years detailing Apple’s take-it-or-leave-it approach to premium video content:

[Senior Vice President of Internet Software and Services Eddy] Cue is also known for a hard-nosed negotiating style. One cable-industry executive sums up Mr. Cue’s strategy as saying: “We’re Apple”…TV-channel owners “kept looking at the Apple guys like: ‘Do you have any idea how this industry works?’” one former Time Warner Cable executive says…Mr. Cue has said the TV industry overly complicated talks. “Time is on my side,” he has told some media executives.

Time may be on Apple’s side, but the bigger issue for Cue and Apple is that leverage is not; that belongs to the company that is actually threatening premium content makers: Netflix. Netflix is the “piracy” of video content, but unfortunately for Apple it is a real company capable of using the leverage it has acquired.

Netflix’s Rise

Much like Apple vis-à-vis the music industry in the 2000s, Netflix got its start by being a friend to the industry it would eventually threaten: its DVD rental business simply added to the premium video industry’s bottom line, and when the company made the leap to streaming it was through a deal with Starz that the latter basically viewed as found money. Streaming, though, was transformative to the user experience: while Starz had an 11,000-movie catalog, the effective catalog size was one — whatever was showing on the Starz linear TV channel. On Netflix, though, the effective size of the catalog was 11,000: Netflix customers could watch whichever movie they wished whenever they wished on any device they wished.

That superior user experience drove Netflix’s ever-expanding user base, which gave the company the capability to acquire more content with money that was still pure profit for network owners; by the time said networks woke up to the fact that Netflix was devouring attention they were, like the music industry relative to Apple in the 2000s, increasingly captive to what was one of their biggest buyers. Netflix, meanwhile, was investing in its own original content, making deals with content creators directly; this strengthened Netflix’s value proposition to customers, further weakened the negotiating power of networks, and laid the groundwork for Netflix to leverage the Internet to offer its service to nearly every customer on Earth.

Netflix’s strategy has been a textbook example of Aggregation Theory; Netflix has built leverage and monopsony power over the premium video industry not by controlling distribution, at least not at the beginning, but by delivering a superior customer experience that creates a virtuous cycle: Netflix earns the users, which increases its power over suppliers, which brings in more users, which increases its power even more.

It is this virtuous cycle that drives Netflix’s $54 billion valuation, which implies a sky-high price-to-earnings ratio of 340; the company is spending billions on an ever-increasing amount of original content that threatens to cut out networks entirely, leading Hollywood to fear a content monopoly. The big question is whether Netflix has the financial firepower to pull it off: the company had -$506 million in free cash flow last quarter thanks to its ongoing shift from licensing original content (which is pay-on-delivery) to self-producing it (which requires investment months or years before the content is actually available); this shift gives Netflix even more leverage — and cuts out traditional networks even more — but it’s expensive, and the company has to keep raising debt on the assumption subscriber numbers will increase enough to pay for it.
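
To make explicit just how much of that valuation rests on expectations rather than current earnings, here is the back-of-the-envelope arithmetic implied by the figures above (the implied-earnings number is my own calculation from those figures, not one taken from Netflix’s filings):

\[
\text{implied annual earnings} = \frac{\text{market capitalization}}{\text{P/E ratio}} = \frac{\$54\ \text{billion}}{340} \approx \$159\ \text{million}
\]

In other words, nearly all of the $54 billion is a bet on future subscribers, not present profits.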


There were mixed reports as to why Netflix is not in Apple’s new ‘TV’ app: Peter Kafka reported that Netflix was out even before Apple’s event, while Netflix told Wired that the streaming service is “evaluating the opportunity.”

In fact, though, I suspect those reports aren’t so different after all: Apple’s desire to be “the one place to access all of your television” implies reducing Netflix to just another content provider, right alongside its rival HBO and the far more desperate networks that lack any sort of customer relationship at all. It is directly counter to the strategy that has gotten Netflix this far — owning the customer relationship by delivering a superior customer experience — and while Apple may wish to pursue the same strategy, the company has no leverage to do so. Not only is the Apple TV just another black box that connects to your TV (and the most expensive one, at that), it also, conveniently for Netflix, has a (relatively) open app platform: Netflix can deliver its content on its terms on Apple’s hardware, and there isn’t much Apple can do about it.2

The truth is that Apple’s executives seem stuck in the iPod/iTunes era, when selling 70% of all music players meant leverage over the music labels; with streaming, content is available on any device at any time, which means that selling hardware isn’t a point of leverage. If Apple wants its usual ownership of end users it needs to buy its way in, and that means buying Netflix.

Why Apple Should Buy Netflix

I am, as a rule, skeptical of large acquisitions: they are all too often a byproduct of management empire-building, and value-destructive for shareholders. Moreover, not only do promised synergies often fail to materialize, but both the acquirer and the acquired are deeply distracted for years.

I am even more dubious when an acquisition entails combining horizontal and vertical business models:3 horizontal business models, like Netflix’s, entail reaching the maximum number of customers across all devices in order to better leverage up-front costs; vertical business models, like Apple’s, entail offering exclusive services to increase the differentiation of devices sold at a profit.

So why am I advocating an acquisition that is both large and entails combining two orthogonal business models? Surprisingly, I think the argument for Apple is more compelling:

  • As I argued earlier this year, the iPhone was the pinnacle of the product business model: it leveraged software to sell an incredible number of highly differentiated physical devices with a fabulous profit margin (in both percentage and absolute terms), but the future of high-dollar physical goods is to be offered as a service. I strongly suspect this reality was an important reason for Apple’s reset of the Apple Car project.
  • In an ideal world one could argue that Apple should change its employees’ compensation mix to more strongly favor high salaries over stock, dramatically increase its dividend program, and gracefully ride its hardware business model as long as it could; here on planet Earth Apple needs a growth engine to replace the iPhone, if not in reality then at least in potential.
  • Apple is at its best when it is creating new products that are the best they can possibly be; it is a capability that is rather independent from Apple’s biggest strategic assets: its dedicated user base and massive cash pile.

A Netflix acquisition would:

  • Give Apple one of the strongest entrants when it comes to business models of the future
  • Provide a far more compelling growth narrative than its current hardware business (particularly given the advantages Apple gives Netflix, which I will discuss below)
  • Leverage Apple’s assets in a way that leaves the product company free to focus on what it does best

The payoff for Netflix is more straightforward:

  • As I noted above, Netflix’s valuation is already sky-high, with a stock so volatile that CEO Reed Hastings felt compelled to apologize to investors on Netflix’s recent earnings calls. The issue is that Netflix’s potential is massive for all the reasons I described above, but realizing that potential entails spending money the company hopes to recoup from future subscribers. Ergo, any surprises in churn or new user numbers send the stock on a roller coaster. Apple’s financial backing would alleviate those concerns.
  • Apple’s bank account would also allow Netflix to accelerate its strategy of complete ownership of original content. As I hinted at above, most original Netflix content to date has been licensed, not owned, which is problematic in two ways: first, Netflix faces some restrictions on said content, whether the license is temporal or geographic. Second, Netflix isn’t realizing the full profit from its original content in perpetuity; given that Netflix’s business model is so powerful precisely because content is valuable not only when it is shown the first time but every time thereafter, this is an unfortunate giveaway dictated by Netflix’s meager cash position. With Apple behind it Netflix could pursue the same strategy it used for this summer’s Stranger Things: produce content without any middlemen, and reap the proceeds — and leverage the freedom — forever.
  • While Apple should keep Netflix cross-platform (limiting Netflix to Apple devices would be massively value-destructive — Netflix’s value is predicated on being everywhere — and not even that helpful given that Apple’s devices already dominate their price points), making Netflix available by default on every Apple device could still drive subscriptions. This could be especially effective internationally, where Apple’s brand is much stronger than Netflix’s.

Make no mistake, this would be a massive deal: Apple would probably need to pay a 20% premium at a minimum, which means an acquisition price north of $65 billion (and I’d bet higher). And yet, the biggest reason I’m skeptical it will happen is that I’m not sure Netflix would say yes: the company has made it this far with a ladder-up strategy predicated on delivering a superior customer experience, and provided the company can keep the cash flowing, the leverage in video is all theirs. Granted, Amazon Prime Video is a big threat, particularly because Amazon’s orthogonal business model and big-company backing give it the ability to match Netflix dollar-for-dollar when it comes to acquiring content, but having made it thus far, does Hastings want to take the easy way out now?
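
For clarity, the arithmetic behind that floor, using the $54 billion valuation cited above (the 20% premium is the assumption stated in this paragraph, not a reported figure):

\[
\$54\ \text{billion} \times 1.20 \approx \$65\ \text{billion}
\]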

As for Apple, Cook has been resolute in following the Steve Jobs playbook, which would seem to rule out a transformative acquisition of this nature. Still, strains are showing: in retrospect the Apple Watch was rushed to market, and the company is raising prices to preserve margins and average selling prices even as it seems to be cutting costs at the margins. Wouldn’t it be a relief to sell a future based on more than squeezing the last drops of blood out of the iPhone rock? Indeed, the iPhone as cash cow and Netflix — run as an independent subsidiary — as growth driver would arguably create the greatest possible freedom to recreate the future once again.

  1. As I’ve noted several times in the Daily Update, these contracts gave Apple remarkable precision in forecasting iPhone growth; that the iPhone is mostly everywhere now is, I suspect, a reason why Apple’s forecasting has been off (in both directions) for the last several quarters []
  2. In theory Apple could mandate that all streaming apps tie in to the TV app, but I think the company would soon find that Netflix could drive the sale of other streaming devices more easily than Apple can drive the surrender of Netflix’s most important strategic advantage []
  3. Hello AT&T and Time Warner! []

Surface Studio, Nintendo Switch, and Niche Strategies

There are few things that can bring geeks (like me) to the edge of hyperbolic hysteria like compelling new hardware videos, and this last week had not one but two!

First, the Nintendo Switch:

[Video: Nintendo Switch reveal trailer]

Then, yesterday, the Microsoft Surface Studio:

[Video: Microsoft Surface Studio introduction]

There’s no question both products are exciting in their own right; what makes them compelling, though, is not simply the technology demonstrated, but the fact that both, unlike their forebears, are clearly designed with the smartphone in mind.

The Wii U Mistake

The Nintendo Wii U and the Surface RT were both launched at the end of 2012; both were miserable disasters, and for largely the same reason: they targeted markets that no longer existed.

Back in 2006 the Wii came out of nowhere to win the seventh generation of consoles, outselling the Xbox 360 and PlayStation 3 despite the fact that Nintendo’s hardware was significantly underpowered relative to its competition. The key was the Wii’s motion control, best manifested by the seemingly simplistic Wii Sports; it turned out that simplicity was a virtue, attracting casual gamers who had long since abandoned consoles or never considered one in the first place, and Wii Sports went on to become the best-selling video game of all time.1

Nintendo sought to recreate the Wii’s success with the Wii U; the eighth-generation console finally supported high-definition graphics, but it was still significantly underpowered relative to the Xbox One and PS4. Instead Nintendo relied on another gimmick — a second touch screen on a tablet-like controller — but not only was the gimmick a flop with developers (including Nintendo), it was completely ignored by consumers; the console has been all but discontinued after selling fewer than 15 million units.

There are lots of reasons the Wii U failed — including a name that was easily conflated with its predecessor, a lack of compelling first-party titles at launch, and Nintendo’s usual struggles with 3rd-party developers (much of it self-inflicted through both business practices and product decisions) — but the most important reason is that the market Nintendo exploited with the Wii no longer existed: casual gamers now owned smartphones, and smartphone gaming was good enough (and superior, if you considered convenience and ease-of-use). Meanwhile, Microsoft and especially Sony had continued their focus on dedicated gamers and 3rd-party developers, leaving the Wii U in the middle of nowhere: not good enough for hard-core gamers, and superfluous for everyone else.

The Surface Mistake

The Surface, meanwhile, was designed to be the physical manifestation of Windows 8: a touch-based tablet combined with the power of a traditional PC. Steve Ballmer said at the launch event:

The past several years have seen great change in the industry and great innovations coming from Microsoft. We’ve helped usher in a new era of cloud computing, we’ve embraced mobility, we’re redefining communications, and attempting to transform entertainment. In all that we have done Windows is the heart and soul of Microsoft: from Windows PCs to Windows servers to Windows Phones and Windows Azure, Windows has proven to be the most flexible general purpose software ever created…

With Windows 8 we’ve reimagined…Windows from the chipset to the user experience to power a new generation of PCs that enable new capabilities and new scenarios. We approached the Windows 8 product design in a forward-looking way: we designed Windows 8 for the world we know in which most PCs are mobile and people want access to information and the ability to create content from anywhere, anytime. People want to do all of that without compromising the productivity that PCs are uniquely known for, from personal productivity applications to technical applications, business software and literally millions of other applications that are written for Windows.

The problem is that while Windows may have still been the heart of Microsoft, it was no longer the heart of computing: the smartphone was. Much like the Wii U, the original ARM-based Surface RT failed for lots of product-related reasons — including the fact that it was under-powered, lacked compelling 3rd-party applications,2 and that Windows 8’s looks far outweighed its ease-of-use — but the most important reason is that the market Microsoft exploited with the general-purpose PC no longer existed: casual PC users now owned smartphones, and smartphone computing was good enough (and superior, if you considered convenience and ease-of-use). Meanwhile, traditional Windows OEMs and especially Apple had continued their focus on dedicated PC users, leaving the Surface in the middle of nowhere: not good enough for hard-core PC users, and superfluous for everyone else.

Microsoft’s Transformation

In the intervening years the advice for Nintendo and Microsoft has been strikingly similar: stop making hardware (I haven’t said that about Nintendo, but I certainly did about the Surface). The world has changed; it’s time to move on and adapt.

Microsoft in particular has done exactly that, to a degree I frankly didn’t think was possible as long as Windows was around. There are two divisions of the company — Productivity and Business Processes and Intelligent Cloud — that are building for a future where Windows is one of many computing platforms, and an increasingly unimportant one at that.3 Meanwhile, the company’s third division (More Personal Computing) is basically a collection of businesses, most prominently Windows, that are cash cows but don’t figure into Microsoft’s long-term future as a services company.

I have applauded Microsoft CEO Satya Nadella for pulling this split off, not just organizationally but culturally; Ballmer’s (correct) contention that Windows was the center of Microsoft was holding the entire company back, and it was why he needed to be replaced for Microsoft to survive in a world centered on iOS and Android. What I didn’t expect, though, was that the demotion of Windows would be just as good for Windows — specifically Surface — as it was for Microsoft.

The Surface Studio

Perhaps the most important feature of the Surface Studio is its price: $3,000 for the lowest-end model, and that doesn’t include the innovative “Dial”. This price point immediately prompted a lot of consternation on Twitter amongst Windows fans in particular, while some industry observers argued that most consumers weren’t artists anyways. Both true!

But what is also true is that all PCs are niche devices: for most people, particularly outside the U.S., a smartphone is all they need or care to buy. The world today is the exact opposite of the world a mere decade ago, where we bought dedicated devices to plug into our digital hub PCs; the smartphone (and cloud) is the hub, and everything else is optional.

Selling a niche device is a fundamentally different proposition than selling a general purpose one: the question of why a consumer would buy a general purpose device (like a PC ten years ago, or a smartphone today) is a solved one; the only question is which. Niche devices, on the other hand, face two hurdles to adoption: before a consumer chooses which device to buy they need to be convinced as to why they need a niche device in the first place. And, because answering “why” is even more difficult than winning “which”, niche device makers ought to focus on being a clearly superior solution to an identified market need; being cheap is not only not a priority, it’s a distraction.

The Surface Studio does this brilliantly; as the launch video shows, an incredible amount of engineering and expensive manufacturing went into the physical product, particularly the screen and hinge. The result, though, is that if you are an artist or graphic designer or architect or musician, or do any sort of activity that requires drawing on or touching your display, and need the productivity capabilities of a PC (hello file system!), then there is nothing on the market like the Surface Studio (including the Surface Pro). Microsoft has rightly pulled out all the stops to make the perfect device for you. Does it cost a lot? Of course it does! But as Apple has demonstrated for years, price is less important than differentiation. To insist that Microsoft make the product cheaper is the exact wrong strategy for a niche device maker, and I suspect that insistence is rooted in the false assumption that the PC is a general purpose device that ought to appeal to everyone.

Certainly the Studio’s success is not assured, in large part because Microsoft’s target audience is using OS X. It is a tall order to get people to switch away from operating systems they are familiar with, which emphasizes the importance of Microsoft focusing on differentiation, not price (if price were all that mattered Windows would never have lost share to OS X in the first place). Just as important, though, is that today more people are even considering the possibility: the single best way to build a brand that attracts customers (as opposed to being the default) is to build the best possible product you can, and only then make it as affordable as possible. For too long Microsoft approached that problem backwards, exacerbating the secular shift from PCs to smartphones.

The Nintendo Switch

As for Nintendo, the beloved gaming company has certainly taken steps in the right direction: the company lent its intellectual property (and investment dollars) to Niantic, driving the phenomenal success of Pokémon Go. Perhaps more significantly for the bottom line, the company has leveraged its partnership with mobile game developer DeNA to develop the upcoming Super Mario Run; encouragingly, like Pokémon Go, Nintendo is taking an established concept (a “runner”, in this case) and applying its intellectual property on top. It’s a great way to leverage Nintendo’s most valuable assets.

There are encouraging signs when it comes to the Switch as well:

  • It appears (though it is not confirmed) that Nintendo is abandoning touch as an input, which is exactly right. Touch on Nintendo products debuted in 2004 on the Nintendo DS, when the iPhone was but a gleam in Steve Jobs’ eye; today touch-focused games are going to be developed for the far larger smartphone market, and keeping the technology would only make Nintendo’s product significantly more expensive and/or far worse with regard to screen quality (the current 3DS and Wii U touch screens are embarrassing).
  • What Nintendo is doubling down on is controllers, another smart move. I argued in 2014 that controllers are so important to the user experience of consoles that they will hold off general purpose devices like Apple TVs when it comes to living room gaming; Nintendo’s bet is that it can attract gamers who want mobility by offering high-fidelity control that smartphones cannot.4
  • Nintendo also looks set to unleash a flurry of first-party titles: the company clearly gave up on the Wii U quite a while ago, shifting resources to the Switch. If the product launches with titles like Mario, Zelda, etc., it will provide a big boost that may actually attract third party developers.

However, there remain two big questions: first, while Microsoft is a highly diversified business that can afford to sell Surface Studio to a very narrow niche, Nintendo’s entire business, outside of its nascent smartphone efforts, is its consoles. There is definitely a console niche, but Sony and Microsoft are filling it admirably. Is a portable niche big enough to support Nintendo?

Second, will Nintendo fully embrace a niche strategy? As I noted above, in a niche it is most important to convince consumers that they want your device; only then will they decide if they can afford it. If Nintendo skimps on the quality of its components, the performance of the device, or battery life just to save a few bucks, it may well please the people who were going to buy the device anyways, but plenty of others will stick with their good-enough smartphones or clearly superior consoles. Obviously the Switch should not be absurdly expensive, but it can definitely be too cheap.


Over the last couple of years, as it has become clear that rounded rectangles of glass and aluminum running either iOS or Android “won” the smartphone wars, it has been tempting to fret that hardware innovation would slow; and, arguably, in the case of smartphones, it has. In fact, though, I expect that the reality of the smartphone being the dominant general purpose device will open the doors for more and more devices like Surface Studio and the Nintendo Switch.

What might be created if you start with the assumption that the smartphone exists? Perhaps you would make sunglasses with a camera, or a watch, or an activity tracker, or a drone. I noted in Snapchat Spectacles and the Future of Wearables that the establishment of the PC led to an explosion of dedicated devices like PDAs, digital cameras, GPS devices, and digital music players. Now that those have been subsumed into the smartphone there are new opportunities, and in a twist of fate it is smartphone also-rans like Microsoft and Nintendo — along with smartphone-native companies like Snapchat — that have more freedom to experiment given they have nothing to protect. It’s never been better to be a geek!

  1. Including bundled versions, breaking the record held for 20 years by Super Mario Bros. []
  2. I am painfully aware of this point, as I was on the Windows 8 team charged with bringing 3rd-party applications on board []
  3. To be fair, the majority of revenue in both divisions — from Office and Windows Server and its related products — still rests on the Windows PC being the center of the enterprise []
  4. Yes, there are add-on controllers for the iPhone, but games can’t require them, which dramatically reduces their utility []

The IT Era and the Internet Revolution

I like to say that I write about media generally and journalism specifically because the industry is a canary in the coal mine when it comes to the impact of the Internet: text shifted from newsprint to the web seamlessly, completely upending the industry’s business model along the way.

Of course I have a vested interest in this shift: for better or worse I, by virtue of making my living on Stratechery, am a member of the media, and it would be disingenuous to pretend that my opinions aren’t shaped by the fact I have a personal stake in the matter. Today, though, and somewhat reluctantly, I am not just acknowledging my interests but explicitly putting Stratechery forward as an example of just how misguided the conventional wisdom is about the Internet’s long-term impact on society, in ways that extend far beyond newspapers (but per my point, let’s start there).

What Killed Newspapers

On Monday Jack Shafer, the current dean of media critics, asked What If the Newspaper Industry Made a Colossal Mistake?:

What if, in the mad dash two decades ago to repurpose and extend editorial content onto the Web, editors and publishers made a colossal business blunder that wasted hundreds of millions of dollars? What if the industry should have stuck with its strengths — the print editions where the vast majority of their readers still reside and where the overwhelming majority of advertising and subscription revenue come from — instead of chasing the online chimera?

That’s the contrarian conclusion I drew from a new paper written by H. Iris Chyi and Ori Tenenboim of the University of Texas and published this summer in Journalism Practice. Buttressed by copious mounds of data and a rigorous, sustained argument, the paper cracks open the watchworks of the newspaper industry to make a convincing case that the tech-heavy Web strategy pursued by most papers has been a bust. The key to the newspaper future might reside in its past and not in smartphones, iPads and VR. “Digital first,” the authors claim, has been a losing proposition for most newspapers.

Shafer’s theory is that the online editions of newspapers are inferior to the print editions; ergo, people read them less. To buttress his point Shafer cites statistics showing that most local residents don’t read their local newspaper online.

The flaw in this reasoning should be obvious to any long-time Stratechery reader: people in the pre-Internet era didn’t read local newspapers because holding an unwieldy, ink-staining piece of flimsy newsprint was particularly enjoyable; people read local newspapers because they were the only option. And, by extension, people don’t avoid local newspapers’ websites because the reading experience sucks — although that is true — they don’t even think to visit them because there are far better ways to occupy their finite attention.

Moreover, while some of those alternatives are distractions like games or social networking, any given newspaper’s real competitors are other newspapers and online-only news sites. When I was growing up in Wisconsin I could get the Wisconsin State Journal in my mailbox or I could go to a bookstore to buy the New York Times; it didn’t matter if the latter was “better”, it was too inconvenient for most. Now, though, the only inconvenience is tapping a different app. Of course most readers don’t even bother to do that: they just click on whatever is in their Facebook feed, interspersed with advertisements that are both more targeted and more measurable than newspaper advertisements ever were.

The truth is there is no one to blame for the demise of newspapers — not Google or Facebook, and not 1990s era publishers. The entire linchpin of the newspaper business model was controlling distribution, and when that linchpin was obliterated by the Internet it was inevitable that the entire apparatus would collapse.

The IT Era

Make no mistake, this sucks for journalists in particular; newsroom employment has plummeted over the last decade:

[Chart: U.S. newsroom employment over the last decade]

Still, just for a moment set aside those disappearing jobs and look at what happened from roughly 1985 to 2007: at a time when newspaper revenue continued to grow, jobs didn’t grow at all; naturally, newspaper companies were enjoying record profits.

What had happened was information technology: copy could be written on computers, passed on to editors via local area networks, then laid out digitally. It was a massive efficiency improvement over typewriters, halftone negatives, and literal cutting-and-pasting:

[Image: the pre-digital newspaper copy workflow]

Newspapers obviously weren’t the only industry to benefit from information technology: the rise of ERP systems, databases, and personal computers provided massive gains in productivity for nearly all businesses (although it ended up taking nearly a decade for the improvements to show up). What this first wave of information technology did not do, though, was fundamentally change how those businesses worked, which meant nine of the ten largest companies in 1980 were all amongst the 21 largest companies in 1995.1 The biggest change was that more and more of those productivity gains started accruing to company shareholders, not the workers — and newspapers were no exception.

This is why I believe it is critical to draw a clear line between the IT era and the Internet era: the IT era saw the formation of many still formidable technology companies, but their success was based on entrenching deep-pocketed incumbent enterprises who could use technology to increase the productivity of their workers. What makes the Internet era a much bigger deal is that it challenges the very foundations of those enterprises.

The Internet Revolution

I already explained what happened to newspapers when free distribution erased local newspapers’ moats in the blink of an eye; as I suggested at the beginning, though, this was not an isolated incident but a sign of what was to come. Back in July I laid out how the acquisition of Dollar Shave Club suggested the same process was happening to consumer packaged goods companies: leveraging size to secure shelf space supported by TV advertising was no longer the only way to compete. A few weeks before that I pointed out that television was intertwined with its advertisers; the Internet was eroding the business of linear TV, CPG companies, retailers, and even automotive companies simultaneously, leaving the entire post-World War II economic order dependent on sports to hold everything together.

The ways these changes arrive are strikingly similar; I call it the FANG Playbook after Facebook-Amazon-Netflix-Google:

None of the FANG companies created what most considered the most valuable pieces of their respective ecosystems; they simply made those pieces easier for consumers to access, so consumers increasingly discovered said pieces via the FANG home pages. And, given that the Internet made distribution free, that meant the FANG companies were well on their way to having far more power and monetization potential than anyone realized…

By owning the consumer entry point — the primary choke point — in each of their respective industries the FANG companies have been able to modularize and commoditize their suppliers, whether those be publishers, merchants and suppliers, content producers, or basically anyone who needs to be found on the Internet.

This is the critical difference between the IT era and the Internet revolution: the first made existing companies more efficient; the second, primarily by making distribution free, destroyed those same companies’ business models.

The Internet Upside

What is easy to forget in this tale of woe is that all of these upheavals have massively benefited consumers: that I can read any newspaper in the world is a good thing. That we have access to many more products at much lower price points is amazing. That one can search the entire corpus of human knowledge from just about anywhere in the world, or connect to billions of people, or more prosaically, watch what one wants to watch when one wants to watch it, is pretty great.

Beyond that, what are even more difficult to see are the new possibilities that arise from said upheaval, which is where Stratechery comes in. By no means is this site a replacement for newspapers: I’m pretty explicit about the fact I don’t do original reporting. And yet, I certainly wouldn’t classify the time spent reading this site in the same category as the diversions of gaming and social networking I mentioned earlier. Rather, my goal is to deliver something completely new: deep, ongoing analysis into the business and strategy of technology. It is a viewpoint that wasn’t worth cutting-and-pasting into a broadsheet meant to serve a geographically limited market, but when the addressable market is the entire world the economics suddenly work very well indeed.

Oh, and did you catch that? Saying “the addressable market is the whole world” is the exact same thing as saying that newspapers suddenly had to compete with every other news source in the world; it’s not that the Internet is inherently “good” or “bad”, rather it is a new reality, and the fact that industries predicated on old assumptions must now fail should not obscure that entirely new industries built on new assumptions — including huge new opportunities for small-scale entrepreneurship by individuals or small teams — are now possible. See YouTube or Etsy or yes, journalism, and this is only the beginning.

The Importance of Secondary Effects

I’d certainly like to think the benefits of this change run deeper than simply ensuring I earn a decent living; it is deeply meaningful to me when readers say my writing helped them land a job, or that they are applying my frameworks to a particularly difficult decision they are facing, or even that they feel like they are getting a business school education for practically free. To be sure, that last point overstates things, at least from a business school’s perspective: the breadth of material covered, the degree on your resume, and of course the friends you make are big advantages. But it used to be that the choice was a binary one: spend six figures on business school or don’t; if one can pick up a useful bit of business thinking for $10/month while still being a productive worker, then isn’t that a win for society?

These secondary effects will be the key to building a prosperous society amidst the ruin of the Internet’s creative destruction: what is so exciting about Uber is not the fact it is wiping out the taxi industry, but rather that transportation as a service has the potential to radically transform our cities. What happens when parking lots go away, commutes in self-driving cars lend themselves to increased productivity, or going out is as easy as tapping an app? What kind of new jobs and services might arise?

As another example, I wrote last month about the dramatic shift in enterprise software that is being enabled by the cloud. The simple ability to pay as you go has already had a big impact on startups and venture capital, but the initial impact on established companies has been to operationalize costs and increase scalability for existing processes; the true transformation — building and selling software in completely new ways — is only getting started. Again, there is a difference between making an existing process more efficient and enabling a completely new approach. The gains from the former are easy to measure; the transformation of the latter is only apparent in retrospect, in part because the old way takes time to die and be repurposed.

This two-stage process is going to be the most traumatic when it comes to the already-started-and-accelerating introduction of automation and artificial intelligence. The downsides are obvious to everyone: if computers can do the job of a human, then the human no longer has a job. In the long run, though, what might that human do instead? To presume that displaced workers will only ever sit around collecting a universal basic income2 is, to my mind, to sell short the human drive and ingenuity that has already carried us far beyond our caveman ancestors.

I know that my perspective is a privileged one: I am a clear beneficiary of this new world order. Moreover, I know that very good and important things will be lost, at least for a time, in these transitions. I strongly agree with Shafer, for example, that newspapers “still publish a disproportionate amount of the accountability journalism available” and that “we stand to lose one of the vital bulwarks that protect and sustain our culture.”

Fixing that and the many other problems wrought by the Internet, though, requires looking forwards, not backwards. The most fundamental assumptions underlying businesses — critical institutions in any society — have changed irrevocably, and to pretend they haven’t is a colossal mistake.

  1. Gulf Oil, which was the 7th largest company in 1980, was the exception []
  2. Which I support []