Stratechery Plus Update

  • The European Commission Versus Android

    To understand how Google ended up with a €4.3 billion fine and a 90-day deadline to change its business practices around Android, it is critical to keep one date in mind: July 2005.1 That was when Google acquired a still-in-development mobile operating system called Android, and to put the acquisition in context, Steve Jobs was, at least publicly, “not convinced people want to watch movies on a tiny little screen”. He was, of course, referring to the iPod; Apple would go on to release an iPod with video playback a few months later, but the iPhone was still a year-and-a-half away from being revealed.

    In other words, Android, at least at the beginning, wasn’t a response to Apple;2 the real target was Microsoft (and to a lesser extent Blackberry), which seemed poised to dominate smartphones just as it had the desktop. That was an untenable situation for Google; then-Vice President of Product Management Sundar Pichai wrote on the Google Public Policy blog about the company’s challenges on PCs:

    Google believes that the browser market is still largely uncompetitive, which holds back innovation for users. This is because Internet Explorer is tied to Microsoft’s dominant computer operating system, giving it an unfair advantage over other browsers. Compare this to the mobile market, where Microsoft cannot tie Internet Explorer to a dominant operating system, and its browser therefore has a much lower usage.

    What mattered to Google was access to end users: that is what makes the Aggregation flywheel turn. On PCs the company had succeeded through a combination of flat-out being better, the fact that it was very simple to visit a new URL (and make it your homepage), and deals with OEMs to set Google as the homepage from the beginning. All would be more difficult to achieve on mobile, at least mobile as it was understood in 2005: applications were notoriously difficult to find and install, and Microsoft and Blackberry had locked down their operating systems to a much greater extent than Microsoft had on the PC.

    Thus the Android gambit: Google decided to take on Microsoft directly in mobile operating systems, and its most powerful tool would not be the quality of the operating system, but the business model. To that end, while Google did, naturally, retool Android’s user interface once the iPhone was announced, the business model remained Microsoft kryptonite: whereas Microsoft charged a per-device licensing fee, just as it had with Windows, Android would not only be free and open-source, Google would actually share search revenue derived from Android with OEMs that installed the operating system.

    Of course Android also ended up being a much better experience than Windows Mobile in the post-iPhone world, and the deal was irresistible to OEMs flailing for a response to the iPhone: get a (somewhat) comparable (sort-of) touch-based operating system for free, and even make money after the initial sale! Indeed, not only did Android effectively kill Microsoft’s mobile efforts, it went on to take over the world via a massive ecosystem of device makers and mobile carriers that competed to drive down costs and increase distribution.

    Android’s Success

    That Android increases competition was the focus of the latest blog post from Pichai, now Google’s CEO, responding to the ruling:

    Today, the European Commission issued a competition decision against Android, and its business model. The decision ignores the fact that Android phones compete with iOS phones, something that 89 percent of respondents to the Commission’s own market survey confirmed. It also misses just how much choice Android provides to thousands of phone makers and mobile network operators who build and sell Android devices; to millions of app developers around the world who have built their businesses with Android; and billions of consumers who can now afford and use cutting-edge Android smartphones. Today, because of Android, there are more than 24,000 devices, at every price point, from more than 1,300 different brands…

    What Pichai doesn’t say is that this competition is not so much a feature as the point: open-sourcing Android commoditized smartphone development, meaning anyone could enter, even as few were in a position to profit over time. That included Google, at least at the beginning, which was by design: remember, the point of Android was not to make money like Windows, it was to stop Windows or any other operating system from getting between Google and users. Venture capitalist Bill Gurley explained in a 2011 post entitled The Freight Train That Is Android:

    Android, as well as Chrome and Chrome OS for that matter, are not “products” in the classic business sense. They have no plan to become their own “economic castles.” Rather they are very expensive and very aggressive “moats,” funded by the height and magnitude of Google’s castle [(search advertising)]. Google’s aim is defensive not offensive. They are not trying to make a profit on Android or Chrome. They want to take any layer that lives between themselves and the consumer and make it free (or even less than free). Because these layers are basically software products with no variable costs, this is a very viable defensive strategy. In essence, they are not just building a moat; Google is also scorching the earth for 250 miles around the outside of the castle to ensure no one can approach it. And best I can tell, they are doing a damn good job of it.

    Indeed they were, but the strategy had a built-in problem: Android was, well, open source, and just as that helped Android spread, it could just as easily be forked into an initially compatible operating system that didn’t connect to Google’s services — the castle that Google was trying to protect all along. Google needed a wall for its moat, and found one in the Google Play Store.

    The Google Play Store and Google Play Services

    The Google Play Store, not unlike Android’s user interface, was a response to the iPhone, specifically the highly successful launch of the App Store in 2008. And while the Play Store often lagged the App Store when it came to cutting-edge apps, particularly in the early days, it quickly became one of Google’s most valuable services, both in terms of making Android useful as well as making Google money.

    Note, though, that the Play Store is not a part of Android: it has always been closed-source and exclusive to Google’s version of Android, just like other Google services like Gmail, Maps, and YouTube. The problem Google had with all of those apps, though, was that they were updated with the operating system, and OEMs and carriers — who only made money when a device was initially sold — were not particularly incentivized to update the operating system.

    Google’s solution was Google Play Services: first released in 2012 and distributed via the Play Store to every device running Android 2.2 Froyo or later, it provided an easily updatable API layer that allowed Google to ship new functionality independent of operating system updates. It was an elegant solution to a real problem inherent in the free-wheeling model Google had taken for Android distribution: widespread fragmentation. Soon all of Google’s apps were built on top of Google Play Services, which Google also opened up to third-party developers.

    The initial version was quite modest; here is the announcement on Google+:

    At Google I/O we announced a preview of Google Play services, a new development platform for developers who want to integrate Google services into their apps. Today, we’re kicking off the full launch of Google Play services v1.0, which includes Google+ APIs and new OAuth 2.0 functionality. The rollout will cover all users on Android 2.2+ devices running the latest version of the Google Play store.

    Over the next several years, though, Google devoted more and more of its effort — and its most interesting APIs, like location and maps and gaming services — to Google Play Services; meanwhile, whatever equivalent service was in the open-source version of Android was effectively frozen in time. The net result is central to teasing out this case: Google Play Services quietly shifted ever more apps from being Android apps to being Google Play apps; today, no Google app will function on open-source Android without extensive reworking, and the same is true of ever more third-party apps as well.

    That noted, it is hard, in my estimation, to see this as being an antitrust violation. The fact of the matter is that Google was addressing a legitimate problem in the Android ecosystem, and the company didn’t make any developer use Google Play Services APIs instead of the more basic ones still available even today.

    The European Commission Case

    The European Commission found Google guilty of breaching EU antitrust rules in three ways:

    • Illegally tying Google’s search and browser apps to the Google Play Store; to get the Google Play Store and thus a full complement of apps, OEMs have to pre-install Google search and Chrome and make them available within one screen of the home page.
    • Illegally paying OEMs to exclusively pre-install Google Search on every Android device they made.
    • Illegally barring OEMs that installed Google’s apps from selling any device that ran an Android fork.

    Taken in isolation, these seem to run from least problematic to most problematic.

    • The Google Play Store has always been an exclusive Google app; it seems that Google ought to be able to distribute it exclusively as part of a bundle if it so chooses.
    • Pinning all revenue from Google Search to exclusivity on all devices quite obviously makes it very difficult for alternative search services to build share (as they lack access to pre-installs, one of the most effective channels for customer acquisition); this seems to be more of a Google Search dominance issue than an Android dominance issue though.
    • Predicating the availability of any of Google’s apps, including the Google Play Store, on OEMs not taking advantage of the open source nature of Android on devices that will not include Google apps seems much more problematic than Google insisting its apps be distributed in a bundle. The latter is Google’s prerogative; the former is dictating OEM actions just because Google can.

    This is where the history of Android matters; before Google Play Services, the primary challenge in building a competitive fork of Android would have been convincing developers to upload their apps to a new app store (since Google would obviously not want to put its apps, including the Play Store, on said fork). That fork, though, never materialized because of Google’s contractual terms barring OEMs from selling any devices built on such a fork.

    Today the situation is very different: that contractual limitation could go away tomorrow (or, more accurately, in 90 days), and it wouldn’t really matter because, as I explained above, many apps are no longer Android apps but are rather Google Play apps. Running on an Android fork is by no means impossible, but most apps would require more rework than simply being uploaded to a new app store.

    In short, in my estimation the real antitrust issue is Google contractually foreclosing OEMs from selling devices with non-Google versions of Android; the only way to undo that harm in 2018, though, would be to make Google Play Services available to any Android fork.

    The Commission’s Remedies

    To be sure, that’s not exactly what the European Commission ordered (in fact, “Google Play Services” does not appear a single time in the press release); the Commission seems to feel that the three issues do stand alone. That means that Google has to respond to each individually:

    • Google has to untie the Play Store from Search and the Chrome browser
    • Google has already stopped paying OEMs for portfolio-wide search exclusivity
    • Google can no longer stop OEMs from selling devices with Android forks

    The most momentous by far is the first (despite the fact that it is the weakest allegation, in my estimation). Samsung, or any other OEM, could in 90 days sell a device with Bing search only and the Google Play Store (where of course Google Search could be downloaded). This will likely accrue to consumers’ benefit: Microsoft, Google, and other providers will soon be bidding to be the default search option, and, given the commoditized nature of Android devices, it is likely that most of what they are willing to pay will go towards lower prices.

    Still, it is an unsatisfying remedy: Google built Android for the express purpose of monetizing search, and to be denied that by regulatory edict feels off; Google, though, bears a lot of the blame for going too far with its contracts.

    More broadly, the European Commission continues to be a bit too cavalier about denying companies — well, Google, mostly — the right to monetize the products they spend billions of dollars at significant risk to develop; this was my chief objection to last year’s Google Shopping case. In this case I narrowly come down on the Commission’s side almost by accident: I think Google acted illegally by contractually foreclosing Android competitors at a time when it might have made a difference, but I am concerned that the Commission’s publicly released reasoning doesn’t seem to grasp exactly how Android has developed, the choices Google made, and why.

    That noted, I highly doubt Google would do anything differently: when it comes to the company’s goals, Android could not be a bigger success — if anything, this ruling is evidence of just how successful the product was.


    1. The exact date of the acquisition is unknown 

    2. For those wondering, then-Google CEO Eric Schmidt didn’t join Apple’s Board of Directors until August 2006 


  • Intel and the Danger of Integration

    Last week Brian Krzanich resigned as the CEO of Intel after violating the company’s non-fraternization policy. The details of Krzanich’s departure, though, ultimately don’t matter: his tenure was an abject failure, the extent of which is only now coming into view.

    Intel’s Obsolete Opportunity

    When Krzanich was appointed CEO in 2013 it was already clear that arguably the most important company in Silicon Valley’s history was in trouble: PCs, long Intel’s chief money-maker, were in decline, leaving the company ever more reliant on the sale of high-end chips to data centers; Intel had effectively zero presence in mobile, the industry’s other major growth area.

    Still, I framed the situation that faced Krzanich as an opportunity, and drew a comparison to the challenges that faced the legendary Andy Grove three decades ago:

    By the 1980s, though, it was the microprocessor business, fueled by the IBM PC, that was driving growth, while the DRAM business was fully commoditized and dominated by Japanese manufacturers. Yet Intel still fashioned itself a memory company. That was their identity, come hell or high water.

    By 1986, said high water was rapidly threatening to drag Intel under. In fact, 1986 remains the only year in Intel’s history that they made a loss. Global overcapacity had caused DRAM prices to plummet, and Intel, rapidly becoming one of the smallest players in DRAM, felt the pain severely. It was in this climate of doom and gloom that Grove took over as CEO. And, in a highly emotional yet patently obvious decision, he once and for all got Intel out of the memory manufacturing business.

    Intel was already the best microprocessor design company in the world. They just needed to accept and embrace their destiny.

    Fast forward to the challenge that faced Krzanich:

    It is into a climate of doom and gloom that Krzanich is taking over as CEO. And, in what will be a highly emotional yet increasingly obvious decision, he ought to commit Intel to the chip manufacturing business, i.e. manufacturing chips according to other companies’ designs.

    Intel is already the best microprocessor manufacturing company in the world. They need to accept and embrace their destiny.

    That article is now out of date: in a remarkable turn of events, Intel has lost its manufacturing lead. Ben Bajarin wrote last week in Intel’s Moment of Truth:

    Not only has the competition caught Intel they have surpassed them. TSMC is now sampling on 7nm and AMD will ship their architecture on 7nm technology in both servers and client PCs ahead of Intel. For those who know their history, this is the first time AMD has ever beat Intel to a process node. Not only that, but AMD will likely have at least an 18 month lead on Intel with 7nm, and I view that as conservative.

    As Bajarin notes, 7nm for TSMC (or Samsung or Global Foundries) isn’t necessarily better than Intel’s 10nm; chip-labeling isn’t what it used to be. The problem is that Intel’s 10nm process isn’t close to shipping at volume, and the competition’s 7nm processes are. Intel is behind, and its insistence on integration bears a large part of the blame.

    Intel’s Integrated Model

    Intel, like Microsoft, had its fortunes made by IBM: eager to get out the door the PC that an increasingly vocal section of its customer base demanded, the mainframe maker outsourced much of the technology to third-party vendors, the most important pieces being an operating system from Microsoft and a processor from Intel. The impact of the former decision was the formation of an entire ecosystem centered around MS-DOS, and eventually Windows, cementing Microsoft’s dominance.

    Intel was a slightly different story; while an operating system was simply bits on a disk, and thus easily duplicated for all of the PCs IBM would go on to sell, a processor was a physical device that needed to be manufactured. To that end IBM insisted on having a “second source”, that is, a second non-Intel manufacturer for Intel’s chips. Intel chose AMD, and licensed first the 8086 and 8088 designs that were in the original IBM PC, and later, again under pressure from IBM, the 80286 design; the latter was particularly important because it was designed to be upward compatible with everything that followed.

    This laid the groundwork for Intel’s strategy — and immense profitability — for the next 35 years. First off, the dominance of Intel’s x86 design was assured thanks to its integration with DOS/Windows: specifically, DOS/Windows created a two-sided market of developers and PC users, and DOS/Windows ran on x86.

    Microsoft and Intel were integrated in the PC value chain

    However, thanks to its licensing deal with AMD, Intel wasn’t automatically entitled to all of the profits that would result from that integration; thus Intel doubled down on an integration of its own: the design and manufacture of x86 chips. That is, Intel would invest huge sums of money into creating new and faster designs (the 386, the 486, the Pentium, etc.), and also invest huge sums of money into ever smaller and more efficient manufacturing processes that would push the limits of Moore’s Law. This one-two punch would ensure that, despite AMD’s license, Intel’s chips would be the only realistic choice for PC makers, allowing the company to capture the vast majority of the profits created by the x86’s integration with DOS/Windows.

    Intel was largely successful. AMD did take the performance crown around the turn of the century with the Athlon 64, but the company was unable to keep up with Intel financially when it came to fabs, and Intel illegally leveraged its dominant position with OEMs to keep them buying mostly Intel parts; then, a few years later, Intel not only took back the performance lead with its Core architecture, but settled into the “tick-tock” strategy where it alternated new designs and new manufacturing processes on a regular schedule. The integration advantage was real.

    TSMC’s Modular Approach

    In the meantime there was a revolution brewing in Taiwan. In 1987, Morris Chang founded Taiwan Semiconductor Manufacturing Company (TSMC) promising “Integrity, commitment, innovation, and customer trust”. Integrity and customer trust referred to Chang’s commitment that TSMC would never compete with its customers with its own designs: the company would focus on nothing but manufacturing.

    This was a completely novel idea: at that time all chip manufacturing was integrated a la Intel; the few firms that were only focused on chip design had to scrap for excess capacity at Integrated Device Manufacturers (IDMs) who were liable to steal designs and cut off production in favor of their own chips if demand rose. Now TSMC offered a much more attractive alternative, even if their manufacturing capabilities were behind.

    In time, though, TSMC got better, in large part because it had no choice: soon its manufacturing capabilities were only one step behind industry standards, and within a decade it had caught up (although Intel remained ahead of everyone). Meanwhile, the fact that TSMC existed created the conditions for an explosion in “fabless” chip companies that focused on nothing but design. For example, in the late 1990s there was an explosion in companies focused on dedicated graphics chips: nearly all of them were manufactured by TSMC. And, all along, the increased business let TSMC invest even more in its manufacturing capabilities.

    Integrated Intel was competing with a competitive modular ecosystem

    This developed into a three-pronged assault on Intel’s dominance:

    • Many of those new fabless design companies were creating products that were direct alternatives to Intel chips for general purpose computing. The vast majority of these were based on the ARM architecture; AMD, too, spun off its fab operations in 2008 (christened GlobalFoundries) and became a fabless designer of x86 chips.
    • Specialized chips, designed by fabless design companies, were increasingly used for operations that had previously been the domain of general purpose processors. Graphics chips in particular were well-suited to machine learning, cryptocurrency mining, and other “embarrassingly parallel” operations; many of those applications have spawned specialized chips of their own. There are dedicated bitcoin chips, for example, or Google’s Tensor Processing Units: all are manufactured by TSMC.
    • Meanwhile TSMC, joined by competitors like GlobalFoundries and Samsung, invested ever more in new manufacturing processes, fueled by the revenue from the previous two factors in a virtuous cycle.

    Intel’s Straitjacket

    Intel, meanwhile, was hemmed in by its integrated approach. The first major miss was mobile: instead of simply manufacturing ARM chips for the iPhone, the company presumed it could win by leveraging its manufacturing to create a more-efficient x86 chip; it was a decision that evinced too much knowledge of Intel’s margins and not nearly enough reflection on the importance of the integration between DOS/Windows and x86.

    Intel took the same mistaken approach to non-general-purpose processors, particularly graphics: the company’s Larrabee architecture was a graphics chip based on — you guessed it — x86; it was predicated on leveraging Intel’s integration, instead of actually meeting a market need. Once the project predictably failed, Intel limped along with graphics that were barely passable for general purpose displays, and worthless for all of the new use cases that were emerging.

    The latest crisis, though, is in design: AMD is genuinely innovating with its Ryzen processors (manufactured by both GlobalFoundries and TSMC), while Intel is still selling variations on Skylake, a three-year-old design. Ashraf Eassa, with assistance from a since-deleted tweet from a former Intel engineer, explains what happened:

    According to a tweet from ex-Intel engineer Francois Piednoel, the company had the opportunity to bring all-new processor technology designs to its currently shipping 14nm technology, but management decided against it.

    my post was actually pointing out that market stalling is more troublesome than Ryzen, It is not a good news. 2 years ago, I said that ICL should be taken to 14nm++, everybody looked at me like I was the craziest guy on the block, it was just in case … well … now, they know

    — François Piednoël (@FPiednoel) April 26, 2018

    The problem in recent years is that Intel has been unable to bring its major new manufacturing technology, known as 10nm, into mass production. At the same time, the issues with 10nm seemed to catch Intel off-guard. So, by the time it became clear that 10nm wouldn’t go into production as planned, it was too late for Intel to do the work to bring one of the new processor designs that was originally developed to be built on the 10nm technology to its older 14nm technology…

    What Piednoel is saying in the tweet I quoted above is that when management had the opportunity to start doing the work to bring their latest processor design, known as Ice Lake (abbreviated “ICL” in the tweet), [to the 14nm process] they decided against doing so. That was likely because management truly believed two years ago that Intel’s 10nm manufacturing technology would be ready for production today. Management bet incorrectly, and Intel’s product portfolio is set to suffer as a result.

    To put it another way, Intel’s management did not break out of the integration mindset: design and manufacturing were assumed to be in lockstep forever.

    Integration and Disruption

    It is perhaps simpler to say that Intel, like Microsoft, has been disrupted. The company’s integrated model resulted in incredible margins for years, and every time there was the possibility of a change in approach Intel’s executives chose to keep those margins. In fact, Intel has followed the script of the disrupted even more than Microsoft: while the decline of the PC finally led to The End of Windows, Intel has spent the last several years propping up its earnings by focusing more and more on the high-end, selling Xeon processors to cloud providers. That approach was certainly good for quarterly earnings, but it meant the company was only deepening the hole it was in with regards to basically everything else. And now, most distressingly of all, the company looks to be on the verge of losing its performance advantage even in high-end applications.

    This is all certainly on Krzanich, and his predecessor Paul Otellini. Then again, perhaps neither had a choice: what makes disruption so devastating is the fact that, absent a crisis, it is almost impossible to avoid. Managers are paid to leverage their advantages, not destroy them; to increase margins, not obliterate them. Culture more broadly is an organization’s greatest asset right up until it becomes a curse. To demand that Intel apologize for its integrated model is satisfying in 2018, but all too dismissive of the 35 years of success and profits that preceded it.

    So it goes.


  • AT&T, Time Warner, and the Need for Neutrality

    The first thing to understand about the decision by a federal judge to approve AT&T’s acquisition of Time Warner, over the objection of the U.S. Department of Justice, is that it is very much in-line with the status quo: this is a vertical merger, and both the Department of Justice and the courts have defaulted towards approving such mergers for decades.1

    Second, that there is an explosion of merger activity in and between the television production and distribution space is hardly a surprise: the Multichannel Video Programming Distributor (MVPD) business — that is, television distributed by cable, broadband, or satellite — has been shrinking for years now, and in a world where the addressable market is decreasing, the only avenues for growth are winning share from competitors, acquiring competitors, or vertically integrating.

    Third, that last paragraph overstates the industry’s travails, at least in terms of television distribution, because most TV distributors are also internet service providers (ISPs), which means they are getting paid by consumers using the services disrupting MVPDs, including Netflix, Google, Facebook, and the Internet generally.

    What was both unsurprising and yet odd about this case was the degree to which it was fought over point number two, with minimal acknowledgement of point number three. That is, it seems clear to me that AT&T made this acquisition with an eye on point number three, yet the government’s case was predicated on point number two; accordingly, the government, in my eyes, rightly lost given the case it made. Whether it should have lost a better case is another question entirely.

    Why AT&T Bought Time Warner

    What is the point of a merger, instead of a contract? This is a question that always looms large in any acquisition, particularly one of this size: AT&T is paying $85 billion for Time Warner, and that’s an awfully steep price to simply hang out with movie stars.

    The standard explanation for most mergers is “synergies”, the idea that there are significant cost savings from combining the operations of two companies; the reason this explanation is popular is because saving money is not an issue for antitrust, while the corresponding possibility — charging higher prices by achieving a stronger market position through consolidation — is. Such an explanation, though, is usually applied in the case of a horizontal merger, not a vertical one like AT&T and Time Warner.

    To that end, AT&T was remarkably honest in its press release announcing the merger back in 2016:2

    “With great content, you can build truly differentiated video services, whether it’s traditional TV, OTT or mobile. Our TV, mobile and broadband distribution and direct customer relationships provide unique insights from which we can offer addressable advertising and better tailor content,” [AT&T CEO Randall] Stephenson said. “It’s an integrated approach and we believe it’s the model that wins over time…

    AT&T expects the deal to be accretive in the first year after close on both an adjusted EPS and free cash flow per share basis…Additionally, AT&T expects the deal to improve its dividend coverage and enhance its revenue and earnings growth profile.

    Start with the second point: as I noted at the time, it’s not very sexy, but it matters to AT&T, a 34-year member of the Dividend Aristocrats, that is, a company in the S&P 500 that has raised its dividend for at least 25 consecutive years. It’s a core part of AT&T’s valuation, but the company’s free cash flow has been struggling to keep up with its rising dividends. Time Warner will help significantly in this regard, as did the previous acquisition of DirecTV.
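    The mechanics of dividend coverage are simple division, and a toy sketch makes them concrete; every figure below is a made-up placeholder, not AT&T’s actual financials:

    ```python
    # Dividend coverage: free cash flow relative to dividends paid.
    # All figures are hypothetical placeholders (in $ billions), chosen
    # only to illustrate the mechanics, NOT AT&T's actual financials.

    def coverage(fcf, dividends):
        """Dividend coverage ratio; above 1.0 means free cash flow covers the dividend."""
        return fcf / dividends

    # Standalone: coverage is uncomfortably thin.
    standalone = coverage(fcf=17.0, dividends=12.0)

    # Acquire a business that contributes more free cash flow than the
    # additional dividends owed on shares issued to fund the deal, and
    # coverage improves -- the deal is "accretive" on this measure.
    combined = coverage(fcf=17.0 + 4.5, dividends=12.0 + 2.0)

    print(round(standalone, 2), round(combined, 2))  # 1.42 1.54
    ```

    The point of the sketch is simply that an acquisition can strengthen the dividend even while adding shares, so long as the acquired free cash flow outpaces the new dividend obligations.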

    It is the first point, though, that is pertinent to this analysis: how exactly might Time Warner allow AT&T to “build truly differentiated video services”?

    The Government’s Case

    While the AT&T press release noted that those “truly differentiated video services” could be delivered via traditional TV, OTT, or mobile, the government’s case was entirely concerned with traditional TV. The original complaint stated:

    Were this merger allowed to proceed, the newly combined firm likely would — just as AT&T/DirecTV has already predicted — use its control of Time Warner’s popular programming as a weapon to harm competition. AT&T/DirecTV would hinder its rivals by forcing them to pay hundreds of millions of dollars more per year for Time Warner’s networks, and it would use its increased power to slow the industry’s transition to new and exciting video distribution models that provide greater choice for consumers. The proposed merger would result in fewer innovative offerings and higher bills for American families.

    The idea is that AT&T could leverage its ownership of DirecTV to demand higher prices for Turner networks from other MVPDs, because if the MVPDs refused to pay, customers would be driven to switch to DirecTV. The problem is that, as was easily calculable, this makes no economic sense: the amount of money AT&T would lose by blacking out Turner would almost certainly outweigh whatever gains it might accrue. The judge agreed, and that was that.
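A rough version of that calculation; every figure here is a hypothetical placeholder, not an actual AT&T or Turner number, but the shape of the result is what the judge found persuasive:

```python
# Back-of-envelope blackout economics. All inputs are hypothetical
# illustrations, not actual AT&T/Turner figures.

turner_subs_at_rival_mvpds = 70_000_000   # subscribers reached via rival MVPDs
affiliate_fee_per_sub_month = 2.00        # $/sub/month Turner collects
ad_revenue_per_sub_month = 1.00           # ad dollars tied to that reach

# Annual revenue Turner forgoes by blacking out rival distributors
lost_revenue = turner_subs_at_rival_mvpds * \
    (affiliate_fee_per_sub_month + ad_revenue_per_sub_month) * 12

# Gain: some fraction of affected subscribers switch to DirecTV
switch_rate = 0.02                        # assume 2% of affected subs switch
margin_per_new_sub_year = 300.00          # annual margin on a DirecTV sub, $

gained_revenue = turner_subs_at_rival_mvpds * switch_rate * margin_per_new_sub_year

print(f"Lost:   ${lost_revenue / 1e9:.2f}B/year")
print(f"Gained: ${gained_revenue / 1e9:.2f}B/year")
```

Even with a generous switching assumption, the forgone affiliate and advertising revenue swamps the gain; that asymmetry is why the government’s theory made no economic sense.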

    AT&T’s Real Goals

    Remember, though, that AT&T did not limit its options to traditional TV: what is far more compelling are the possibilities Time Warner content presents for OTT and mobile. The question is not what AT&T can do to increase the revenue potential of Time Warner content (which was the government’s focus), but rather what Time Warner content can do to increase the potential of AT&T’s services, particularly mobile.

    Forgive the long excerpt, but I covered this angle at length in a Daily Update when the deal was announced:

    AT&T’s core wireless business is competing in a saturated market with few growth prospects. Apple’s gift to the wireless industry of customers demanding high-priced data plans has largely run its course, with AT&T perhaps the biggest winner: the company acquired significant market share even as it increased its average revenue per user for nearly a decade, primarily thanks to the iPhone. Now, though, most everyone has a smartphone and, more pertinently, a data plan…

    The implication of a saturated market is that growth is increasingly zero sum, which presents both a problem and an opportunity for AT&T. The problem is primarily T-Mobile: fueled by the massive break-up fee paid by AT&T for the aforementioned failed acquisition, T-Mobile has embarked on an all-out assault against the incumbent wireless carriers, and AT&T has felt the pain the most, recording a negative net change in postpaid wireless customers for eight straight quarters. Unable or unwilling to compete with T-Mobile on price, AT&T needs a differentiator, ideally one that will not only forestall losses but actually lead to gains.

    At first glance this doesn’t explain the Time Warner acquisition either: per my point above these are two very different companies with two very different strategic views of content. A distributor in a zero-sum competition for subscribers (like AT&T) has a vertical business model: ideally there should be services and content that are exclusive to the distributor, thus securing customers. Time Warner, though, is a content company, which means it has a horizontal business model: content is made once and then monetized across the broadest set of potential customers possible, taking advantage of content’s zero marginal cost. The assumption of this sort of horizontal business model underlay Time Warner’s valuation; to suddenly make Time Warner’s content exclusive to AT&T would be massively value destructive (this is a reality often missed by suggestions that Apple, for example, should acquire content companies to differentiate its hardware).

    AT&T, however, may have found a loophole: zero rating. Zero rating is often conflated with net neutrality, but unlike the latter, zero rating does not entail the discriminatory treatment of data; it just means that some data is free (sure, this is a violation of the idea of net neutrality, but this is why I was critical of the narrow focus on discriminatory treatment of data by net neutrality advocates). AT&T is already using zero rating to push DirecTV:

    This is almost certainly the plan for Time Warner content as well: sure, it will continue to be available on all distributors, but if you subscribe to AT&T you can watch as much as you want for free; moreover, this offering is one that is strengthened by secular trends towards cord-cutting and mobile-only video consumption. If those trends continue on their current path AT&T will not only strengthen the moat of its wireless service against T-Mobile but maybe even start to steal share.

    That this point never came up in the government’s case, and, by extension, the judge’s ruling, is truly astounding.

    That noted, it is very fair to wonder why exactly the Department of Justice sued to block this acquisition: President Trump was very outspoken in his opposition to this deal and even more outspoken in his antipathy towards Time Warner-owned CNN. At the same time, Makan Delrahim, the Assistant Attorney General for Antitrust who led the case, didn’t see a problem with the merger before his appointment. That the government’s complaint rested on both the most obvious angle and, from AT&T’s perspective, the least important, suggests a paucity of rigor in the prosecution of this case; it is very reasonable to wonder if the order to oppose the merger came from the top, and that the easiest case was the obvious out.

    The Neutrality Solution

    Thus we are in the unfortunate scenario where a bad case by the government has led to, at best, a merger that was never examined for its truly anti-competitive elements, and at worst, bad law that will open the door for similar tie-ups. To be sure, it is not at all clear that the government would have won had they focused on zero rating: there is an obvious consumer benefit to the concept — that is why T-Mobile leveraged it to such great effect! — and the burden would have been on the government to show that the harm was greater.

    The bigger issue, though, is the degree to which laws surrounding such issues are woefully out-of-date. Last fall I argued that Title II was the wrong framework to enforce net neutrality, even though net neutrality is a concept I absolutely support; I came to that position in part because zero rating was barely covered by the FCC’s action.3

    What is clearly needed is new legislation, not an attempt to misapply ancient regulation in a way that is trivially reversible. Moreover, AT&T has a point that online services like Google and Facebook are legitimate competitors, particularly for ad dollars; said regulation should address the entire sector. To that end I would focus on three key principles:

    • First, ISPs should not purposely slow or block data on a discriminatory basis. I am not necessarily opposed to the concept of “fast lanes”, as I believe that offers significant potential for innovative services, although I recognize the arguments against them; it should be non-negotiable, though, that ISPs cannot purposely disfavor certain types of content.
    • Second, and similarly, dominant internet platforms should not be allowed to block any legal content from their services. At the same time, services should have discretion in monetization and algorithms; that anyone should be able to put content on YouTube, for example, does not mean that one has a right to have Google monetize it on their behalf, or surface it to people not looking for it.
    • Third, ISPs should not be allowed to zero-rate their own content, and platforms should not be allowed to prioritize their own content in their algorithms. Granted, this may be a bit extreme; at a minimum there should be strict rules and transparency around transfer pricing and a guarantee that the same rates are allowed to competitive services and content.

    The reality of the Internet, as noted by Aggregation Theory, is increased centralization; meanwhile, the impact of the Internet on traditional media is an inexorable drive towards consolidation. Our current laws and antitrust jurisprudence are woefully unprepared to deal with this reality, and a new law guaranteeing neutrality is the best solution.


    1. Whether or not the presumption that vertical mergers are not anti-competitive is warranted is a worthwhile, albeit separate, discussion.

    2. To be fair, the company also mentioned synergies, but it was hardly the point of the press release. 

    3. The FCC said it would evaluate zero rating case-by-case, and did argue in the waning days of the Obama administration that zero-rating one’s own services, as AT&T is clearly trying to do, was a violation, but that was never tested in court and was quickly rolled back.


  • The Scooter Economy

    As I understand it, the proper way to open an article about electric scooters is to first state one’s priors, explain the circumstances of how one came to try scooters, and then deliver a verdict. Unfortunately, that means mine is a bit boring: while most employing this format wanted to hate them,1 I was pretty sure scooters would be awesome — and they were!2

    For me the circumstances were a trip to San Francisco; I purposely stayed at a hotel relatively far from where most of my meetings were, giving me no choice but to rely on some combination of scooters, e-bikes, and ride-sharing services. The scooters were a clear winner: fast, fun, and convenient — as long as you could find one near you. The city needs five times as many.

    So, naturally, San Francisco banned them, at least temporarily: companies will be able to apply for their share of a pool of a mere 1,250 permits; that number may double in six months, but for now the scooter-riding experience will probably be more of a novelty, not something you can rely on. In fact, by the end of my trip, if I were actually in a rush, I knew to use a ride-sharing service.

    It’s no surprise that ride-sharing services have higher liquidity: San Francisco is a car-friendly town. The city has a population of 884,363 humans and 496,843 vehicles, many of them parked in the city’s 275,000 on-street parking spaces. Granted, most of the Uber and Lyft drivers come from outside the city, but there is no congestion tax to deter them.

    The result is an urban area stuck on a bizarre local maximum: most households have cars, but rarely use them, particularly in the city, because traffic is bad and parking is — relative to the number of cars — sparse; the alternative is ride-sharing, which incurs the same traffic costs but at least doesn’t require parking. And yet, San Francisco, for now anyways, will only allow about 60 parking spaces’ worth of scooters onto the streets.

    Everything as a Service

    This is hardly the forum to discuss the oft-head-scratching politics of tech’s de facto capital city, and I can certainly see the downside of scooters, particularly the haphazard way with which they are being deployed; in an environment built for cars scooters get in the way.

    It’s worth considering, though, just how much sense dockless scooters make: the concept is one of the purest manifestations of what I referred to in 2016 as Everything as a Service:

    What happens, though, if we apply the services business model to hardware? Consider an airplane: I fly thousands of miles a year, but while Stratechery is doing well, I certainly don’t own my own plane! Rather, I fly on an airplane that is owned by an airline that is paid for in part through some percentage of my ticket cost. I am, effectively, “renting” a seat on that airplane, and once that flight is gone I own nothing other than new GPS coordinates on my phone.

    Now the process of buying an airplane ticket, identifying who I am, etc. is far more cumbersome than simply hopping in my car — there are significant transaction costs — but given that I can’t afford an airplane it’s worth putting up with when I have to travel long distances. What happens, though, when those transaction costs are removed? Well, then you get Uber or its competitors: simply touch a button and a car that would have otherwise been unused will pick you up and take you where you want to go, for a price that is a tiny fraction of what the car cost to buy in the first place. The same model applies to hotels — instead of buying a house in every city you visit, simply rent a room — and Airbnb has taken the concept to a new level by leveraging unused space.

    The enabling factor for both Uber and Airbnb applying a services business model to physical goods is your smartphone and the Internet: it enables distribution and transaction costs to be zero, making it infinitely more convenient to simply rent the physical goods you need instead of acquiring them outright.

    What is striking about dockless scooters — at least when one is parked outside your door! — is that they make ride-sharing services feel like half-measures: why even wait five minutes, when you can just scan-and-go? Steve Jobs described computers as bicycles of the mind; now that computers are smartphones and connected to the Internet they can conjure up the physical equivalent as well!

    Indeed, the only thing that could make the experience better — for riders and for everyone else — would be dedicated lanes, like, for example, the 900 miles’ worth of parking spaces in San Francisco. To be sure, the city isn’t going to make the conversion overnight, or, given the degree to which San Francisco is in thrall to homeowners, probably ever, but that is a particular shame in 2018: venture capitalists are willing to fund the entire thing, and I’m not entirely sure why.

    Missing Moats

    Late last month came word that Sequoia Capital was leading a $150 million funding round for Bird, one of the electric scooter companies, valuing the company at $1 billion; a week later came reports that GV was leading a $250 million investment in rival Lime.

    One of the interesting tidbits in Axios’s reporting on the latter was that each Lime scooter is used on average between 8 and 12 times a day; plugging that number into this very useful analysis of scooter-sharing unit costs suggests that the economics of both startups are very strong (certainly the size of the investments — and the quality of the investors — suggests the same).
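The unit economics can be sketched quickly; only the 8–12 rides-per-day utilization comes from Axios’s reporting, and every cost and pricing figure below is an assumption for illustration:

```python
# Rough scooter-sharing unit economics. Only rides_per_day is grounded
# in reporting (Axios: 8-12 rides/day); all other figures are assumptions.

scooter_cost = 400.0                  # upfront hardware cost, $
rides_per_day = 10                    # midpoint of the reported 8-12 range
revenue_per_ride = 1.0 + 0.15 * 10    # assume $1 unlock + $0.15/min, 10-min ride
charging_cost_per_day = 5.0           # assumed nightly payout to a charger
maintenance_per_day = 1.0             # assumed repairs, rebalancing, etc.

daily_profit = (rides_per_day * revenue_per_ride
                - charging_cost_per_day - maintenance_per_day)
payback_days = scooter_cost / daily_profit

print(f"Daily contribution: ${daily_profit:.2f}")
print(f"Payback period: {payback_days:.0f} days")
```

Under these assumptions a scooter pays for itself in about three weeks; even if vandalism and weather halve the useful life, the margins remain compelling, which is consistent with the size and quality of the investments.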

    The key word in that sentence, though, is “both”: what, precisely, might make Bird and Lime, or any of their competitors, unique? Or, to put it in business parlance, where is the moat? This is where the comparison to ride-sharing services is particularly instructive; I explained back in 2014 why there was more of a moat to be had in ride-sharing than most people thought:

    • There is a two-sided network between drivers and riders
    • As one service gains share, its increased utility to drivers will restrict liquidity on the other service, favoring the larger player
    • Riders will, all things being equal, use one service habitually

    This leads to winner-take-all dynamics in a particular geographic area; then, when it comes time to launch in new areas, travelers and brand will give the larger service a head start.

    To be sure, these interactions are complicated, and not everything is equal (see, for example, the huge amounts of share Lyft took last year thanks to Uber’s self-inflicted crises). It is that complication, though, and the fact it is exponentially more difficult to build a two-sided network (instead of, say, plopping a bunch of scooters on the street), that creates the conditions for a moat: the entire point of a moat is that it is hard to build.

    Uber’s Self-Driving Mistake

    This is why I have long maintained that the second-biggest mistake3 former Uber CEO Travis Kalanick made was the company’s head-first plunge into self-driving cars. On a surface level, the logic is obvious: Uber’s biggest cost is the driver, which means getting rid of them is an easy route to profitability — or, should someone else deploy self-driving cars first, then Uber could be undercut in price.

    The mistake in Kalanick’s thinking is two-fold:

    • First, up-and-until the point that self-driving cars are widely available — that is, not simply invented, but built-and-deployed at scale — Uber’s drivers are its biggest competitive advantage. Kalanick’s public statements on the matter hardly evinced understanding on this point.
    • Second, bringing self-driving cars to market would entail huge amounts of capital investment. For one, this means it would be unlikely that Google, a company that rushes to reassure investors when it loses tens of basis points in margin, would do so by itself, and for another, whatever companies did make such an investment would be highly incentivized to maximize utilization of said investment as soon as possible. That means plugging into the dominant transportation-as-a-service network, which means partnering with Uber.

    My contention is that Uber would have been best-served concentrating all of its resources on its driver-centric model, even as it built relationships with everyone in the self-driving space, positioning itself to be the best route to customers for whoever wins the self-driving technology battle.

    Uber’s Second Chance

    Interestingly, scooters and their closely-related cousin, e-bikes, may give Uber a second chance to get this right. Absent two-sided network effects, the potential moats for, well, self-riding scooters and e-bikes are relatively weak: proprietary technology is likely to provide short-lived advantages at best, and Bird and Lime have plenty of access to capital. Both are experimenting with “charging-sharing”, wherein they pay people to charge the scooters in their homes, but both augment that with their own contractors to both charge vehicles and move them to areas with high demand.

    What remains under-appreciated is habit: your typical tech first-adopter may have no problem checking multiple apps to catch a quick ride, but I suspect most riders would prefer to use the same app they already have on their phone. To that end, there is certainly a strong impetus for Bird and Lime to spread to new cities, simply to get that first-app-installed advantage, but this is where Uber has the biggest advantage of all: the millions of people who already have the Uber app.

    To that end, I thought Uber’s acquisition of Jump Bikes was a good idea, and scooters should be next (an acquisition of Bird or Lime may already be too pricey, but Jump has a strong technical team that should be able to get an Uber-equivalent out the door soon). The Uber app already handles multiple kinds of rides; it is a small step to handling multiple kinds of transportation — a smaller step than installing yet another app.

    More Tech Surplus

    More generally, in a world where everything is a service, companies may have to adapt to shallower moats than they may like. If you squint, what I am recommending for Uber looks a bit like a traditional consumer packaged goods (CPG) strategy: control distribution (shelf-space | screen-space) with a few dominant products (e.g. Tide | UberX) that provide leverage for new offerings (e.g. Swiffer | Jump Bikes). The model isn’t nearly as strong, but there may be other potential lock-ins, particularly in terms of exclusive contracts with cities and universities.

    Still, that is hardly the sort of dominance that accrues to digital-only aggregators like Facebook or Google, or even Netflix; the physical world is much harder to monopolize. That everything will be available as a service means a massive increase in efficiency for society broadly — more products will be available to more people for lower overall costs — even as the difficulty in digging moats means most of that efficiency becomes consumer surplus. And, as long as venture capitalists are willing to foot the bill, cities like San Francisco should take advantage.

    I wrote a follow-up to this article in this Daily Update.


    1. That article is perhaps more revealing than the author appreciated.

    2. Note: this article is going to focus on San Francisco for simplicity’s sake, although the broader points have nothing to do with San Francisco specifically; I am aware that the transportation situation is different in different cities — I do live in a different country, after all, in a city with fantastic public transportation and a plethora of personal transportation options. 

    3. The first was not buying Lyft.


  • The Cost of Developers

    Yesterday saw three developer-related announcements, two from Apple, and one from Microsoft. The former came as part of Apple’s annual Worldwide Developers Conference keynote:

    • The iOS App Store, which turns 10 next month, serves 500 million weekly visitors, and as of later this week will have earned developers over $100 billion.
    • Sometime next year developers will be able to write apps for the Mac using iOS user interface frameworks (known as UIKit).

    Microsoft, meanwhile, for the second time in three years, outshone Apple’s keynote with a massive acquisition. From the company’s press release:

    Microsoft Corp. on Monday announced it has reached an agreement to acquire GitHub, the world’s leading software development platform where more than 28 million developers learn, share and collaborate to create the future. Together, the two companies will empower developers to achieve more at every stage of the development lifecycle, accelerate enterprise use of GitHub, and bring Microsoft’s developer tools and services to new audiences.

    “Microsoft is a developer-first company, and by joining forces with GitHub we strengthen our commitment to developer freedom, openness and innovation,” said Satya Nadella, CEO, Microsoft. “We recognize the community responsibility we take on with this agreement and will do our best work to empower every developer to build, innovate and solve the world’s most pressing challenges.”

    Under the terms of the agreement, Microsoft will acquire GitHub for $7.5 billion in Microsoft stock.

    Developers can be quite expensive indeed!

    Platform-Developer Symbiosis

    Over the last few weeks, particularly in The Bill Gates Line, I have been exploring the differences between aggregators and platforms; while aggregators generally harvest already-produced content or goods, platforms enable developers to create something entirely new.

    Platforms facilitate while aggregators intermediate

    This results in a symbiosis between developers and platforms: from a technical perspective, platforms provide the fundamental building blocks (i.e. application program interfaces, or APIs) necessary for developers to build new experiences, and from a marketing perspective, those new experiences give customers a reason to buy the platform in the first place, or to upgrade.

    The degree to which applications drive adoption of the underlying platform can, of course, vary; unsurprisingly the monetization potential of the platform relative to developers varies in a correlated way. Traditional Windows, for example, provided very little end user functionality; what made it so valuable were all of the applications built on top of its open platform.

    Windows was an open platform

    Here “open” means two things: first, the Windows API was available to anyone to build on, and second, developers built relationships directly with end users, including payment. This led to many huge software companies and, in 2003, to the creation of a platform on top of Windows: Valve’s Steam.

    What Valve realized is that playing a game is only one part of the overall customer experience; the experience of discovering and buying the game matters as well, as does the installation and upgrade process. Moreover, these customer pain points were developer pain points as well; the original impetus to develop Steam, for example, was the difficulty in getting players to upgrade en masse, something that was essential for games in which players competed online. And, while Valve is a private company and has never announced Steam’s revenue numbers, reports suggest the platform generates billions of dollars a year.

    Even that, though, pales in comparison to the iOS App Store: Apple took Steam’s app store idea and integrated it with the platform, such that iOS users and developers had no choice but to use Apple’s owned-and-operated distribution channel, with all of the various limitations and costs — 30%, to be precise — that that entailed.

    The iPhone platform with an intermediation layer

    Apple was able to accomplish this first and foremost because the underlying products — the iPhone and iPad — inspired demand in their own right, independent of applications. Apple had the users that developers needed to make money.

    Second, the App Store, like Steam before it, really was a better experience that drove more downloads and purchases by end users. This meant that developing for iOS wasn’t simply attractive because of the number of users, but also because those users were willing to buy more than they would have on another platform.

    Third — and this applies to Steam as well — the App Store dramatically lowered the barriers to entry for developers; this led to more apps, which attracted more users, which led to more apps, both locking in apps as a competitive advantage and also ensuring that no one app had outsized power (leaving Apple free to restrict Steam-like competitors by fiat).

    Apple’s Platform Announcements

    This frames the two Apple announcements I noted above. Start with the news of $100 billion for iOS developers: that means that Apple has collected around $40 billion, and at a very high margin to boot.
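The arithmetic behind that figure, assuming the flat 30% commission cited above (and ignoring the reduced rate Apple later introduced for second-year subscriptions):

```python
# Working backward from the $100 billion paid out to developers,
# assuming Apple's standard 30% App Store commission throughout.

developer_payout = 100e9      # cumulative earnings paid to developers, $
developer_share = 0.70        # developers keep 70% of gross billings

gross_billings = developer_payout / developer_share
apple_cut = gross_billings - developer_payout

print(f"Gross App Store billings: ${gross_billings / 1e9:.0f}B")
print(f"Apple's 30% cut:          ${apple_cut / 1e9:.0f}B")
```

The simplification overstates slightly (subscriptions kept after a year pay only 15%), but “around $40 billion” is the right order of magnitude.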

    Moreover, the vast majority of Apple’s announcements were, if anything, about competing with those developers: the first new app announced, Measure, should immediately wipe out the only obviously useful Augmented Reality apps in the store. Apple also announced a new Podcasts app for Watch and updated News, Stocks, and Voice Memos apps, and the only third-party demos were about how one of the largest software companies there is — Adobe — would be supporting Apple’s preferred 3D-image format. And why not! The implication of owning all of those high-value users is that, on iOS anyways, developers are cheap.

    The Mac, though, is a different story: the platform is far smaller than the iPhone; that there remain a number of high quality independent software vendors supporting the Mac is a testament to how valuable it is for developers to be able to build direct relationships with customers that can span years and multiple transactions. Still, there seems little question that the number of Mac apps is, if not trending in the wrong direction, certainly not growing in any meaningful way; there simply aren’t enough users to entice developers.

    That means Apple’s approach has to be very different from iOS: instead of dictating terms to developers, Apple announced that it is in the middle of a multi-year project to make it easier to port iOS apps to the Mac. This is, in a fashion, Apple paying for Mac apps; no, the money isn’t going to developers, but Apple is voluntarily taking on a much greater portion of the porting workload. Developers are much more expensive when you don’t have nearly as many users.

    The Cost of GitHub

    Still, whatever it is costing Apple to build this porting framework, it surely is a lot less than $7.5 billion, the price Microsoft is paying for GitHub. Then again, at first glance, it may not be clear what the point of comparison is.

    Go back to Windows: Microsoft had to do very little to convince developers to build on the platform. Indeed, even at the height of Microsoft’s antitrust troubles, developers continued to favor the platform by an overwhelming margin, for an obvious reason: that was where all the users were. In other words, for Windows, developers were cheap.

    That is no longer the case today: Windows remains an important platform in the enterprise and for gaming (although Steam, much to Microsoft’s chagrin, takes a good amount of the platform profit there), but the company has no platform presence in mobile, and is in second place in the cloud. Moreover, that second place is largely predicated on shepherding existing corporate customers to cloud computing; it is not clear why any new company — or developer — would choose Microsoft.

    This is the context for thinking about the acquisition of GitHub: lacking a platform with sufficient users to attract developers, Microsoft has to “acquire” developers directly through superior tooling and now, with GitHub, a superior cloud offering with a meaningful amount of network effects. The problem is that acquiring developers in this way, without the leverage of users, is extraordinarily expensive; it is very hard to imagine GitHub ever generating the sort of revenue that justifies this purchase price.

    Again, though, GitHub revenue is not the point; Microsoft has plenty of revenue. What it also has is a potentially fatal weakness: no platform with user-based leverage. Instead Microsoft is betting that open-source, cloud-based applications that exist independent of platforms will be a large-and-increasing share of the future, and that there is room in that future for a company to win by offering developers a superior experience directly, not simply exerting leverage on them.

    This, by the way, is precisely why Microsoft is the best possible acquirer for GitHub, a company that, having raised $350 million in venture capital, was possibly not going to make it as an independent entity. Any company with a platform with a meaningful amount of users would find it very hard to resist the temptation to use GitHub as leverage; on the other side of the spectrum, purely enterprise-focused companies like IBM or Oracle would be tempted to wring every possible bit of profit out of the company.

    What Microsoft wants is much fuzzier: it wants to be developers’ friend, in large part because it has no other option. In the long run, particularly as Windows continues to fade, the company will be ever more invested in a world with no gatekeepers, where developer tools and clouds win by being better on the merits, not by being able to leverage users.

    That, though, is exactly why Microsoft had to pay so much: buying in directly is a whole lot more expensive than using leverage, which can produce equivalent — or better! — returns for much less investment.


  • The Bill Gates Line

    Two of the more famous military sayings are “Generals are always preparing to fight the last war”, and “Never interrupt your enemy while he is making a mistake.” I thought of the latter at the conclusion of last Sunday’s 60 Minutes report on Google:

    Google declined our request for an interview with one of its executives for this story, but in a written response to our questions, the company denied it was a monopoly in search or search advertising, citing many competitors including Amazon and Facebook. It says it does not make changes to its algorithm to disadvantage competitors and that, “our responsibility is to deliver the best results possible to our users, not specific placements for sites within our results. We understand that those sites whose ranking falls will be unhappy and may complain publicly.”

    The 60 Minutes report was not exactly fair-and-balanced; it featured an anti-tech-monopoly crusader1, an anti-tech-monopoly activist, an anti-tech-monopoly regulator, and Yelp CEO Jeremy Stoppelman. And, in what seems highly unlikely to have been a coincidence, Yelp this week filed a new antitrust complaint in the EU against Google. To be sure, just because a report was biased does not mean it was wrong; while I am a bit skeptical of the EU’s antitrust case against Google Shopping, the open case about Android seems pretty clear-cut. Neither, though, is Yelp’s direct concern.

    Yelp’s Case Against Google

    This is from a blog post about the 60 Minutes feature:

    Yelp did participate in the piece because Google is doing the opposite of “delivering the best results possible,” and instead is giving its own content an unlawful advantage. We’ve made a video to explain exactly how Google puts its own interests ahead of consumers in local search, which you can watch here:

    Yelp’s position, at least in this video, appears to be that Google’s answer box is anticompetitive because it only includes reviews and ratings from Google; presumably the situation could be resolved were Google to use sources like Yelp. There are three problems with this argument, though:

    • First, the answer box originally included content scraped from sources like Yelp and other vertical search sites; under pressure from the FTC, driven in part by complaints from Yelp and other vertical search engines, Google agreed to stop doing so in 2013.2
    • Second, in a telling testament to the power of being on top of search results, Google’s ratings and reviews have improved considerably in the two years since that video was posted; this isn’t a static market (to be sure, this is an argument that could be used on both sides).
    • Third — and this is the point of this article — what Yelp seems to want will only serve to make Google stronger.

    No wonder Google declined the request for an interview.

    The Bill Gates Line

    Over the last few weeks I have been exploring what differences there are between platforms and aggregators, and was reminded of this anecdote from Chamath Palihapitiya in an interview with Semil Shah:

    Semil Shah: Do you see any similarities from your time at Facebook with Facebook platform and connect, and how Uber may supercharge their platform?

    Chamath: Neither of them are platforms. They’re both kind of like these comical endeavors that do you as an Nth priority. I was in charge of Facebook Platform. We trumpeted it out like it was some hot shit big deal. And I remember when we raised money from Bill Gates, 3 or 4 months after — like our funding history was $5M, $83 M, $500M, and then $15B. When that 15B happened a few months after Facebook Platform and Gates said something along the lines of, “That’s a crock of shit. This isn’t a platform. A platform is when the economic value of everybody that uses it, exceeds the value of the company that creates it. Then it’s a platform.”

    By this measure Windows was indeed the ultimate platform — the company used to brag about only capturing a minority of the total value of the Windows ecosystem — and the operating system’s clear successors are Amazon Web Services and Microsoft’s own Azure Cloud Services. In all three cases there are strong and durable businesses to be built on top.

    A drawing of Platform Businesses Attract Customers by Third Parties
    From Tech’s Two Philosophies

    Once a platform dips under the Bill Gates Line, though, the long-term potential of a business built on a “platform” starts to decline. Apple’s App Store, for example, has all of the trappings of a platform, but Apple quite clearly captures the vast majority of the overall ecosystem, both because of the profitability of the iPhone and also because of its control of App Store economics; the paucity of strong and durable businesses on the App Store is a natural outgrowth of that.

    The App Store intermediates 3rd parties and users

    Note that Apple’s ability to control the economics of its developers comes from intermediating the relationship of those developers with customers.

    Aggregators, Not Platforms

    Facebook and Google take this intermediation to the extreme, leveraging their ability to drive discovery of the sheer abundance of information on their network and the Internet broadly:

    A drawing of Aggregators Own Customer Relationships and Suppliers Follow
    In the aggregator business model the aggregator owns customers and suppliers follow

    It follows that Facebook and Google’s “platforms” not only don’t meet the Bill Gates Line, they don’t even register on the graph: they are the purest expression of aggregators. From my original formulation:

    The fundamental disruption of the Internet has been to turn this dynamic on its head. First, the Internet has made distribution (of digital goods) free, neutralizing the advantage that pre-Internet distributors leveraged to integrate with suppliers. Secondly, the Internet has made transaction costs zero, making it viable for a distributor to integrate forward with end users/consumers at scale.

    This has fundamentally changed the plane of competition: no longer do distributors compete based upon exclusive supplier relationships, with consumers/users an afterthought. Instead, suppliers can be aggregated at scale leaving consumers/users as a first order priority. By extension, this means that the most important factor determining success is the user experience: the best distributors/aggregators/market-makers win by providing the best experience, which earns them the most consumers/users, which attracts the most suppliers, which enhances the user experience in a virtuous cycle.

    The result is the shift in value predicted by the Conservation of Attractive Profits. Previous incumbents, such as newspapers, book publishers, networks, taxi companies, and hoteliers, all of whom integrated backwards, lose value in favor of aggregators who aggregate modularized suppliers — which they often don’t pay for — to consumers/users with whom they have an exclusive relationship at scale.

    This is ultimately the most important distinction between platforms and aggregators: platforms are powerful because they facilitate a relationship between 3rd-party suppliers and end users; aggregators, on the other hand, intermediate and control it.

    Moreover, at least in the case of Facebook and Google, the point of integration in their respective value chains is the network effect. This is what I was trying to get at last week in The Moat Map with my discussion of the internalization of network effects:

    • Google has had the luxury of operating in an environment — the world wide web — that was by default completely open. That let the best technology win, and that win was augmented by the data that comes from serving an ever-increasing portion of the market. The end result was the integration of end users and the data feedback cycle that made Google search better and better the more it was used.
    • Facebook’s differentiator, meanwhile, is the relationships between friends and family; the company has subsequently integrated that network effect with consumer attention, forcing all of the content providers to jostle for space in the Newsfeed as pure commodities.

    It’s worth noting, by the way, why it was that Facebook could come to be a rival to Google in the first place; specifically, Facebook had exclusive data — those relationships and all of the behavior on Facebook’s site that resulted — that Google couldn’t get to. In other words, Facebook succeeded not by being a part of Google, but by being completely separate.

    Succeeding in a World of Aggregators

    This gets at why I find Yelp’s complaints a bit beside the point: the company seems to be expending an awful lot of energy to regain the right to give Google the content Yelp worked hard to acquire. There is revenue there, of course, just as there is in the production of commodities generally, but without a sustainable cost advantage it’s not the best route to building a strong and durable business.

    Of course that is the bigger problem: I noted above that Google’s library of ratings and reviews has grown substantially over the past few years; users generating content are the ultimate low-cost supplier, and losing that supply to Google is arguably a bigger problem for Yelp than whatever advertising revenue it can wring out from people who would click through on a hypothetical Google Answer Box that used 3rd-party sources. And, it should be noted, Yelp’s entire business is user-generated reviews: it and similar vertical sites are likely to do a far better job of generating, organizing, and curating such data.

    Still, I can’t help but wonder whether Yelp’s problem is not that Google is using its own content in the Answer Box, but rather the Answer Box itself; which of these sets of results would be better for Yelp’s business, even in a hypothetical world where Answer Box content comes from Yelp?

    Yelp would get more visitors without the answer box

    Presuming that the answer is the image on the right — driving users to Yelp is both better for the bottom line and better for content generation, which mostly happens on the desktop — it becomes clear that Yelp’s biggest problem is that the more useful Google is — even if it only ever uses Yelp’s data! — the less viable Yelp’s business becomes. This is exactly what you would expect in an aggregator-dominated value chain: aggregators completely disintermediate suppliers and reduce them to commodities.

    To that end, this is why the best strategies entail business models that avoid Google and Facebook completely: look no further than Amazon, which last month stopped buying Google Shopping ads, something the company can afford to do given that half of shoppers start their product searches on Amazon. To be sure, Amazon is plenty powerful in its own right, but it is a hard-to-ignore example of Google’s favorite argument that “competition is only a click away.”

    Yelp Versus Google

    Still, I have sympathy for Yelp’s position; Stoppelman told 60 Minutes:

    If I were starting out today, I would have no shot of building Yelp. That opportunity has been closed off by Google and their approach…because if you provide great content in one of these categories that is lucrative to Google, and seen as potentially threatening, they will snuff you out.

    Stoppelman is right, but the reason is perhaps less nefarious than it seems; the 60 Minutes report explained why in the voiceover:

    Yelp and countless other sites depend on Google to bring them web traffic — eyeballs for their advertisers.

    Yelp, like many other review sites, has deep roots in SEO — search-engine optimization. Its entire business was long predicated on Google doing its customer acquisition for it. To the company’s credit it has become a well-known brand in its own right, and now gets around 70% of its visits via its mobile app. Those visits are very much in the Amazon model I highlighted above: users are going straight to Yelp and bypassing Google entirely.

    That, though, isn’t great for Google! It seems a bit rich that Yelp should be free to leverage its app to avoid Google completely, and yet demand that Google continue to feature Yelp prominently in its search results, particularly on mobile, where the Answer Box has particular utility. I get that Yelp feels like Google has changed the terms of the deal from when Yelp was founded in 2004, but the reality is that the change that truly mattered was mobile.

    What I do find compelling is a new video that Yelp put out yesterday; while it makes many of the same points as the one above, instead of being focused on regulators it is targeting Google itself, arguing that Google isn’t living up to its own standards by not featuring the best results, and not driving traffic back to sites that make the content Google needs (by, for example, not including prominent links to the content filling its answer boxes; Yelp isn’t asking that they go away, just that they drive traffic to 3rd parties). Google may be an aggregator, but it still needs supply, which means it needs a sustainable open web. The company should listen.

    Facebook and Data Portability

    Facebook, unfortunately for its suppliers, faces no such constraints: the content that is truly differentiated is made by Facebook’s users, and it is wholly owned by Facebook. Facebook is even further from the Bill Gates Line than Google is: the latter at least needs commoditized suppliers; the former can take or leave them on a whim, and does.

    That is why I’ve come to realize that a popular prescription for Facebook’s dominance, data portability, put forward this week by a coalition of progressive organizations under the umbrella Freedom From Facebook, is so mistaken.3 The problem with data portability is that it goes both ways: if you can take your data out of Facebook to other applications, you can do the same thing in the other direction. The question, then, is which entity is likely to have the greater center of gravity with regards to data: Facebook, with its social network, or practically anything else?

    Facebook at the center of data exchange
    From The Facebook Brand

    Remember the conditions that led to Facebook’s rise in the first place: the company was able to circumvent Google, go directly to users, and build a walled garden of data that the search company couldn’t touch. Partnering or interoperating with companies below the Bill Gates Line, particularly aggregators, is simply an invitation to be intermediated. To demand that governments enforce exactly that would be a mistake that only helps Facebook.4


    The broader takeaway is that distinguishing between platforms and aggregators isn’t simply an academic exercise: it should affect how companies think about their competitive environment vis-à-vis the biggest companies in tech, and, just as importantly, it should weigh heavily on regulators. The Microsoft antitrust battles of the 2000s were in many respects about enforcing interoperability as a way of breaking into the Microsoft platform; today antitrust should be far more concerned about aggregators capturing everything they touch by virtue of their control of end users.

    That’s the thing about the “Generals fight the last war” saying; it’s usually applied to the losing side that made mistake after mistake while the victors leveraged the new world order.

    I wrote a follow-up to this article in this Daily Update.


    1. I’ve discussed why I disagree with Gary Reback’s views on monopoly and innovation in this Daily Update  

    2. With regard to that FTC decision, yes, as the Wall Street Journal reported, some FTC staff members recommended suing Google; what is not true is that the recommendation was unanimous, or that FTC commissioners ultimately deciding to go in another direction was unusual. In fact, staff in other groups recommended against the suit, and the decision of the FTC commissioners was unanimous. Again, that is not to say it was the right decision, but that the popular conception — including what was reported in that 60 Minutes piece — is a bit off.

    3. To be fair, I’ve made the same argument previously, but I’ve changed my mind 

    4. The group’s demand that Facebook be forced to divest Instagram, WhatsApp, and Messenger makes much more sense in terms of this framework (with the exception of Messenger, which has always been a part of Facebook). I strongly believe that the single best antitrust remedy for aggregators is limiting acquisitions  


  • The Moat Map

    A subtext to last week’s article, Tech’s Two Philosophies, was the idea that there is a difference between Aggregators and Platforms; this was the key section:

    It is no accident that Apple and Microsoft, the two “bicycle of the mind” companies, were founded only a year apart, and for decades had broadly similar business models: sure, Microsoft licensed software, while Apple sold software-differentiated hardware, but both were and are at their core personal computer companies and, by extension, platforms…

    Google and Facebook, on the other hand, are products of the Internet, and the Internet leads not to platforms but to aggregators. While platforms need 3rd parties to make them useful and build their moat through the creation of ecosystems, aggregators attract end users by virtue of their inherent usefulness and, over time, leave suppliers no choice but to follow the aggregators’ dictates if they wish to reach end users.

    The distinction wasn’t entirely satisfying; first and foremost the power of both aggregators and platforms, however defined, ultimately rests on the size and strength of their userbase. Moreover, Google and Facebook have platform-type aspects to their business, and Apple has aggregator characteristics when it comes to its control of the App Store (that Microsoft does not is a symbol of the company’s mobile failure).

    Moreover, what of companies like Amazon, or Netflix? In a follow-up Daily Update I classified the former as a platform and the latter as an aggregator, but clearly both have very different businesses — and supplier relationships — than either Google and Facebook on one side or Apple and Microsoft on the other, even as they both derive their power from owning the customer relationship.

    Make no mistake, that bit about owning the customer relationship remains critical: it is the central insight of Aggregation Theory. How that ownership of the customer translates into an enduring moat, though, depends on the interaction of two distinct attributes: supplier differentiation and network effects.

    The Supplier Differentiation Spectrum

    Consider the six companies I mentioned above: Facebook, Google, Amazon, Netflix, Apple, and Microsoft.1

    The degree of differentiation of tech company suppliers varies

    These companies exist on a spectrum in terms of supplier differentiation (and, by extension, supplier power):

    • Facebook has commoditized suppliers more than anyone: an article from the New York Times is treated no differently from a BuzzFeed quiz or the latest picture of your niece or an advertisement.
    • Google gives slightly more deference to established content providers, but not much; search results are presented the same regardless of their source (although Google increasingly presents results differently depending on the type of content).
    • Amazon is a little harder to classify — that’s kind of entailed in the name The Everything Store — but generally brands are much less important than they are in a world of limited shelf space, and few people even realize they are buying from the 3rd party merchants that make up over half of Amazon’s sales.
    • Differentiation matters more for Netflix, particularly when it comes to acquiring new users; still, users are transacting with Netflix, and the longer they stick with the streaming service the more likely they are to first open Netflix and then look for something to watch, as opposed to the other way around.
    • Apple first and foremost attracts and retains users through its integrated experience, but that experience would quickly be abandoned were there not third party apps.
    • Microsoft traditionally succeeded entirely because of its ecosystem, not just applications but also the entire universe of value-added resellers, systems integrators, etc.

    The extremes make the point: Facebook could lose all of its third party content providers overnight and still be a compelling service; Microsoft without third parties would be, well, we already saw with Windows Phone.

    The Network Effect Spectrum

    Another way to consider this spectrum is in terms of user-related network effects. The idea of a network effect is that an additional user increases the value of a good or service, and indeed all of these companies depend on network effects. However, the type of network effect differs considerably, as well as the extent to which the network effect directly improves a company’s core product (what I am calling an “internalized” versus “externalized” network effect):

    The internalization of network effects varies by tech company

    Again there is a spectrum:

    • For Facebook the network effect that matters is users — a social network’s most important feature is whether your friends and family are using it. This network — given it is the product! — is completely internal to Facebook.
    • Google has network effects of its own, but they are less about users and more about data: more people searching makes for better search results, because of the system Google has built to relentlessly harvest, analyze, and iterate on data. Like Facebook, Google’s network effect is largely internal to Google.
    • Amazon’s network effect is more subtle: there is an aspect where your shopping on Amazon improves my experience through things like rankings, reviews, and data feedback loops. Just as important, though, are two additional effects: first, the more people that shop on Amazon, the more likely suppliers are to come onto Amazon’s platform, increasing price and selection for everyone. In other words, Amazon, particularly as it transitions to being more of a commerce platform and less of a retailer, is a two-sided network. There is one more factor though: Amazon’s incredible service rests on hundreds of billions of dollars in investments; that fixed cost investment has to be borne by customers at some point, which means the more customers there are the less any one customer is responsible for those fixed costs (this manifests indirectly through lower prices and better service).
    • Netflix is a hybrid much like Amazon: there are certainly data network effects when it comes to what shows are made, what are cancelled, recommendations, ratings, etc. An essential part of Netflix’s competitive advantage going forward, though, rests on its differentiated ability to invest in new shows; this investment capability is driven by the company’s huge and still-growing user base, which is the biggest way that additional users benefit users already on the service.
    • Apple certainly benefits from a large user base over which to spread the significant fixed costs of its products, but on this end of the spectrum it is the two-sided network of developers and users that is most important. The more users that are on a platform, the more developers there will be, which increases the value of the platform for everyone.
    • Microsoft, befitting the point I made above about the expansiveness of its ecosystem, has the most “externalized” network effect of all: there is very little about Windows, for example, that produces a network effect (Office is another story), but the ecosystem on top of Windows produced one of the greatest network effects ever.

    At this point, you may have noticed that these two spectrums run in roughly the same order: I don’t think that is a coincidence.

    The Moat Map

    Here are these two spectrums laid out on two orthogonal axes:

    The Moat Map represents the relationship between supplier differentiation and network externalization

    This relationship between the differentiation of the supplier base and the degree of externalization of the network effect forms a map of effective moats; to again take these six companies in order:

    • Facebook has completely internalized its network and commoditized its content supplier base, and has no motivation to, for example, share its advertising proceeds. Google similarly has internalized its network effects and commoditized its supplier base; however, given that its supply is from 3rd parties, the company does have more of a motivation to sustain those third parties (this helps explain, for example, why Google’s off-site advertising products have always been far superior to Facebook’s).
    • Netflix and Amazon’s network effects are partially internalized and partially externalized, and similarly, both have differentiated suppliers that remain very much subordinate to the Amazon and Netflix customer relationship.
    • Apple and Microsoft, meanwhile, have the most differentiated suppliers on their platforms, which makes sense given that both depend on largely externalized network effects. “Must-have” apps ultimately accrue to the platform’s benefit.

    It is just as useful to think about what happens when companies find themselves outside of the Moat Map.

    Missing Moats

    Start with Apple and apps: in August 1997, Steve Jobs, having just returned to Apple, took the stage at Macworld Boston and proceeded to humble himself: first, he talked about how much Apple needed Adobe, and then he announced a settlement with Microsoft that entailed Microsoft investing in Apple and developing Office for Mac for at least five years. That was followed by Bill Gates’ grinning visage appearing via satellite over Jobs’ head:

    An image from MacWorld Boston when Microsoft invested in Apple

    I wrote in 2013 that I believe this experience resulted in Apple making poor strategic choices with the iPhone and iPad: the company never again wanted to have its suppliers become too powerful. The way this played out, though, is that Apple for years neglected the business model needs of developers building robust productivity apps that could have meaningfully differentiated iOS devices from Android.

    To be sure, the company has been more than fine: its developer ecosystem is plenty strong enough to allow the company’s product chops to come to the fore. I continue to believe, though, that Apple’s moat could be even deeper had the company considered the above Moat Map: the network effects of a platform like iOS are mostly externalized,2 which means that highly differentiated suppliers are the best means to deepen the moat; unfortunately Apple for too long didn’t allow for suitable business models.

    Some companies and models outside of the Moat Map

    Another example is Uber: on the one hand, Uber’s suppliers are completely commoditized. This might seem like a good thing! The problem, though, is that Uber’s network effects are completely externalized: drivers come on to the platform to serve riders, which in turn makes the network more attractive to riders. This leaves Uber outside the Moat Map. The result is that Uber’s position is very difficult to defend; it is easier to imagine a successful company that has internalized large parts of its network (by owning its own fleet, for example), or done more to differentiate its suppliers. The company may very well succeed thanks to the power from owning the customer relationship, but it will be a slog.

    On the opposite side of the map are phone carriers in a post-iPhone world: carriers have strong network effects, both in terms of service as well as in the allocation of fixed costs. Their profit potential, though, was severely curtailed by the emergence of the iPhone as a highly differentiated supplier. Suddenly, for the first time, customers chose their carrier on the basis of whether or not their preferred phone worked there; today, every carrier has the iPhone, but the process of reaching that point meant the complete destruction of carrier dreams of value-added services, and a lot more competition on capital-intensive factors like coverage and price.

    Direction or Context?

    It’s worth noting that maps can take two forms: some give direction, and others provide context for what has already happened; I’m not entirely sure which best describes the Moat Map. In the case of Apple and apps, for example, I absolutely believe the company could have made different strategic choices had it fully appreciated the interaction between supplier differentiation and network effects.

    On the other hand, one could make a very strong case that the degree of supplier differentiation possible flows from the network effect involved: perhaps it was inevitable that Facebook and Google commoditized suppliers, for example, or that Amazon and Netflix would have to simultaneously pursue differentiated suppliers even as they sought to suppress them. What is always certain, though, is that there is no one perfect strategy: as always, it depends.

    Thanks to James Allworth, my co-host on the Exponent podcast, for helping me conceptualize this framework

    I wrote a follow-up to this article in this Daily Update.


    1. In this article when I refer to “Amazon” I am primarily referring to the e-commerce company, and when I refer to “Microsoft”, the PC company. I will cover AWS and Azure in a follow-up in the Daily Update

    2. iMessage being an instructive exception 


  • Tech’s Two Philosophies

    Even though Apple’s developer conference is still a few weeks away, I think it’s safe to say that the demo of Google Duplex at yesterday’s Google I/O keynote will go down as the most impressive of the tech conference season. If you haven’t seen it, it is a must-watch:

    Once I picked my jaw up off the floor, though, what struck me about Google CEO Sundar Pichai’s presentation was how he opened the segment:

    Our vision for our assistant is to help you get things done.

    And how he closed it:

    A common theme across all this is we are working hard to give users back time. We’ve always been obsessed about that at Google. Search is obsessed about getting users to answers quickly and giving them what they want.

    In Google’s view, computers help you get things done — and save you time — by doing things for you. Duplex was the most impressive example — a computer talking on the phone for you — but the general concept applied to many of Google’s other demonstrations, particularly those predicated on AI: Google Photos will not only sort and tag your photos, but now propose specific edits; Google News will find your news for you, and Maps will find you new restaurants and shops in your neighborhood. And, appropriately enough, the keynote closed with a presentation from Waymo, which will drive you.

    The Google and Facebook Philosophy

    Rewind a week, and there was a specific section in Mark Zuckerberg’s keynote at the Facebook F8 conference that stuck out to me:

    I believe that we need to design technology to help bring people closer together. And I believe that that’s not going to happen on its own. So to do that, part of the solution, just part of it, is that one day more of our technology is going to need to focus on people and our relationships. Now there’s no guarantee that we get this right. This is hard stuff. We will make mistakes and they will have consequences and we will need to fix them. But what I can guarantee is that if we don’t work on this the world isn’t moving in this direction by itself.

    Zuckerberg, as so often seems to be the case with Facebook, comes across as a somewhat more fervent and definitely more creepy version of Google: not only does Facebook want to do things for you, it wants to do things its chief executive explicitly says would not be done otherwise. The Messianic fervor that seems to have overtaken Zuckerberg in the last year, though, simply means that Facebook has adopted a more extreme version of the same philosophy that guides Google: computers doing things for people.

    The Microsoft and Apple Philosophy

    Earlier this week, while delivering Microsoft’s Build conference keynote, CEO Satya Nadella struck a very different tone; after describing how computing was becoming invisible, because it is everywhere, Nadella said:

    That’s the opportunity that we have. It’s in some sense endless, but we also have responsibility. We have the responsibility to ensure that these technologies are empowering everyone, these technologies are creating equitable growth by ensuring that every industry is able to grow and create employment. But we also have a responsibility as a tech industry to build trust in technology.

    In fact Hans Jonas was a philosopher who worked in the 50s, 60s, and he wrote a paper on technology and responsibility…he talks about act so that the effects of your action are compatible with permanence or genuine life. That’s something that we need to reflect on, because he was talking about the power of technology being such that it far outstrips our ability to completely control it, especially its impact even on future generations. And so we need to develop a set of principles that guide the choices we make because the choices we make is what’s going to define the future…

    This opportunity and responsibility is what grounds us in our mission to empower every person and every organization on the planet to achieve more. We’re focused on building technology so that we can empower others to build more technology. We’ve aligned our mission, the products we build, our business model, so that your success is what leads to our success. There’s got to be complete alignment.

    This is technology’s second philosophy, and it is orthogonal to the other: the expectation is not that the computer does your work for you, but rather that the computer enables you to do your work better and more efficiently. And, with this philosophy, comes a different take on responsibility. Pichai, in the opening of Google’s keynote, acknowledged that “we feel a deep sense of responsibility to get this right”, but inherent in that statement is the centrality of Google generally and the direct culpability of its managers. Nadella, on the other hand, insists that responsibility lies with the tech industry collectively, and all of us who seek to leverage it individually.

    The Bicycle of the Mind

    This second philosophy, that computers are an aid to humans, not their replacement, is the older of the two; its greatest proponent — prophet, if you will — was Microsoft’s greatest rival, and his analogy of choice was, coincidentally enough, about transportation as well. Not a car, but a bicycle:

    Steve Jobs was exceptionally fond of this analogy: there are multiple clips of him making the point in mostly the same way; I usually link to this one because by the time this video was recorded1 Jobs had his delivery perfectly honed.

    Interestingly, though, the earliest known clip of Jobs telling this story, from 1980, doesn’t include the famous phrase “Bicycle of the Mind”; it’s worth watching, though, all the same:

The best analogy I’ve ever heard is Scientific American, I think it was, did a study in the early 70s on the efficiency of locomotion, and what they did was for all different species of things on the planet, birds and cats and dogs and fish and goats and stuff, they measured how much energy does it take for a goat to get from here to there. Kilocalories per kilometer or something, I don’t know what they measured. And they ranked them, they published the list, and the Condor won. The Condor took the least amount of energy to get from here to there. Man didn’t do so well; he came in with a rather unimpressive showing about a third of the way down the list.

    But fortunately someone at Scientific American was insightful enough to test a man with a bicycle, and man with a bicycle won. Twice as good as the Condor, all the way off the list. And what it showed was that man is a toolmaker, has the ability to make a tool to amplify an inherent ability that he has. And that’s exactly what we’re doing here.

    This is precisely what Nadella was driving at: “to empower every person and every organization on the planet to achieve more” is to “amplify an inherent ability” those people and organizations have; the goal is not to do things for them, but to enable them to do things never before possible. And, I would hasten to add, Apple remains very much on the same side of this philosophical divide.

    The Chicken and Egg Question

    There is certainly an argument to be made that these two philosophies arise out of their historical context; it is no accident that Apple and Microsoft, the two “bicycle of the mind” companies, were founded only a year apart, and for decades had broadly similar business models: sure, Microsoft licensed software, while Apple sold software-differentiated hardware, but both were and are at their core personal computer companies and, by extension, platforms.

    In a platform business model 3rd parties attract customers

    Google and Facebook, on the other hand, are products of the Internet, and the Internet leads not to platforms but to aggregators. While platforms need 3rd parties to make them useful and build their moat through the creation of ecosystems, aggregators attract end users by virtue of their inherent usefulness and, over time, leave suppliers no choice but to follow the aggregators’ dictates if they wish to reach end users.

    In the aggregator business model the aggregator owns customers and suppliers follow

The business model follows from these fundamental differences: a platform provider has no room for ads, because the primary function of a platform is to provide a stage for the applications that users actually need to shine. Aggregators, on the other hand, particularly Google and Facebook, deal in information, and ads are simply another type of information.2 Moreover, because the critical point of differentiation for aggregators is the number of users on their platform, advertising is the only possible business model; there is no more important feature when it comes to widespread adoption than being “free.”

    Still, that doesn’t make the two philosophies any less real: Google and Facebook have always been predicated on doing things for the user, just as Microsoft and Apple have been built on enabling users and developers to make things completely unforeseen.

    Tech’s Yin and Yang

    That there are two philosophies does not necessarily mean that one is right and one is wrong: the reality is we need both. Some problems are best solved by human ingenuity, enabled by the likes of Microsoft and Apple; others by collective action. That, though, gets at why Google and Facebook are fundamentally more dangerous: collective action is traditionally the domain of governments, the best form of which is bounded by the popular will. Google and Facebook, on the other hand, are accountable to no one. Both deserve all of the recent scrutiny they have attracted, and arguably deserve more.

    That scrutiny, though, and whatever regulations that result, must keep in mind this philosophical divide: platforms that create new possibilities — and not just Apple and Microsoft! — are the single most important economic force when it comes to countering the oncoming wave of computers doing people’s jobs, and lazily written regulation that targets aggregators but constricts platforms will inevitably do more harm than good.

    The truth is that the Divine Discontent that I wrote about last week is not only an antidote to low-end disruption, but also a reason for optimism: companies like Apple and Amazon can, as I noted, win in the long run by offering a superior user experience, but more importantly, the dividend of discontent is a greenfield of opportunities to build new businesses and new jobs alleviating that discontent. For that we need platforms on which to build those businesses, and yes, we will need artificial intelligence to do things for us so we have the time.

    I wrote a follow-up to this article in this Daily Update.


    1. 1990 I believe, but I’m not certain 

    2. As I’ve written in the past, this is why mobile saved Facebook: the company desperately wanted to be a platform but being “just an app” left Facebook no choice but to be self-contained and thus a better ad company 


  • Divine Discontent: Disruption’s Antidote

    It is nothing but a number, no different than 999,999,999,999 for all practical purposes, but we humans are not practical creatures: we attach importance to all kinds of silly things, round numbers chief amongst them. To that end, an increasingly popular parlor question in the stocks as entertainment business is which company will be worth $1 trillion first?

    The market caps of the top five companies over time

    There are certainly cases to be made for Google and Microsoft and even Facebook, but most of the attention is focused on Amazon and Apple: the latter for being the closest, and the former for growing the fastest, at least recently:

    The market cap of Apple and Amazon over time

    It is interesting to consider these two companies in conjunction: they couldn’t be more different, but for the one thing that makes them both so valuable.

    Apple Versus Amazon

    I mean it when I say these companies are the complete opposite: Apple sells products it makes; Amazon sells products made by anyone and everyone. Apple brags about focus; Amazon calls itself “The Everything Store.” Apple is a product company that struggles at services; Amazon is a services company that struggles at product. Apple has the highest margins and profits in the world; Amazon brags that others’ margin is their opportunity, and until recently, barely registered any profits at all. And, underlying all of this, Apple is an extreme example of a functional organization, and Amazon an extreme example of a divisional one.

    These points are all, of course, interrelated: Apple’s organizational structure, focus, and release-focused development cycle enable it to create highly differentiated products, even as the exact same structure, focus, and development cycle underlie the company’s struggles in iterative services.1 Similarly, Amazon’s highly modular structure, varied businesses, and iterative approach to those businesses enable it to create services with itself as its first, best, customer, and then extend those services to developers and retailers, even as the exact same factors lead to product disasters like the Fire Phone.

    Both, taken together, are a reminder that there is no one right organizational structure, product focus, or development cycle: what matters is that they all fit together, with a business model to match. That is where Apple and Amazon are arguably more alike than not: both are incredibly aligned in all aspects of their business. What makes them truly similar, though, is the end goal of that alignment: the customer experience.

    The iPhone Versus Disruption

    The first time Apple released two different iPhone form factors in the same year was 2013. The new form factor was the iPhone 5C, but while the industrial design was new, the pricing wasn’t: the 5C slotted into the spot where the discontinued iPhone 5 would traditionally have gone — $100 less than the new flagship iPhone 5S. Analysts and pundits were aghast: how could Apple not produce a truly low-price iPhone? Didn’t they know this guaranteed disruption?

    I argued to the contrary in a piece entitled What Clayton Christensen Got Wrong. After recounting the many predictions by the father of disruption that the iPhone would not be a success, I came up with three specific reasons why Apple seemed immune to disruptive gravity:

    • First, it was folly to presume that consumers were rational, at least to the extent that rationality could be reduced to easily articulable features balanced against price.
    • Second, there are many attributes of a product that can’t be easily measured, but only experienced, and that they loom large when the person using the product is the same as the person buying the product.
    • Third, that modular products, by virtue of their prioritization of standardization and interconnectivity, would inevitably fall short on attributes directly connected to the experience of using the device.

    The key paragraph is here:

    The attribute most valued by consumers, assuming a product is at least in the general vicinity of a need, is ease-of-use. It’s not the only one — again, doing a job-that-needs-done is most important — but all things being equal, consumers prefer a superior user experience. What is interesting about this attribute is that it is impossible to overshoot.

    The term “overshoot” is right out of disruption theory. Christensen writes in his seminal book, The Innovator’s Dilemma:

    The second element of the failure framework, the observation that technologies can progress faster than market demand…means that in their efforts to provide better products than their competitors and earn higher prices and margins, suppliers often “overshoot” their market: They give customers more than they need or ultimately are willing to pay for. And more importantly, it means that disruptive technologies that may underperform today, relative to what users in the market demand, may be fully performance-competitive in that same market tomorrow.

    This was the basis for insisting that the iPhone must have a low-price model: surely Apple would soon run out of new technology to justify the prices it charged for high-end iPhones, and consumers would start buying much cheaper Android phones instead!

    In fact, as I discussed after January’s earnings results, the company has gone in the other direction: more devices per customer, higher prices per device, and an increased focus on ongoing revenue from those same customers. Yesterday’s results were mostly more of the same: wearables were up a lot (more devices per customer); ASPs were down from last quarter but still 11% higher than a year ago;2 services revenue, meanwhile, shot through the roof for reasons that are still a bit unclear, but impressive nonetheless.

    Also the same was a very modest increase in the number of iPhones sold: 3% more than a year ago. Apple seems to have mostly saturated the high end, slowly adding switchers even as existing iPhone users hold on to their phones longer; what is not happening, though, is what disruption predicts: Apple isn’t losing customers to low-cost competitors for having “overshot” and overpriced its phones. It seems my thesis was right: a superior experience can never be too good — or perhaps I didn’t go far enough.

    Amazon and Divine Discontent

    Jeff Bezos has been writing an annual letter to shareholders since 1997, and he attaches that original letter to one he pens every year. It included this section entitled Obsess Over Customers:

    From the beginning, our focus has been on offering our customers compelling value. We realized that the Web was, and still is, the World Wide Wait. Therefore, we set out to offer customers something they simply could not get any other way, and began serving them with books. We brought them much more selection than was possible in a physical store (our store would now occupy 6 football fields), and presented it in a useful, easy-to-search, and easy-to-browse format in a store open 365 days a year, 24 hours a day. We maintained a dogged focus on improving the shopping experience, and in 1997 substantially enhanced our store. We now offer customers gift certificates, 1-Click shopping, and vastly more reviews, content, browsing options, and recommendation features. We dramatically lowered prices, further increasing customer value. Word of mouth remains the most powerful customer acquisition tool we have, and we are grateful for the trust our customers have placed in us. Repeat purchases and word of mouth have combined to make Amazon.com the market leader in online bookselling.

    Over the last 20 years Amazon has dramatically changed, but Bezos’ annual focus on consumers has not. This year, after highlighting just how much customers love Amazon (answer: a lot), Bezos wrote:

    One thing I love about customers is that they are divinely discontent. Their expectations are never static — they go up. It’s human nature. We didn’t ascend from our hunter-gatherer days by being satisfied. People have a voracious appetite for a better way, and yesterday’s ‘wow’ quickly becomes today’s ‘ordinary’. I see that cycle of improvement happening at a faster rate than ever before. It may be because customers have such easy access to more information than ever before — in only a few seconds and with a couple taps on their phones, customers can read reviews, compare prices from multiple retailers, see whether something’s in stock, find out how fast it will ship or be available for pick-up, and more. These examples are from retail, but I sense that the same customer empowerment phenomenon is happening broadly across everything we do at Amazon and most other industries as well. You cannot rest on your laurels in this world. Customers won’t have it.

    Critically, when it comes to Internet-based services, this customer focus does not come at the expense of a focus on infrastructure or distribution or suppliers: while those were the means to customers in the analog world, in the online world controlling the customer relationship gives a company power over its suppliers, the capital to build out infrastructure, and control over distribution. Bezos is not so much choosing to prioritize customers as unlocking the key to controlling value chains in an era of aggregation.

    Bezos’s letter, though, reveals another advantage of focusing on customers: it makes it impossible to overshoot. When I wrote that piece five years ago, I was thinking of the opportunity provided by a focus on the user experience as if it were an asymptote: one could get ever closer to the ultimate user experience, but never achieve it:

    The asymptote version of the user experience

    In fact, though, consumer expectations are not static: they are, as Bezos memorably states, “divinely discontent”. What is amazing today is table stakes tomorrow, and, perhaps surprisingly, that makes for a tremendous business opportunity: if your company is predicated on delivering the best possible experience for consumers, then your company will never achieve its goal.

    The ever-changing version of the user experience

    In the case of Amazon, that this unattainable and ever-changing objective is embedded in the company’s culture is, in conjunction with the company’s demonstrated ability to spin up new businesses on the profits of established ones, a sort of perpetual motion machine; I’m not sure that Amazon will beat Apple to $1 trillion, but they surely have the best shot at two.

    The Disruption Antidote

    This analysis applies to Facebook and Google, two of the other companies in that chart, more than you might expect. While the two companies’ revenues are based on advertising, the attractiveness to advertisers rests on consumers using both services.3 Both, though, are disadvantaged to an extent because their means of making money operate orthogonally to a great user experience; both are protected by the fact that would-be competitors inevitably have the same business model.4

    That is why, for all four companies, the first place to look for weaknesses is not in the supplier base or distribution or even regulation: it is with the end users. That is why it matters that Amazon is the most popular company in the United States, why Apple and Google continue to have two of the most respected brands, and why Facebook is right to be more concerned about the PR effect of its scandals than the regulatory ones. Owning the customer relationship by means of delivering a superior experience is how these companies became dominant, and, when they fall, it will be because consumers deserted them, either because the companies lost control of the user experience (a danger for Facebook and Google), or because a paradigm shift made new experiences matter more (a danger for Google and Apple).

    In the meantime, though, disruption5 has its antidote.


    1. Yes, the company is crushing it with regards to “Services” revenue; that is mostly from the App Store and also iCloud Storage, that is to say, predicated on iPhone dominance. This reference is to things like Siri and Maps 

    2. There was a lot of breathless speculation about iPhone X sales cratering, which clearly didn’t happen. That said, iPhone X sales clearly fell off a bit: iPhone ASP was down 9% sequentially, a much larger drop than the 6% drop a year ago. 

    3. Microsoft is an enterprise company, a very different beast 

    4. For a social network, the number one feature is how many of your friends are on it, which means a “free” service will always have an advantage; for a search engine, there is a weaker, but still significant, network effect that is based on data, which again augurs for a free service with the maximum number of users that entails. 

    5. Low-end disruption, to be clear 


  • Open, Closed, and Privacy

    Note: This article has nothing to do with open or closed source code

    It was eight years ago next month that Vic Gundotra, then-VP of Engineering at Google, delivered a blistering attack on Apple for not being open:1

    A slide from Google's 2010 I/O keynote criticizing Apple

    On [my] first day I met a man named Mr. Andy Rubin. Now I suspect most of you know who Andy Rubin is. At the time he was responsible for what was then a secret project codenamed Android, and on that first day Andy enthusiastically described to me the team’s mission and purpose. And as he spoke — I’ll level with you — I was skeptical. In fact, I interrupted Andy, and I said, “Andy, I don’t get it. Does the world really need another mobile operating system? Google is about advertising — shouldn’t we be on every phone?”

    To this day I remember Andy’s response, and he made two points. The first point Andy made was that it was critically important to provide a free mobile operating system — an open-source operating system — that would enable innovation at every level of the stack. In other words, OEMs should be free to build all kinds of devices — devices with keyboards, without keyboards, with front-facing cameras, two inches, three inches, four inches — that operators should be able to compete on the strength and coverage of their network — 2G, 3G, 4G, LTE, CDMA — and that in the end, with innovation coming at every layer, it would be the consumer who would be able to benefit by getting the best device on the best network for them.

    I remember Andy’s second point: he argued that if Google did not act, we faced a draconian future, a future where one man, one company, one device, one carrier, would be our only choice. That’s a future we don’t want! So if you believe in openness, if you believe in choice, if you believe in innovation from everyone, then welcome to Android.

    Gundotra repeated the word “open” like a mantra, appealing to the sensibilities of not just people in technology but also its critics, opposed to so-called “walled gardens”; the two primary offenders were deemed to be Apple and Facebook.

    This is what made Google’s low-key announcement of its latest plans for messaging on Android phones — an exclusive with The Verge about what it calls Chat — so striking: the company is introducing an open alternative to products like iMessage and WhatsApp, but only as a last resort, and the effort is being pilloried by critics to boot; Walt Mossberg’s tweet dismissing Chat as insecure was representative.

    Of course Google’s critics are not criticizing Chat for being open; they are, like Mossberg, criticizing it for being “insecure” — that is, not end-to-end encrypted like iMessage or WhatsApp. That, though, is the rub: being “secure” and being “open” are incompatible.

    How End-to-End Encryption Works

    A quick primer on how end-to-end encryption works, using iMessage as an example; I’m going to dramatically simplify this explanation, but you can read Apple’s security white paper to get the specifics:

    • When iMessage is turned on, “keys” are generated; these are produced in pairs, one private and one public. These two keys are related: the public key encrypts content such that it can only be decrypted by a private key; to analogize them to a safe, the public key locks the door, and the private key unlocks it.
    • The relationship between these two keys is, well, the key to understanding how encryption works in messaging (and all communications): anyone sending an encrypted message “locks” the content using a public key, which means that the only person that can “unlock” and read the message is whoever has the corresponding private key.
    • To that end, the private key is, as the name implies, private: it is kept on the device that generated it (in fact, every device with iMessage generates its own encryption keys). The public key, meanwhile, is public: for anyone to be able to send you an encrypted message, everyone must be able to find the public key that corresponds to your private key.
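The “lock with the public key, unlock with the private key” relationship described above can be made concrete with textbook RSA. This is a toy sketch using tiny primes (the classic worked example), nothing like Apple’s actual iMessage implementation, but the asymmetry is the same: anyone holding the public key can encrypt, and only the holder of the private key can decrypt.

```python
# Toy illustration of a public/private key pair, using textbook RSA with
# tiny primes for readability. Real systems use far larger keys plus
# padding; this is purely conceptual.

def make_keypair():
    p, q = 61, 53                  # two (toy-sized) secret primes
    n = p * q                      # modulus, shared by both keys
    phi = (p - 1) * (q - 1)
    e = 17                         # public exponent
    d = pow(e, -1, phi)            # private exponent: modular inverse of e
    return (e, n), (d, n)          # (public key, private key)

def encrypt(public_key, m):
    e, n = public_key
    return pow(m, e, n)            # anyone with the public key can "lock"

def decrypt(private_key, c):
    d, n = private_key
    return pow(c, d, n)            # only the private key can "unlock"

public, private = make_keypair()
ciphertext = encrypt(public, 65)   # sender needs only the public key
assert decrypt(private, ciphertext) == 65
```

Note that the encrypting side never learns the private exponent `d`; that is the property that lets a sender encrypt a message that only the recipient’s device can read.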

    This is the precise spot where “open” breaks down: you can, in fact, send encrypted content over open protocols like email. The problem is that the sender cannot just unilaterally decide to encrypt a message; rather, the receiver has to first generate a public-private key pair, then share the public key with the sender so that the email can be encrypted in a way that only the recipient — thanks to their private key — can read it. This is, needless to say, far beyond the capabilities of most users: not only do they not understand that there needs to be a conversation before the conversation, they don’t even know the language they need to use.

    And yet, over 100 billion messages are sent per day on WhatsApp and iMessage alone, and the reason is that both are closed. To continue with the iMessage explanation, public keys are sent to Apple’s servers to be stored in a directory service; there they (along with the public keys from all of the user’s devices) are associated with the user’s phone number or email address. This is the critical piece to making iMessage encryption easy-to-use: senders need only know the recipient’s phone number or email address; Apple will silently pass the appropriate public keys to the sender to encrypt the message such that only the recipient can read it.2
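The directory-service flow can be sketched in a few lines. Everything here is illustrative — the function names are invented and a toy XOR stand-in replaces real public-key encryption — but it captures the mechanics: the sender supplies only an address, the server supplies the public keys for each of the recipient’s devices, and a separately encrypted copy of the message goes to each device.

```python
# Minimal sketch of a key-directory service: the server maps an address to
# the public keys of all of that user's devices, so a sender never has to
# exchange keys manually. "encrypt" is a toy XOR stand-in, NOT real crypto.

directory = {}  # address -> {device_id: public_key}

def register(address, device_id, public_key):
    """A device publishes its public key under the user's address."""
    directory.setdefault(address, {})[device_id] = public_key

def encrypt(key, text):
    """Toy stand-in for public-key encryption."""
    return "".join(chr(ord(c) ^ key) for c in text)

def send_message(address, plaintext):
    """Sender knows only the address; the directory supplies the keys,
    and one encrypted copy is produced per recipient device."""
    device_keys = directory[address]
    return {
        device_id: encrypt(public_key, plaintext)
        for device_id, public_key in device_keys.items()
    }

register("alice@example.com", "iphone", 42)
register("alice@example.com", "ipad", 7)
envelopes = send_message("alice@example.com", "hello")
assert len(envelopes) == 2  # one encrypted copy per device
```

The user-facing simplicity comes entirely from the server in the middle: the sender types an address, and the closed service handles the “conversation before the conversation” automatically.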

    In short, encryption is viable for the public at scale precisely because Apple controls everything: clients on both ends, and the server in the middle. It’s the same story with WhatsApp or any of the other encrypted messaging services: being closed makes end-to-end encryption actually usable at scale.

    And, as I explained on Monday, this option is not available to Google when it comes to Android: OEMs don’t want to deepen their Google dependence, and carriers do not want to undercut their lucrative SMS business (and Google can’t force the issue because of its looming antitrust problems). The only option was the one Gundotra lauded in 2010: an open standard that no one controls, for better or, in the case of the desire for end-to-end encryption, worse.3

    Encryption and Privacy

    The ongoing debate about data and privacy is directly related to the question of encryption in some important ways, as Mossberg’s tweet notes: messaging content is data that users would like to keep private, and encryption accomplishes that.

    Of course it is not the only data generated by messaging: entailed in the ease-of-use that comes from relying on centralized servers for key exchange is the necessary collection by those servers of metadata. Obviously email addresses and/or phone numbers and/or usernames have to be stored (so that they can be associated with public keys), and the very act of connecting two accounts will generate logs of who was communicating with whom and when, and often from where (through IP addresses). Services can and do differentiate based on how long they keep that metadata; Signal,4 for example, promises to flush metadata as soon as possible, whereas WhatsApp — which uses encryption developed by Signal — keeps such data indefinitely.

    That gets at the more important way that the relationship between open/closed and encryption is relevant to data and privacy: just as encryption at scale is only possible with a closed service, so it is with privacy. That is, to the extent we as a society demand privacy, the more we are by implication demanding ever more closed gardens, with ever higher walls. Just as a closed garden makes the user experience challenge of encryption manageable, so does the centralization of data make privacy — of a certain sort — a viable business model.

    The reality of digital services is that the amount of data each of us generates at basically all times is astronomical; your phone always knows where you are, but so does every app you use and every website you visit.

    A map of Stratechery readers

    Google, of course, knows one’s every search, for many people their every email, and thanks to the company’s ad network, control of Chrome and Google Analytics, and, of course, Android, pretty much everything else one does online. Facebook’s knowledge is slightly less broad but arguably deeper: your friends, your interests — both stated and revealed — and thanks to its ‘Like’ button, your web activity as well.

    To focus on simply Google and Facebook, though, is to miss how much other data collection is going on: ad networks are tracking you on nearly every website you visit, your credit card company is tracking your purchases (and by extension your location), your grocery store is tracking your eating habits, the list goes on and on. Moreover, the further you go down the data food chain, the more likely it is that data is bought and sold. That, of course, is as open as it gets.

    Data Collection Versus Data Leakage

    Still, the contrast between Google and Facebook is worth considering: Facebook is in hot water thanks to the revelation that some amount of the data it collects was sold to Cambridge Analytica, which bragged it helped elect Donald Trump president. One does wonder how much that allegation drives the outrage about the fact that Facebook shared that data to begin with, but leaving that aside, what is noteworthy is that the outrage stems from the sharing of the data, not its collection. Yes, some are outraged by that collection — but they were outraged before the current scandal, and their objections simply didn’t register with the broader public.

    This view is buttressed by the fact that Google has been largely unscathed by the current controversy; what seems significant is not the fact that the company collects data, but rather that it has been careful to keep that data inside its walled garden. Indeed, that was always the irony with Gundotra’s attack on Apple: Google has always been anything but open when it came to its proprietary technology or its money-making ad apparatus (of which user data plays an important part). Its insistence that Android be open was based not on principle but on sound strategy: challengers always want to commoditize their complements, and for Google, smartphones themselves were complements to Search and ads.

    The implication is quite far-reaching: being open, at least to the extent that openness involved user data of any sort, is increasingly unacceptable; that new companies and user benefits might result from that data no longer matters, a fate that all-too-often befalls the not-yet-created.

    The Entrenchment of Google and Facebook

    This entrenches Facebook and Google in three ways:

    • First, it is even more unlikely that a challenger to either will arise without meaningful access to their proprietary data. This, to be fair, was already quite unlikely: the entire industry learned from Instagram’s piggy-backing on Twitter’s social graph that sharing data with a potential competitor was a bad idea from a business perspective.
    • Second, Google and Facebook will increasingly be the only source of innovations that leverage their data; it will be too politically risky for either to share anything with third parties. That means new features that rely on user data must be built by one of the two giants, or, as is always the case in a centrally-planned system relative to a market, not built at all.
    • Third, Google and Facebook’s advertising advantage, already massive, is going to become overwhelming. Both companies generate the majority of their user data on their own platforms, which is to say their data collection and advertising businesses are integrated. Most of their competitors for digital advertising, on the other hand, are modular: some companies collect data, and others sell ads; such a model, in a society demanding ever more privacy, will be increasingly untenable.

    There are increasing expectations that this is exactly what will happen with the European Union’s General Data Protection Regulation (GDPR). From the Wall Street Journal:

    Brussels wants its new General Data Protection Regulation, or GDPR, to stop tech giants and their partners from pressuring consumers to relinquish control of their data in exchange for services. The EU would like to set an example for legislation around the world. But some of the restrictions are having an unintended consequence: reinforcing the duopoly of Facebook Inc. and Alphabet Inc.’s Google…

    Digital advertising companies, known as ad tech firms, say Google and Facebook’s strict interpretation of GDPR squeezes their business. The ad tech firms embed their own technology in publishers’ websites and apps, putting them in competition with the tech giants. Unlike the giants, the ad tech firms have no direct relationship with consumers. They say Google’s and Facebook’s response pressures publishers to seek consent on behalf of dozens of ad tech firms that people have never heard of.

    This is hardly a surprise — I predicted this months ago. And, while GDPR advocates have pointed to the lobbying Google and Facebook have done against the law as evidence that it will be effective, that is to completely miss the point: of course neither company wants to incur the costs entailed in such significant regulation, which will absolutely restrict the amount of information they can collect. What is missed is that the increase in digital advertising is a secular trend driven first-and-foremost by eyeballs: more-and-more time is spent on phones, and the ad dollars will inevitably follow. The calculation that matters, then, is not how much Google or Facebook are hurt in isolation, but how much they are hurt relative to their competitors, and the obvious answer is “a lot less”, which, in the context of that secular increase, means growth.

    Privacy and Regulation

    There is a broader question from GDPR specifically and the idea that the tide is pushing towards walled gardens generally: what should the seemingly inevitable regulation of tech companies look like? It seems increasingly certain that privacy will be a major focus (it obviously already is in the European Union), but to stop there would be a mistake.

    Specifically, if an emphasis on privacy and the non-leakage of data is a priority, it follows that the platforms that already exist will be increasingly entrenched. And, if those platforms will be increasingly entrenched, then regulation that ensures a level playing field on top of those platforms becomes all the more valuable. The reality is that an emphasis on privacy will only raise the walls on those gardens; it may be more fruitful to focus on ruling out the unfair expansion of the platforms themselves.

    Note: I wrote a follow-up in the Daily Update that you can read in this footnote:5


    1. The picture is from his presentation 

    2. Because private keys are associated with devices, iMessage actually encrypts a single message multiple times, each time using the public key for a different recipient device 
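       The fan-out described above can be sketched structurally in a few lines of Python. This is a minimal illustration, not Apple’s actual protocol: the `Device` type and the placeholder `encrypt_for_device` function are hypothetical stand-ins (real iMessage uses per-device public-key encryption of a symmetric message key); the point is simply that one logical message becomes one ciphertext per recipient device.

       ```python
       from dataclasses import dataclass

       @dataclass
       class Device:
           device_id: str
           public_key: str  # placeholder for a real per-device public key

       def encrypt_for_device(plaintext: str, public_key: str) -> str:
           # Stand-in only: a real implementation would perform actual
           # public-key encryption with the device's key.
           return f"enc[{public_key}]({plaintext})"

       def send_message(plaintext: str, recipient_devices: list[Device]) -> dict[str, str]:
           # One ciphertext per recipient device: a user with an iPhone,
           # an iPad, and a Mac receives three separately encrypted copies.
           return {
               d.device_id: encrypt_for_device(plaintext, d.public_key)
               for d in recipient_devices
           }

       devices = [
           Device("iphone-1", "pk_iphone"),
           Device("ipad-1", "pk_ipad"),
           Device("mac-1", "pk_mac"),
       ]
       ciphertexts = send_message("hello", devices)
       print(len(ciphertexts))  # one entry per device
       ```

       The design consequence is worth noting: because encryption is per device, adding a new device to an account means future messages must be encrypted for its key too, which is why key distribution is the hard part of any such system.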

    3. To be very clear, it is technically possible to layer encryption onto RCS, but it requires the cooperation of the carriers collectively and the addition of a trusted entity, like the certificate authorities that undergird HTTPS; the entire point, though, is that carriers refuse to do this. 

    4. An example of open-source software that is a closed service 

    5. So, I definitely messed up with yesterday’s article in a way none of you noticed; given that on Monday I wrote in-depth about Google’s new Chat initiative, I kind of skirted over the details in yesterday’s article, Open, Closed, and Privacy. Unfortunately, that meant I got a whole bunch of tweets and email from non-subscribers taking me to task for items that, well, I already explained (I didn’t get any from subscribers). The perils of paywalls!

      Probably the two biggest points of pushback were that Google could build an encrypted system if they wanted to (as I explained on Monday, they already tried, and they can’t really exercise Android leverage right now), and that carriers could build a federated key exchange system and/or something akin to the certificate authority framework that undergirds HTTPS. That is all true!

      My point, though — and the reality that Google had to accept, as The Verge feature explained — is that the carriers are not going to do that, full stop. The only way to achieve end-to-end encryption in the real world as it exists today is to build a separate centralized service that sits on top of phones (via apps) and runs over the Internet. To put it another way, Google wasn’t choosing whether to build an encrypted service or an open one; they were choosing whether to build something better than SMS or nothing at all.

      Now, does Google have a business interest in message content being unencrypted? I suppose, and as I noted on Monday, making Allo unencrypted by default was a bad look (although understandable for non-advertising related reasons, specifically the deep integration with Google Assistant). The truth, though, is that Google already knows plenty about everyone, especially those using Android. One could argue that Google didn’t fight hard enough for encryption, but to say the company actively didn’t want encryption isn’t quite right in my opinion.

      Still, the clarification is useful given the comparison I was trying to draw between encryption and privacy: just as one can, in theory, envision a standard that is both open and includes encryption (like HTTPS!), one can also envision a world where users truly own their data in a secure way and carry it from service-to-service. In reality, such systems are far more viable if built into the foundation of the technology (like HTTPS!) as opposed to being retrofitted over the objection of entrenched incumbents.

      Two more points of follow-up:

      • While I didn’t say so explicitly, I think I at least strongly implied on Monday that I would not expect Apple to support Chat. They certainly could — remember, this is basically SMS 2.0, and Apple obviously supports SMS — but it is difficult for me to imagine any scenario where Apple doesn’t hold its ground with the (very legitimate!) excuse that Chat is not encrypted. More importantly, it is even more difficult for me to see any way that carriers could exert leverage on Apple; their lack of leverage is why iMessage exists in the first place.
      • The blockchain is, of course, a theoretical solution, but as I’ve noted previously, the real blockchain upside with regards to this debate is the entire undoing of aggregators through decentralization. To be sure, that is by no means a sure thing, for many of the reasons laid out in this article, particularly the trade-off between a user experience that scales and such decentralization. Regardless, any such solution is quite a ways in the future.

      As for the final bit about regulation, stay tuned. It has been top-of-mind for a long time.