Beachheads and Obstacles

The fact that Amazon held its annual hardware event the same day as the keynote for Facebook’s Oculus Connect conference is almost certainly a coincidence. It was, though, a happy one, at least as far as Stratechery is concerned: these two events, wildly disparate in terms of presentation and content, have more in common than it might seem.

Revisiting the Smartphone Wars

In 2013, when Stratechery started, the widely held belief was that the iPhone, innovative though it may have been, was in serious trouble in the face of Android’s increasing marketshare. Henry Blodget wrote a useful articulation of the bear case on Business Insider:

If smartphones and tablets were not a platform — if the only thing that mattered to the value of the product and a customer’s purchase decision was the gadget itself — then Apple’s loss of market share would not make a difference. Apple zealots would be correct when they smugly assert that what matters is Apple’s “profit share” not “market share.”

But smartphones and tablets are a platform. Third-party companies are building apps and services to run on smartphone and tablet platforms. These apps and services, in turn, are making the platforms more valuable. Consumers are standardizing their lives around the apps and services that run on smartphone and tablet platforms. Because of these “network effects,” in platform markets, dominant market share is huge competitive advantage. In platform markets, as the often-hated but always insanely powerful Microsoft demonstrated for decades in the PC market, the vast majority of the power and profits eventually accrue to the market-share leader.

In this view there is still a premium market, but only within the dominant ecosystem. This, Blodget argued, was Apple’s problem: soon the company would have no market, because Android would have the ecosystem, and by extension all of Apple’s premium customers.

Of course this turned out to be mistaken, for reasons I laid out in What Clayton Christensen Got Wrong.

  • First, integration provided user experience benefits that premium customers valued
  • Second, those premium users were more likely to pay for apps, which increased the attraction of iOS to developers
  • Third, the absolute size of the smartphone market was so big that both iOS and Android were large enough to be attractive to developers

Note, though, that just because Blodget and company were wrong about the iPhone’s prospects does not mean they were wrong conceptually: ecosystems do matter. However, instead of one ecosystem devouring the entire premium versus ubiquity landscape, Apple and Google split it up rather neatly:

Apple and Google's Ecosystem Duopoly

Amazon and Facebook were two of the more prominent companies that learned this reality the hard way.

Mobile Successes and Failures

Apple and Google may be the first companies people think of when you ask who won mobile, but Amazon and Facebook were not far behind.

Amazon spent the smartphone era not only building out Amazon.com, but also Amazon Web Services (AWS). AWS was just as much a critical platform for the smartphone revolution as were iOS and Android: many apps ran on the phone with data or compute on Amazon’s cloud; mobile also created a vacuum in the enterprise for SaaS companies eager to take advantage of Microsoft’s desire to prop up its own mobile platforms instead of supporting iOS and Android, and those SaaS companies were built on AWS.

Smartphones, meanwhile, saved Facebook from itself: instead of a futile attempt to be a platform within the browser, mobile made Facebook just an app, and it was the best possible thing that could have happened to the company. Facebook was freed to focus solely on content its users wanted and advertising to go along with it, generating billions of dollars and a deep moat in targeted advertising along the way.

What is not clear is if Amazon’s and Facebook’s management teams agree. After all, both launched smartphones of their own, and both failed spectacularly.

Facebook’s attempt was rather half-assed (to use the technical term). Instead of writing its own operating system, Facebook shipped Facebook Home, a launcher that sat on top of Android; instead of designing its own hardware, it partnered with HTC on the HTC First. Both decisions ended up being good ones because they made failure less expensive.

Amazon, meanwhile, went all out to build the Fire Phone: a new operating system (based on Android, but incompatible with it), new hardware, including a complicated camera system with four front-facing cameras, and a sky-high price to match. It fared about as well as the HTC First, which is to say not well at all.

That, though, is what made last week’s events so interesting: it is these two failures that seemed to play a bigger role in what was announced than did the successes.

Amazon and Facebook’s Announcements

Start with Amazon: the company announced a full fifteen hardware products. In order: Echo Dot with Clock, a new Echo, Echo Studio (an Echo with a high-end speaker system), Echo Show 8 (a third size of Echo with a screen), Echo Glow (a lamp), new Eero routers, Echo Flex (a microphone-only Echo that hangs off an outlet), Ring Retrofit Alarm Kit (which lets you leverage a preinstalled alarm), Ring Stick Up Cam (a smaller Ring camera), Ring Indoor Cam (an even smaller Ring camera), Amazon Smart Oven (an oven that integrates with Alexa), Fetch (a pet tracker), Echo Buds (wireless headphones with Alexa), Echo Frames (eyeglasses with Alexa), and Echo Loop (a ring with Alexa). Whew!

This approach is the exact opposite of the Fire Phone’s: instead of pouring all of its resources into one high-priced device, Amazon is making just about every device it can think of and seeing what sells. Moreover, it is doing so at prices that significantly undercut the competition: the Echo Studio is $150 cheaper than a HomePod, the Echo Show 8 is $60 cheaper than the Google Nest Hub, and the new Eero is $150 cheaper than the product Eero sold as an independent company. Amazon is clearly pushing for ubiquity; a whale strategy this is not.

Facebook, meanwhile, effectively consolidated its Oculus product line from three to one: the mid-tier Oculus Quest, a standalone virtual reality (VR) unit, gained the capability to connect to a gaming PC in order to play high-end Oculus Rift games; Oculus Go apps, meanwhile, gained the capability to run on the relatively higher-specced Oculus Quest. It is not clear why either the Go or Rift should be a target for developers or customers going forward.

The broader goal, though, remains the same: Facebook is determined to own a platform; the lesson the company seems to have drawn from its smartphone experience is the importance of doing it all.

Beachheads and Obstacles

What Amazon and Facebook do have in common — and perhaps this is why both seem to look back at their very successful smartphone eras with regret — is that Apple and Google are their biggest obstacles to success, and it’s because of their smartphone platforms.

Amazon, to its great credit — and perhaps because the company did not have a smartphone to rely on — found a beachhead in the home, the one place where your phone may not be with you. Now it is trying not only to saturate the home but also to extend beyond it, both through on-body accessories and an expanding number of deals with automakers.

Facebook, meanwhile, is searching for a beachhead of its own in virtual reality. That, the company believes, will give it the track to augmented reality, and by extension, usefulness in the real world.

Facebook and Amazon are building beachheads to take on Apple and Google

Amazon’s challenge is Google: Android phones are already everywhere, and Google is catching up in the home more quickly and more effectively than Amazon is pushing outside of it. Google also has a much stronger position when it comes to the sort of Internet services that provide the raw grist for the intelligence of virtual assistants: emails, calendars, and maps.

Facebook, meanwhile, is ultimately challenging Apple: augmented reality is going to start at the high end with an integrated solution, and Apple has considerably more experience building physical products for the real world, and a major lead in chip design and miniaturization, not to mention consumer trust. Moreover, while there is obviously technical overlap when it comes to creating virtual reality and augmented reality headsets, the product experience is fundamentally distinct.

Lessons Learned

I’ve been pretty skeptical about Facebook and Oculus all along, both at the time of purchase and last year. I’d like to say I’ve changed my mind, but frankly, last week’s keynote made me question whether Facebook learned any lessons from mobile at all. Zuckerberg said in the keynote opening:

We experience the world through this feeling of presence and the interactions that we get with other people, which is why Facebook’s technology vision has always been about putting people at the center of your computing experience. We’ve mostly done that so far through building apps. I don’t think it’s an accident that a lot of the top-used and biggest apps that are out there are social experiences that put people at the center of the experience, because that’s how we process things.

But there is only so much you can do with apps, without also shaping and improving the underlying platform. I find it shocking that we’re here in 2019 and our phones and our computers are still organized around apps and tasks and not people that we are actually present with. I feel like we can help all of us together deliver a unique contribution to this field by helping to ensure that the next platform changes this.

Zuckerberg is, in effect, saying that he finds it shocking that Facebook Home didn’t succeed. I think the reasons were pretty clear, and a lack of distribution or high-end hardware was not the primary problem. The fact of the matter is that while social connection on our phones is important — perhaps the most important — it is not the only job we ask phones to do. That is why Facebook is an app and not a platform, and that’s ok! Apps, particularly those of Facebook’s scale and advertising prowess, are fantastic businesses. And apps shouldn’t be platforms.

Amazon, on the other hand, seems to have learned the right lessons from its mobile failures; what is notable about the company’s approach to Alexa is that it leverages and learns from the mobile era. Alexa benefits from Amazon’s investments in data centers and networking, interacts with both iOS and Android to the greatest extent possible, and is roughly in line with Amazon’s overall business — making buying things that much more convenient. Alexa is an operating system for the home, and perhaps beyond.

This isn’t a guarantee of success, of course. Google is a formidable competitor, with multiple advantages. It is particularly hard to see Alexa gaining traction outside the home. The only reason Amazon has a chance is because building on strengths is always better than doing something completely new and different from what has made you successful in the past.

Neither, and New: Lessons from Uber and Vision Fund

The first time I wrote about Uber was in June, 2014. The Wall Street Journal had posted a column entitled Uber’s $18.2B Valuation is a Head Scratcher, which led to an easy rejoinder: Why Uber is Worth $18.2 Billion. Given that Uber is today worth $53.2 billion on the open market, that one turned out pretty well.

A month later I felt even better about my piece when Bill Gurley, the legendary venture capitalist, wrote his own rebuttal of an Uber skeptic. Gurley was gentle in his takedown of NYU Stern professor Aswath Damodaran, writing in the introduction:

It is not my aim to specifically convince anyone that Uber is worth any specific valuation. What Professor Damodaran thinks, or what anyone who is not a buyer or seller of stocks thinks, is fairly immaterial. I am also not out to prove him wrong. I am much more interested in the subject of critical reasoning and predictions, and how certain assumptions can lead to gravely different outcomes. As such, my goal is to offer a plausible argument that the core assumptions used in Damodaran’s analysis may be off by a factor of 25 times, perhaps even more. And I hope the analysis is judged on whether the arguments I make are reasonable and feasible.

Gurley’s arguments, which focused on Damodaran’s assumptions around Uber’s total addressable market ($100 billion, the same as taxis) and terminal market share (10%), were clearly correct: Uber is already at a $50+ billion gross bookings run rate, and has around 70% of the market. Damodaran’s assumptions, rooted in the analog world, were sorely mistaken.

At the same time, while Gurley didn’t make any specific assertions about Uber’s valuation, surely he must have expected it would have increased by more than 192% in the following five years; I certainly did. To be sure, there were rather significant intervening events, specifically Uber’s disastrous 2017, where the company endured seemingly endless scandals, lost its CEO, and worst of all, gave life to Lyft, its most important competitor which, at the beginning of that year, was on the verge of going out of business. It is very fair to argue that Uber without an at-scale competitor is a much more valuable company.

That noted, this line from Gurley’s article stands out to me today more than ever:

I am much more interested in the subject of critical reasoning and predictions, and how certain assumptions can lead to gravely different outcomes.

Just because Uber’s critics were wrong to assume that the service was analogous to taxis does not mean that those of us on the other side — not only of the Uber question but of a host of other similar companies that straddle the physical and digital worlds — were completely right in our assumptions either. The opposite of an old-world company is not necessarily a tech company. It is something we haven’t quite seen before, and applying either old-world rules or tech rules is a mistake.

AB 5 and Worker Classification

This idea of the old classifications not quite making sense, and the need for something new, should feel quite familiar in the context of Uber: it is precisely the issue surrounding Uber’s drivers.

Earlier this month California passed AB 5, which codified a California Supreme Court decision setting forward a three-part test to determine whether or not a worker is an independent contractor or an employee (with all of the attendant regulation and taxes that go along with that classification). From the decision:

Under this test, a worker is properly considered an independent contractor to whom a wage order does not apply only if the hiring entity establishes: (A) that the worker is free from the control and direction of the hirer in connection with the performance of the work, both under the contract for the performance of such work and in fact; (B) that the worker performs work that is outside the usual course of the hiring entity’s business; and (C) that the worker is customarily engaged in an independently established trade, occupation, or business of the same nature as the work performed for the hiring entity.
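The court’s test is a strict conjunction: a worker is an independent contractor only if all three prongs hold, and failing any one of them means employee classification. A minimal sketch of that logic (the function and argument names are my own illustration, and this is not legal advice):

```python
# Illustrative encoding of the ABC test codified by AB 5.
# All three prongs must be true for independent-contractor status.

def is_independent_contractor(free_from_control: bool,
                              outside_usual_business: bool,
                              independent_trade: bool) -> bool:
    """(A) free from control, (B) outside the hirer's usual business,
    (C) customarily engaged in an independent trade — all required."""
    return free_from_control and outside_usual_business and independent_trade

# A hypothetical ride-share driver: arguably satisfies (A) and (C),
# but if prong (B) fails, the worker is an employee.
print(is_independent_contractor(True, False, True))  # False -> employee
```

The structure of the test is what makes prong (B) decisive for Uber: even conceding (A) and (C), a single failed prong flips the classification.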

The question as to whether the new law applies is closer than it seems: on one hand, Uber et al1 really do give drivers, who use their own equipment, flexibility as far as hours go, and while there are rules to be followed while on the job, it is the former that is usually the more important standard. Plus drivers famously drive for multiple companies; the need to compete for their presence on the platform (more on this in a bit) is one of the big reasons why Uber is so unprofitable.

That means that (B) is the question: if Uber is in the transportation business, then drivers are workers; Uber, though, claims its business “is serving as a technology platform for several different types of digital marketplaces.” As I wrote in a Daily Update:

It’s not an entirely irrational argument. For example, consider the rate: Uber’s point is not that it sets the rate, but rather that the rate is the market-clearing price that maximizes the amount of revenue drivers earn. The idea is that if drivers could set their own prices — a common objection to drivers being independent contractors is that they cannot — a negotiation would occur between customers and drivers until a price was agreed upon; over time this price would be equalized across drivers and riders. Uber’s argument is that it dramatically accelerates this process and in fact makes the market possible, since the level of coordination necessary to reach a market-clearing price at scale would be impossible otherwise.

At the same time, this sort of argument, technically correct from an economic modeling perspective, suffers from the same flaws as most economic models: the lack of any sort of accounting for the human component. In this case the missing bit, though, is not in the model’s outcome, but rather in the manifestation: the way that Uber is experienced by both drivers and riders is that “Drivers are the face of Uber to consumers” (that quote is from Uber’s S-1, by the way). Drivers are also indispensable to how Uber actually generates revenue: sure, drivers can and do come and go as they please, and work simultaneously for Uber’s competitors, but to suggest they are not a part of the “usual course” of Uber’s business seems off.
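The market-clearing process Uber claims to accelerate — countless one-on-one negotiations collapsing to a single price — is, stripped of scale, just iterative price adjustment. A toy sketch, with demand and supply curves that are entirely invented for illustration:

```python
# Toy price-discovery loop: adjust price until demand meets supply.
# The linear demand/supply curves below are made up for illustration only.

def demand(price):   # riders willing to ride at this price
    return max(0.0, 1000 - 40 * price)

def supply(price):   # drivers willing to drive at this price
    return max(0.0, 60 * price - 200)

def clearing_price(lo=0.0, hi=50.0, tol=1e-6):
    # Bisection works because demand falls and supply rises with price,
    # so excess demand crosses zero exactly once.
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if demand(mid) > supply(mid):
            lo = mid   # excess demand: price must rise
        else:
            hi = mid   # excess supply: price must fall
    return (lo + hi) / 2

p = clearing_price()
print(round(p, 2))  # 12.0 with these made-up curves
```

Uber’s claim, in these terms, is that it runs this loop continuously and at scale, something individual negotiation never could.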

That is why the best solution to the employment classification question is to realize that neither of the old categorizations fit: Uber drivers are not employees, nor are they contractors; they are neither, and new. A much better law would define this category in a new way that provides the protections and revenue-collection apparatus that California deems necessary while still preserving the flexibility and market-driven scalability that make these consumer welfare-generating platforms possible.

What is Uber?

So what of Uber itself? It is not a taxi company, as noted above, but is it a tech company? I suggested it was a few weeks ago in What Is a Tech Company?:

Uber…checks most of the same boxes:

  • There is a software-created ecosystem of drivers and riders.
  • Like Airbnb, Uber reports its revenue as if it has low marginal costs, but a holistic view of rides shows that the company pays drivers around 80 percent of total revenue; this isn’t a world of zero marginal costs.
  • Uber’s platform improves over time.
  • Uber is able to serve the entire world, giving it maximum leverage.
  • Uber can transact with anyone with a self-serve model.

A major question about Uber concerns transaction costs: bringing and keeping drivers on the platform is very expensive. This doesn’t mean that Uber isn’t a tech company, but it does underscore the degree to which its model is dependent on factors that don’t have zero costs attached to them.

In fact, I’ve changed my mind: I was right to mention Uber’s costs, and wrong to dismiss them and call Uber a tech company. At the same time, Uber clearly has no analog in the physical world. It is neither, and new — and Uber’s drivers help explain why.

That magical marketplace I described above, where Uber effectively simulates countless one-on-one negotiations between drivers and riders that, on an infinite timescale and with infinite patience, would arrive at the market-clearing price, is very much a technological product. This marketplace leverages today’s paradigm-shifting technologies — smartphones and cloud computing — and is itself software, and thus infinitely leverageable and always improving.

Uber’s financials reflect this: last quarter the company had a gross margin of 51%. That is a fair bit lower than a typical SaaS company’s 70%+ gross margins, but that is primarily because the company’s cost of revenue includes insurance, which scales linearly with revenue. The software behind Uber’s marketplaces scales perfectly.

The problem, though, is that Uber’s financials are an incomplete view of the overall Uber experience, because riders don’t simply pay Uber: they also pay the drivers. And, if you look at Uber’s financials from a rider perspective,2 the situation looks a lot worse; consider last quarter:

in millions        Uber’s Financials   The Rider Perspective
Revenue            $2,768              $15,574
Cost of Revenue    $1,342              $14,148
Gross Profit       $1,426              $1,426
Gross Margin       51.5%               9.2%

Suddenly that gross margin looks nothing like a software company — and keep in mind this is all Uber has to work with before it gets to its fixed costs.3 The only way this company works is if it grows to a truly mammoth size such that it has sufficient gross margin to cover fixed costs, but it is that much more difficult to acquire a marginal new customer when you simply don’t have that much margin to play with; spending on sales and marketing simply increases the hill you need to climb!
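The two gross-margin views in the table are straightforward to reproduce — same gross profit, very different denominator:

```python
# Reproducing the table above: Uber's reported view vs. the rider's view
# (figures in $ millions, from the quarter discussed in the text).

def gross_margin(revenue, cost_of_revenue):
    return (revenue - cost_of_revenue) / revenue

uber_view  = gross_margin(2_768, 1_342)    # Uber's reported financials
rider_view = gross_margin(15_574, 14_148)  # include what riders pay drivers

print(f"Uber's view:  {uber_view:.1%}")   # 51.5%
print(f"Rider's view: {rider_view:.1%}")  # 9.2%
```

The gross profit ($1,426 million) is identical in both columns; the rider perspective simply adds driver payouts to both revenue and cost of revenue, which is why the margin collapses.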

None of this is to say that Uber is not a viable business: all of Gurley’s arguments about the total addressable market and Uber’s ability to dominate that market still apply, because of technology. Uber is not a taxi company! At the same time, a different sort of valuation metric than that usually applied to tech companies was clearly appropriate as well, as Uber’s adventures on the public market demonstrate. In short, the company was neither, and new.

The Uber Anomaly

The corresponding article to What Is a Tech Company? could very well be What Is a Venture Capital Firm?. If tech companies are characterized by zero marginal costs, increased returns to scale, and ecosystems, venture capital firms match with equity financing (which means capped downside and infinite upside), a Babe Ruth portfolio management approach that focuses on home runs despite the increase in strikeouts, and a focus on iterated games when it comes to exerting power.

The synergy between tech companies and venture capitalists

That last point is worth dwelling on; in 2017 I described why an iterated game approach mattered for venture capitalists in the context of — you guessed it! — Uber. That was when Gurley’s Benchmark, then Uber’s largest investor, first forced out and then sued Uber’s former CEO Travis Kalanick.

A venture capitalist will invest in tens if not hundreds of companies over their career, while most founders will only ever start one company; that means that for the venture capitalist investing is an iterated game. Sure, there may be short-term gain in screwing over a founder or bailing on a floundering company, but it simply is not worth it in the long run: word will spread, and a venture capitalist’s deal flow is only as good as their reputation…

The entire point of venture investing is to hit grand slams, and that calls for more swings of the bat. After all, the most a venture capitalist might lose on a deal — beyond time and opportunity cost, of course — is however much they invested; the downside is capped. Potential returns, though, can be many multiples of that investment. That is why, particularly as capital has flooded the Valley over the last decade, preserving the chance to make grand slam investments has been paramount. No venture capitalist wants to repeat Sequoia’s mistake: better to be “nice”, or, as they say in the Valley, “founder friendly.”

Uber, though, was different:

Uber’s most recent valuation of $68.5 billion nearly matches the worth of every successful Benchmark-funded startup since 2007. Sure, it might make sense to treat company X and founder Y with deference; after all, there are other fish in the pond. Uber, though, is not another fish: it is the catch of a lifetime.

That almost assuredly changed Benchmark’s internal calculus when it came to filing this lawsuit. Does it give the firm a bad reputation, potentially keeping it out of the next Facebook? Unquestionably. The sheer size of Uber though, and the potential return it represents, means that Benchmark is no longer playing an iterated game. The point now is not to get access to the next Facebook: it is to ensure the firm captures its share of the current one.

As I’ve noted, that valuation proved to be faulty; at the same time, $53.2 billion is still a huge amount of money, and probably wouldn’t have changed Benchmark’s calculation. The real takeaway, though, is that Uber was not a typical Silicon Valley startup. No, it wasn’t a taxi company, but it wasn’t a tech company either; it was something new, and that meant a new kind of investor. Enter SoftBank.

Vision Fund

Masayoshi Son, SoftBank’s CEO and the driving force behind Vision Fund, told Bloomberg a year ago that he wanted to “go big bang”:

SoftBank’s massive bet in WeWork is emblematic of Son’s overall approach. “Why don’t we go big bang?” he told Bloomberg in an interview last year when asked about his investing style, and added that other venture capitalists tend to think too small. His goal of swaying the course of history by backing potentially world-changing companies requires that those companies make large outlays in areas from customer acquisition to hiring talent to research and development, a spending tactic that he acknowledged sometimes brings him into conflict with other investors.

“The other shareholders, they try to create clean, polished little companies,” Son said. “And I say: ‘Let’s go rough. We don’t need to polish. We don’t need efficiency right now. Let’s make a big fight. Let’s make a big, successful—a big win.’”

In fact, the “other shareholders” that Son derides are trying to create tech companies: up-front fixed costs to develop software, with high gross margins once it is sold. These are the companies that require investors that have all of the qualities I detailed above: a desire for equity, a willingness to risk strikeouts while swinging for home runs, and the decency that comes from playing an iterated game.

Vision Fund is none of these things. It doesn’t just want equity, it wants preferred equity with a ratchet, to guarantee it gets paid first. Moreover, it seeks not only to invest in winners, but also to leverage its capital to make winners, by forcing competing companies to merge. And, because of this, Vision Fund is very much not playing an iterated game: it will do whatever it takes to win the markets it invests in, including deposing founders who become a liability.

The problem, though, is Vision Fund may have confused “big capital needs” with “big opportunity”. What is striking about the firm’s portfolio is the paucity of “tech companies”. Almost everything falls in the “Neither and New” category defined by Uber: entire categories like real estate and logistics are defined by their interaction with the physical world, almost everything in the consumer category uses technology to enable real-world services, and the other major category, fintech, by definition needs huge amounts of capital. Most of these companies may have income statements that seem attractive in isolation, but when viewed from a total revenue perspective4 in fact have extremely low gross margins (relative to tech companies) and very high marginal costs.

The question for SoftBank, then, is how many markets there are the size of transportation, with the possibility of taking a large enough chunk to make the economics work (leaving aside the fact that SoftBank is underwater on its Uber investment)? The Vision Fund is invested in OpenDoor, for example, which is in an even larger market than transportation (residential real estate), but with much less potential transaction volume; Zillow, which followed OpenDoor into the “iBuyer” market, has a market cap of only $6 billion, in part because of investor skepticism about margins.

This is the challenge for Vision Fund: yes, these companies have huge capital needs, and yes, the only way they can become successful is if they become so big that their small margins are sufficient to cover their fixed cost, but does that necessarily mean big returns? Or did Son anchor on “big” without making sure that his adjective of choice attached to the noun — “returns”, as opposed to “needs” or “markets” — that his investors are expecting?

Moreover, it’s not clear how many misses Vision Fund can afford: the Wall Street Journal reported earlier this week that Vision Fund has promised a 7% return a year to 40% of its investors, which means that SoftBank has limited capacity to be patient and wait for home runs — particularly if WeWork starts dragging down the whole fund.

Worse, it’s not clear how many home runs SoftBank has. Of the 29 U.S. tech IPOs since the beginning of 2018, 20 have increased in market cap over their offering price, and all of them are pure tech companies with high margins.5 Of the nine that have fallen in value, four are marketplace companies6, two are hardware companies7, and only three are pure tech companies8. Son, though, sees pure technology companies as “clean, polished little companies” that are not big enough for Vision Fund.9

Vision Fund is not a venture capital firm, nor is it a public market-focused hedge fund: it is neither, and new, but it very much remains to be seen if “new” is valuable.

New Lessons

At the same time, this is good news for the tech ecosystem: there is clearly still tremendous opportunity to build “tech companies”, primarily for the enterprise, and Vision Fund won’t be an obstacle. True, there are fewer opportunities in the consumer space, but that is more a consequence of big company dominance than Vision Fund stealing away opportunities with outsized returns relative to capital invested. If anything, Vision Fund is stealing duds.

This is also good news for public market investors: despite all of the press about Uber and WeWork, more companies are up post-IPO than down — and the gains are much larger in percentage terms than are the losses. The tech company formula still works.

This is also a lesson for me: I started with an article that I got right, but in retrospect I was only halfway correct. Uber had a large market and there were tech-like dynamics that meant it could get a big part of that market, but margins — both reported, but especially relative to the customer transaction — still matter. I didn’t pay enough attention to them.

It also means I should have been more explicitly skeptical about WeWork; my goal was to write a contrarian piece exploring the upside, while still being clear that I wouldn’t invest. I did state that, but I wasn’t nearly clear enough about just how absurd the valuation was, because I didn’t spend enough time discussing margins.10

Going forward I plan to be a lot more skeptical about other tech startups that interface with the real world and the attendant drag on margins that follows; I am not saying that the category isn’t viable, and technology truly makes these companies different than the incumbents in their space, but they are not necessarily tech companies either.

Neither, and new.

I wrote a follow-up to this article in this Daily Update.

  1. I am going to use Uber as a stand-in for companies like Lyft, DoorDash, Instacart, etc. for the rest of this article, but everything applies to all of the companies that use “gig” workers
  2. In this case, the rider perspective includes both Uber riders and also UberEats customers; the two categories are not separated in Uber’s financials
  3. It should be noted that Uber, rightfully, accounts for driver incentives either as contra-revenue (most of them) or as a cost of revenue (for driver incentives it is unlikely to earn back); the only driver incentives that fall under fixed costs are bonuses to existing drivers for driver referrals
  4. I.e. the equivalent of gross bookings in the case of Uber
  5. In order of returns, Zscaler, Anaplan, Smartsheet, Zoom, DocuSign, CrowdStrike, Fastly, SurveyMonkey, Pinterest, Health Catalyst, Medallia, Cloudflare, Carbon Black, Dynatrace, Datadog, PagerDuty, EverQuote, Zuora, Tenable
  6. UpWork, Eventbrite, Uber, and Lyft
  7. Sonos and Arlo Technologies
  8. Pivotal, Dropbox, and Slack
  9. Interestingly, the one tech company on this list that the Vision Fund owns, Slack, is the worst performing of all the SaaS companies
  10. I did, though, make clear in a (free) follow-up that the AWS comparison was never intended to be a direct one

Exponent Podcast: The Exponent IPO

On Exponent, the weekly podcast I host with James Allworth, we discuss Cloudflare and what it is like going through an IPO, as well as what has gone wrong with WeWork.

Listen to it here.

Day Two to One Day

Jeff Bezos opened his 2016 letter to Amazon shareholders like this:

“Jeff, what does Day 2 look like?”

That’s a question I just got at our most recent all-hands meeting. I’ve been reminding people that it’s Day 1 for a couple of decades. I work in an Amazon building named Day 1, and when I moved buildings, I took the name with me. I spend time thinking about this topic.

“Day 2 is stasis. Followed by irrelevance. Followed by excruciating, painful decline. Followed by death. And that is why it is always Day 1.”

To be sure, this kind of decline would happen in extreme slow motion. An established company might harvest Day 2 for decades, but the final result would still come.

Bezos went on to give advice about how to avoid Day 2, including “True Customer Obsession”, “Resist Proxies”, “Embrace External Trends”, and “High-Velocity Decision Making”. The company he manages then spent the next several years looking like it was in fact Day 2.

Tilting the Scales

Consider this story in the Wall Street Journal about how Amazon reportedly adjusted its search algorithm to favor its own products:

Amazon.com Inc. has adjusted its product-search system to more prominently feature listings that are more profitable for the company, said people who worked on the project—a move, contested internally, that could favor Amazon’s own brands…The adjustment, which the world’s biggest online retailer hasn’t publicized, followed a yearslong battle between executives who run Amazon’s retail businesses in Seattle and the company’s search team, dubbed A9, in Palo Alto, Calif., which opposed the move, the people said.

Note how badly this decision fares relative to Bezos’ advice:

  • Shifting results away from relevance towards factors that benefit Amazon’s bottom line is not a decision that results from “true customer obsession”.
  • Goal-seeking for profit is a poor proxy for the customer obsession that Bezos focused on in the 1997 shareholder letter attached to every subsequent letter.
  • Amazon allegedly spent “years” deciding whether or not to do this, which is definitely not “high-velocity decision making”.

To be fair, Amazon is embracing at least one external trend: rising antitrust concerns, which are probably overblown given the company’s single-digit share of retail in the United States. There are no objections to Walmart, for example, having store brands or pay-for-placement programs, despite the fact that Walmart’s share of retail is about 39% larger than Amazon’s (8.9% of consumer retail spending in the U.S. versus 6.4%), so it’s not clear on what basis the digital equivalents of these programs would be prosecuted.

With regards to Bezos’ warning, though, the antitrust discussion is a moot point: companies that spend months or years arguing about the legality and customer friendliness of tilting the scales are usually well into Day Two.

Squeezing Suppliers

This is hardly the only example of Amazon becoming obsessed with profitability on the margins in its retail operation over the last few years. For example, from Recode last November:

Over the past few months, Amazon has applied intense pressure to consumer brands across different product categories — seizing more control over what, where and how they can sell their goods on the so-called everything store, these people say. One apparent goal: To take more control over the price of goods on Amazon so the company can better compete with retailers. The power moves are also believed to be a prelude to a new internal system that Amazon has yet to launch called One Vendor. The new initiative will essentially funnel big brands and independent sellers alike through the same back-end system in a supposed effort to improve the uniformity of the shopping experience across Amazon on the public-facing side.

From the Wall Street Journal in December:

As Amazon focuses more on its bottom line in addition to its rapid growth, it is increasingly taking aim at CRaP products [“Can’t Realize a Profit”], according to major brand executives and people familiar with the company’s thinking. In recent months, it has been eliminating unprofitable items and pressing manufacturers to change their packaging to better sell online, according to brands that sell on Amazon and consultants who work with them.

From CNBC in March:

In recent months, Amazon has been telling more vendors, or brand owners who sell their goods wholesale, that if Amazon can’t sell those products to consumers at a profit, it won’t let them pay to promote the items. For example, if a $5 water bottle costs Amazon that amount to store, pack and ship, the maker of the water bottle won’t be allowed to advertise it.

From Bloomberg, also in March:

Amazon.com Inc. has abruptly stopped buying products from many of its wholesalers, sowing panic. The company is encouraging vendors to instead sell directly to consumers on its marketplace. Amazon makes more money that way by offloading the cost of purchasing, storing and shipping products. Meanwhile, Amazon can charge suppliers for these services and take a commission on each transaction, which is much less risky than buying goods outright.

From Bloomberg in May:

In the next few months, bulk orders will dry up for thousands of mostly smaller suppliers, according to three people familiar with the plan. Amazon’s aim is to cut costs and focus wholesale purchasing on major brands like Procter & Gamble, Sony and Lego, the people said. That will ensure the company has adequate supplies of must-have merchandise and help it compete with the likes of Walmart, Target and Best Buy.

The vendor purge is the latest step in Amazon’s “hands off the wheel” initiative, an effort to keep expanding product selection on its website without spending more money on managers to oversee it all. The project entails automating tasks like forecasting demand and negotiating prices which were predominantly done by Amazon employees. It also involves pushing more Amazon suppliers to sell goods themselves so Amazon doesn’t have to pay people to do it for them.

None of these decisions are necessarily wrong in a vacuum; what has been striking, though, is the drumbeat of Amazon Retail changes that seem primarily concerned about Amazon’s profitability. And, for the record, it has worked:

Amazon.com's North American Results

Throughout the second half of 2018 and the first part of 2019, Amazon flipped revenue and expense growth by just a smidge, which caused income to skyrocket on a year-over-year basis. It may have been Day 2 as far as Amazon’s prioritization of profitability above everything was concerned, but at least the company was, in Bezos’ words, “harvesting”.

One Day Shipping

Note, though, that the chart above is missing last quarter’s results, and for good reason:

Amazon.com's North American Results

It’s a bit hard to make out, particularly because I am using trailing twelve-month averages (because of Amazon’s high seasonality), but expenses increased a lot more than revenue last quarter; in fact year-over-year income growth on a quarterly basis was actually -15%.
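The trailing-twelve-month smoothing used here is straightforward to reproduce; a minimal sketch, with made-up quarterly figures rather than Amazon’s actuals:

```python
# Trailing twelve months (TTM): at each quarter, sum the last four quarters.
# This washes out holiday seasonality, which is extreme for a retailer like
# Amazon. The revenue figures below are invented purely for illustration.
def ttm(quarterly):
    """Return rolling four-quarter sums, one per quarter from the 4th on."""
    return [sum(quarterly[i - 3:i + 1]) for i in range(3, len(quarterly))]

revenue = [10, 12, 11, 20, 11, 13, 12, 22]  # holiday quarters (Q4s) spike
print(ttm(revenue))  # [53, 54, 55, 56, 58]
```

Note how the raw series jumps around while the TTM series climbs smoothly; that is exactly why a single quarter’s expense spike is easier to see in the quarterly year-over-year number than in the chart.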

What changed is that Amazon decided to travel back in time — to Day One — and invest in what the company does best: massively difficult logistical problems that customers love having solved. Originally that was access to any book, then access to anything period, then access in two days, and now Amazon is committed to one.

First, from the company’s Q1 2019 earnings call announcing Amazon’s ambition:

We’re currently working on evolving our Prime free Two-Day Shipping program to be a free One-Day Shipping program. We’re able to do this, because we spent 20 plus years expanding our fulfillment and logistics network, but this is still a big investment and a lot of work to do ahead of us.

For Q2 guidance, we’ve included approximately $800 million of incremental spend related to this investment. And just to clarify, to give a little more information, we have been offering, obviously, faster than Two-Day Shipping for Prime members for years, one day, same day, even down to one to two hour delivery for Prime Now. So we’re going to continue to offer same day and Prime Now selection in an accelerated basis.

But this is all about the core free Two-Day offer evolving into a free One-Day offer. We’ve already started down this path. We’ve in the past months significantly expanded our one-day eligible selection and also expanded the number of zip codes eligible for one-day shipping.

The costs started to show up last quarter. From the company’s Q2 2019 earnings call, a quarter where Amazon missed on profits for the first time in several years:

In Q2, we had a meaningful step up in the one-day shipments, primarily in North America, and one-day volume was accelerating throughout the quarter…On the cost side, we talked last time about $800 million estimate of transportation cost to supply one day, the additional one day in Q2. We were a little bit higher than that number in total cost.

We saw some additional transition costs in our warehouses. We saw some lower productivity as we were expanding rather quickly, both local capacity in the off-season also in our delivery networks. We also saw some costs were moving: buying more inventory and moving inventory around in our network to have it be closer to customers. And we built not only that cost structure, but an accelerating cost penalty into our Q3 guidance that was released with our earnings today.

This is an initiative that clearly passes Bezos’ test:

  • Customers love getting items in one day instead of two.
  • One-day shipping is a clear goal.
  • Increased convenience will always be the ultimate external trend.
  • Ramping up one-day shipping in a matter of weeks by definition requires high-velocity decision making.

It is also the opposite of harvesting: it is investing, and it seems more likely than not that Amazon’s upcoming results will look much more like the “Day One” company it was for years, with rapidly growing revenue and costs to match.


This Article comes at a bit of a weird time: in truth I had been considering writing a bearish Amazon article for several months as the penny-pinching anecdotes started to pile up. Tech companies rarely find sustainable growth by focusing on costs; if anything they find antitrust violations.

That announcement about one-day shipping, though, made me hold my fire. Spending a lot of resources on incredibly difficult logistical problems is precisely what makes Amazon so valuable, which means that the commitment to do just that — even with higher costs — is a reason to be bullish. The only problem is that the revenues I anticipate have not yet appeared in the quarterly results.

Still, this search news made me revisit the issue a bit early: tilting the field to favor the bottom line instead of doing what is best for customers is the surest sign of harvesting instead of investing, and it reminded me of my bearish thesis. I wonder if Amazon might not reconsider their approach to search now that the company is demonstrating a recommitment to growing the top line instead of the bottom.

That’s also why last week’s Apple event was encouraging: Apple may have its work cut out to be an effective services company, but by cutting iPhone prices and pricing its services offerings aggressively it is making its own moves towards investing, not simply harvesting. This is also why Facebook’s commitment to Stories was a good sign even if it entailed an earnings hit; its various attempts to wring engagement out of its core app through things like forced Instagram integrations and dating services run in the opposite direction. For Microsoft, meanwhile, The End of Windows meant the end of harvesting and a return to investing, much to investors’ benefit.

Perhaps the biggest question mark, though, is around Google: the company has gotten far more mileage than I ever expected out of mobile generally and cramming more ads into mobile search results specifically; both, though, particularly the latter, seem more like harvesting than investing. And, even when Google does invest, it is too often in projects far removed from customers and the forcing function that going to market entails.

This also may be why Google is the most susceptible to antitrust action of all the major consumer tech companies; the question as to what comes first, harvesting instead of investing or behaving anticompetitively, ceases to matter when you are operating at the scale of any of these companies. And, on the flipside, it strongly suggests that antitrust actions are a trailing indicator of a company that has peaked,1 not a causal force of decline.

  1. YouTube remains a tremendously important counterweight to any bearish Google story

The iPhone and Apple’s Services Strategy

Editor’s Note: Stratechery was referenced in yesterday’s keynote. I had no advance knowledge of this reference, and have no relationship with Apple, up to and including not owning their stock individually, as explained in my ethics policy.

It is the normal course for Apple events to come and go and people to complain about how boring it all was, particularly when the company announces said event like this:

Apple Event Invitation: "By Innovation Only"

Apple reporter extraordinaire Mark Gurman was not impressed.

Gurman isn’t necessarily wrong about the highly iterative nature of the hardware announcements (although I think that an always-on Apple Watch is a big deal), but that doesn’t necessarily mean he is right about the innovation question. To figure that out we need to first define what exactly innovation is.

Beyond the iPhone, Revisited

Another Apple keynote that was greeted with a similar collective yawn was in 2016, when the company announced the iPhone 7 and Series 2 Apple Watch. Farhad Manjoo wrote at the time in the New York Times:

Apple has squandered its once-commanding lead in hardware and software design. Though the new iPhones include several new features, including water resistance and upgraded cameras, they look pretty much the same as the old ones. The new Apple Watch does too. And as competitors have borrowed and even begun to surpass Apple’s best designs, what was iconic about the company’s phones, computers, tablets and other products has come to seem generic…

I quoted Manjoo’s piece at the time and went on to explain why I thought that year’s keynote was more meaningful than it seemed, particularly because of the AirPods introduction:

What is most intriguing, though, is that “truly wireless future” Ive talked about. What happens if we presume that the same sort of advancement that led from Touch ID to Apple Pay will apply to the AirPods? Remember, one of the devices that pairs with AirPods is the Apple Watch, which received its own update, including GPS. The GPS addition was part of a heavy focus on health-and-fitness, but it is also another step down the road towards a Watch that has its own cellular connection, and when that future arrives the iPhone will quite suddenly shift from indispensable to optional. Simply strap on your Watch, put in your AirPods, and, thanks to Siri, you have everything you need.

That future is here, although the edges are still rough (particularly Siri, which was a major focus of that article); Apple’s financial results have certainly benefited. Over the last three years the company’s “Wearables, Home and Accessories” category, which is dominated by the Apple Watch and AirPods, has nearly doubled from $11.8 billion on a trailing twelve-month (TTM) basis1 to $22.2 billion over the last twelve months. In other words, according to the metric that all businesses are ultimately measured on, that 2016 keynote and the future it pointed to were very innovative indeed.

Apple’s Services Narrative

Wearables have not been Apple’s only growth area: over the same three-year span Services revenue has increased by almost the exact same rate — 89% versus 88% — from $23.1 billion TTM to $43.8 billion TTM. At the same time, it feels a bit icky to call that innovation, particularly given the anticompetitive nature of the App Store.

That’s not totally fair of course: the App Store was one of the most innovative things that Apple ever created from a product perspective; that the company has positioned itself to profit from that innovation indefinitely is innovative in its own right, at least if you go back to measuring via revenue and profits.

Still, the idea of Apple being a Services company is one that has long been hard to grok. When the company first started pushing the “Services Narrative” I declared that Apple is not a Services Company:

Services (horizontal) and hardware (vertical) companies have very different strategic priorities: the former ought to maximize their addressable market (by, say, making a cheaper iPhone), while the latter ought to maximize their differentiation. And, Cook’s answer made clear what Apple’s focus remains.

That answer was about continuing Apple’s pricing approach, which at that time was $649+ for new iPhones, with old iPhones discounted by $100 for every year they were on the market, and Cook’s specific words were “I don’t see us deviating from that approach.”

In fact, Apple did deviate, but in the opposite direction: in 2017 the company launched the $999+ iPhone X at the high end and bumped the price of the now mid-tier iPhone 8 to $699+. I wrote at the time:

The iPhone X sells to two of the markets I identified above:

  • Customers who want the best possible phone
  • Customers who want the prestige of owning the highest-status phone on the market

Note that both of these markets are relatively price-insensitive; to that end, $999 (or, more realistically, $1149 for the 256 GB model), isn’t really an obstacle. For the latter market, it’s arguably a positive.

What this strategy was absolutely not about was expanding the addressable market for Services. Apple was definitely not a Services company when it came to their strategic direction (even if, as I conceded in 2017, it was increasingly fair to evaluate the financial results in that way).

The iPhone’s Price Cut

This leads to what is in my mind the biggest news from yesterday’s event: Apple cut prices.

It was easy to miss, given that the iPhone 11 Pro, the successor to the iPhone X and then XS, hasn’t changed in price: it still starts at $999 ($1,099 for the larger model), and tops out at $1,449; if you want the best you are going to pay for it.

Perhaps the most interesting aside in the keynote, though, is that for the first time a majority of Apple’s customers weren’t willing to pay for the best. Tim Cook said:

Last year we launched three incredible iPhones. The iPhone XR became the most popular iPhone and the most popular smartphone in the world. We also launched the iPhone XS and iPhone XS Max, the most advanced iPhones we have ever created.

In a vacuum there is nothing surprising about this. The iPhone XR was an extremely capable phone, with the same industrial design, the same Face ID, and the same processor as the iPhone XS; the primary differences were an in-between size, one less camera, and an LCD screen instead of OLED. That doesn’t seem like much of a sacrifice for a savings of $250.

And yet, even while I said Apple’s strategy “bordered on over-confidence”, I still fully expected the iPhone XS to be the best-selling phone like the iPhone X before it; that is how committed Apple’s customers have been to buying the flagship iPhone. Even Apple, though, can’t escape the gravitational pull of “good enough” — which is why the price cuts, which happened further down the line, were so important.

There are two ways to see Apple’s price cuts. First, by iPhone model:

            Launch   1 year old   2 years old
iPhone 7    $649     $549         $449
iPhone 8    $699     $599         $449
iPhone XR   $749     $599
iPhone 11   $699

Secondly by year:

      Flagship   Mid-tier   1 year old   2 years old
2016  $649                  $549         $449
2017  $999       $699       $549         $449
2018  $999       $749       $599         $449
2019  $999       $699       $599         $449

In the second table you can see how Apple in 2017 not only raised prices dramatically on its flagship models, but also on the mid-tier model relative to previous flagships. This was important because it was these mid-tier models that replaced previous flagships in Apple’s usual “sell the old flagship for $100 less per year” approach. That meant that 2017’s price hike filtered through to 2018’s 1-year-old model, which increased from $549 to $599.

That means that this year actually saw three price cuts:

  • First, the iPhone 11 — this year’s mid-tier model — costs $50 less than the iPhone XR it is replacing.
  • Second, the iPhone XR’s price is being cut by $150 a year after launch, not $100 as Apple has previously done.
  • Third, the iPhone 8’s price is also being cut by $150 two years after launch, not $100 as Apple has previously done.
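All three cuts fall directly out of the tables above; as a quick sanity check, here is a sketch (prices in dollars, taken from the by-model table):

```python
# Each model's price at launch, at one year old, and at two years old,
# from the by-model table above (USD).
history = {
    "iPhone 8":  [699, 599, 449],
    "iPhone XR": [749, 599],
    "iPhone 11": [699],
}

# Year-over-year cut within each model's own lifetime; note the $150 cuts
# that break Apple's usual $100-per-year cadence.
cuts = {m: [a - b for a, b in zip(p, p[1:])] for m, p in history.items()}
print(cuts)  # {'iPhone 8': [100, 150], 'iPhone XR': [150], 'iPhone 11': []}

# And the first cut: the iPhone 11 launched below its predecessor's price.
print(history["iPhone XR"][0] - history["iPhone 11"][0])  # 50
```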

To be fair, this doesn’t necessarily mean the line looks much different today than it did yesterday: the only price point that is different is the iPhone 11 relative to the XR. That, though, is because it will take time for those previous price hikes to work their way out of the system, presuming Apple wants to stay on this path in the future.

They should. The success of the iPhone XR strongly suggests that there is more elasticity in the iPhone market than ever before. Apple also cut prices in China earlier this year with great success; I wrote after Apple’s FY2019 Q2 earnings:

The available evidence strongly suggests that iPhone demand in China is very elastic: if the iPhone is cheaper, Apple sells more; if it is more expensive, Apple sells less. This is, of course, unsurprising, at least for a commodity, and right there is Apple’s issue in China: the iPhone is simply less differentiated in China than it is elsewhere, leaving it more sensitive to factors like new designs and price than it is elsewhere.

As I note in that excerpt, China is unique, but the commodity argument is a variant of the “good-enough” argument I made above: while Apple doesn’t necessarily need to worry about iPhone customers outside of China switching to Android, they are very much competing with the iPhones people already have, and, as the XR demonstrated, their own new, cheaper phones.

That’s ok, though, and the final step in Apple truly becoming a Services company, not just in its financial results but also in its strategic thinking. More phones sold, no matter their price point, means more Services revenue in the long run (and Wearables revenue too).

Apple’s Services Announcement

Apple’s two service-related announcements are also good reasons to pursue this strategy. Perhaps the most compelling from a financial perspective is Apple Arcade. For $4.99/month a family gets access to a collection of games featured on their own tab in the App Store.

What makes this compelling from Apple’s perspective is that the company is paying a fixed amount for those games overall, which means that once the company covers the costs of those games, every incremental subscription is pure profit. Contrast this to something like Apple Music, where costs scale in line with revenue; no wonder the service is getting such prime real estate — and no wonder Apple suddenly seems interested in selling more iPhones, even if they earn less revenue up-front.
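To make the fixed-cost dynamic concrete, here is a hypothetical sketch; the content-cost figure is invented for illustration, since Apple has not disclosed what it pays Arcade developers:

```python
# Fixed-content-cost subscription economics: the content bill does not grow
# with subscribers, so every subscription past break-even is nearly pure
# margin. CONTENT_COST below is a made-up number, not Apple's actual spend.
PRICE = 4.99                  # Apple Arcade's monthly price
CONTENT_COST = 25_000_000     # hypothetical fixed monthly content bill

def monthly_profit(subscribers):
    """Profit once the (fixed) content bill is paid; no per-subscriber cost."""
    return subscribers * PRICE - CONTENT_COST

break_even = CONTENT_COST / PRICE
print(round(break_even))  # 5010020 (roughly five million subscribers)
```

Compare a service like Apple Music, where per-subscriber royalties mean margin stays roughly constant no matter how large the subscriber base grows.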

Similar dynamics apply to Apple TV+: once content costs are covered, incremental customers are pure profit. That noted, I’m not convinced that Apple TV+’s ultimate purpose is to be a profit driver by itself; I explained after Apple’s services event earlier this year:

To be very clear about my analysis of Apple TV+, I don’t think it is a Netflix competitor. I see it as a customer acquisition cost for the Apple TV app; it is Apple TV Channels that will make the real money, and this is not an unreasonable expectation. Roku’s entire business is predicated on the same model; the hardware is basically sold at cost, while the “platform” last year had $417 million in revenue and $296 million in profit, which equates to a tidy 71% gross margin.

Apple TV Channels is a means to buy subscriptions to other streaming services, which makes a lot of money for Roku and Amazon in particular; Apple TV+ content is a reason to make Apple TV the default interface for video leading to more subscriptions via Apple TV Channels.2 This view also explains why Apple is going to bundle a year of Apple TV+ with all new Apple device purchases (which is also very much in line with the idea of Apple giving up short-term revenue on its products — or incurring contra-revenue in this case — for long-term subscription revenue).

iPhone as a Service

It does feel like there is one more shoe yet to drop when it comes to Apple’s strategic shift. The fact that Apple is bundling a for-pay service (Apple TV+) with a product purchase is interesting, but what if Apple started including products with paid subscriptions?

That may be closer than it seems. It seemed strange that yesterday’s keynote included an Apple Retail update at the very end, but I think this slide explained why:

iPhone monthly pricing

Not only can you get a new iPhone for less if you trade in your old one, you can also pay for it on a monthly basis (this applies to phones without a trade-in as well). So, in the case of this slide, you can get an iPhone 11 and Apple TV+ for $17/month.

Apple also adjusted their AppleCare+ terms yesterday: now you can subscribe monthly and AppleCare+ will carry on until you cancel, just as other Apple services like Apple Music or Apple Arcade do. The company already has the iPhone Upgrade Program, which bundles a yearly iPhone and AppleCare+, but this shift for AppleCare+ purchased on its own is another step towards assuming that Apple’s relationship with its customers will be a subscription-based one.

To that end, how long until there is a variant of the iPhone Upgrade Program that is simply an all-up Apple subscription? Pay one monthly fee, and get everything Apple has to offer. Indeed, nothing would show that Apple is a Services company more than making the iPhone itself a service, at least as far as the customer relationship goes. You might even say it is innovative.

  1. Apple’s product numbers are always best represented on a trailing twelve-month basis given the huge amount of seasonality in their revenue
  2. I also believe this is now the strategic rationale behind Amazon Prime Video

What Is a Tech Company?

At first glance, WeWork and Peloton, which both released their S-1s in recent weeks, don’t have much in common: one company rents empty buildings and converts them into office space, and the other sells home fitness equipment and streaming classes. Both, though, have prompted the same question: is this a tech company?

Of course, it is fair to ask, “What isn’t a tech company?” Surely that is the endpoint of software eating the world; I think, though, to classify a company as a tech company because it utilizes software is just as unhelpful today as it would have been decades ago.

IBM and Tech-Centered Ecosystems

Fifty years ago, what is a tech company was an easy question to answer: IBM was the tech company, and everybody else was IBM’s customers. That may be a slight exaggeration, but not by much: IBM built the hardware (at that time the System/360), wrote the software, including the operating system and applications, and provided services, including training, ongoing maintenance, and custom line-of-business software.

All kinds of industries benefited from IBM’s technology, including financial services, large manufacturers, retailers, etc., and, of course, the military. Functions like accounting, resource management, and record-keeping automated and centralized activities that used to be done by hand, dramatically increasing the efficiency of existing activities and making new kinds of activities possible.

Increased efficiency and new business opportunities, though, didn’t make J.P. Morgan or General Electric or Sears tech companies. Technology simply became one piece of a greater whole. Yes, it was essential, but that essentialness exposed technology’s banality: companies were only differentiated to the extent they did not use computers, and then to the downside.

IBM, though, was different: every part of the company was about technology — indeed, IBM was an entire ecosystem unto itself: hardware, software, and services, all tied together with a subscription payment model strikingly similar to today’s dominant software-as-a-service approach. In short, being a tech company meant being IBM, which meant creating and participating in an ecosystem built around technology.

Venture Capital and Zero Marginal Costs

The story of IBM handing Microsoft the contract for the PC operating system and, by extension, the dominant position in computing for the next fifteen years, is a well-known one. The context for that decision, though, is best seen in the very different business model Microsoft pursued for its software.

What made subscriptions work for IBM was that the mainframe maker was offering the entire technological stack, and thus had reason to be in direct ongoing contact with its customers. In 1968, though, in an effort to escape an antitrust lawsuit from the federal government, IBM unbundled their hardware, software, and services. This created a new market for software, which was sold on a somewhat ad hoc basis; at the time software didn’t even have copyright protection.

Then, in 1980, Congress added “computer program” to the definitions in U.S. copyright law, and software licensing was born: now companies could maintain legal ownership of software and grant an effectively infinite number of licenses to individuals or corporations to use that software. Thus it was that Microsoft could charge for every copy of Windows or Visual Basic without needing to sell or service the underlying hardware it ran on.

This highlighted another critical factor that makes tech companies unique: the zero marginal cost nature of software. To be sure, this wasn’t a new concept: Silicon Valley received its name because silicon-based chips have similar characteristics; there are massive up-front costs to develop and build a working chip, but once built additional chips can be manufactured for basically nothing. It was this economic reality that gave rise to venture capital, which is about providing money ahead of a viable product for the chance at effectively infinite returns should the product and associated company be successful.

Indeed, this is why software companies have traditionally been so concentrated in Silicon Valley, and not, say, upstate New York, where IBM was located. William Shockley, one of the inventors of the transistor at Bell Labs, was originally from Palo Alto and wanted to take care of his ailing mother even as he was starting his own semiconductor company; eight of his researchers, known as the “traitorous eight”, would flee his tyrannical management to form Fairchild Semiconductor, the employees of which would go on to start over 65 new companies, including Intel.

It was Intel that set the model for venture capital in Silicon Valley, as Arthur Rock put in $10,000 of his own money and convinced his contacts to add an additional $2.5 million to get Intel off the ground; the company would IPO three years later for $8.225 million. Today the timelines are certainly longer but the idea is the same: raise money to start a company predicated on zero marginal costs, and, if you are successful, exit with an excellent return for shareholders. In other words, it is the venture capitalists that ensured software followed silicon, not the inherent nature of silicon itself.

To summarize: venture capitalists fund tech companies, which are characterized by a zero marginal cost component that allows for uncapped returns on investment.
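That economic asymmetry can be illustrated with a toy profit model (all figures are hypothetical, purely for illustration): once fixed costs are covered, a business with near-zero marginal costs keeps nearly every incremental dollar, while a physical-goods business sees most of each sale consumed by per-unit costs.

```python
def profit(units: int, price: float, fixed_cost: float, marginal_cost: float) -> float:
    """Simple profit model: per-unit contribution minus fixed costs."""
    return units * (price - marginal_cost) - fixed_cost

# A physical-goods business: high marginal cost caps the upside.
hardware = profit(units=1_000_000, price=100, fixed_cost=50_000_000, marginal_cost=90)

# A software business: near-zero marginal cost means returns scale with units.
software = profit(units=1_000_000, price=100, fixed_cost=50_000_000, marginal_cost=0.5)
```

At identical volume and price, the hypothetical hardware business is still $40 million underwater while the software business has cleared nearly $50 million; this is the shape of return that makes venture-style bets rational.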

Microsoft and Subscription Pricing

Probably the most overlooked and underrated era of tech history was the on-premises era dominated by software companies like Microsoft, Oracle, and SAP, and hardware from not only IBM but also Sun, HP, and later Dell. This era was characterized by a mix of up-front revenue for the original installation of hardware or software, plus ongoing services revenue. This model is hardly unique to software: lots of large machinery is sold on a similar basis.

The zero marginal cost nature of software, however, made it possible to cut out the up-front cost completely; Microsoft started pushing this model heavily to large enterprises in 2001 with version 6 of its Enterprise Agreement. Instead of paying for perpetual licenses for software that inevitably needed to be upgraded in a few years, enterprises could pay a monthly fee; this had the advantage of not only operationalizing former capital costs but also increasing flexibility. No longer would enterprises have to negotiate expensive “true-up” agreements if they grew; they were also protected on the downside if their workforce shrank.

Microsoft, meanwhile, was able to convert its up-front software investment from a one-time payment to regular payments over time that were not only perpetual in nature (because to stop payment was to stop using the software, which wasn’t a viable option for most of Microsoft’s customers) but also more closely matched Microsoft’s own development schedule.

This wasn’t a new idea, as IBM had shown several decades earlier; moreover, it is worth pointing out that the entire function of depreciation when it comes to accounting is to properly attribute capital expenditures across the time periods those expenditures are leveraged. What made Microsoft’s approach unique, though, was that over time the product enterprises were paying for was improving. This is in direct contrast to a physical asset that deteriorates, or a traditional software support contract that is limited to a specific version.

Today this is the expectation for software generally: whatever you pay for today will be better in the future, not worse, and tech companies are increasingly organized around this idea of both constant improvements and constant revenue streams.
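The accounting point can be made concrete with a hedged sketch (the dollar amounts are hypothetical): straight-line depreciation spreads a perpetual license’s up-front cost across the years it is used, which on the income statement looks much like a subscription of the same annual size — except the subscription avoids the up-front cash outlay and keeps improving.

```python
def straight_line_depreciation(capex: float, useful_life_years: int) -> list[float]:
    """Spread an up-front capital cost evenly across its useful life."""
    return [capex / useful_life_years] * useful_life_years

# Hypothetical: a $300k perpetual license used for three years...
license_expense = straight_line_depreciation(300_000, 3)

# ...produces the same annual expense as a $100k/year subscription,
# but the subscription operationalizes the cost and never goes stale.
subscription_expense = [100_000.0] * 3
```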

Salesforce and Cloud Computing

Still, Microsoft products had to actually be installed in the first place: much of the benefit of Enterprise Agreements accrued to companies that had already gone through that pain.

Salesforce, founded in 1999, sought to extend that same convenience to all companies: instead of having to go through long and painful installation processes that were inevitably buggy and over-budget, customers could simply access Salesforce on Salesforce’s own servers. The company branded it “No Software”, because software installations had such negative connotations, but in fact this was the ultimate expression of software. Now, instead of one copy of software replicated endlessly and distributed anywhere, Salesforce would simply run one piece of software and give anyone anywhere access to it. This did increase fixed costs — running servers and paying for bandwidth is expensive — but the increase was more than made up for by the decrease in upfront costs for customers.

This also increased the importance of scale for tech companies: now not only did the cost of software development need to be spread out over the greatest number of customers, so did the ongoing costs of building and running large centralized servers (of course Amazon operationalized these costs as well with AWS). That, though, became another characteristic of tech companies: scale not only pays the bills, it actually improves the service as large expenditures are leveraged across that many more customers.

Atlassian and Zero Transaction Costs

Salesforce, though, was still selling to large corporations. What has changed over the last ten years in particular is the rise of freemium and self-serve, but the origins of this model go back a decade earlier.

The early 2000s were a dire time in tech: the bubble had burst, and it was nearly impossible to raise money in Silicon Valley, much less anywhere else in the world — including Sydney, Australia. So, in 2001, when Scott Farquhar and Mike Cannon-Brookes, whose only goals were to make $35,000 a year and not have to wear suits, couldn’t afford a sales force for Jira, the collaboration software they had developed, they simply put it on the web for anyone to trial, with a payment form to unlock the full program.

This wasn’t necessarily new: “shareware” and “trialware” had existed since the 1980s, and were particularly popular for games, but Atlassian, thanks to being in the right place (selling Agile project management software) at the right time (the explosion of Agile as a development methodology) was using essentially the same model to sell into enterprise.

What made this possible was the combination of zero marginal costs (which meant that distributing software didn’t cost anything) and zero transaction costs: thanks to the web and rudimentary payment processors it was possible for Atlassian to sell to companies without ever talking to them. Indeed, for many years the only sales people Atlassian had were those tasked with reducing churn: all in-bound sales were self-serve.

This model, when combined with Salesforce’s cloud-based model (which Atlassian eventually moved to), is the foundation of today’s SaaS companies: customers can try out software with nothing more than an email address, and pay for it with nothing more than a credit card. This too is a characteristic of tech companies: free-to-try, and easy-to-buy, by anyone, from anywhere.

The Question of the Real World

So what about companies like WeWork and Peloton that interact with the real world? Note the centrality of software in all of these characteristics:

  • Software creates ecosystems.
  • Software has zero marginal costs.
  • Software improves over time.
  • Software offers infinite leverage.
  • Software enables zero transaction costs.

The question of whether companies are tech companies, then, depends on how much of their business is governed by software’s unique characteristics, and how much is limited by real world factors. Consider Netflix, a company that both competes with traditional television and movie companies yet is also considered a tech company:

  • There is no real software-created ecosystem.
  • Netflix shows are delivered at zero marginal costs without the need to pay distributors (although bandwidth bills are significant).
  • Netflix’s product improves over time.
  • Netflix is able to serve the entire world because of software, giving them far more leverage than much of their competition.
  • Netflix can transact with anyone with a self-serve model.

Netflix checks four of the five boxes.

Airbnb, which has yet to go public, is also often thought of as a tech company, even though they deal with lodging:

  • There is a software-created ecosystem of hosts and renters.
  • While Airbnb’s accounting suggests that its revenue has minimal marginal costs, a holistic view of Airbnb’s market shows that the company effectively pays hosts 86 percent of total revenue: the price of an “asset-lite” model is that real world costs dominate in terms of the overall transaction.
  • Airbnb’s platform improves over time.
  • Airbnb is able to serve the entire world, giving it maximum leverage.
  • Airbnb can transact with anyone with a self-serve model.

Uber, meanwhile, has long been mentioned in the same breath as Airbnb, and for good reason: it checks most of the same boxes:

  • There is a software-created ecosystem of drivers and riders.
  • Like Airbnb, Uber reports its revenue as if it has low marginal costs, but a holistic view of rides shows that the company pays drivers around 80 percent of total revenue; this isn’t a world of zero marginal costs.
  • Uber’s platform improves over time.
  • Uber is able to serve the entire world, giving it maximum leverage.
  • Uber can transact with anyone with a self-serve model.

A major question about Uber concerns transaction costs: bringing and keeping drivers on the platform is very expensive. This doesn’t mean that Uber isn’t a tech company, but it does underscore the degree to which its model is dependent on factors that don’t have zero costs attached to them.

Now for the two companies with which I opened the article. First, WeWork (which I wrote about here and here):

  • WeWork claims it has a software-created ecosystem that connects companies and employees across locations, but it is difficult to find evidence that this is a driving factor for WeWork’s business.
  • WeWork pays a huge percentage of its revenue in rent.
  • WeWork’s offering certainly has the potential to improve over time.
  • WeWork is limited by the number of locations it builds out.
  • WeWork requires a consultation for even a one-person rental, and relies heavily on brokers for larger businesses.

Frankly, it is hard to see how WeWork is a tech company in any way.

Finally Peloton (which I wrote about here):

  • Peloton does have social network-type qualities, as well as strong gamification.
  • While Peloton is available as just an app, the full experience requires a four-figure investment in a bike or treadmill; that, needless to say, is not a zero marginal cost offering. The service itself, though, is zero marginal cost.
  • Peloton’s product improves over time.
  • The size, weight, and installation requirements for Peloton’s hardware mean the company is limited to the United States and the just-added United Kingdom and Germany.
  • Peloton has a high-touch installation process.

Peloton is also iffy as far as these five factors go, but then again, so is Apple: software-differentiated hardware is in many respects its own category. And, there is one more definition that is worth highlighting.
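The five-factor checklist running through this section can be formalized as a small scoring sketch. The per-company assessments below are taken from the analysis above (Netflix explicitly checks four of five boxes; WeWork arguably checks only one); the `TechCompanyChecklist` structure and scoring are my own illustration, and the booleans for borderline cases are judgment calls.

```python
from dataclasses import dataclass

@dataclass
class TechCompanyChecklist:
    """The five software characteristics discussed above."""
    creates_ecosystem: bool
    zero_marginal_costs: bool
    improves_over_time: bool
    infinite_leverage: bool
    zero_transaction_costs: bool

    def score(self) -> int:
        # Count how many of the five boxes a company checks.
        return sum([self.creates_ecosystem, self.zero_marginal_costs,
                    self.improves_over_time, self.infinite_leverage,
                    self.zero_transaction_costs])

# Per the text: Netflix lacks an ecosystem but checks the other four boxes.
netflix = TechCompanyChecklist(False, True, True, True, True)

# Per the text: only "potential to improve over time" plausibly applies to WeWork.
wework = TechCompanyChecklist(False, False, True, False, False)
```

A score is not a verdict — as the Peloton and Apple cases show, a company can miss several boxes and still be a tech company by the disruption definition that follows — but it makes the framework’s comparisons explicit.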

Peloton and Disruption

The term “technology” is an old one, far older than Silicon Valley. It means anything that helps us produce things more efficiently, and it is what drives human progress. In that respect, all successful companies, at least in a free market, are tech companies: they do something more efficiently than anyone else, on whatever product vector matters to their customers.

To that end, technology is best understood with qualifiers, and one of the most useful sets comes from Clayton Christensen and The Innovator’s Dilemma:

Most new technologies foster improved product performance. I call these sustaining technologies. Some sustaining technologies can be discontinuous or radical in character, while others are of an incremental nature. What all sustaining technologies have in common is that they improve the performance of established products, along the dimensions of performance that mainstream customers in major markets have historically valued. Most technological advances in a given industry are sustaining in character…

Disruptive technologies bring to a market a very different value proposition than had been available previously. Generally, disruptive technologies underperform established products in mainstream markets. But they have other features that a few fringe (and generally new) customers value. Products based on disruptive technologies are typically cheaper, simpler, smaller, and, frequently, more convenient to use.

Sustaining technologies make existing firms better, but they don’t change the competitive landscape. By extension, if adopting technology simply strengthens your current business, as opposed to making it uniquely possible, you are not a tech company. That, for example, is why IBM’s customers were no more tech companies than are users of the most modern SaaS applications.

Disruptive technologies, though, make something possible that wasn’t previously, or at a price point that wasn’t viable. This is where Peloton earns the “tech company” label from me: compared to spin classes at a dedicated gym, Peloton is cheap, and it scales far better. Sure, looking at a screen isn’t as good as being in the same room with an instructor and other cyclists, but it is massively more convenient and opens the market to a completely new customer base. Moreover, it scales in a way a gym never could: classes are held once and available forever on-demand; the company has not only digitized space but also time, thanks to technology. This is a tech company.

This definition also applies to Netflix, Airbnb, and Uber; all digitized something essential to their competitors, whether it be time or trust. I’m not sure, though, that it applies to WeWork: to the extent the company is unique it seems to rely primarily on unprecedented access to capital. That may be enough, but it does not mean WeWork is a tech company.

And, on the flipside, being a tech company does not guarantee success: the curse of tech companies is that while they generate massive value, capturing that value is extremely difficult. Here Peloton’s hardware is, like Apple’s, a significant advantage.

On the other hand, asset-lite models, like ride-sharing, are very attractive, but can Uber capture sufficient value to make a profit? What will Airbnb’s numbers look like when it finally IPOs? Indeed, the primary reason Peloton’s numbers look good is because they are selling physical products, differentiated by software, at a massive profit!

Still, definitions are helpful, even if they are not predictive. Software is used by all companies, but it completely transforms tech companies and should reshape consideration of their long-term upside — and downside.

I wrote a follow-up to this article in this Daily Update.

Privacy Fundamentalism

Farhad Manjoo, in the New York Times, ran an experiment on themself:

Earlier this year, an editor working on The Times’s Privacy Project asked me whether I’d be interested in having all my digital activity tracked, examined in meticulous detail and then published — you know, for journalism…I had to install a version of the Firefox web browser that was created by privacy researchers to monitor how websites track users’ data. For several days this spring, I lived my life through this Invasive Firefox, which logged every site I visited, all the advertising tracking servers that were watching my surfing and all the data they obtained. Then I uploaded the data to my colleagues at The Times, who reconstructed my web sessions into the gloriously invasive picture of my digital life you see here. (The project brought us all very close; among other things, they could see my physical location and my passwords, which I’ve since changed.)

What did we find? The big story is as you’d expect: that everything you do online is logged in obscene detail, that you have no privacy. And yet, even expecting this, I was bowled over by the scale and detail of the tracking; even for short stints on the web, when I logged into Invasive Firefox just to check facts and catch up on the news, the amount of information collected about my endeavors was staggering.

Here is a shrunk-down version of the graphic that resulted (click it to see the whole thing on the New York Times site):

Farhad Manjoo's online tracking

Notably — at least from my perspective! — Stratechery is on the graphic:

Stratechery's trackers

Wow, it sure looks like I am up to some devious behavior! I guess it is all of the advertising trackers on my site, which doesn’t have any advertising…or perhaps Manjoo, as so often seems to be the case with privacy scare pieces, has overstated their case by a massive degree.

Stratechery “Trackers”

The narrow problem with Manjoo’s piece is a definitional one. This is what it says at the top of the graphic:

What the Times considers a tracker

This strikes me as an overly broad definition of tracking; as best I can tell, Manjoo and their team counted every single script, image, or cookie that was loaded from a 3rd-party domain, no matter its function.

Consider Stratechery: the page in question, given the timeframe of Manjoo’s research and the apparent link from Techmeme, is probably The First Post-iPhone Keynote. On that page I count 31 scripts, images, fonts, and XMLHttpRequests (XHR for short, which can be used to set or update cookies) that were loaded from a 3rd-party domain.1 The sources are as follows (in decreasing number by 3rd-party service):

  • Stripe (11 images, 5 JavaScript files, 2 XHRs)
  • Typekit (1 image, 1 JavaScript file, 5 fonts)
  • Cloudfront (3 JavaScript files)
  • New Relic (2 JavaScript files)
  • Google (1 image, 1 JavaScript file)
  • WordPress.com (1 JavaScript file)

You may notice that, in contrast to the graphic, there is nothing from Amazon specifically. There is Cloudfront, which is a content delivery service offered by Amazon Web Services, but suggesting that Stratechery includes trackers from Amazon because I rely on AWS is ridiculous. In the case of Cloudfront, one JavaScript file is from Memberful, my subscription management service, and the other two are public JavaScript libraries used on countless sites on the Internet (jQuery and Pmrpc). As for the rest:

  • Stripe is the payment processor for Stratechery memberships.
  • Typekit is Adobe’s web-font service (Stratechery uses Freight Sans Pro).
  • New Relic is an analytics package used to diagnose website issues and improve performance.
  • Google is Google Analytics, which I use for counting page views and conversions to free and paid subscribers (this last bit is mostly theoretical; Memberful integrates with Google Analytics, but I haven’t run any campaigns — Stratechery relies on word-of-mouth).
  • WordPress.com is for the Jetpack service from Automattic, which I use for site monitoring, security, and backups, as well as the recommended article carousel under each article.

The only service here remotely connected to advertising is Google Analytics, but I have chosen to not share that information with Google (there is no need because I don’t need access to Google’s advertising tools); the truth is that all of these “trackers” make Stratechery possible.2
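The counting exercise above can be sketched with Python’s standard library. This is a deliberately simplified illustration of my methodology, not the Times’ tool: it only inspects `src`/`href` attributes in static HTML with a naive domain-suffix check, whereas the actual count also covered fonts, XHRs, and cookies observed at runtime.

```python
from collections import Counter
from html.parser import HTMLParser
from urllib.parse import urlparse

class ThirdPartyCounter(HTMLParser):
    """Count resources loaded from domains other than the first party."""

    def __init__(self, first_party: str):
        super().__init__()
        self.first_party = first_party
        self.counts = Counter()

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name in ("src", "href") and value and value.startswith("http"):
                domain = urlparse(value).netloc
                # Naive suffix check: treats subdomains of the first party as first-party.
                if domain and not domain.endswith(self.first_party):
                    self.counts[domain] += 1

# Usage with a hypothetical snippet of a page:
parser = ThirdPartyCounter("stratechery.com")
parser.feed('<script src="https://js.stripe.com/v3/"></script>'
            '<img src="https://stratechery.com/logo.png">')
```

Run against a real page, every entry in `parser.counts` would show up as a “tracker” dot under the Times’ definition — which is exactly the problem: the count says nothing about what each resource actually does.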

The Internet’s Nature

This narrow critique of Manjoo’s article — wrongly characterizing multiple resources as “trackers” — gets at a broader philosophical shortcoming: technology can be used for both good things and bad things, but in the haste to highlight the bad, it is easy to be oblivious to the good. Manjoo, for example, works for the New York Times, which makes most of its revenue from subscriptions;3 given that, I’m going to assume they do not object to my including 3rd-party resources on Stratechery that support my own subscription business?

This applies to every part of my stack: because information is so easily spread across the Internet via infrastructure maintained by countless companies for their own positive economic outcome, I can write this Article from my home and you can read it in yours. That this isn’t even surprising is a testament to the degree to which we take the Internet for granted: any site in the world is accessible by anyone from anywhere, because the Internet makes moving data free and easy.

Indeed, that is why my critique of Manjoo’s article specifically and the ongoing privacy hysteria broadly is not simply about definitions or philosophy. It’s about fundamental assumptions. The default state of the Internet is the endless propagation and collection of data: you have to do work to not collect data on one hand, or leave a data trail on the other. This is the exact opposite of how things work in the physical world: there data collection is an explicit positive action, and anonymity the default.

That is not to say that there shouldn’t be a debate about this data collection, and how it is used. Even that latter question, though, requires an appreciation of just how different the digital world is from the analog one. Consider one of the most fearsome surveillance entities of all time, the East German Stasi. From Wired:

The Stasi's files

The German Democratic Republic dissolved in 1990 with the fall of communism, but the documents assembled by the Ministry for State Security, or Stasi, remain. This massive archive includes 69 miles of shelved documents, 1.8 million images, and 30,300 video and audio recordings housed in 13 offices throughout Germany. Canadian photographer Adrian Fish got a rare peek at the archives and meeting rooms of the Berlin office for his series Deutsche Demokratische Republik: The Stasi Archives. “The archives look very banal, just like a bunch of boring file holders with a bunch of paper,” he says. “But what they contain are the everyday results of a people being spied upon.”

That the files are paper makes them terrifying, because anyone can read them individually; that they are paper, though, also limits their reach. Contrast this to Google or Facebook: that they are digital means they reach everywhere; that, though, means they are read in aggregate, and stored in a way that is only decipherable by machines.

To be sure, a Stasi compare and contrast is hardly doing Google or Facebook any favors in this debate: the popular imagination about the danger this data collection poses, though, too often seems derived from the former, instead of the fundamentally different assumptions of the latter. This, by extension, leads to privacy demands that exacerbate some of the Internet’s worst problems:

  • Facebook’s crackdown on API access after Cambridge Analytica has severely hampered research into the effects of social media, the spread of disinformation, etc.
  • Privacy legislation like GDPR has strengthened incumbents like Facebook and Google, and made it more difficult for challengers to succeed.
  • Criminal networks from terrorism to child abuse can flourish on social networks, but while content can be stamped out, private companies, particularly domestically, are often limited in how proactively they can go to law enforcement; this is exacerbated once encryption enters the picture.

Again, this is not to say that privacy isn’t important: it is one of many things that are important. That, though, means that online privacy in particular should not be the end-all be-all but rather one part of a difficult set of trade-offs that need to be made when it comes to dealing with this new reality that is the Internet. Being an absolutist will lead to bad policy (although encryption may be the exception that proves the rule).

Apple’s Fundamentalism

This doesn’t just apply to governments: consider Apple, a company which is staking its reputation on privacy. Last week the WebKit team released a new Tracking Prevention Policy that is taking clear aim at 3rd-party trackers:

We have implemented or intend to implement technical protections in WebKit to prevent all tracking practices included in this policy. If we discover additional tracking techniques, we may expand this policy to include the new techniques and we may implement technical measures to prevent those techniques.

Of particular interest to Stratechery — and, per the opening of this article, Manjoo — is this definition and declaration:

Cross-site tracking is tracking across multiple first party websites; tracking between websites and apps; or the retention, use, or sharing of data from that activity with parties other than the first party on which it was collected.

[…]

WebKit will do its best to prevent all covert tracking, and all cross-site tracking (even when it’s not covert). These goals apply to all types of tracking listed above, as well as tracking techniques currently unknown to us.

In case you were wondering,4 yes, this will affect sites like Stratechery, and the WebKit team knows it (emphasis mine to highlight potential impacts on Stratechery):

There are practices on the web that we do not intend to disrupt, but which may be inadvertently affected because they rely on techniques that can also be used for tracking. We consider this to be unintended impact. These practices include:

  • Funding websites using targeted or personalized advertising (see Private Click Measurement below).
  • Measuring the effectiveness of advertising.
  • Federated login using a third-party login provider.
  • Single sign-on to multiple websites controlled by the same organization.
  • Embedded media that uses the user’s identity to respect their preferences.
  • “Like” buttons, federated comments, or other social widgets.
  • Fraud prevention.
  • Bot detection.
  • Improving the security of client authentication.
  • Analytics in the scope of a single website.
  • Audience measurement.

When faced with a tradeoff, we will typically prioritize user benefits over preserving current website practices. We believe that that is the role of a web browser, also known as the user agent.

Don’t worry, Stratechery is not going out of business (although there may be a fair bit of impact on the user experience, particularly around subscribing or logging in). It is disappointing, though, that the maker of one of the most important and unavoidable browser technologies in the world (WebKit is the only option on iOS) has decided that what is best for everyone is an absolutist approach that will ultimately improve the competitive position of massive first-party advertisers like Google and Facebook, even as it harms smaller sites that rely on 3rd-party providers for not just ads but all aspects of their business.

What makes this particularly striking is that it was only a month ago that Apple was revealed to be hiring contractors to listen to random Siri recordings; unlike Amazon (but like Google), Apple didn’t disclose that fact to users. Furthermore, unlike both Amazon and Google, Apple didn’t give users any way to see what recordings Apple had or delete them after-the-fact. Many commentators have seized on the irony of Apple having the worst privacy practices for voice recordings given their rhetoric around being a privacy champion, but I think the more interesting insight is twofold.

First, this was, in my estimation, a far worse privacy violation than the sort of online tracking the WebKit team is determined to stamp out, for the simple reason that the Siri violation crossed the line between the physical and digital world. As I noted above the digital world is inherently transparent when it comes to data; the physical world, though — particularly somewhere like your home — is inherently private.

Second, I do understand why Apple has humans listening to Siri recordings: anyone that has used Siri can appreciate that the service needs to accelerate its feedback loop and improve more quickly. What happens, though, when improving the product means invading privacy? Do you look for good trade-offs, like explicit consent and user control, or, fearing a fundamentalist attitude that declares privacy more important than anything, do you try to sneak a true privacy violation behind everyone’s back, like some sort of rebellious youth fleeing religion? Being an absolutist also leads to bad behavior, because after all, everyone is already a criminal.

Towards Trade-offs

The point of this article is not to argue that companies like Google and Facebook are in the right, and Apple in the wrong — or, for that matter, to argue my self-interest. The truth, as is so often the case, is somewhere in the middle, in the gray.5 To that end, I believe the privacy debate needs to be reset around these three assumptions:

  1. Accept that privacy online entails trade-offs; the corollary is that an absolutist approach to privacy is a surefire way to get policy wrong.
  2. Keep in mind that the widespread creation and spread of data is inherent to computers and the Internet, and that these qualities have positive as well as negative implications; be wary of what good ideas and positive outcomes are extinguished in the pursuit to stomp out the negative ones.
  3. Focus policy on the physical and digital divide. Our behavior online is one thing: we both benefit from the spread of data and should in turn be more wary of those implications. Making what is offline online is quite another.

This is where the Stasi example truly resonates: imagine all of those files, filled with all manner of physical movements and meetings and utterings, digitized and thus searchable, shareable, inescapable. That goes beyond a new medium lacking privacy from the get-go: it is taking privacy away from a world that previously had it. And yet the proliferation of cameras, speakers, location data, etc. goes on with a fraction of the criticism levied at big tech companies. Like too many fundamentalists, we are in danger of missing the point.

I wrote a follow-up to this article in this Daily Update.

  1. This matches the 31 dots in Manjoo’s graphic; I did not count HTML documents or CSS files []
  2. I do address these services and others in the Stratechery Privacy Policy []
  3. Let’s be charitable and ignore the fact that the most egregious trackers from Manjoo’s article — by far — are news sites, including nytimes.com []
  4. Or if you think I’m biased, although, for the record, I conceptualized this article before this policy was announced []
  5. And frankly, probably closer to Apple than the others, the last section notwithstanding []

A Framework for Moderation

On Sunday night, when Cloudflare CEO Matthew Prince announced in a blog post that the company was terminating service for 8chan, the response was nearly universal: Finally.

It was hard to disagree: it was on 8chan — which was created after complaints that the extremely lightly-moderated anonymous forum 4chan was too heavy-handed — that a suspected terrorist gunman posted a rant explaining his actions before killing 20 people in El Paso. This was the third such incident this year: the terrorist gunmen in Christchurch, New Zealand and Poway, California did the same; 8chan celebrated all of them.

To state the obvious, it is hard to think of a more reprehensible community than 8chan. And, as many were quick to point out, it was hardly the sort of site that Cloudflare wanted to be associated with as they prepared for a reported IPO. Which again raises the question: what took Cloudflare so long?

Moderation Questions

The question of when and why to moderate or ban has been an increasingly frequent one for tech companies, although the circumstances and content to be banned have often varied greatly. Some examples from the last several years:

  • Cloudflare dropping support for 8chan
  • Facebook banning Alex Jones
  • The U.S. Congress creating an exception to Section 230 of the Communications Decency Act for the stated purpose of targeting sex trafficking
  • The Trump administration removing ISPs from Title II classification
  • The European Union ruling that the “Right to be Forgotten” applied to Google

These may seem unrelated, but in fact all are questions about what should (or should not) be moderated, who should (or should not) moderate, when they should (or should not) moderate, where they should (or should not) moderate, and why. At the same time, each of these examples is clearly different, and those differences can help build a framework for companies to make decisions when similar questions arise in the future — including Cloudflare.

Content and Section 230

The first and most obvious question when it comes to content is whether or not it is legal. If it is illegal, the content should be removed.

And indeed it is: service providers remove illegal content as soon as they are made aware of it.

Note, though, that service providers are generally not required to actively search for illegal content, which gets into Section 230 of the Communications Decency Act, a law that is continuously misunderstood and/or misrepresented.1

To understand Section 230 you need to go back to 1991 and the court case Cubby v CompuServe. CompuServe hosted a number of forums; a member of one of those forums made allegedly defamatory remarks about a company named Cubby, Inc. Cubby sued CompuServe for defamation, but a federal court judge ruled that CompuServe was a mere “distributor” of the content, not its publisher. The judge noted:

The requirement that a distributor must have knowledge of the contents of a publication before liability can be imposed for distributing that publication is deeply rooted in the First Amendment…CompuServe has no more editorial control over such a publication than does a public library, book store, or newsstand, and it would be no more feasible for CompuServe to examine every publication it carries for potentially defamatory statements than it would be for any other distributor to do so.

Four years later, though, Stratton Oakmont, a securities investment banking firm, sued Prodigy for libel, in a case that seemed remarkably similar to Cubby v. CompuServe; this time, though, Prodigy lost. From the opinion:

The key distinction between CompuServe and Prodigy is two fold. First, Prodigy held itself out to the public and its members as controlling the content of its computer bulletin boards. Second, Prodigy implemented this control through its automatic software screening program, and the Guidelines which Board Leaders are required to enforce. By actively utilizing technology and manpower to delete notes from its computer bulletin boards on the basis of offensiveness and “bad taste”, for example, Prodigy is clearly making decisions as to content, and such decisions constitute editorial control…Based on the foregoing, this Court is compelled to conclude that for the purposes of Plaintiffs’ claims in this action, Prodigy is a publisher rather than a distributor.

In other words, the act of moderating any of the user-generated content on its forums made Prodigy liable for all of the user-generated content on its forums — in this case to the tune of $200 million. This left services that hosted user-generated content with only one option: zero moderation. That was the only way to be classified as a distributor with the associated shield from liability, and not as a publisher.

The point of Section 230, then, was to make moderation legally viable; this came via the “Good Samaritan” provision. From the statute:

(c) Protection for “Good Samaritan” blocking and screening of offensive material

(1) Treatment of publisher or speaker
No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
(2) Civil liability
No provider or user of an interactive computer service shall be held liable on account of—

(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or
(B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).

In short, Section 230 doesn’t shield platforms from the responsibility to moderate; it in fact makes moderation possible in the first place. Nor does Section 230 require neutrality: the entire reason it exists is that true neutrality — that is, zero moderation beyond what is illegal — was undesirable to Congress.

Keep in mind that Congress is extremely limited in what it can make illegal because of the First Amendment. Indeed, the vast majority of the Communications Decency Act was ruled unconstitutional a year after it was passed in a unanimous Supreme Court decision. This is how we have arrived at the uneasy space that Cloudflare and others occupy: it is the will of the democratically elected Congress that companies moderate content above-and-beyond what is illegal, but Congress cannot tell them exactly what content should be moderated.

The one tool that Congress does have is changing Section 230 itself; for example, the 2018 SESTA/FOSTA Act made platforms liable for any activity related to sex trafficking. In response, platforms removed all content remotely connected to sex work of any kind — Cloudflare, for example, dropped support for the Switter social media network for sex workers — in a way that likely caused more harm than good. This is the problem with using liability to police content: it is always in the interest of service providers to censor too much, because the downside of censoring too little is massive.

The Stack

If the question of what content should be moderated or banned is one left to the service providers themselves, it is worth considering exactly what service providers we are talking about.

At the top of the stack are the service providers that people publish to directly; this includes Facebook, YouTube, Reddit, 8chan and other social networks. These platforms have absolute discretion in their moderation policies, and rightly so. First, because of Section 230, they can moderate anything they want. Second, none of these platforms have a monopoly on online expression; someone who is banned from Facebook can publish on Twitter, or set up their own website. Third, these platforms, particularly those with algorithmic timelines or recommendation engines, have an obligation to moderate more aggressively because they are not simply distributors but also amplifiers.

Internet service providers (ISPs), on the other hand, have very different obligations. While ISPs are no longer covered under Title II of the Communications Act, which barred them from discriminating against data on the basis of its content, it is the expectation of consumers and generally the policy of ISPs not to block any data because of its content (although ISPs have agreed to block child pornography websites in the past).

It makes sense to think about these positions in the stack very differently: the top of the stack is about broadcasting — reaching as many people as possible — and while you may have the right to say anything you want, there is no right to be heard. Internet service providers, though, are about access — having the opportunity to speak or hear in the first place. In other words, the further down the stack, the more legality should be the sole criterion for moderation; the further up the stack, the more discretion and even responsibility there should be for content:

The position in the stack matters for moderation

Note the implications for Facebook and YouTube in particular: their moderation decisions should not be viewed in the context of free speech, but rather as discretionary decisions made by managers seeking to attract the broadest customer base; the appropriate regulatory response, if one is appropriate, should be to push for more competition so that those dissatisfied with Facebook or Google’s moderation policies can go elsewhere.

Cloudflare’s Decision

Cloudflare’s decision was more challenging for three reasons.

First, while Cloudflare is not an ISP, they are much more akin to infrastructure than they are to user-facing platforms. In the case of 8chan, Cloudflare provided a service that shielded the site from Distributed Denial-of-Service (DDoS) attacks; without a service like Cloudflare, 8chan would almost assuredly be taken offline by Internet vigilantes using botnets to launch such an attack. In other words, the question wasn’t whether or not 8chan was going to be promoted or have easy access to large social networks, but whether it would even exist at all.

To be perfectly clear, I would prefer that 8chan did not exist. At the same time, many of those arguing that 8chan should be erased from the Internet were insisting not too long ago that the U.S. needed to apply Title II regulation (i.e. net neutrality) to infrastructure companies to ensure they were not discriminating based on content. While Title II would not have applied to Cloudflare, it is worth keeping in mind that at some point or another nearly everyone reading this article has expressed concern about infrastructure companies making content decisions.

And rightly so! The difference between an infrastructure company and a customer-facing platform like Facebook is that the former is not accountable to end users in any way. Cloudflare CEO Matthew Prince made this point in an interview with Stratechery:

We get labeled as being free speech absolutists, but I think that has absolutely nothing to do with this case. There is a different area of the law that matters: in the U.S. it is the idea of due process, the Aristotelian idea is that of the rule of law. Those principles are set down in order to give governments legitimacy: transparency, consistency, accountability…if you go to Germany and say “The First Amendment” everyone rolls their eyes, but if you talk about the rule of law, everyone agrees with you…

It felt like people were acknowledging that the deeper you were in the stack the more problematic it was [to take down content], because you couldn’t be transparent, because you couldn’t be judged as to whether you’re consistent or not, because you weren’t fundamentally accountable. It became really difficult to make that determination.

Moreover, Cloudflare is an essential piece of the Facebook and YouTube competitive set: it is hard to argue that Facebook and YouTube should be able to moderate at will because people can go elsewhere, if “elsewhere” does not have the scale to functionally exist without services like Cloudflare’s.

Second, the nature of the medium means that all Internet companies have to be concerned about the precedent their actions in one country will set in different countries with different laws. One country’s terrorist is another country’s freedom fighter; a third country’s government acting according to the will of the people is a fourth’s tyrannically oppressing the minority. In this case, to drop support for 8chan — a site that was legal — is to admit that the delivery of Cloudflare’s services is up for negotiation.

Third, it is likely that at some point 8chan will come back, thanks to the help of a less scrupulous service, just as the Daily Stormer did when Cloudflare kicked it off two years ago. What, ultimately, is the point? In fact, might there be harm, since tracking these sites may end up being more difficult the further underground they go?

This third point is a valid concern, but one that I, after long deliberation, ultimately reject. First, convenience matters. The truly committed may find 8chan when and if it pops up again, but there is real value in requiring that level of commitment in the first place, given that said commitment is likely nurtured on 8chan itself. Second, I reject the idea that publishing on the Internet is a right that must be guaranteed by 3rd parties. Stand on the street corner all you like; at least there your terrible ideas will be limited by the physical world. The Internet, though, with its inherent ability to broadcast and congregate globally, is a fundamentally more dangerous medium, one that is by-and-large facilitated by third parties who have rights of their own. Running a website on a cloud service provider means piggy-backing on your ISP, backbone providers, server providers, etc., and, if you are controversial, on services like Cloudflare to protect you. It is magnanimous in a way for Cloudflare to commit to serving everyone, but at the end of the day Cloudflare does have a choice.

To that end I find Cloudflare’s rationale for acting compelling. Prince told me:

If this were a normal circumstance we would say “Yes, it’s really horrendous content, but we’re not in a position to decide what content is bad or not.” But in this case, we saw repeated consistent harm where you had three mass shootings that were directly inspired by and gave credit to this platform. You saw the platform not act on any of that and in fact promote it internally. So then what is the obligation that we have? While we think it’s really important that we are not the ones being the arbiter of what is good or bad, if at the end of the day content platforms aren’t taking any responsibility, or in some cases actively thwarting it, and we see that there is real harm that those platforms are doing, then maybe that is the time that we cut people off.

User-facing platforms are the ones that should make these calls, not infrastructure providers. But if they won’t, someone needs to. So Cloudflare did.

Defining Gray

I promised, with this title, a framework for moderation, and frankly, I under-delivered. What everyone wants is a clear line about what should or should not be moderated, who should or should not be banned. The truth, though, is that those bright lines do not exist, particularly in the United States.

What is possible, though, is to define the boundaries of the gray areas. In the case of user-facing platforms, their discretion is vast, and responsibility for not simply moderation but also promotion significantly greater. A heavier hand is justified, as is external pressure on decision-makers; the most important regulatory response is to ensure there is competition.

Infrastructure companies, meanwhile, should primarily default to legality, but also, as Cloudflare did, recognize that they are the backstop to user-facing platforms that refuse to do their job.

Governments, meanwhile, beyond encouraging competition, should avoid using liability as a lever, and instead stick to clearly defining what is legal and what isn’t. I think it is legitimate for Germany, for example, to ban pro-Nazi websites, or the European Union to enforce the “Right to be Forgotten” within E.U. borders; like most Americans, I lean towards more free speech, not less, but governments, particularly democratically elected ones, get to make the laws.

What is much more problematic are initiatives like the European Copyright Directive, which makes platforms liable for copyright infringement. This inevitably leads to massive overreach and clumsy filtering, and favors large platforms that can pay for both filters and lawyers over smaller ones that cannot.

None of this is easy. I am firmly in the camp that argues that the Internet is something fundamentally different than what came before, making analog examples less relevant than they seem. The risks and opportunities of the Internet are both different and greater than anything we have experienced previously, and perhaps the biggest mistake we can make is being too sure about what is the right thing to do. Gray is uncomfortable, but it may be the best place to be.

I wrote a follow-up to this article in this Daily Update.

  1. For the rest of this section I am re-using text I wrote in this 2018 Daily Update; I am not putting the re-used text in blockquotes as I normally would, for the sake of readability

Shopify and the Power of Platforms

While I am (rightfully) teased about how often I discuss Aggregation Theory, there is a method to my madness, particularly over the last year: more and more attention is being paid to the power wielded by Aggregators like Google and Facebook, but to my mind the language is all wrong.

I discussed this at length last year:

  • Tech’s Two Philosophies highlighted how Facebook and Google want to do things for you; Microsoft and Apple were about helping you do things better.
  • The Moat Map discussed the relationship between network effects and supplier differentiation: the more that network effects were internalized the more suppliers were commoditized, and the more that network effects were externalized the more suppliers were differentiated.
  • Finally, The Bill Gates Line formally defined the difference between Aggregators and Platforms. This is the key paragraph:

    This is ultimately the most important distinction between platforms and Aggregators: platforms are powerful because they facilitate a relationship between 3rd-party suppliers and end users; Aggregators, on the other hand, intermediate and control it.

It follows, then, that debates around companies like Google that use the word “platform” and, unsurprisingly, draw comparisons to Microsoft twenty years ago, misunderstand what is happening and, inevitably, result in prescriptions that would exacerbate problems that exist instead of solving them.

There is, though, another reason to understand the difference between platforms and Aggregators: platforms are Aggregators’ most effective competition.

Amazon’s Bifurcation

Earlier this week I wrote about Walmart’s failure to compete with Amazon head-on; after years of trying to leverage its stores in e-commerce, Walmart realized that Amazon was winning because e-commerce required a fundamentally different value chain than retail stores. The point of my Daily Update was that the proper response to that recognition was not to try to imitate Amazon, but rather to focus on areas where the stores actually were an advantage, like groceries, but it’s worth understanding exactly why attacking Amazon head-on was a losing proposition.

When Amazon started, the company followed a traditional retail model, just online. That is, Amazon bought products at wholesale, then sold them to customers:

Amazon retail sits between suppliers and customers

Amazon’s sales proceeded to grow rapidly, not just of books, but also in other media products with large selections like DVDs and CDs that benefitted from Amazon’s effectively unlimited shelf-space. This growth allowed Amazon to build out its fulfillment network, and by 1999 the company had seven fulfillment centers across the U.S. and three more in Europe.

Ten may not seem like a lot — Amazon has well over 300 fulfillment centers today, plus many more distribution and sortation centers — but for reference Walmart has only 20. In other words, at least when it came to fulfillment centers, Amazon was halfway to Walmart’s current scale 20 years ago.

It would ultimately take Amazon another nine years to reach twenty fulfillment centers (this was the time for Walmart to respond), but in the meantime came a critical announcement that changed what those fulfillment centers represented. In 2006 Amazon announced Fulfillment by Amazon, wherein 3rd-party merchants could use those fulfillment centers too. Their products would not only be listed on Amazon.com, they would also be held, packaged, and shipped by Amazon.

In short, Amazon.com effectively bifurcated itself into a retail unit and a fulfillment unit:

Amazon bifurcated itself into retail and fulfillment units

The old value chain is still there — nearly half of the products on Amazon.com are still bought by Amazon at wholesale and sold to customers — but 3rd parties can sell directly to consumers as well, bypassing Amazon’s retail arm and leveraging only Amazon’s fulfillment arm, which was growing rapidly:

Amazon Fulfillment Centers Over Time

Walmart and its 20 distribution centers don’t stand a chance, particularly since catching up means competing for consumers not only with Amazon but with all of those 3rd-party merchants filling up all of those fulfillment centers.

Amazon and Aggregation

There is one more critical part of the drawing I made above:

Amazon owns all customer interactions

Despite the fact that Amazon had effectively split itself in two in order to incorporate 3rd-party merchants, this division is barely noticeable to customers. They still go to Amazon.com, they still use the same shopping cart, they still get the boxes with the smile logo. Basically, Amazon has managed to incorporate 3rd-party merchants while still owning the entire experience from an end-user perspective.

This should sound familiar: as I noted at the top, Aggregators tend to internalize their network effects and commoditize their suppliers, which is exactly what Amazon has done.1 Amazon benefits from more 3rd-party merchants being on its platform because it can offer more products to consumers and justify the buildout of that extensive fulfillment network; 3rd-party merchants are mostly reduced to competing on price.

That, though, suggests there is a platform alternative — that is, a company that succeeds by enabling its suppliers to differentiate and externalizing network effects to create a mutually beneficial ecosystem. That alternative is Shopify.

The Shopify Platform

At first glance, Shopify isn’t an Amazon competitor at all: after all, there is nothing to buy on Shopify.com. And yet there were 218 million people who bought products from Shopify merchants without even knowing the company existed.

The difference is that Shopify is a platform: instead of interfacing with customers directly, 820,000 3rd-party merchants sit on top of Shopify and are responsible for acquiring all of those customers on their own.

Merchants interact with customers, not Shopify

This means merchants have to stand out not in a search result on Amazon.com, or simply by offering the lowest price, but rather by earning customers’ attention through differentiated products, social media advertising, etc. Many, to be sure, will fail at this: Shopify does not break out merchant churn specifically, but it is almost certainly extremely high.

That, though, is the point.

Unlike Walmart, currently weighing whether to spend additional billions after the billions it has already spent trying to attack Amazon head-on, with a binary outcome of success or failure, Shopify is massively diversified. That is the beauty of being a platform: you succeed (or fail) in the aggregate.

To that end, I would argue that for Shopify a high churn rate is just as much a positive signal as it is a negative one: the easier it is to start an e-commerce business on the platform, the more failures there will be. And, at the same time, the greater likelihood there will be of capturing and supporting successes.

This is how Shopify can, in the long run, be Amazon’s biggest competitor even as it is a company Amazon can’t compete with: Amazon is pursuing customers and bringing suppliers and merchants onto its platform on its own terms; Shopify is giving merchants an opportunity to differentiate themselves while bearing none of the risk should any individual merchant fail.

The Shopify Fulfillment Network

This is the context for one of the most interesting announcements from Shopify’s recent partner conference, Shopify Unite. The name should sound familiar: the Shopify Fulfillment Network.

From the company’s blog:

Customers want their online purchases fast, with free shipping. It’s now expected, thanks to the recent standard set by the largest companies in the world. Working with third-party logistics companies can be tedious. And finding a partner that won’t obscure your customer data or hide your brand with packaging is a challenge.

This is why we’re building Shopify Fulfillment Network—a geographically dispersed network of fulfillment centers with smart inventory-allocation technology. We use machine learning to predict the best places to store and ship your products, so they can get to your customers as fast as possible.

We’ve negotiated low rates with a growing network of warehouse and logistic providers, and then passed on those savings to you. We support multiple channels, custom packaging and branding, and returns and exchanges. And it’s all managed in Shopify.

The first paragraph explains why the Shopify Fulfillment Network was a necessary step for Shopify: Amazon may commoditize suppliers, hiding their brand from website to box, but if its offering is truly superior, suppliers don’t have much choice. That was increasingly the case with regards to fulfillment, particularly for the small-scale sellers that are important to Shopify not necessarily for short-term revenue generation but for long-run upside. Amazon was simply easier for merchants and more reliable for customers.

Notice, though, that Shopify is not doing everything on their own: there is an entire world of third-party logistics companies (known as “3PLs”) that offer warehousing and shipping services. What Shopify is doing is what platforms do best: act as an interface between two modularized pieces of a value chain.

Shopify as interface between 3PLs and merchants

On one side are all of Shopify’s hundreds of thousands of merchants: interfacing with all of them on an individual basis is not scalable for those 3PL companies; now, though, they only need to interface with Shopify.

The same benefit applies in the opposite direction: merchants don’t have the means to negotiate with multiple 3PLs such that their inventory is optimally placed to offer fast and inexpensive delivery to customers; worse, the small-scale sellers I discussed above often can’t even get an audience with these logistics companies. Now, though, Shopify customers need only interface with Shopify.

Platforms Versus Aggregators

Moreover, this is what Shopify has already accomplished when it comes to referral partners (who drive new merchants onto the platform), developers (who build apps for managing Shopify stores) and theme designers (who sell themes to customize the look-and-feel of stores). COO Harley Finkelstein said at Unite:

You’ve often heard me say that we at Shopify want to create more value for our partners than we capture for ourselves, and I find the best way to demonstrate this is by looking at what I call the “Partner Economy”. The “Partner Economy” is the amount of revenue that flows to all of you, our partners…in 2018 Shopify made about a billion dollars [Editor: in revenue]. We estimate that you, our partners, made more than $1.2 billion.

In other words, Shopify clears the Bill Gates Line — it captures a minority of the value in the ecosystem it has created — and the Shopify Fulfillment Network should fit right in:

The Shopify platform

What is powerful about this model is that it leverages the best parts of modularity — diversity and competition at different parts of the value chain — and aligns the incentives of all of them. Every referral partner, developer, theme designer, and now 3PL provider is simultaneously incentivized to compete with each other narrowly and ensure that Shopify succeeds broadly, because that means the pie is bigger for everyone.

This is the only way to take on an integrated Aggregator like Amazon: trying to replicate what Amazon has built up over decades, as Walmart has attempted, is madness. Amazon has the advantage in every part of the stack, from more customers to more suppliers to lower fulfillment costs to faster delivery.

The only way out of that competition is differentiation; granted, Walmart has tried buying and launching new brands exclusive to its store, but differentiation when it comes to e-commerce goods doesn’t arise from top down planning. Rather, it bubbles up from widespread opportunity (and churn!), like that created by Shopify, supported by an entire aligned ecosystem.

  1. While Amazon is not technically an Aggregator — the company deals with physical goods that absolutely have both marginal and transaction costs — one way to understand the company’s dominance is that its massive investments in logistics have driven those costs much lower than its competitors’, allowing the company to reap many of the same benefits.

Facebook, Libra, and the Long Game

When I get things wrong — and I was very much wrong about Facebook’s blockchain plans — the reason is usually a predictable one: confirmation bias. That is, I already have an idea of what a company’s motivations are, and then view news through that lens, failing to think critically about what parts of that news might actually disconfirm my assumptions.

So it was last month when the Wall Street Journal reported that Facebook was building a cryptocurrency-based payment system. I wrote in a Daily Update:

Start with the obvious: this isn’t a Bitcoin competitor. And why would it be? The entire point of Bitcoin is to be distributed; Facebook’s power comes from its centralization. Indeed, this is probably the single most important prism through which to examine whatever it is that Facebook does in the space: the company is not going to betray its dominant position, but rather seek to strengthen it. That is why I am not too concerned about not knowing the implementation details: take it as a given that whatever role users have to play in this network, Facebook will have final control.

I stand by the first part of that excerpt: for all of the positive attributes Facebook is highlighting about Project Libra — which Facebook, in conjunction with the newly formed Libra Association, announced last week — it is unreasonable to expect that Facebook would invest significant resources in something that would weaken its position. What I got wrong was presuming that meant overt Facebook control. Frustratingly, it was an error that should have both been obvious in my original analysis and also clear in the broader view of the Internet I have explained through Aggregation Theory.

What is Libra

Libra is being presented as a cryptocurrency based on a blockchain: transactions are recorded on a shared ledger and verified by “miners” independently solving cryptographic problems and arriving at a consensus that the transaction is legitimate and should be added to the ledger permanently.

In practice, it is much more complicated: while a limited set of “validators” — aka miners — share a history of transactions in (individual) blocks that are chained together (i.e. a blockchain), what Libra actually exposes is the current state of the ledger. This means that adding new transactions can be much quicker and more efficient — more akin to adding a line to a spreadsheet than rebuilding the entire spreadsheet from scratch.

In other words, there is a trade-off between trust and efficiency: whereas anyone can “rebuild the spreadsheet” in the case of a cryptocurrency like Bitcoin, where the blockchain is fully exposed, normal users have to trust Libra’s validators.1 On the other hand, Bitcoin, thanks to the overhead of communicating and verifying every transaction, can only manage around 7 transactions a second; Libra is promising 1,000 transactions per second.
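The trade-off can be made concrete with a minimal sketch — illustrative only, not Libra’s or Bitcoin’s actual implementation, and with class and function names invented for this example. Replaying the full chain is trustless but scales with the length of the history; reading the current state is instant but requires trusting whoever maintains it:

```python
import hashlib
import json

def block_hash(block):
    """Deterministic hash of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

class Ledger:
    """Toy ledger keeping both a full chain and a current-state view."""
    def __init__(self):
        self.chain = []   # full history: anyone can replay and verify it
        self.state = {}   # current balances: the "spreadsheet" view

    def add_transaction(self, sender, receiver, amount):
        # Each block points at the hash of the previous block (the "chain").
        prev = block_hash(self.chain[-1]) if self.chain else "genesis"
        self.chain.append({"prev": prev, "tx": (sender, receiver, amount)})
        # Updating the state is cheap: one line in the spreadsheet.
        # (No balance validation here — this sketch allows negative balances.)
        self.state[sender] = self.state.get(sender, 0) - amount
        self.state[receiver] = self.state.get(receiver, 0) + amount

    def verify_chain(self):
        """Replay the whole history, Bitcoin-style: expensive but trustless."""
        for i in range(1, len(self.chain)):
            if self.chain[i]["prev"] != block_hash(self.chain[i - 1]):
                return False
        return True

ledger = Ledger()
ledger.add_transaction("alice", "bob", 10)
ledger.add_transaction("bob", "carol", 4)

print(ledger.state["bob"])    # reading state is O(1), but requires trust
print(ledger.verify_chain())  # replaying history is O(n), but trustless
```

In these terms, the `state` dictionary is the kind of thing Libra’s validators expose to normal users, while `verify_chain` is the Bitcoin-style replay that anyone can perform for themselves.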

The Validators

Who, then, are the validators? Well, Facebook is one, but only one: currently there are 28 “Founding Members”, including merchants, venture capitalists, and payment networks, that meet two of the following three criteria:

  • More than $1 billion USD in market value or more than $500 million USD in customer cash flow
  • Reach more than 20 million people a year
  • Recognition as a top-100 industry leader by a third-party association such as Fortune or S&P

These “Founding Members” are required to make a minimum investment of $10 million and provide computing power to the network. In addition, there are separate requirements for non-profit organizations and academic institutions that rely on a mixture of budget, track record, and rankings; a minimum investment may not be necessary. Libra intends to have 100 Founding Members by the time it launches next year.
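As a trivial illustration, the “two of the following three criteria” test can be written as a predicate. This is a hypothetical sketch: the function and parameter names are mine, and the thresholds simply restate the criteria listed above.

```python
def qualifies_as_founding_member(market_value_usd, customer_cash_flow_usd,
                                 annual_reach, top_100_recognition):
    """Hypothetical check: a candidate must meet at least two of the
    three criteria Libra lists for for-profit Founding Members."""
    criteria_met = [
        # More than $1B in market value, or more than $500M in customer cash flow
        market_value_usd > 1_000_000_000 or customer_cash_flow_usd > 500_000_000,
        # Reach more than 20 million people a year
        annual_reach > 20_000_000,
        # Recognized as a top-100 industry leader by a third party
        top_100_recognition,
    ]
    return sum(criteria_met) >= 2

# A merchant with a $2B market value reaching 30M people a year qualifies:
print(qualifies_as_founding_member(2e9, 0, 30_000_000, False))  # True
```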

Here is the important thing to understand about the Libra Association: while its members — who, again, are the validators — do control the Libra protocol, Facebook does not control the validators. Which, by extension, means that Facebook will not control Libra.

Libra Versus a Facebook Coin

To understand the distinction, consider an alternative route that Facebook could have taken: a so-called “Facebook Coin”. In that case Facebook would have had total control over the protocol, and to be sure, this would have distinct advantages for Facebook specifically and the usability of a “Facebook Coin” generally:

  • Efficiency and scalability would be maximized because Facebook could coordinate perfectly with itself
  • Development would be significantly accelerated because Facebook would not have to achieve consensus
  • Facebook would have perfect knowledge of all transactions on the system because it would control all entry points

This is the Trust-Efficiency tradeoff taken to the opposite extreme from Bitcoin:

A theoretical Facebook Coin would be the opposite of Bitcoin

With Bitcoin, there is no need to trust anyone — you can verify the entire blockchain yourself — but at the cost of efficiency of transactions. A Facebook Coin, on the other hand, would require complete trust of Facebook, but transactions would be far more efficient as a result.

The most obvious example of this is WeChat Pay: WeChat handles the transactions, stores the money, and is the sole source of authority about who owns what. Thanks to the ubiquity of WeChat and the efficiency of this model, WeChat Pay (along with Alipay) has become the default payment mechanism in China.

Unsurprisingly, WeChat doesn’t use any sort of blockchain-based technology. Why would it? The entire point of a blockchain is to distribute a ledger across multiple parties, which is fundamentally less efficient than simply storing the entire ledger in a single database managed by one party.
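
The single-database model can be sketched in a few lines; the class and method names here are hypothetical illustrations, not any real WeChat Pay API:

```python
class CentralLedger:
    """One operator's database of who owns what: the WeChat Pay model
    described above, sketched as a hypothetical illustration."""

    def __init__(self) -> None:
        self.balances: dict[str, int] = {}

    def deposit(self, user: str, amount: int) -> None:
        self.balances[user] = self.balances.get(user, 0) + amount

    def transfer(self, sender: str, receiver: str, amount: int) -> None:
        # The operator alone decides whether a transfer succeeds;
        # no other party verifies, or even sees, the ledger.
        if self.balances.get(sender, 0) < amount:
            raise ValueError("insufficient funds")
        self.balances[sender] -= amount
        self.balances[receiver] = self.balances.get(receiver, 0) + amount
```

Every payment is a single row update in one database, with no consensus round and no replication; that is exactly why the model is so efficient, and why it requires complete trust in the operator.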

Trust Versus Efficiency

This gets at the error in analysis I referenced above: because I was anchored on the idea of Facebook capturing transaction data, I missed that when the Wall Street Journal reported last month that Facebook was using some sort of blockchain technology (leaving aside the quibble on the definition noted above), it was an obvious signal that whatever Facebook was announcing would not be completely controlled by Facebook. If the goal were Facebook control of a Facebook Coin, then a blockchain would be a silly way to implement it.

A decentralized blockchain versus a centralized database

The best way to understand Libra, then, is as a sort of distributed ledger that is a compromise between a fully public blockchain and an internal database:

A distributed ledger as a compromise between a decentralized blockchain and a centralized database

This means that the overall system is much more efficient than Bitcoin, while the necessary level of trust is spread out to multiple entities, not one single company:

Bitcoin versus Libra versus a theoretical Facebook coin

The trade-off is that Libra is not fully permissionless, although the Libra White Paper does say that is the long-term goal:

To ensure that Libra is truly open and always operates in the best interest of its users, our ambition is for the Libra network to become permissionless. The challenge is that as of today we do not believe that there is a proven solution that can deliver the scale, stability, and security needed to support billions of people and transactions across the globe through a permissionless network. One of the association’s directives will be to work with the community to research and implement this transition, which will begin within five years of the public launch of the Libra Blockchain and ecosystem.

Time will tell if this is possible: if you flip the “trust” axis in the above graphs, the current state of affairs looks like this:

Is there an efficiency frontier when it comes to no-trust and efficiency?

It may very well prove to be the case that there is a sort of efficient frontier when it comes to “no-trust” versus “efficiency”: that is, any decrease in necessary trust requires a corresponding decrease in efficiency. From my perspective the safest assumption about Libra’s future is that efficiency will be the ultimate priority, which means that the more Libra is used, the more difficult it will be to ever transition to a permissionless model.

The Credit Card Challenge

Still, even if Libra remains controlled by an ever-expanding-but-still-limited set of validators, that is likely to be a far easier “sale” than a Facebook Coin controlled by a single company. Leaving aside the fact that Facebook is not exactly swimming in trust these days when it comes to users, why would any other large company want to adopt a currency with a single point of corporate control?

Keep in mind the situation in the United States and other developed countries is much different than in China: credit cards have their flaws, particularly in terms of fees, but they are widely accepted by merchants and widely used by consumers. China, on the other hand, mostly leapfrogged credit cards entirely; this meant that WeChat Pay’s (and Alipay’s) competition was cash, and in that case the advantages of WeChat Pay relative to cash (which are massive) could overcome any concerns around centralized control.

A theoretical Facebook Coin’s relative advantage over credit cards, on the other hand, would be far smaller, which means obstacles to widespread adoption — like trusting Facebook exclusively — would likely be insurmountable:

How new payment systems are — or are not — adopted

Thus the federation of trust inherent in Libra, despite the loss of efficiency that entails: by not being in control, and by actively including corporations like Spotify and Uber that will provide places to use Libra outside of Facebook, and payment networks like Visa and PayPal that will facilitate such usage, Facebook is increasing the chances that Libra will actually be used instead of credit cards.

Aggregation and the Long Game

I do think it is overly cynical to completely dismiss the advertised benefits of Libra: remittances, for example, have long been the go-to example of how cryptocurrencies can have societal benefit, for a very good reason — the current system exacts major fees from the population that can least afford to bear them. And, while I just spent an entire section on credit cards, the reality is that credit card penetration is much lower amongst the poor in developed countries and in developing countries generally: a digital currency ultimately premised on owning a smartphone has the potential to significantly expand markets to the benefit of both consumers and service providers.

To put it another way, Libra has the potential to significantly decrease friction when it comes to the movement of money. This potential is hardly limited to Libra — reduced friction is one of the selling points of digital currencies generally — but by virtue of being supported by Facebook, particularly via the Calibra wallet that will be both a standalone app and a feature built into Facebook Messenger and WhatsApp, accessing Libra will likely be much simpler than accessing other cryptocurrencies. When it comes to decreasing friction, simplifying the user experience matters just as much as eliminating intermediary institutions.

There is also another component of trust beyond caring about who is verifying transactions: confidence that the value of Libra will be stable. This is the reason why Libra will have a fully-funded reserve denominated in a basket of currencies. This does not foreclose Libra becoming a fully standalone currency in the long run, but for now both users and merchants will be able to trust that the value of Libra will be sufficiently stable to use it for transactions.
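
The mechanics of a fully-funded reserve can be sketched as a weighted currency basket; the weights and exchange rates below are purely illustrative assumptions, since the actual reserve composition had not been finalized:

```python
# Hypothetical basket weights and USD exchange rates; illustrative only,
# not the actual (unannounced) composition of the Libra reserve.
basket = {
    "USD": (0.50, 1.00),    # (weight, USD per unit of currency)
    "EUR": (0.18, 1.12),
    "JPY": (0.14, 0.0092),
    "GBP": (0.11, 1.27),
    "SGD": (0.07, 0.74),
}

def libra_usd_value(basket: dict) -> float:
    """Value of one unit as the weighted sum of the constituent
    currencies at current USD exchange rates."""
    return sum(weight * rate for weight, rate in basket.values())

# With these assumed figures, one unit would be worth roughly $0.89;
# the basket dampens swings in any single currency.
value = libra_usd_value(basket)
```

Because the reserve holds the underlying assets in full, the unit’s value floats with the basket rather than with speculative demand, which is what makes it usable for everyday transactions.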

If all of these bets pay off — that users and merchants will trust a consortium more than Facebook; that Libra will be cheaper and easier to use, more accessible, and more flexible than credit cards; and that Libra itself will be a reliable store of value — then that decrease in friction will be realized at scale.

And this is when this bet would pay off for Facebook (and the second point I missed in my earlier analysis): the implication that digital currencies will do for money what the Internet did for information is that the very long-term trend will be towards centralization around Aggregators. When there is no friction, control shifts from gatekeepers controlling supply to Aggregators controlling demand. To that end, by pioneering Libra, building what will almost certainly be the first wallet for the currency, and bringing to bear its unmatched network for facilitating payments, Facebook is betting it will offer the best experience for digital currency flows, giving it power not by controlling Libra but rather by controlling the most users of Libra.

Will It Work?

Libra’s success, if it comes, will likely proceed in stages, with different challenges and competitors at each stage:

  • Initially the most obvious use case for Facebook’s Calibra wallet application will be peer-to-peer payments, which means the competitor will be applications like PayPal’s Venmo. Here Facebook’s biggest advantage will be leveraging its network and messaging applications.
  • The second use case will be using Libra to transact with merchants, who stand to benefit both from reduced fees relative to credit cards and from larger addressable markets (i.e. potential users who don’t have credit cards). Note that none of Libra’s Founding Members are banks, which collect the largest share of credit card fees; Visa and Mastercard, on the other hand, are, like PayPal, happy to sit on top of Libra.
  • The largest leap will come last: Libra as a genuine currency, not simply a medium for transactions. This will be a function of volume in the previous two use cases, and is understandably concerning to governments all over the world. This, though, is another advantage of Facebook giving up direct control of Libra: while regulators will be able to limit wallets like Calibra (which will fully abide by Know-Your-Customer and Anti-Money-Laundering regulations), Libra — particularly if it achieves a fully permissionless model — would be much more difficult to control.
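
The merchant-fee argument in the second use case can be made concrete with a toy comparison; the 2.9% card fee is a typical processing rate, while the near-zero Libra fee is an assumption for illustration, not an announced number:

```python
def merchant_net(sale_usd: float, fee_rate: float) -> float:
    """Merchant proceeds on a sale after payment-network fees."""
    return sale_usd * (1 - fee_rate)

# Illustrative rates: ~2.9% is a common card-processing fee;
# the Libra fee here is a hypothetical near-zero placeholder.
card_net = merchant_net(100.0, 0.029)   # ≈ 97.10
libra_net = merchant_net(100.0, 0.001)  # ≈ 99.90
savings = libra_net - card_net          # ≈ $2.80 per $100 of sales
```

On thin retail margins, keeping an extra couple of percentage points of every sale is a meaningful incentive for merchants to accept a new payment method.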

It is easy to see how Facebook, given its size, would thrive in that final state, for the reasons I detailed above. Just as Google long boasted that the more people use the Internet the more revenue Google generates, it stands to reason that the more people use digital money the more it would benefit dominant digital companies like Facebook, whether that be through advertising, transactions, or simply making networks that much more valuable.

That, though, is also a reason to be skeptical: the idea of Google making more money by people using the Internet more was once viewed as a happy alignment of incentives that justified Google’s services being free; today the centralization — and thus money-making potential — that follows a reduction in friction is much better understood, and there is much more concern about just how much power these Aggregators have.

This is particularly the case with Facebook: despite all of the company’s efforts to design a system that does not entail trusting Facebook exclusively — again, this is not a Facebook Coin — Libra is already widely known as a Facebook initiative. Unless the consumer benefits are truly extraordinary, that may be enough to prevent Libra from ever gaining escape velocity. This applies even more to the Calibra wallet: Facebook promises not to mix transaction data with profile data, but that entails, well, trust that Facebook may have already lost.

Still, that doesn’t mean digital currencies will never make it: I do think that Libra gets closer to a workable balance between trust and efficiency than Bitcoin, at least when it comes to being usable for transactions and not simply a store of value; the question is who can actually get such a currency off the ground. Certainly Facebook’s audacity and ambition should not be underestimated, and the company’s network is the biggest reason to believe Libra will work; Facebook’s brand is the biggest reason to believe it will not.

  1. To clarify, this roadmap on the Libra developers blog includes plans to allow anyone to “rebuild the spreadsheet”:

    Validator APIs to support full nodes (nodes that have a full replica of the blockchain but do not participate in consensus). This feature allows for the creation of replicas that can support scaling access to the blockchain and the auditing of the correct execution of transactions.

    However, only validators can actually validate transactions (unlike Bitcoin, where anyone can be a miner/validator)