What Is a Tech Company?

At first glance, WeWork and Peloton, which both released their S-1s in recent weeks, don’t have much in common: one company rents empty buildings and converts them into office space, and the other sells home fitness equipment and streaming classes. Both, though, have prompted the same question: is this a tech company?

Of course, it is fair to ask, “What isn’t a tech company?” Surely that is the endpoint of software eating the world; I think, though, that classifying a company as a tech company simply because it uses software is just as unhelpful today as it would have been decades ago.

IBM and Tech-Centered Ecosystems

Fifty years ago, the question of what was a tech company was easy to answer: IBM was the tech company, and everybody else was an IBM customer. That may be a slight exaggeration, but not by much: IBM built the hardware (at that time the System/360), wrote the software, including the operating system and applications, and provided services, including training, ongoing maintenance, and custom line-of-business software.

All kinds of industries benefited from IBM’s technology, including financial services, large manufacturers, retailers, etc., and, of course, the military. Functions like accounting, resource management, and record-keeping automated and centralized activities that used to be done by hand, dramatically increasing the efficiency of existing activities and making new kinds of activities possible.

Increased efficiency and new business opportunities, though, didn’t make J.P. Morgan or General Electric or Sears tech companies. Technology simply became one piece of a greater whole. Yes, it was essential, but that essentialness exposed technology’s banality: companies were only differentiated to the extent they did not use computers, and then to the downside.

IBM, though, was different: every part of the company was about technology — indeed, IBM was an entire ecosystem unto itself: hardware, software, and services, all tied together with a subscription payment model strikingly similar to today’s dominant software-as-a-service approach. In short, being a tech company meant being IBM, which meant creating and participating in an ecosystem built around technology.

Venture Capital and Zero Marginal Costs

The story of IBM handing Microsoft the contract for the PC operating system and, by extension, the dominant position in computing for the next fifteen years, is a well-known one. The context for that decision, though, is best seen by the very different business model Microsoft pursued for its software.

What made subscriptions work for IBM was that the mainframe maker was offering the entire technology stack, and thus had reason to be in direct, ongoing contact with its customers. In 1969, though, in an effort to escape an antitrust lawsuit from the federal government, IBM unbundled its hardware, software, and services. This created a new market for software, which was sold on a somewhat ad hoc basis; at the time software didn’t even have copyright protection.

Then, in 1980, Congress added “computer program” to the definitions in U.S. copyright law, and software licensing was born: now companies could retain legal ownership of software and grant an effectively infinite number of licenses to individuals or corporations to use it. Thus it was that Microsoft could charge for every copy of Windows or Visual Basic without needing to sell or service the underlying hardware it ran on.

This highlighted another critical factor that makes tech companies unique: the zero marginal cost nature of software. To be sure, this wasn’t a new concept: Silicon Valley received its name because silicon-based chips have similar characteristics; there are massive up-front costs to develop and build a working chip, but once built additional chips can be manufactured for basically nothing. It was this economic reality that gave rise to venture capital, which is about providing money ahead of a viable product for the chance at effectively infinite returns should the product and associated company be successful.
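The arithmetic behind zero marginal costs is worth making concrete; here is a minimal sketch with invented numbers, showing how the average cost per unit collapses toward the (near-zero) marginal cost as volume grows:

```python
# Illustrative sketch of zero-marginal-cost economics; all numbers are hypothetical.
# A chip design or a piece of software has a large fixed up-front cost, while each
# additional unit costs almost nothing, so average cost falls toward the marginal
# cost as volume grows.

def average_unit_cost(fixed_cost: float, marginal_cost: float, units: int) -> float:
    """Total cost per unit at a given volume: amortized fixed cost plus marginal cost."""
    return fixed_cost / units + marginal_cost

FIXED = 10_000_000   # hypothetical up-front development cost
MARGINAL = 0.01      # hypothetical per-copy cost (bandwidth, media)

for units in (1_000, 100_000, 10_000_000):
    print(f"{units:>10} units -> ${average_unit_cost(FIXED, MARGINAL, units):,.2f} per unit")
```

At a thousand units the fixed cost dominates; at ten million units each copy costs roughly a penny, which is what makes the uncapped returns that venture capital depends on possible.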

Indeed, this is why software companies have traditionally been so concentrated in Silicon Valley, and not, say, upstate New York, where IBM was located. William Shockley, one of the inventors of the transistor at Bell Labs, was originally from Palo Alto and wanted to take care of his ailing mother even as he was starting his own semiconductor company; eight of his researchers, known as the “traitorous eight”, would flee his tyrannical management to form Fairchild Semiconductor, the employees of which would go on to start over 65 new companies, including Intel.

It was Intel that set the model for venture capital in Silicon Valley, as Arthur Rock put in $10,000 of his own money and convinced his contacts to contribute an additional $2.5 million to get Intel off the ground; the company would IPO three years later for $8.225 million. Today the timelines are certainly longer, but the idea is the same: raise money to start a company predicated on zero marginal costs, and, if you are successful, exit with an excellent return for shareholders. In other words, it is the venture capitalists that ensured software followed silicon, not the inherent nature of silicon itself.

To summarize: venture capitalists fund tech companies, which are characterized by a zero marginal cost component that allows for uncapped returns on investment.

Microsoft and Subscription Pricing

Probably the most overlooked and underrated era of tech history was the on-premises era dominated by software companies like Microsoft, Oracle, and SAP, and hardware from not only IBM but also Sun, HP, and later Dell. This era was characterized by a mix of up-front revenue for the original installation of hardware or software, plus ongoing services revenue. This model is hardly unique to software: lots of large machinery is sold on a similar basis.

The zero marginal cost nature of software, however, made it possible to cut out the up-front cost completely; Microsoft started pushing this model heavily to large enterprises in 2001 with version 6 of its Enterprise Agreement. Instead of paying for perpetual licenses for software that inevitably needed to be upgraded in a few years, enterprises could pay a monthly fee; this had the advantage of not only operationalizing former capital costs but also increasing flexibility. No longer would enterprises have to negotiate expensive “true-up” agreements if they grew; they were also protected on the downside if their workforce shrank.
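The trade-off can be sketched with hypothetical prices; the key point is that perpetual licensing (with true-ups and no refunds) effectively charges for peak headcount, while a subscription bills only the seats actually in use each month:

```python
# Hypothetical illustration of perpetual licensing vs. subscription pricing.
# All prices and seat counts are invented for the sketch.

PERPETUAL_PRICE = 300     # one-time cost per licensed seat
SUBSCRIPTION_PRICE = 8    # cost per seat per month

def perpetual_cost(monthly_seats):
    """Perpetual licenses are bought as headcount grows and never refunded,
    so the enterprise ends up paying for its peak seat count."""
    return max(monthly_seats) * PERPETUAL_PRICE

def subscription_cost(monthly_seats):
    """A subscription bills only the seats actually in use each month."""
    return sum(seats * SUBSCRIPTION_PRICE for seats in monthly_seats)

# Three years: grow from 100 to 150 seats, then shrink back to 120.
seats = [100] * 12 + [150] * 12 + [120] * 12

print("perpetual:   ", perpetual_cost(seats))     # pays for the 150-seat peak
print("subscription:", subscription_cost(seats))  # pays only for seats used
```

With these invented numbers the subscription comes out cheaper, but the real point is the shape of the exposure: the subscriber’s cost tracks headcount in both directions, which is precisely the downside protection described above.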

Microsoft, meanwhile, was able to convert its up-front software investment from a one-time payment to regular payments over time that were not only perpetual in nature (because to stop payment was to stop using the software, which wasn’t a viable option for most of Microsoft’s customers) but also more closely matched Microsoft’s own development schedule.

This wasn’t a new idea, as IBM had shown several decades earlier; moreover, it is worth pointing out that the entire function of depreciation when it comes to accounting is to properly attribute capital expenditures across the time periods those expenditures are leveraged. What made Microsoft’s approach unique, though, is that over time the product enterprises were paying for was improving. This is in direct contrast to a physical asset that deteriorates, or a traditional software support contract that is limited to a specific version.

Today this is the expectation for software generally: whatever you pay for today will be better in the future, not worse, and tech companies are increasingly organized around this idea of both constant improvements and constant revenue streams.

Salesforce and Cloud Computing

Still, Microsoft products had to actually be installed in the first place: much of the benefit of Enterprise Agreements accrued to companies that had already gone through that pain.

Salesforce, founded in 1999, sought to extend that same convenience to all companies: instead of having to go through long and painful installation processes that were inevitably buggy and over-budget, customers could simply access Salesforce on Salesforce’s own servers. The company branded it “No Software”, because software installations had such negative connotations, but in fact this was the ultimate expression of software. Now, instead of one copy of software replicated endlessly and distributed anywhere, Salesforce would simply run one piece of software and give anyone anywhere access to it. This did increase fixed costs — running servers and paying for bandwidth is expensive — but the increase was more than made up for by the decrease in upfront costs for customers.

This also increased the importance of scale for tech companies: now not only did the cost of software development need to be spread out over the greatest number of customers, but so did the ongoing costs of building and running large centralized servers (of course, Amazon operationalized these costs as well with AWS). That, though, became another characteristic of tech companies: scale not only pays the bills, it actually improves the service, as large expenditures are leveraged across that many more customers.

Atlassian and Zero Transaction Costs

Salesforce, though, was still selling to large corporations. What has changed over the last ten years in particular is the rise of freemium and self-serve models, but the origins of this approach go back a decade earlier.

The early 2000s were a dire time in tech: the bubble had burst, and it was nearly impossible to raise money in Silicon Valley, much less anywhere else in the world — including Sydney, Australia. So, in 2001, when Scott Farquhar and Mike Cannon-Brookes, whose only goals were to make $35,000 a year and not have to wear suits, couldn’t afford a sales force for the collaboration software they had developed, called Jira, they simply put it on the web for anyone to trial, with a payment form to unlock the full program.

This wasn’t necessarily new: “shareware” and “trialware” had existed since the 1980s, and were particularly popular for games, but Atlassian, thanks to being in the right place (selling Agile project management software) at the right time (the explosion of Agile as a development methodology) was using essentially the same model to sell into enterprise.

What made this possible was the combination of zero marginal costs (which meant that distributing software didn’t cost anything) and zero transaction costs: thanks to the web and rudimentary payment processors it was possible for Atlassian to sell to companies without ever talking to them. Indeed, for many years the only salespeople Atlassian had were those tasked with reducing churn: all inbound sales were self-serve.

This model, when combined with Salesforce’s cloud-based model (which Atlassian eventually moved to), is the foundation of today’s SaaS companies: customers can try out software with nothing more than an email address, and pay for it with nothing more than a credit card. This too is a characteristic of tech companies: free-to-try, and easy-to-buy, by anyone, from anywhere.

The Question of the Real World

So what about companies like WeWork and Peloton that interact with the real world? Note the centrality of software in all of these characteristics:

  • Software creates ecosystems.
  • Software has zero marginal costs.
  • Software improves over time.
  • Software offers infinite leverage.
  • Software enables zero transaction costs.

The question of whether companies are tech companies, then, depends on how much of their business is governed by software’s unique characteristics, and how much is limited by real world factors. Consider Netflix, a company that competes with traditional television and movie companies yet is also considered a tech company:

  • There is no real software-created ecosystem.
  • Netflix shows are delivered at zero marginal cost without the need to pay distributors (although bandwidth bills are significant).
  • Netflix’s product improves over time.
  • Netflix is able to serve the entire world because of software, giving them far more leverage than much of their competition.
  • Netflix can transact with anyone with a self-serve model.

Netflix checks four of the five boxes.

Airbnb, which has yet to go public, is also often thought of as a tech company, even though they deal with lodging:

  • There is a software-created ecosystem of hosts and renters.
  • While Airbnb’s accounting suggests that its revenue has minimal marginal costs, a holistic view of Airbnb’s market shows that the company effectively pays hosts 86 percent of total revenue: the price of an “asset-lite” model is that real world costs dominate in terms of the overall transaction.
  • Airbnb’s platform improves over time.
  • Airbnb is able to serve the entire world, giving it maximum leverage.
  • Airbnb can transact with anyone with a self-serve model.

Uber, meanwhile, has long been mentioned in the same breath as Airbnb, and for good reason: it checks most of the same boxes:

  • There is a software-created ecosystem of drivers and riders.
  • Like Airbnb, Uber reports its revenue as if it has low marginal costs, but a holistic view of rides shows that the company pays drivers around 80 percent of total revenue; this isn’t a world of zero marginal costs.
  • Uber’s platform improves over time.
  • Uber is able to serve the entire world, giving it maximum leverage.
  • Uber can transact with anyone with a self-serve model.

A major question about Uber concerns transaction costs: bringing and keeping drivers on the platform is very expensive. This doesn’t mean that Uber isn’t a tech company, but it does underscore the degree to which its model is dependent on factors that don’t have zero costs attached to them.
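The contrast between pure software and these marketplace models is simple arithmetic; here is a sketch using the payout percentages cited above for Airbnb and Uber, with every other number invented:

```python
# Sketch of marketplace take rates vs. pure software margins.
# The ~86% (Airbnb hosts) and ~80% (Uber drivers) payout figures come from
# the discussion above; the booking amount is hypothetical.

def platform_take(gross_booking: float, payout_rate: float) -> float:
    """What the platform keeps after paying the supply side (hosts/drivers)."""
    return gross_booking * (1.0 - payout_rate)

BOOKING = 100.0  # a hypothetical $100 transaction

# Pure software keeps the whole transaction (ignoring hosting and other fixed costs).
saas = platform_take(BOOKING, 0.0)
airbnb = platform_take(BOOKING, 0.86)  # hosts keep ~86%
uber = platform_take(BOOKING, 0.80)    # drivers keep ~80%

print(f"software: ${saas:.2f}, Airbnb-style: ${airbnb:.2f}, Uber-style: ${uber:.2f}")
```

However the accounting presents it, roughly $80 to $86 of every $100 in these marketplaces flows to real-world supply; that is why "zero marginal cost" applies to the platform's software, not to the transactions it intermediates.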

Now for the two companies with which I opened the article. First, WeWork (which I wrote about here and here):

  • WeWork claims it has a software-created ecosystem that connects companies and employees across locations, but it is difficult to find evidence that this is a driving factor for WeWork’s business.
  • WeWork pays a huge percentage of its revenue in rent.
  • WeWork’s offering certainly has the potential to improve over time.
  • WeWork is limited by the number of locations it builds out.
  • WeWork requires a consultation for even a one-person rental, and relies heavily on brokers for larger businesses.

Frankly, it is hard to see how WeWork is a tech company in any way.

Finally Peloton (which I wrote about here):

  • Peloton does have social network-type qualities, as well as strong gamification.
  • While Peloton is available as just an app, the full experience requires a four-figure investment in a bike or treadmill; that, needless to say, is not a zero marginal cost offering. The service itself, though, is zero marginal cost.
  • Peloton’s product improves over time.
  • The size, weight, and installation requirements for Peloton’s hardware mean the company is limited to the United States and the just-added United Kingdom and Germany.
  • Peloton has a high-touch installation process.

Peloton is also iffy as far as these five factors go, but then again, so is Apple: software-differentiated hardware is in many respects its own category. And there is one more definition that is worth highlighting.

Peloton and Disruption

The term “technology” is an old one, far older than Silicon Valley. It means anything that helps us produce things more efficiently, and it is what drives human progress. In that respect, all successful companies, at least in a free market, are tech companies: they do something more efficiently than anyone else, on whatever product vector matters to their customers.

To that end, technology is best understood with qualifiers, and one of the most useful sets comes from Clayton Christensen and The Innovator’s Dilemma:

Most new technologies foster improved product performance. I call these sustaining technologies. Some sustaining technologies can be discontinuous or radical in character, while others are of an incremental nature. What all sustaining technologies have in common is that they improve the performance of established products, along the dimensions of performance that mainstream customers in major markets have historically valued. Most technological advances in a given industry are sustaining in character…

Disruptive technologies bring to a market a very different value proposition than had been available previously. Generally, disruptive technologies underperform established products in mainstream markets. But they have other features that a few fringe (and generally new) customers value. Products based on disruptive technologies are typically cheaper, simpler, smaller, and, frequently, more convenient to use.

Sustaining technologies make existing firms better, but they don’t change the competitive landscape. By extension, if adopting technology simply strengthens your current business, as opposed to making it uniquely possible, you are not a tech company. That, for example, is why IBM’s customers were no more tech companies than are users of the most modern SaaS applications.

Disruptive technologies, though, make something possible that wasn’t previously, or at a price point that wasn’t viable. This is where Peloton earns the “tech company” label from me: compared to spin classes at a dedicated gym, Peloton is cheap, and it scales far better. Sure, looking at a screen isn’t as good as being in the same room with an instructor and other cyclists, but it is massively more convenient and opens the market to a completely new customer base. Moreover, it scales in a way a gym never could: classes are held once and available forever on-demand; the company has not only digitized space but also time, thanks to technology. This is a tech company.

This definition also applies to Netflix, Airbnb, and Uber; all digitized something essential to their competitors, whether it be time or trust. I’m not sure, though, that it applies to WeWork: to the extent the company is unique it seems to rely primarily on unprecedented access to capital. That may be enough, but it does not mean WeWork is a tech company.

And, on the flipside, being a tech company does not guarantee success: the curse of tech companies is that while they generate massive value, capturing that value is extremely difficult. Here Peloton’s hardware is, like Apple’s, a significant advantage.

On the other hand, asset-lite models, like ride-sharing, are very attractive, but can Uber capture sufficient value to make a profit? What will Airbnb’s numbers look like when it finally IPOs? Indeed, the primary reason Peloton’s numbers look good is because they are selling physical products, differentiated by software, at a massive profit!

Still, definitions are helpful, even if they are not predictive. Software is used by all companies, but it completely transforms tech companies and should reshape consideration of their long-term upside — and downside.

I wrote a follow-up to this article in this Daily Update.

Privacy Fundamentalism

Farhad Manjoo, in the New York Times, ran an experiment on themself:

Earlier this year, an editor working on The Times’s Privacy Project asked me whether I’d be interested in having all my digital activity tracked, examined in meticulous detail and then published — you know, for journalism…I had to install a version of the Firefox web browser that was created by privacy researchers to monitor how websites track users’ data. For several days this spring, I lived my life through this Invasive Firefox, which logged every site I visited, all the advertising tracking servers that were watching my surfing and all the data they obtained. Then I uploaded the data to my colleagues at The Times, who reconstructed my web sessions into the gloriously invasive picture of my digital life you see here. (The project brought us all very close; among other things, they could see my physical location and my passwords, which I’ve since changed.)

What did we find? The big story is as you’d expect: that everything you do online is logged in obscene detail, that you have no privacy. And yet, even expecting this, I was bowled over by the scale and detail of the tracking; even for short stints on the web, when I logged into Invasive Firefox just to check facts and catch up on the news, the amount of information collected about my endeavors was staggering.

Here is a shrunk-down version of the graphic that resulted (click it to see the whole thing on the New York Times site):

Farhad Manjoo's online tracking

Notably — at least from my perspective! — Stratechery is on the graphic:

Stratechery's trackers

Wow, it sure looks like I am up to some devious behavior! I guess it is all of the advertising trackers on my site, which doesn’t have any advertising…or perhaps Manjoo, as seems to so often be the case with privacy scare pieces, has overstated their case by a massive degree.

Stratechery “Trackers”

The narrow problem with Manjoo’s piece is a definitional one. This is what it says at the top of the graphic:

What the Times considers a tracker

This strikes me as an overly broad definition of tracking; as best I can tell, Manjoo and their team counted every single script, image, or cookie that was loaded from a 3rd-party domain, no matter its function.

Consider Stratechery: the page in question, given the timeframe of Manjoo’s research and the apparent link from Techmeme, is probably The First Post-iPhone Keynote. On that page I count 31 scripts, images, fonts, and XMLHttpRequests (XHR for short, which can be used to set or update cookies) that were loaded from a 3rd-party domain.1 The sources are as follows (in decreasing number by 3rd-party service):

  • Stripe (11 images, 5 JavaScript files, 2 XHRs)
  • Typekit (1 image, 1 JavaScript file, 5 fonts)
  • Cloudfront (3 JavaScript files)
  • New Relic (2 JavaScript files)
  • Google (1 image, 1 JavaScript file)
  • WordPress.com (1 JavaScript file)

You may notice that, in contrast to the graphic, there is nothing from Amazon specifically. There is Cloudfront, which is a content delivery service offered by Amazon Web Services, but suggesting that Stratechery includes trackers from Amazon because I rely on AWS is ridiculous. In the case of Cloudfront, one JavaScript file is from Memberful, my subscription management service, and the other two are public JavaScript libraries used on countless sites on the Internet (jQuery and Pmrpc). As for the rest:

  • Stripe is the payment processor for Stratechery memberships.
  • Typekit is Adobe’s web-font service (Stratechery uses Freight Sans Pro).
  • New Relic is an analytics package used to diagnose website issues and improve performance.
  • Google is Google Analytics, which I use for counting page views and conversions to free and paid subscribers (this last bit is mostly theoretical; Memberful integrates with Google Analytics, but I haven’t run any campaigns — Stratechery relies on word-of-mouth).
  • WordPress.com is for the Jetpack service from Automattic, which I use for site monitoring, security, and backups, as well as the recommended article carousel under each article.

The only service here remotely connected to advertising is Google Analytics, but I have chosen to not share that information with Google (there is no need because I don’t need access to Google’s advertising tools); the truth is that all of these “trackers” make Stratechery possible.2
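The counting methodology at issue can be sketched in a few lines: tally every script or image loaded from a host other than the first-party domain, with no regard for what the resource actually does. The sample HTML below is invented for illustration (a real audit would also need to account for XHRs, fonts, cookies, and dynamically injected resources):

```python
# Rough sketch of counting "trackers" as any third-party resource,
# regardless of function. The sample page is hypothetical.
from collections import Counter
from html.parser import HTMLParser
from urllib.parse import urlparse

class ThirdPartyCounter(HTMLParser):
    def __init__(self, first_party: str):
        super().__init__()
        self.first_party = first_party
        self.hosts = Counter()

    def handle_starttag(self, tag, attrs):
        if tag not in ("script", "img"):
            return
        src = dict(attrs).get("src") or ""
        if not src.startswith("http"):
            return  # relative URLs are first-party by definition
        host = urlparse(src).hostname or ""
        # Naive suffix match on the domain; good enough for a sketch.
        if host and host != self.first_party and not host.endswith("." + self.first_party):
            self.hosts[host] += 1

SAMPLE = """
<html><head>
<script src="https://js.stripe.com/v3/"></script>
<script src="https://use.typekit.net/example.js"></script>
</head><body>
<img src="https://stratechery.com/logo.png">
<img src="https://q.stripe.com/pixel.png">
<script src="https://www.google-analytics.com/analytics.js"></script>
</body></html>
"""

counter = ThirdPartyCounter("stratechery.com")
counter.feed(SAMPLE)
print(counter.hosts)  # four third-party resources; the first-party image is excluded
```

By this measure a payment processor, a font service, and an analytics script all count identically as "trackers", which is precisely the definitional problem: the tally says nothing about whether any of those resources is used for advertising or cross-site tracking.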

The Internet’s Nature

This narrow critique of Manjoo’s article — wrongly characterizing multiple resources as “trackers” — gets at a broader philosophical shortcoming: technology can be used for both good things and bad things, but in the haste to highlight the bad, it is easy to be oblivious to the good. Manjoo, for example, works for the New York Times, which makes most of its revenue from subscriptions;3 given that, I’m going to assume they do not object to my including 3rd-party resources on Stratechery that support my own subscription business?

This applies to every part of my stack: because information is so easily spread across the Internet via infrastructure maintained by countless companies for their own positive economic outcome, I can write this Article from my home and you can read it in yours. That this isn’t even surprising is a testament to the degree to which we take the Internet for granted: any site in the world is accessible by anyone from anywhere, because the Internet makes moving data free and easy.

Indeed, that is why my critique of Manjoo’s article specifically and the ongoing privacy hysteria broadly is not simply about definitions or philosophy. It’s about fundamental assumptions. The default state of the Internet is the endless propagation and collection of data: you have to do work to not collect data on one hand, or leave a data trail on the other. This is the exact opposite of how things work in the physical world: there data collection is an explicit positive action, and anonymity the default.

That is not to say that there shouldn’t be a debate about this data collection, and how it is used. Even that latter question, though, requires an appreciation of just how different the digital world is from the analog one. Consider one of the most fearsome surveillance entities of all time, the East German Stasi. From Wired:

The Stasi's files

The German Democratic Republic dissolved in 1990 with the fall of communism, but the documents assembled by the Ministry for State Security, or Stasi, remain. This massive archive includes 69 miles of shelved documents, 1.8 million images, and 30,300 video and audio recordings housed in 13 offices throughout Germany. Canadian photographer Adrian Fish got a rare peek at the archives and meeting rooms of the Berlin office for his series Deutsche Demokratische Republik: The Stasi Archives. “The archives look very banal, just like a bunch of boring file holders with a bunch of paper,” he says. “But what they contain are the everyday results of a people being spied upon.”

That the files are paper makes them terrifying, because anyone can read them individually; that they are paper, though, also limits their reach. Contrast this to Google or Facebook: that they are digital means they reach everywhere; that, though, means they are read in aggregate, and stored in a way that is only decipherable by machines.

To be sure, a Stasi compare and contrast is hardly doing Google or Facebook any favors in this debate: the popular imagination about the danger this data collection poses, though, too often seems derived from the former, instead of the fundamentally different assumptions of the latter. This, by extension, leads to privacy demands that exacerbate some of the Internet’s worst problems.

  • Facebook’s crackdown on API access after Cambridge Analytica has severely hampered research into the effects of social media, the spread of disinformation, etc.
  • Privacy legislation like GDPR has strengthened incumbents like Facebook and Google, and made it more difficult for challengers to succeed.
  • Criminal networks, from terrorism to child abuse, can flourish on social networks, but while content can be stamped out, private companies, particularly domestically, are often limited in how proactively they can go to law enforcement; this is exacerbated once encryption enters the picture.

Again, this is not to say that privacy isn’t important: it is one of many things that are important. That, though, means that online privacy in particular should not be the be-all and end-all but rather one part of a difficult set of trade-offs that need to be made when it comes to dealing with this new reality that is the Internet. Being an absolutist will lead to bad policy (although encryption may be the exception that proves the rule).

Apple’s Fundamentalism

This doesn’t just apply to governments: consider Apple, a company which is staking its reputation on privacy. Last week the WebKit team released a new Tracking Prevention Policy that is taking clear aim at 3rd-party trackers:

We have implemented or intend to implement technical protections in WebKit to prevent all tracking practices included in this policy. If we discover additional tracking techniques, we may expand this policy to include the new techniques and we may implement technical measures to prevent those techniques.

Of particular interest to Stratechery — and, per the opening of this article, Manjoo — is this definition and declaration:

Cross-site tracking is tracking across multiple first party websites; tracking between websites and apps; or the retention, use, or sharing of data from that activity with parties other than the first party on which it was collected.


WebKit will do its best to prevent all covert tracking, and all cross-site tracking (even when it’s not covert). These goals apply to all types of tracking listed above, as well as tracking techniques currently unknown to us.

In case you were wondering,4 yes, this will affect sites like Stratechery, and the WebKit team knows it (emphasis mine to highlight potential impacts on Stratechery):

There are practices on the web that we do not intend to disrupt, but which may be inadvertently affected because they rely on techniques that can also be used for tracking. We consider this to be unintended impact. These practices include:

  • Funding websites using targeted or personalized advertising (see Private Click Measurement below).
  • Measuring the effectiveness of advertising.
  • Federated login using a third-party login provider.
  • Single sign-on to multiple websites controlled by the same organization.
  • Embedded media that uses the user’s identity to respect their preferences.
  • “Like” buttons, federated comments, or other social widgets.
  • Fraud prevention.
  • Bot detection.
  • Improving the security of client authentication.
  • Analytics in the scope of a single website.
  • Audience measurement.

When faced with a tradeoff, we will typically prioritize user benefits over preserving current website practices. We believe that that is the role of a web browser, also known as the user agent.

Don’t worry, Stratechery is not going out of business (although there may be a fair bit of impact on the user experience, particularly around subscribing or logging in). It is disappointing, though, that the maker of one of the most important and most unavoidable browser technologies in the world (WebKit is the only option on iOS) has decided that what is best for everyone is an absolutist approach that will ultimately improve the competitive position of massive first-party advertisers like Google and Facebook, even as it harms smaller sites that rely on 3rd-party providers for not just ads but all aspects of their business.

What makes this particularly striking is that it was only a month ago that Apple was revealed to be hiring contractors to listen to random Siri recordings; unlike Amazon (but like Google), Apple didn’t disclose that fact to users. Furthermore, unlike both Amazon and Google, Apple didn’t give users any way to see what recordings Apple had or delete them after-the-fact. Many commentators have seized on the irony of Apple having the worst privacy practices for voice recordings given their rhetoric around being a privacy champion, but I think the more interesting insight is twofold.

First, this was, in my estimation, a far worse privacy violation than the sort of online tracking the WebKit team is determined to stamp out, for the simple reason that the Siri violation crossed the line between the physical and digital world. As I noted above the digital world is inherently transparent when it comes to data; the physical world, though — particularly somewhere like your home — is inherently private.

Second, I do understand why Apple has humans listening to Siri recordings: anyone that has used Siri can appreciate that the service needs to accelerate its feedback loop and improve more quickly. What happens, though, when improving the product means invading privacy? Do you look for good trade-offs, like explicit consent and user control, or do you, fearing a fundamentalist attitude that declares privacy more important than anything, try to sneak a true privacy violation behind everyone’s back like some sort of rebellious youth fleeing religion? Being an absolutist also leads to bad behavior, because after all, everyone is already a criminal.

Towards Trade-offs

The point of this article is not to argue that companies like Google and Facebook are in the right, and Apple in the wrong — or, for that matter, to argue my self-interest. The truth, as is so often the case, is somewhere in the middle, in the gray.5 To that end, I believe the privacy debate needs to be reset around these three assumptions:

  1. Accept that privacy online entails trade-offs; the corollary is that an absolutist approach to privacy is a surefire way to get policy wrong.
  2. Keep in mind that the widespread creation and spread of data is inherent to computers and the Internet, and that these qualities have positive as well as negative implications; be wary of what good ideas and positive outcomes are extinguished in the pursuit to stomp out the negative ones.
  3. Focus policy on the physical and digital divide. Our behavior online is one thing: we both benefit from the spread of data and should in turn be more wary of those implications. Making what is offline online is quite another.

This is where the Stasi example truly resonates: imagine all of those files, filled with all manner of physical movements and meetings and utterings, digitized and thus searchable, shareable, inescapable. That goes beyond a new medium lacking privacy from the get-go: it is taking privacy away from a world that previously had it. And yet the proliferation of cameras, speakers, location data, etc. goes on with a fraction of the criticism levied at big tech companies. Like too many fundamentalists, we are in danger of missing the point.

I wrote a follow-up to this article in this Daily Update.

  1. This matches the 31 dots in Manjoo’s graphic; I did not count HTML documents or CSS files
  2. I do address these services and others in the Stratechery Privacy Policy
  3. Let’s be charitable and ignore the fact that the most egregious trackers from Manjoo’s article — by far — are news sites, including nytimes.com
  4. Or if you think I’m biased, although, for the record, I conceptualized this article before this policy was announced
  5. And frankly, probably closer to Apple than the others, the last section notwithstanding

A Framework for Moderation

On Sunday night, when Cloudflare CEO Matthew Prince announced in a blog post that the company was terminating service for 8chan, the response was nearly universal: Finally.

It was hard to disagree: it was on 8chan — which was created after complaints that the extremely lightly-moderated anonymous forum 4chan was too heavy-handed — that a suspected terrorist gunman posted a rant explaining his actions before killing 20 people in El Paso. This was the third such incident this year: the terrorist gunmen in Christchurch, New Zealand and Poway, California did the same; 8chan celebrated all of them.

To state the obvious, it is hard to think of a more reprehensible community than 8chan. And, as many were quick to point out, it was hardly the sort of site that Cloudflare wanted to be associated with as they prepared for a reported IPO. Which again raises the question: what took Cloudflare so long?

Moderation Questions

The question of when and why to moderate or ban has been an increasingly frequent one for tech companies, although the circumstances and content to be banned have often varied greatly. Some examples from the last several years:

  • Cloudflare dropping support for 8chan
  • Facebook banning Alex Jones
  • The U.S. Congress creating an exception to Section 230 of the Communications Decency Act for the stated purpose of targeting sex trafficking
  • The Trump administration removing ISPs from Title II classification
  • The European Union ruling that the “Right to be Forgotten” applied to Google

These may seem unrelated, but in fact all are questions about what should (or should not) be moderated, who should (or should not) moderate, when should (or should not) they moderate, where should (or should not) they moderate, and why? At the same time, each of these examples is clearly different, and those differences can help build a framework for companies to make decisions when similar questions arise in the future — including Cloudflare.

Content and Section 230

The first and most obvious question when it comes to content is whether or not it is legal. If it is illegal, the content should be removed.

And indeed it is: service providers remove illegal content as soon as they are made aware of it.

Note, though, that service providers are generally not required to actively search for illegal content, which gets into Section 230 of the Communications Decency Act, a law that is continuously misunderstood and/or misrepresented.1

To understand Section 230 you need to go back to 1991 and the court case Cubby v. CompuServe. CompuServe hosted a number of forums; a member of one of those forums made allegedly defamatory remarks about a company named Cubby, Inc. Cubby sued CompuServe for defamation, but a federal court judge ruled that CompuServe was a mere “distributor” of the content, not its publisher. The judge noted:

The requirement that a distributor must have knowledge of the contents of a publication before liability can be imposed for distributing that publication is deeply rooted in the First Amendment…CompuServe has no more editorial control over such a publication than does a public library, book store, or newsstand, and it would be no more feasible for CompuServe to examine every publication it carries for potentially defamatory statements than it would be for any other distributor to do so.

Four years later, though, Stratton Oakmont, a securities investment banking firm, sued Prodigy for libel, in a case that seemed remarkably similar to Cubby v. CompuServe; this time, though, Prodigy lost. From the opinion:

The key distinction between CompuServe and Prodigy is two fold. First, Prodigy held itself out to the public and its members as controlling the content of its computer bulletin boards. Second, Prodigy implemented this control through its automatic software screening program, and the Guidelines which Board Leaders are required to enforce. By actively utilizing technology and manpower to delete notes from its computer bulletin boards on the basis of offensiveness and “bad taste”, for example, Prodigy is clearly making decisions as to content, and such decisions constitute editorial control…Based on the foregoing, this Court is compelled to conclude that for the purposes of Plaintiffs’ claims in this action, Prodigy is a publisher rather than a distributor.

In other words, the act of moderating any of the user-generated content on its forums made Prodigy liable for all of the user-generated content on its forums — in this case to the tune of $200 million. This left services that hosted user-generated content with only one option: zero moderation. That was the only way to be classified as a distributor with the associated shield from liability, and not as a publisher.

The point of Section 230, then, was to make moderation legally viable; this came via the “Good Samaritan” provision. From the statute:

(c) Protection for “Good Samaritan” blocking and screening of offensive material

(1) Treatment of publisher or speaker
No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
(2) Civil liability
No provider or user of an interactive computer service shall be held liable on account of—

(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or
(B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).

In short, Section 230 doesn’t shield platforms from the responsibility to moderate; it in fact makes moderation possible in the first place. Nor does Section 230 require neutrality: the entire reason it exists was because true neutrality — that is, zero moderation beyond what is illegal — was undesirable to Congress.

Keep in mind that Congress is extremely limited in what it can make illegal because of the First Amendment. Indeed, the vast majority of the Communications Decency Act was ruled unconstitutional a year after it was passed in a unanimous Supreme Court decision. This is how we have arrived at the uneasy space that Cloudflare and others occupy: it is the will of the democratically elected Congress that companies moderate content above-and-beyond what is illegal, but Congress can not tell them exactly what content should be moderated.

The one tool that Congress does have is changing Section 230; for example, 2018’s SESTA/FOSTA act made platforms liable for any activity related to sex trafficking. In response, platforms removed all content remotely connected to sex work of any kind — Cloudflare, for example, dropped support for the Switter social media network for sex workers — in a way that likely caused more harm than good. This is the problem with using liability to police content: it is always in the interest of service providers to censor too much, because the downside of censoring too little is massive.

The Stack

If the question of what content should be moderated or banned is one left to the service providers themselves, it is worth considering exactly what service providers we are talking about.

At the top of the stack are the service providers that people publish to directly; this includes Facebook, YouTube, Reddit, 8chan and other social networks. These platforms have absolute discretion in their moderation policies, and rightly so. First, because of Section 230, they can moderate anything they want. Second, none of these platforms have a monopoly on online expression; someone who is banned from Facebook can publish on Twitter, or set up their own website. Third, these platforms, particularly those with algorithmic timelines or recommendation engines, have an obligation to moderate more aggressively because they are not simply distributors but also amplifiers.

Internet service providers (ISPs), on the other hand, have very different obligations. While ISPs are no longer covered under Title II of the Communications Act, which barred them from discriminating against data on the basis of content, it is the expectation of consumers and generally the policy of ISPs not to block any data because of its content (although ISPs have agreed to block child pornography websites in the past).

It makes sense to think about these positions in the stack very differently: the top of the stack is about broadcasting — reaching as many people as possible — and while you may have the right to say anything you want, there is no right to be heard. Internet service providers, though, are about access — having the opportunity to speak or hear in the first place. In other words, the further down the stack, the more legality should be the sole criterion for moderation; the further up, the more discretion and even responsibility there should be for content:

The position in the stack matters for moderation

Note the implications for Facebook and YouTube in particular: their moderation decisions should not be viewed in the context of free speech, but rather as discretionary decisions made by managers seeking to attract the broadest customer base; the appropriate regulatory response, if one is appropriate, should be to push for more competition so that those dissatisfied with Facebook or Google’s moderation policies can go elsewhere.

Cloudflare’s Decision

What made Cloudflare’s decision more challenging was three-fold.

First, while Cloudflare is not an ISP, they are much more akin to infrastructure than they are to user-facing platforms. In the case of 8chan, Cloudflare provided a service that shielded the site from Distributed Denial-of-Service (DDoS) attacks; without a service like Cloudflare, 8chan would almost assuredly be taken offline by Internet vigilantes using botnets to launch such an attack. In other words, the question wasn’t whether or not 8chan was going to be promoted or have easy access to large social networks, but whether it would even exist at all.
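Cloudflare’s actual mitigation is, of course, vastly more sophisticated, but one basic building block of DDoS defense is rate limiting: admitting traffic at a sustainable pace and rejecting the flood beyond it. The sketch below is a toy token-bucket limiter, purely illustrative (the class and parameter names are my own, not Cloudflare’s):

```python
import time


class TokenBucket:
    """Admit at most `rate` requests per second on average, with bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate          # tokens replenished per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity    # start with a full bucket
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1  # spend one token on this request
            return True
        return False          # over the limit: drop or challenge the request
```

A legitimate visitor never notices the bucket; a botnet sending thousands of requests per second exhausts it immediately, which is the asymmetry a shield like Cloudflare exploits at scale.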

To be perfectly clear, I would prefer that 8chan did not exist. At the same time, many of those arguing that 8chan should be erased from the Internet were insisting not too long ago that the U.S. needed to apply Title II regulation (i.e. net neutrality) to infrastructure companies to ensure they were not discriminating based on content. While Title II would not have applied to Cloudflare, it is worth keeping in mind that at some point or another nearly everyone reading this article has expressed concern about infrastructure companies making content decisions.

And rightly so! The difference between an infrastructure company and a customer-facing platform like Facebook is that the former is not accountable to end users in any way. Cloudflare CEO Matthew Prince made this point in an interview with Stratechery:

We get labeled as being free speech absolutists, but I think that has absolutely nothing to do with this case. There is a different area of the law that matters: in the U.S. it is the idea of due process, the Aristotelian idea is that of the rule of law. Those principles are set down in order to give governments legitimacy: transparency, consistency, accountability…if you go to Germany and say “The First Amendment” everyone rolls their eyes, but if you talk about the rule of law, everyone agrees with you…

It felt like people were acknowledging that the deeper you were in the stack the more problematic it was [to take down content], because you couldn’t be transparent, because you couldn’t be judged as to whether you’re consistent or not, because you weren’t fundamentally accountable. It became really difficult to make that determination.

Moreover, Cloudflare is an essential piece of the Facebook and YouTube competitive set: the argument that Facebook and YouTube should be able to moderate at will because people can go elsewhere falls apart if “elsewhere” does not have the scale to functionally exist.

Second, the nature of the medium means that all Internet companies have to be concerned about the precedent their actions in one country will have in different countries with different laws. One country’s terrorist is another country’s freedom fighter; a third country’s government acting according to the will of the people is a fourth’s tyrannically oppressing the minority. In this case, to drop support for 8chan — a site that was legal — is to admit that the delivery of Cloudflare’s services are up for negotiation.

Third, it is likely that at some point 8chan will come back, thanks to the help of a less scrupulous service, just as the Daily Stormer did when Cloudflare kicked them off two years ago. What, ultimately, is the point? In fact, might there be harm, since tracking these sites may end up being more difficult the further underground they go?

This third point is a valid concern, but one I, after long deliberation, ultimately reject. First, convenience matters. The truly committed may find 8chan when and if it pops up again, but there is real value in requiring that level of commitment in the first place, given said commitment is likely nurtured on 8chan itself. Second, I ultimately reject the idea that publishing on the Internet is a right that must be guaranteed by 3rd parties. Stand on the street corner all you like; at least there your terrible ideas will be limited by the physical world. The Internet, though, with its inherent ability to broadcast and congregate globally, is a fundamentally more dangerous medium that is by-and-large facilitated by third parties who have rights of their own. Running a website on a cloud service provider means piggy-backing off of your ISP, backbone providers, server providers, etc., and, if you are controversial, services like Cloudflare to protect you. It is magnanimous in a way for Cloudflare to commit to serving everyone, but at the end of the day Cloudflare does have a choice.

To that end I find Cloudflare’s rationale for acting compelling. Prince told me:

If this were a normal circumstance we would say “Yes, it’s really horrendous content, but we’re not in a position to decide what content is bad or not.” But in this case, we saw repeated consistent harm where you had three mass shootings that were directly inspired by and gave credit to this platform. You saw the platform not act on any of that and in fact promote it internally. So then what is the obligation that we have? While we think it’s really important that we are not the ones being the arbiter of what is good or bad, if at the end of the day content platforms aren’t taking any responsibility, or in some cases actively thwarting it, and we see that there is real harm that those platforms are doing, then maybe that is the time that we cut people off.

User-facing platforms are the ones that should make these calls, not infrastructure providers. But if they won’t, someone needs to. So Cloudflare did.

Defining Gray

I promised, with this title, a framework for moderation, and frankly, I under-delivered. What everyone wants is a clear line about what should or should not be moderated, who should or should not be banned. The truth, though, is that those bright lines do not exist, particularly in the United States.

What is possible, though, is to define the boundaries of the gray areas. In the case of user-facing platforms, their discretion is vast, and their responsibility, covering not simply moderation but also promotion, is significantly greater. A heavier hand is justified, as is external pressure on decision-makers; the most important regulatory response is to ensure there is competition.

Infrastructure companies, meanwhile, should primarily default to legality, but also, as Cloudflare did, recognize that they are the backstop to user-facing platforms that refuse to do their job.

Governments, meanwhile, beyond encouraging competition, should avoid using liability as a lever, and instead stick to clearly defining what is legal and what isn’t. I think it is legitimate for Germany, for example, to ban pro-Nazi websites, or the European Union to enforce the “Right to be Forgotten” within E.U. borders; like most Americans, I lean towards more free speech, not less, but governments, particularly democratically elected ones, get to make the laws.

What is much more problematic are initiatives like the European Copyright Directive, which makes platforms liable for copyright infringement. This inevitably leads to massive overreach and clumsy filtering, and favors large platforms that can pay for both filters and lawyers over smaller ones that cannot.

None of this is easy. I am firmly in the camp that argues that the Internet is something fundamentally different than what came before, making analog examples less relevant than they seem. The risks and opportunities of the Internet are both different and greater than anything we have experienced previously, and perhaps the biggest mistake we can make is being too sure about what is the right thing to do. Gray is uncomfortable, but it may be the best place to be.

I wrote a follow-up to this article in this Daily Update.

  1. For the rest of this section I am re-using text I wrote in this 2018 Daily Update; I am not putting the re-used text in blockquotes as I normally would for the sake of readability

Shopify and the Power of Platforms

While I am (rightfully) teased about how often I discuss Aggregation Theory, there is a method to my madness, particularly over the last year: more and more attention is being paid to the power wielded by Aggregators like Google and Facebook, but to my mind the language is all wrong.

I discussed this at length last year:

  • Tech’s Two Philosophies highlighted how Facebook and Google want to do things for you, while Microsoft and Apple are about helping you do things better.
  • The Moat Map discussed the relationship between network effects and supplier differentiation: the more that network effects were internalized the more suppliers were commoditized, and the more that network effects were externalized the more suppliers were differentiated.
  • Finally, The Bill Gates Line formally defined the difference between Aggregators and Platforms. This is the key paragraph:

    This is ultimately the most important distinction between platforms and Aggregators: platforms are powerful because they facilitate a relationship between 3rd-party suppliers and end users; Aggregators, on the other hand, intermediate and control it.

It follows, then, that debates around companies like Google that use the word “platform” and, unsurprisingly, draw comparisons to Microsoft twenty years ago, misunderstand what is happening and, inevitably, result in prescriptions that would exacerbate problems that exist instead of solving them.

There is, though, another reason to understand the difference between platforms and Aggregators: platforms are Aggregators’ most effective competition.

Amazon’s Bifurcation

Earlier this week I wrote about Walmart’s failure to compete with Amazon head-on; after years of trying to leverage its stores in e-commerce, Walmart realized that Amazon was winning because e-commerce required a fundamentally different value chain than retail stores. The point of my Daily Update was that the proper response to that recognition was not to try to imitate Amazon, but rather to focus on areas where the stores actually were an advantage, like groceries, but it’s worth understanding exactly why attacking Amazon head-on was a losing proposition.

When Amazon started, the company followed a traditional retail model, just online. That is, Amazon bought products at wholesale, then sold them to customers:

Amazon retail sits between suppliers and customers

Amazon’s sales proceeded to grow rapidly, not just in books, but also in other media products with large selections, like DVDs and CDs, that benefited from Amazon’s effectively unlimited shelf-space. This growth allowed Amazon to build out its fulfillment network, and by 1999 the company had seven fulfillment centers across the U.S. and three more in Europe.

Ten may not seem like a lot — Amazon has well over 300 fulfillment centers today, plus many more distribution and sortation centers — but for reference Walmart has only 20. In other words, at least when it came to fulfillment centers, Amazon was halfway to Walmart’s current scale 20 years ago.

It would ultimately take Amazon another nine years to reach twenty fulfillment centers (this was the time for Walmart to respond), but in the meantime came a critical announcement that changed what those fulfillment centers represented. In 2006 Amazon announced Fulfillment by Amazon, wherein 3rd-party merchants could use those fulfillment centers too. Their products would not only be listed on Amazon.com, they would also be held, packaged, and shipped by Amazon.

In short, Amazon.com effectively bifurcated itself into a retail unit and a fulfillment unit:

Amazon bifurcated itself into retail and fulfillment units

The old value chain is still there — nearly half of the products on Amazon.com are still bought by Amazon at wholesale and sold to customers — but 3rd parties can sell directly to consumers as well, bypassing Amazon’s retail arm and leveraging only Amazon’s fulfillment arm, which was growing rapidly:

Amazon Fulfillment Centers Over Time

Walmart and its 20 distribution centers don’t stand a chance, particularly since catching up means competing for consumers not only with Amazon but with all of those 3rd-party merchants filling up all of those fulfillment centers.

Amazon and Aggregation

There is one more critical part of the drawing I made above:

Amazon owns all customer interactions

Despite the fact that Amazon had effectively split itself in two in order to incorporate 3rd-party merchants, this division is barely noticeable to customers. They still go to Amazon.com, they still use the same shopping cart, they still get the boxes with the smile logo. Basically, Amazon has managed to incorporate 3rd-party merchants while still owning the entire experience from an end-user perspective.

This should sound familiar: as I noted at the top, Aggregators tend to internalize their network effects and commoditize their suppliers, which is exactly what Amazon has done.1 Amazon benefits from more 3rd-party merchants being on its platform because it can offer more products to consumers and justify the buildout of that extensive fulfillment network; 3rd-party merchants are mostly reduced to competing on price.

That, though, suggests there is a platform alternative — that is, a company that succeeds by enabling its suppliers to differentiate and externalizing network effects to create a mutually beneficial ecosystem. That alternative is Shopify.

The Shopify Platform

At first glance, Shopify isn’t an Amazon competitor at all: after all, there is nothing to buy on Shopify.com. And yet, 218 million people have bought products from Shopify merchants without even knowing the company existed.

The difference is that Shopify is a platform: instead of interfacing with customers directly, 820,000 3rd-party merchants sit on top of Shopify and are responsible for acquiring all of those customers on their own.

Merchants interact with customers, not Shopify

This means merchants have to stand out not in a search result on Amazon.com, or simply by offering the lowest price, but rather by earning customers’ attention through differentiated products, social media advertising, etc. Many, to be sure, will fail at this: Shopify does not break out merchant churn specifically, but it is almost certainly extremely high.

That, though, is the point.

Unlike Walmart, currently weighing whether to spend additional billions after the billions it has already spent trying to attack Amazon head-on, with a binary outcome of success or failure, Shopify is massively diversified. That is the beauty of being a platform: you succeed (or fail) in the aggregate.

To that end, I would argue that for Shopify a high churn rate is just as much a positive signal as it is a negative one: the easier it is to start an e-commerce business on the platform, the more failures there will be. And, at the same time, the greater likelihood there will be of capturing and supporting successes.

This is how Shopify can both in the long run be the biggest competitor to Amazon even as it is a company that Amazon can’t compete with: Amazon is pursuing customers and bringing suppliers and merchants onto its platform on its own terms; Shopify is giving merchants an opportunity to differentiate themselves while bearing no risk if they fail.

The Shopify Fulfillment Network

This is the context for one of the most interesting announcements from Shopify’s recent partner conference, Shopify Unite. The name should ring familiar: the Shopify Fulfillment Network.

From the company’s blog:

Customers want their online purchases fast, with free shipping. It’s now expected, thanks to the recent standard set by the largest companies in the world. Working with third-party logistics companies can be tedious. And finding a partner that won’t obscure your customer data or hide your brand with packaging is a challenge.

This is why we’re building Shopify Fulfillment Network—a geographically dispersed network of fulfillment centers with smart inventory-allocation technology. We use machine learning to predict the best places to store and ship your products, so they can get to your customers as fast as possible.

We’ve negotiated low rates with a growing network of warehouse and logistic providers, and then passed on those savings to you. We support multiple channels, custom packaging and branding, and returns and exchanges. And it’s all managed in Shopify.

The first paragraph explains why the Shopify Fulfillment Network was a necessary step for Shopify: Amazon may commoditize suppliers, hiding their brand from website to box, but if its offering is truly superior, suppliers don’t have much choice. That was increasingly the case with regards to fulfillment, particularly for the small-scale sellers that are important to Shopify not necessarily for short-term revenue generation but for long-run upside. Amazon was simply easier for merchants and more reliable for customers.

Notice, though, that Shopify is not doing everything on their own: there is an entire world of third-party logistics companies (known as “3PLs”) that offer warehousing and shipping services. What Shopify is doing is what platforms do best: act as an interface between two modularized pieces of a value chain.

Shopify as interface between 3PLs and merchants

On one side are all of Shopify’s hundreds of thousands of merchants: interfacing with all of them on an individual basis is not scalable for those 3PL companies; now, though, they only need to interface with Shopify.

The same benefit applies in the opposite direction: merchants don’t have the means to negotiate with multiple 3PLs such that their inventory is optimally placed to offer fast and inexpensive delivery to customers; worse, the small-scale sellers I discussed above often can’t even get an audience with these logistics companies. Now, though, Shopify customers need only interface with Shopify.

Platforms Versus Aggregators

Moreover, this is what Shopify has already accomplished when it comes to referral partners (who drive new merchants onto the platform), developers (who build apps for managing Shopify stores) and theme designers (who sell themes to customize the look-and-feel of stores). COO Harley Finkelstein said at Unite:

You’ve often heard me say that we at Shopify want to create more value for your partners than we capture for ourselves, and I find the best way to demonstrate this is by looking at what I call the “Partner Economy”. The “Partner Economy” is the amount of revenue that flows to all of you our partners…in 2018 Shopify made about a billion dollars [Editor: in revenue]. We estimate that you, our partners, made more than $1.2 billion.

In other words, Shopify clears the Bill Gates Line — it captures a minority of the value in the ecosystem it has created — and the Shopify Fulfillment Network should fit right in:

The Shopify platform

What is powerful about this model is that it leverages the best parts of modularity — diversity and competition at different parts of the value chain — and aligns the incentives of all of them. Every referral partner, developer, theme designer, and now 3PL provider is simultaneously incentivized to compete with each other narrowly and ensure that Shopify succeeds broadly, because that means the pie is bigger for everyone.

This is the only way to take on an integrated Aggregator like Amazon: trying to replicate what Amazon has built up over decades, as Walmart has attempted, is madness. Amazon has the advantage in every part of the stack, from more customers to more suppliers to lower fulfillment costs to faster delivery.

The only way out of that competition is differentiation; granted, Walmart has tried buying and launching new brands exclusive to its store, but differentiation when it comes to e-commerce goods doesn’t arise from top down planning. Rather, it bubbles up from widespread opportunity (and churn!), like that created by Shopify, supported by an entire aligned ecosystem.

  1. While Amazon is not technically an Aggregator — the company deals with physical goods that absolutely have both marginal and transaction costs — one way to understand the company’s dominance is that its massive investments in logistics have driven those costs much lower than its competitors, allowing the company to reap many of the same benefits.

Facebook, Libra, and the Long Game

When I get things wrong — and I was very much wrong about Facebook’s blockchain plans — the reason is usually a predictable one: confirmation bias. That is, I already have an idea of what a company’s motivations are, and then view news through that lens, failing to think critically about what parts of that news might actually disconfirm my assumptions.

So it was last month when the Wall Street Journal reported that Facebook was building a cryptocurrency-based payment system. I wrote in a Daily Update:

Start with the obvious: this isn’t a Bitcoin competitor. And why would it be? The entire point of Bitcoin is to be distributed; Facebook’s power comes from its centralization. Indeed, this is probably the single most important prism through which to examine whatever it is that Facebook does in the space: the company is not going to betray its dominant position, but rather seek to strengthen it. That is why I am not too concerned about not knowing the implementation details: take it as a given that whatever role users have to play in this network, Facebook will have final control.

I stand by the first part of that excerpt: for all of the positive attributes Facebook is highlighting about Project Libra — which Facebook, in conjunction with the newly formed Libra Association, announced last week — it is unreasonable to expect that Facebook would invest significant resources in something that would weaken its position. What I got wrong was presuming that meant overt Facebook control. Frustratingly, it was an error that should have been obvious both in my original analysis and in the broader view of the Internet I have explained through Aggregation Theory.

What Is Libra

Libra is being presented as a cryptocurrency based on a blockchain: transactions are recorded on a shared ledger and verified by “miners” independently solving cryptographic problems and arriving at a consensus that the transaction is legitimate and should be added to the ledger permanently.

In practice, it is much more complicated: while a limited set of “validators” — aka miners — share a history of transactions in (individual) blocks that are chained together (i.e. a blockchain), what Libra actually exposes is the current state of the ledger. This means that adding new transactions can be much quicker and more efficient — more akin to adding a line to a spreadsheet than rebuilding the entire spreadsheet from scratch.

In other words, there is a trade-off between trust and efficiency: whereas anyone can “rebuild the spreadsheet” in the case of a cryptocurrency like Bitcoin, where the blockchain is fully exposed, normal users have to trust Libra’s validators.1 On the other hand, Bitcoin, thanks to the overhead of communicating and verifying every transaction, can only manage around 7 transactions a second; Libra is promising 1,000 transactions per second.
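To make that distinction concrete, here is a minimal sketch in Python — the names and data structures are illustrative, not Libra’s actual API. A Bitcoin-style verifier rebuilds the whole “spreadsheet” from the full transaction history, while a Libra-style client accepts the validators’ published state and must trust it:

```python
def apply_tx(state, tx):
    """Apply a single transfer to an account-balance mapping."""
    state = dict(state)
    state[tx["from"]] = state.get(tx["from"], 0) - tx["amount"]
    state[tx["to"]] = state.get(tx["to"], 0) + tx["amount"]
    return state

def replay_chain(transactions):
    """Bitcoin-style verification: rebuild the entire ledger from
    the full transaction history -- trustless, but slow."""
    state = {}
    for tx in transactions:
        state = apply_tx(state, tx)
    return state

def trust_snapshot(snapshot):
    """Libra-style access: accept the validators' current ledger
    state directly -- fast, but requires trusting the validators."""
    return snapshot

txs = [
    {"from": "alice", "to": "bob", "amount": 10},
    {"from": "bob", "to": "carol", "amount": 4},
]
full = replay_chain(txs)                        # derived from history
claimed = {"alice": -10, "bob": 6, "carol": 4}  # published by validators
assert trust_snapshot(claimed) == full          # honest validators agree
```

The trade-off falls out of the sketch: `replay_chain` does work proportional to the entire history, while `trust_snapshot` does none — which is exactly where the trust requirement comes from.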

The Validators

Who, then, are the validators? Well, Facebook is one, but only one: currently there are 28 “Founding Members”, including merchants, venture capitalists, and payment networks, that meet two of the following three criteria:

  • More than $1 billion USD in market value or more than $500 million USD in customer cash flow
  • Reach more than 20 million people a year
  • Recognition as a top-100 industry leader by a third-party association such as Fortune or S&P

These “Founding Members” are required to make a minimum investment of $10 million and provide computing power to the network. In addition, there are separate requirements for non-profit organizations and academic institutions that rely on a mixture of budget, track record, and rankings; a minimum investment may not be necessary. Libra intends to have 100 Founding Members by the time it launches next year.
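As a sketch, the two-of-three test for corporate members could be written as a simple predicate — the thresholds come from the criteria above, while the function and parameter names are my own:

```python
def qualifies_as_founding_member(market_value_usd, customer_cash_flow_usd,
                                 people_reached_per_year, top_100_leader):
    """Return True if a prospective corporate member meets at least
    two of the Libra Association's three published criteria."""
    criteria = [
        # More than $1B in market value OR more than $500M in customer cash flow
        market_value_usd > 1_000_000_000 or customer_cash_flow_usd > 500_000_000,
        # Reach more than 20 million people a year
        people_reached_per_year > 20_000_000,
        # Recognized as a top-100 industry leader by a third party
        top_100_leader,
    ]
    return sum(criteria) >= 2

# A large payment network: big, broad reach, recognized leader
assert qualifies_as_founding_member(200e9, 0, 1e9, True)
# A small startup meeting only one criterion does not qualify
assert not qualifies_as_founding_member(0, 0, 25_000_000, False)
```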

Here is the important thing to understand about the Libra Association: while its members — who again, are the validators — do control the Libra protocol, Facebook does not control the validators. Which, by extension, means that Facebook will not control Libra.

Libra Versus a Facebook Coin

To understand the distinction, consider an alternative route that Facebook could have taken: a so-called “Facebook Coin”. In that case Facebook would have had total control over the protocol, and to be sure, this would have distinct advantages for Facebook specifically and the usability of a “Facebook Coin” generally:

  • Efficiency and scalability would be maximized because Facebook could coordinate perfectly with itself
  • Development would be significantly accelerated because Facebook would not have to achieve consensus
  • Facebook would have perfect knowledge of all transactions on the system because it would control all entry points

This is the Trust-Efficiency tradeoff taken to the opposite extreme from Bitcoin:

A theoretical Facebook Coin would be the opposite of Bitcoin

With Bitcoin, there is no need to trust anyone — you can verify the entire blockchain yourself — but at the cost of efficiency of transactions. A Facebook Coin, on the other hand, would require complete trust of Facebook, but transactions would be far more efficient as a result.

The most obvious example of this is WeChat Pay: WeChat handles the transactions, stores the money, and is the sole source of authority about who owns what, and thanks to the ubiquity of WeChat and the efficiency of this model, WeChat Pay (along with Alipay) has become the default payment mechanism in China.

Unsurprisingly, WeChat doesn’t use any sort of blockchain-based technology. Why would it? The entire point of a blockchain is to distribute a ledger across multiple parties, which is fundamentally less efficient than simply storing the entire ledger in a single database managed by one party.

Trust Versus Efficiency

This gets at the error in analysis I referenced above: because I was anchored on the idea of Facebook capturing transaction data, I missed the obvious signal in last month’s Wall Street Journal report that Facebook was using some sort of blockchain technology (leaving aside the quibble on the definition noted above): whatever Facebook was announcing would not be completely controlled by Facebook, because if the goal were Facebook control of a Facebook Coin then a blockchain would be a silly way to implement it.

A decentralized blockchain versus a centralized database

The best way to understand Libra, then, is as a sort of distributed ledger that is a compromise between a fully public blockchain and an internal database:

A distributed ledger as a compromise between a decentralized blockchain and a centralized database

This means that the overall system is much more efficient than Bitcoin, while the necessary level of trust is spread out to multiple entities, not one single company:

Bitcoin versus Libra versus a theoretical Facebook coin

The trade-off is that Libra is not fully permissionless, although the Libra White Paper does say that is the long-term goal:

To ensure that Libra is truly open and always operates in the best interest of its users, our ambition is for the Libra network to become permissionless. The challenge is that as of today we do not believe that there is a proven solution that can deliver the scale, stability, and security needed to support billions of people and transactions across the globe through a permissionless network. One of the association’s directives will be to work with the community to research and implement this transition, which will begin within five years of the public launch of the Libra Blockchain and ecosystem.

Time will tell if this is possible: if you flip the “trust” axis in the above graphs, the current state of affairs looks like this:

Is there an efficiency frontier when it comes to no-trust and efficiency?

It may very well prove to be the case that there is a sort of efficient frontier when it comes to “no-trust” versus “efficiency”: that is, any decrease in necessary trust requires a corresponding decrease in efficiency. From my perspective the safest assumption about Libra’s future is that efficiency will be the ultimate priority, which means that the more Libra is used, the more difficult it will be to ever transition to a permissionless model.

The Credit Card Challenge

Still, even if Libra remains controlled by an ever-expanding-but-still-limited set of validators, that is likely to be a far easier “sale” than a Facebook Coin controlled by a single company. Leaving aside the fact that Facebook is not exactly swimming in trust these days when it comes to users, why would any other large company want to adopt a currency with a single point of corporate control?

Keep in mind the situation in the United States and other developed countries is much different than in China: credit cards have their flaws, particularly in terms of fees, but they are widely accepted by merchants and widely used by consumers. China, on the other hand, mostly leapfrogged credit cards entirely; this meant that WeChat Pay’s (and Alipay’s) competition was cash, and in that case WeChat Pay’s advantages over cash (which are massive) could overcome any concerns around centralized control.

A theoretical Facebook Coin’s relative advantage to credit cards, on the other hand, would be massively smaller, which means obstacles to widespread adoption — like trusting Facebook exclusively — would likely be insurmountable:

How new payment systems are — or are not — adopted

Thus the federation of trust inherent in Libra, despite the loss of efficiency that entails: by not being in control, and by actively including corporations like Spotify and Uber that will provide places to use Libra outside of Facebook, and payment networks like Visa and PayPal that will facilitate such usage, Facebook is increasing the chances that Libra will actually be used instead of credit cards.

Aggregation and the Long Game

I do think it is overly cynical to completely dismiss the advertised benefits of Libra: remittances, for example, have long been the go-to example of how cryptocurrencies can have societal benefit, for a very good reason — the current system exacts major fees from the population that can least afford to bear them. And, while I just spent an entire section on credit cards, the reality is that credit card penetration is much lower amongst the poor in developed countries and in developing countries generally: a digital currency ultimately premised on owning a smartphone has the potential to significantly expand markets to the benefit of both consumers and service providers.

To put it another way, Libra has the potential to significantly decrease friction when it comes to the movement of money. This potential is hardly limited to Libra — reduced friction is one of the selling points of digital currencies generally — but by virtue of being supported by Facebook, particularly through the Calibra wallet that will be both a standalone app and built into Facebook Messenger and WhatsApp, accessing Libra will likely be much simpler than accessing other cryptocurrencies. When it comes to decreasing friction, simplifying the user experience matters just as much as eliminating intermediary institutions.

There is also another component of trust beyond caring about who is verifying transactions: confidence that the value of Libra will be stable. This is the reason why Libra will have a fully-funded reserve denominated in a basket of currencies. This does not foreclose Libra becoming a fully standalone currency in the long run, but for now both users and merchants will be able to trust that the value of Libra will be sufficiently stable to use it for transactions.
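A minimal sketch of how a fully-funded reserve sets the unit’s value, assuming invented basket contents and exchange rates (Libra’s actual reserve composition was not public at the time):

```python
# Units of each currency held in reserve per Libra -- invented numbers
reserve_units = {"USD": 0.50, "EUR": 0.18, "JPY": 14.0, "GBP": 0.11}
# Market exchange rates to USD -- also purely illustrative
usd_rate = {"USD": 1.00, "EUR": 1.12, "JPY": 0.0093, "GBP": 1.27}

def libra_value_usd(units, rates):
    """USD value of one Libra: the summed USD value of each currency
    in the reserve. Stability comes from diversification -- a move in
    any one currency shifts the total only by that currency's share
    of the basket."""
    return sum(amount * rates[cur] for cur, amount in units.items())

value = libra_value_usd(reserve_units, usd_rate)
# 0.50*1.00 + 0.18*1.12 + 14.0*0.0093 + 0.11*1.27 = ~0.9715
```

The point of the full funding is that `value` is backed one-for-one: every Libra in circulation corresponds to actual reserve holdings, which is what lets users and merchants treat it as stable for transactions.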

If all of these bets pay off — that users and merchants will trust a consortium more than Facebook; that Libra will be cheaper and easier to use, more accessible, and more flexible than credit cards; and that Libra itself will be a reliable store of value — then that decrease in friction will be realized at scale.

And this is when this bet would pay off for Facebook (and the second point I missed in my earlier analysis): the implication that digital currencies will do for money what the Internet did for information is that the very long-term trend will be towards centralization around Aggregators. When there is no friction, control shifts from gatekeepers controlling supply to Aggregators controlling demand. To that end, by pioneering Libra, building what will almost certainly be the first wallet for the currency, and bringing to bear its unmatched network for facilitating payments, Facebook is betting it will offer the best experience for digital currency flows, giving it power not by controlling Libra but rather by controlling the most users of Libra.

Will It Work?

Libra’s success, if it comes, will likely proceed in stages, with different challenges and competitors at each stage:

  • Initially the most obvious use case for Facebook’s Calibra wallet application will be peer-to-peer payments, which means the competitor will be applications like PayPal’s Venmo. Here Facebook’s biggest advantage will be leveraging its network and messaging applications.
  • The second use case will be using Libra to transact with merchants, who stand to benefit both from reduced fees relative to credit cards as well as larger addressable markets (i.e. potential users who don’t have credit cards). Note that none of Libra’s Founding Members are banks, which impose the largest percentage of credit card fees; Visa and Mastercard, on the other hand, are, like PayPal, happy to sit on top of Libra.
  • The largest leap will come last: Libra as a genuine currency, not simply a medium for transactions. This will be a function of volume in the previous two use cases, and is understandably concerning to governments all over the world. This, though, is another advantage of Facebook giving up direct control of Libra: while regulators will be able to limit wallets like Calibra (which will fully abide by Know-Your-Customer and Anti-Money-Laundering regulations), Libra — particularly if it achieves a fully permissionless model — would be much more difficult to control.

It is easy to see how Facebook, given its size, would thrive in that final state, for the reasons I detailed above. Just as Google long boasted that the more people use the Internet the more revenue Google generates, it stands to reason that the more people use digital money the more it would benefit dominant digital companies like Facebook, whether that be through advertising, transactions, or simply making networks that much more valuable.

That, though, is also a reason to be skeptical: the idea of Google making more money by people using the Internet more was once viewed as a happy alignment of incentives that justified Google’s services being free; today the centralization — and thus money-making potential — that follows a reduction in friction is much better understood, and there is much more concern about just how much power these Aggregators have.

This is particularly the case with Facebook: despite all of the company’s efforts to design a system that does not entail trusting Facebook exclusively — again, this is not a Facebook Coin — Libra is already widely known as a Facebook initiative. Unless the consumer benefits are truly extraordinary, that may be enough to prevent Libra from ever gaining escape velocity. This applies even more to the Calibra wallet: Facebook promises not to mix transaction data with profile data, but that entails, well, trust that Facebook may have already lost.

Still, that doesn’t mean digital currencies will never make it: I do think that Libra gets closer to a workable balance between trust and efficiency than Bitcoin, at least when it comes to being usable for transactions and not simply a store of value; the question is who can actually get such a currency off the ground. Certainly Facebook’s audacity and ambition should not be underestimated, and the company’s network is the biggest reason to believe Libra will work; Facebook’s brand is the biggest reason to believe it will not.

  1. To clarify, this roadmap on the Libra developers blog includes plans to allow anyone to “rebuild the spreadsheet”:

    Validator APIs to support full nodes (nodes that have a full replica of the blockchain but do not participate in consensus). This feature allows for the creation of replicas that can support scaling access to the blockchain and the auditing of the correct execution of transactions.

    However, only validators can actually validate transactions (unlike Bitcoin, where anyone can be a miner/validator).

Tech and Antitrust

Four years ago I wrote Aggregation Theory, which argued that technology companies, uniquely enabled by zero marginal costs, were dominant by virtue of user preference driving suppliers onto their platforms, creating a virtuous cycle. Then, one month later, I predicted that the end state of Aggregation Theory would be increased demands for antitrust action. From Aggregation and the New Regulation:

This last point is key: under Aggregation Theory the winning aggregators have strong winner-take-all characteristics. In other words, they tend towards monopolies. Google is perhaps the best Aggregation Theory example of all — the company modularized individual pages from the publications that housed them even as it became the gateway to said pages for the vast majority of people — and so, given their success, perhaps it shouldn’t be a surprise that the company is under formal investigation by the European Union.

There was a second more subtle point in that article, though:

In other words, the regulation situation for these massive winner-take-all companies is not hopeless, but it has changed: their strength derives from the customer relationships they own, which means quiet backroom deals and straight-up arm wrestling of the Google and Uber varieties are liable to backfire in the face of overwhelming public opinion; it is in shaping that public opinion that the real battle will be fought. And while it’s true that the direct relationship aggregation companies have with their users is an advantage in this fight, the overwhelming power of social media is the new counterweight: it is easier than ever to reach said users with a report or column that resonates deeply. Your average writer or reporter has more (potential) power, not less.

This seems like the best explanation for how we have arrived at the current moment; Reuters reported last week that the U.S. Department of Justice and the Federal Trade Commission were divvying up tech companies for potential antitrust investigations — Google and Apple to the former, and Facebook and Amazon to the latter — a seemingly natural endpoint to what has been a mounting drumbeat for regulatory action against tech.

There’s just one problem: it’s not clear what there is to investigate.

I should state an obligatory caveat: I am not a lawyer or economist, which is relevant given that U.S. antitrust cases are adjudicated in court and largely driven by expert testimony. That reality, though, only underscores the point: any case against these four companies (with possibly one exception, which I will get to momentarily), will be extremely difficult to win.1 To explain why, it is worth examining all four companies with regards to:

  • Whether or not they have a durable monopoly
  • What anticompetitive behavior they are engaging in
  • What remedies are available
  • What will happen in the future with and without regulator intervention

In addition, for comparison’s sake, I will evaluate late 1990s Microsoft, the last major tech antitrust case in the United States, along the same dimensions.

Durable Monopolies

The FTC defines monopolization as follows:

Courts do not require a literal monopoly before applying rules for single firm conduct; that term is used as shorthand for a firm with significant and durable market power — that is, the long term ability to raise price or exclude competitors. That is how that term is used here: a “monopolist” is a firm with significant and durable market power. Courts look at the firm’s market share, but typically do not find monopoly power if the firm (or a group of firms acting in concert) has less than 50 percent of the sales of a particular product or service within a certain geographic area. Some courts have required much higher percentages. In addition, that leading position must be sustainable over time: if competitive forces or the entry of new firms could discipline the conduct of the leading firm, courts are unlikely to find that the firm has lasting market power.

There are (at least) two major questions that arise from this: how is the relevant market defined, and what does it mean for market power to be sustainable over time?

1990s Microsoft: Microsoft was found to have a monopoly on operating systems for personal computers, and that advantage was found to be durable because of the lock-in created by the network effects between developers using the Windows API and users. Both conclusions were reasonable.

Google: Google certainly has a dominant position in search, but the real question is around durability. Google has long argued that “Competition is only a click away”, which has the welcome benefit of being true.

The European Commission handled this objection by arguing that Google also enjoys network effects:

There are also high barriers to entry in these markets, in part because of network effects: the more consumers use a search engine, the more attractive it becomes to advertisers. The profits generated can then be used to attract even more consumers. Similarly, the data a search engine gathers about consumers can in turn be used to improve results.

This is certainly a much more tenuous lock-in than the Windows API, but I think it is a plausible one.

Apple: There is no company for which the question of market definition matters more than Apple. The company is eager to point out that the iPhone has a minority smartphone share in every market in which it competes; even in the U.S., Apple’s best market, the iPhone has 45% share, less than the 50 percent of sales the FTC suggests as a cut-off.

In Europe, Apple is likely in trouble when it comes to the European Commission’s investigation of Apple over Spotify’s complaints about the App Store. In the Google Android case the European Commission determined that “Google is dominant in the markets for general internet search services, licensable smart mobile operating systems and app stores for the Android mobile operating system.” That last clause leaves room for Apple to be found dominant in app stores for the iOS mobile operating system, at which point taking 30% of Spotify’s revenue (or forbidding Spotify from even linking to a web page with a sign-up form) will almost certainly be ruled illegal.

I strongly suspect the Department of Justice will have a much more difficult time convincing a federal court that such a narrow definition is appropriate, but at the same time, I’m not certain that “smartphones” is the correct market definition either. Suggesting that switching ecosystems is a sufficient antidote to Apple’s behavior is like suggesting that users subject to a hospital monopoly in their city should simply move elsewhere; asking a third party to remedy anticompetitive behavior by incurring massive inconvenience with zero immediate gain is just as problematic as making up market definitions to achieve a desired result.

Facebook: Here again market definitions are very fuzzy. Most people have multiple social media accounts across both Facebook and non-Facebook services, which means any sort of workable market share definition would have to rely on “time-spent” or some other zero-sum metric. Moreover, it’s not clear what is or is not a social network: does iMessage count? What about text messaging generally? What about email?

There certainly is an argument that Google and Facebook are a duopoly when it comes to digital advertising, but it is not as if either has the power to foreclose supply: there is effectively infinite advertising inventory on the Internet, which suggests that Google and Facebook earn more advertising dollars because they are better at advertising, not because they foreclose competition.

Amazon: There really is no plausible argument that Amazon has a monopoly. Yes, the company has around 37% of e-commerce sales, but (1) that is obviously less than 50% and (2) the competition is only a click away! Moreover, it’s not clear why “e-commerce” is the relevant market, and in terms of retail Amazon has low single-digits market share.

Anticompetitive Behavior

But for a few exceptions, everything that follows is moot if the company in question is not found to have a durable monopoly. After all, “anticompetitive behavior” is simply another name for “driving differentiation”, which no one should want to be illegal for any company that is not in a dominant position; it is the potential to make outsized profits that drives innovation.

Still, it is worth examining what, if anything, these companies do that might be considered problematic.

1990s Microsoft: Microsoft was found guilty of illegally bundling Internet Explorer with Windows and unfairly restricting OEMs from shipping computers with alternative browsers (or alternative operating systems). The first objection is particularly interesting in 2019, given that it is unimaginable that any operating system would ship without web browser functionality (which, at a minimum, would eliminate an essential distribution channel for 3rd-party software). The second is much more problematic: as I wrote in Where Warren’s Wrong, competition-constraining contracts from dominant players should be viewed with extreme skepticism, as their purpose is almost always to extend dominance, not increase consumer welfare.

Google: Again — and note a developing theme here — Google’s anticompetitive behavior is relatively clear. First, the company consistently favors its own properties in search results, particularly “above-the-fold” — that is, results that are not actually search results but which seek to answer the user’s query directly. A partial list:

  • Google by-and-large removed video segments from competing properties in favor of YouTube videos
  • Google offers local results from Google Maps above search results that tend to favor Yelp, TripAdvisor, etc.
  • Google offers hotel and flight listings above search results that tend to favor Booking, Expedia, etc.
  • Google displays AMP-enabled websites (a Google technology) above search results that are agnostic about how a web page is displayed.
  • Google displays tweets for individuals (thanks to a beneficial relationship with Twitter) above search results that tend to favor LinkedIn, Facebook, etc.

Of these, local is probably the most open-and-shut case (although Google’s efforts around travel and hospitality are on the same track): Google Maps results were worse, got better when Google scraped data from competitors (which it stopped doing after an FTC investigation), and are now somewhat competitive by sheer force of exposure to customers defaulting to Google search.

Then, of course, there is Android, where Google leveraged the Play Store to force Android OEMs to feature Search and Chrome, and further forbade said OEMs from shipping any phones with open-source Android alternatives (a la Microsoft). This is one case the European Commission got exactly right.

Apple: As I argued in Antitrust, the App Store, and Apple, Apple is leveraging its position in the smartphone market to earn rents in the market for digital goods:

To put it another way, Apple profits handsomely from having a monopoly on iOS: if you want the Apple software experience, you have no choice but to buy Apple hardware. That is perfectly legitimate. The company, though, is leveraging that monopoly into an adjacent market — the digital content market — and rent-seeking. Apple does nothing to increase the value of Netflix shows or Spotify music or Amazon books or any number of digital services from any number of app providers; they simply skim off 30% because they can.

For this to be illegal does not necessarily require that Apple have a monopoly: tying (i.e. iOS users must use the App Store) is per se illegal in theory, but in practice the Supreme Court has dramatically constricted the definition of tying to include a requirement that the tie-er have market dominance; the Supreme Court also declined to review the Court of Appeals decision in the Microsoft case, which held that courts should use a rule of reason test for software specifically that also considers the benefits of tying, not simply the downsides.

I would certainly argue that the requirement that digital content use Apple’s payment processor (and thus give up 30%) has downsides that outweigh the benefits, but the truth is that this is a case that, under U.S. antitrust law, is harder to make than it was 20 years ago.

Facebook: There are certainly plenty of reasons to be upset with Facebook when it comes to issues of privacy, but the company has not done anything illegal from an antitrust perspective.

I am, to be clear, distinguishing anti-competitive behavior from anti-competitive mergers. I have made the case as to why Facebook’s acquisition of Instagram was so problematic, and this is the area that needs the most urgent attention from anyone who cares about competition. The single best way to maintain a dominant position in a market as dynamic as technology is to use the outsized profits that come from winning in one market to buy the winner in another; it follows, then, that the best way to spur competition in the long run is to force companies to compete with new entrants, not buy them out.

Amazon: Make no mistake, Amazon drives a very hard bargain with its suppliers. Those suppliers, though, have a whole host of alternatives through which to sell their product. Meanwhile, those hard bargains accrue to consumers’ benefit.

Similarly, it is very hard to see why Amazon can’t offer its own branded goods; this practice is widespread in retail, and for good reason: consumers get a better price, not only on the store-branded goods, but also on 3rd-party goods that can be priced more competitively since the retailer is making its margin on its own goods.

In short, more than any company on this list, the arguments against Amazon fall apart on the first point: Amazon simply isn’t a monopoly.


Remedies

Remedies by definition come last: there has to be something worth remedying! Still, it is interesting to consider what the appropriate remedy for each company would be if they were indeed found to be a monopoly engaged in illegal anticompetitive behavior.

1990s Microsoft: Microsoft was originally ordered to be broken up, although this remedy was overturned on appeal. The idea was that Windows would better serve all 3rd-party software suppliers if it weren’t incentivized to favor its own offerings. Ultimately, though, the company agreed to open up its APIs, although critics argued that the specifics simply cemented Windows’ dominance, instead of making it possible to build a Windows alternative that could run 3rd-party Windows applications.

The European Commission went further, both in terms of requiring interoperability and in presenting users with a choice of browsers and media players. In both cases 3rd-party competitors actually won in the long run — but they won because they were clearly better (first Firefox and then Chrome for browsers, and iTunes for media players).

Google: An effective Google remedy would likely be more about constraining Google behavior than it would be about restructuring Google itself. Google might be forbidden from offering its own results for things like local search, or be forced to feature results from competitors according to an algorithm overseen by a court observer. There would also likely be a large fine.

Apple: The obvious remedy for Apple would be allowing 3rd-party payment processors for apps; frankly, I think this might go too far, as there are real benefits to Apple controlling everything API-related on the iOS platform. I would be satisfied with Apple allowing apps to launch web views for payment processing that is clearly handled on the app’s own webpage.

Alternatively, Apple could be forced to significantly reduce its App Store take rate, but I would prefer that Apple be forced to compete for payment processing business, which would achieve a similar result.

Facebook: Facebook, fascinatingly enough, given its lack of anticompetitive behavior, has the most obvious remedy: break apart Facebook, Instagram, and WhatsApp. I do believe this would be beneficial for competition: Instagram being an independent company would not only add another competitor for digital advertising, but would also make other companies like Snapchat more competitive by virtue of forcing advertisers to diversify. Again, though, this is more about a failure in merger review.

Amazon: Amazon has made anti-competitive acquisitions of its own, like Zappos and Diapers.com. Those platforms are gone, though, making any sort of breakup unrealistic (this is likely at least one factor in Facebook’s plans to integrate messaging across its platforms — that will make a breakup that much more difficult). And as far as selling its own products goes, not only is that probably not a problem, but there is little evidence 3rd-party sellers are being hurt by Amazon’s policies, and plenty of evidence that they are helped by having access to Amazon’s customers. Moreover, highly differentiated suppliers have found success prioritizing other retailers if Amazon squeezes too hard.

The Future

Ideally, an antitrust action is not simply about punishing bad behavior in the past, but also about ensuring competition going forward. To that end, it is worth considering whether the upheaval that would result from any sort of investigation would actually make a long-term difference.

1990s Microsoft: Here the Microsoft case is particularly pertinent. It is my contention that Microsoft failed to compete on the Internet and in mobile because the company was fundamentally unsuited to do so, both in terms of culture and capability.

The implication of this conclusion is that the antitrust case against Microsoft was largely a waste of time: the company would have been surpassed by Google and Apple regardless; indeed, the company only returned to prominence when it embraced a market that suited its capabilities and transformed its culture.

Many disagree, to be sure, arguing that the antitrust case prevented Microsoft from foreclosing Google, although it is never clear how Microsoft would have done so (nor why Microsoft failed in mobile, where it was not constrained). A better argument is IBM: the government may have ultimately failed in its antitrust case against the mainframe behemoth, but IBM did voluntarily separate its software sales from its hardware sales, setting the stage for its own disruption; then again, the bigger factor was that IBM simply didn’t care enough about PCs to lock them down effectively.

Google: I wrote that we had reached Peak Google in 2014; clearly I was wrong, at least as far as the company’s results and stock price were concerned, but notably the company is ever more dependent on search advertising. One of my biggest mistakes was underestimating the degree to which Google could monetize mobile, not simply through increased adoption but also stuffing results with ever more ads (which, in the limited viewport of smartphones, are even easier to tap on).

That, though, is also an argument that my mistake was one of timing, not thesis (still a mistake, to be clear). For all of Google’s seeming advantages in machine learning, the company has yet to come up with a true second act in terms of driving revenue and profits (with the notable exception of YouTube, an acquisition).

Frankly, I suspect this is why Google is the most at-risk in this analysis: when a company is growing, it has no need to engage in anti-competitive behavior; it is only when the low-hanging fruit is gone that the risk of leveraging one market into another becomes worth it.

Apple: That analysis applies to Apple as well: the company introduced the “Services Narrative” in the 6S cycle, which in retrospect was when iPhone growth plateaued. Suddenly the rent Apple collected from apps was not simply an added bonus to a thriving iPhone business but a core driver of the company’s stock price.

At the same time, it is not as if iPhones are disappearing: there is still an argument to act for the sake of all of the businesses that will be hurt in the meantime. The same argument applies to Google: just because antitrust action isn’t necessarily causal when it comes to a company being eclipsed doesn’t mean it can’t be an important tool to maintain competition in the meantime.

Facebook: As I noted above, Instagram bought Facebook another five to ten years of dominance. That, though, is itself evidence that social networks are not forever. Each generation has its own preferences, and as long as acquisition rules around network-based companies are significantly beefed up, the best solution for Facebook, at least from an antitrust perspective, is simply time.

Amazon: This probably deserves a longer article at some point, but I think there is reason to believe that Amazon’s consumer business has also slowed considerably. The company is pushing more into ads, squeezing its suppliers, and driving customers to 3rd-party merchants with their attendant higher margins (for Amazon). This makes sense: there are certain categories of products that make sense for e-commerce, and Amazon does very well there, but will — and perhaps has — hit a ceiling as far as overall retail share is concerned.

Indeed, a mistake many tech company critics make is assuming that graphs that are up-and-to-the-right continue indefinitely; nearly all of those graphs are S-curves that will flatten out, and it is dangerous to make regulatory decisions without some sort of insight as to when that flattening will occur.

Ultimately, when it comes to antitrust actions against tech companies in the U.S., there really isn’t nearly as much there as all of the attendant fervor would suggest. Google is absolutely vulnerable, Apple somewhat less so, and it is very hard to see any sort of case against Facebook or Amazon.

And again, this is probably a trailing indicator: Google and Apple have maximized their gains from their most important products, while Facebook and Amazon (particularly AWS) still have growth potential. I don’t think this alignment is a coincidence.

That is not to say that tech deserves no regulation: questions of privacy, for example, are something else entirely. Nor, for that matter, is antitrust irrelevant in the United States generally: concentration has increased dramatically throughout the economy.

What is driving that concentration matters, though: at the end of the day tech companies are powerful because consumers like them, not because they are the only option. Consumer welfare still matters, both in a court of law and in the court of public opinion.

  1. I will mention the European Commission’s different standards in passing; I addressed the differences between U.S. and European approaches more fully in 2016’s Antitrust and Aggregation.

The First Post-iPhone Keynote

This article was previously titled ‘Apple’s Audacity’

It says something about the nature of hardware coverage that a computer the vast majority of Apple’s customers will never own was the headline from the company’s keynote at its annual Worldwide Developers Conference (WWDC). The Mac Pro starts at $6,000, and will be configurable to a number many times that. If you think that is absurd, or would simply rather buy a new car, well, you’re not the target customer.

At the same time, here I am, leading with the Mac Pro, just like those headline writers, and I’m not incentivized by hardware driving clicks: it was fun seeing what Apple came up with in its attempt to build the most powerful Mac ever, in the same way it is fun to read about supercars. More importantly, I thought that sense of “going for it” that characterized the Mac Pro permeated the entire keynote: Apple seemed more sure of itself and, consequently, more audacious than it has in several years.

The iPhone Plateau

In retrospect, the previous malaise around Apple should have been expected. When a product like the iPhone comes along — and make no mistake, there are very few products like the iPhone! — the goal is simply to hold on to a rocket ship. Growth was trivial: simply add a new country or a new carrier, and predict iPhone sales with eerie accuracy. That all culminated with the iPhone 6, when Apple’s forecasts were finally wrong — there was far more pent-up demand for larger screens than anyone anticipated.

It turned out that was the peak: Apple would again miss forecasts with the iPhone 6S, but this time their mistake was expecting growth that never materialized, eventually resulting in a $2 billion inventory draw-down. The forecasts did get better, but as I explained last year, unit growth never returned:

The 6S was the new normal: iPhone unit sales have been basically flat ever since:

[Chart: iPhone unit sales over time]

What has changed is Apple’s pricing: the iPhone 7 Plus cost $20 more than the iPhone 6S Plus. Then, last year, came the big jump: both the iPhone 8 and 8 Plus cost more than their predecessors ($50 and $30 respectively); more importantly, they were no longer the flagship. That appellation belonged to the $999 iPhone X, and given how many Apple fans will only buy the best, average selling price skyrocketed:

[Chart: iPhone average selling price over time]

From a financial perspective, it didn’t much matter where the growth came from — more units or more revenue per unit. The iPhone, though, was no longer a rocket ship scaling to new heights; it was an institution, something with roots, and something that could be exploited.

This changes a company: instead of looking outwards for opportunity, the gaze turns inwards. In 2018 I called it Apple’s Middle Age:

Apple’s growth is almost entirely inwardly-focused when it comes to its user-base: higher prices, more services, and more devices…The high-end smartphone market — that is, the iPhone market — is saturated. Apple still has the advantage in loyalty, which means switchers will on balance move from Android to iPhone, but that advantage is counter-weighted by clearly elongating upgrade cycles. To that end, if Apple wants growth, its existing customer base is by far the most obvious place to turn.

In short, it just doesn’t make much sense to act like a young person with nothing to lose: one gets older, one’s circumstances and priorities change, and one settles down. It’s all rather inevitable…The fact of the matter is that Apple under Cook is as strategically sound a company as there is; it makes sense to settle down. What that means for the long run — stability-driven growth, or stagnation — remains to be seen.

The long run came quickly: one year later CEO Tim Cook had to issue a revenue warning thanks to slumping iPhone sales; after four years of accounting for between 68% and 70% of Apple’s revenue in the company’s fiscal first quarter, the iPhone suddenly accounted for only 62%.

It might have been the best thing that could have happened to Apple.

Three Announcements

There were three announcements in yesterday’s keynote that, particularly when taken together, spoke to a company moving forward.

The first was the end of iTunes, which will be split into separate Music, Podcasts, and TV apps; device syncing will be handled by the Finder. This is both straightforward and overdue, but still meaningful: while iTunes didn’t save Apple, the application is the connective thread of one of the greatest growth stories in business history. There is a direct line from the introduction of iTunes in January 2001 to the introduction of the iPod later that year, then the iTunes Music Store in 2003 (upon which the App Store was built), and ultimately the iPhone in 2007. iTunes provided the foundation for everything that followed, and it seems appropriate that the application is going away at the same time that growth story is coming to a close.

The second was the introduction of iPadOS. This is, to be clear, mostly a marketing move: iPadOS is very much the same iOS it was two days ago. Marketing moves can matter, though: in this case — much like the Mac Pro — it is a statement from Apple that the non-iPhone parts of its business still matter. While the company was on the iPhone plateau it wasn’t so clear that management cared about either — both the iPad and Mac languished, the former in terms of software, and the latter in terms of hardware — but now there is real evidence the company is fully back in. That management no longer had a choice is beside the point.

The third announcement was the App Store on Apple Watch. While there was not any news about the Apple Watch being completely untethered from the iPhone — non-cellular models have no choice — it is a clear step towards making the Watch independent.1 That, by extension, completely changes the Watch’s addressable market from iPhone users to everyone. This was likely Apple’s endgame all along, but there is more urgency than there may have been even six months ago, and that is a great thing: better urgency than complacency.

Privacy and Purpose

Last fall Cook gave a remarkable speech at the 40th International Conference of Data Protection and Privacy Commissioners in Europe. This was certainly not the first time Cook has spoken about privacy, but the clarity, purpose, and passion with which Cook spoke was striking. I wrote about the speech in a Daily Update, so I will not break it down in full here, but this portion is worth highlighting again:

Now there are many people who would prefer I never said all of that. Some oppose any form of privacy legislation; others will endorse reform in public and then resist and undermine it behind closed doors. They may say to you, “Our companies can never achieve technology’s true potential if they are constrained with privacy regulation.” But this notion isn’t just wrong: it is destructive. Technology’s potential is, and always must be, rooted in the faith people have in it, in the optimism and the creativity that it stirs in the hearts of individuals, and in its promise and capacity to make the world a better place. It’s time to face facts: we will never achieve technology’s true potential without the full faith and confidence of the people who use it.

It was only a month ago that Google made a very different pitch, making the case that the services it created with all of the data it collected were a tradeoff worth making. Unsurprisingly, the two different visions aligned with the companies’ two different business models: data collection is obviously integral to advertising, and privacy is a differentiating factor for Apple’s high-end hardware.

What I appreciated about both companies’ events, though, was the commitment. Google did not try to obfuscate or hide how its products worked, but rather embraced and dwelled on the centrality of user data to its offerings. Apple, similarly, emphasized privacy at every turn, and did so with passion: it felt like the fight for privacy has given the entire company a new sense of purpose, and that is invaluable.

In short, it is clear that privacy has become more than a Strategy Credit for Apple. It is a driving force behind the company’s decisions, both in terms of product features and also strategy. This is particularly apparent in perhaps the most important announcement yesterday, Sign In with Apple.

Sign In with Apple

It’s important to note that the question of privacy goes far beyond Google and Facebook — it predates the Internet. Starting in the 1960s companies began collecting all of the personal information on individual consumers they could get; Lester Wunderman gave it the sanitized name of “direct marketing”. Everything from reward programs to store loyalty discounts to credit cards was created and mined to better understand and market to those individual consumers.

The Internet plugged into this existing infrastructure: it was that much easier to track what users were interested in, particularly on the desktop, and there were far more places to put advertisements in front of them. Mobile actually tamped this down, for a bit: there was no longer one browser that accepted cookies from anyone and everyone, which made it harder to track. That, though, was a boon for Facebook in particular: its walled garden both collected data and displayed advertisements all in one place.

Over time Facebook extended its data collection far beyond the Facebook app: both it and Google have a presence on most websites, and offering login services for apps not only relieves developers from having to manage identities but also gives both companies a view into what their users are doing. The alternative is for users to use their email address to create accounts, but that is hardly better: your email address is to data collectors what your house address was fifty years ago — a unique identifier that connects you to the all-encompassing profiles that have been built without your knowledge.

This is the context for Sign In with Apple: developers can now let Apple handle identity instead of Facebook or Google. Furthermore, users creating accounts with Sign In with Apple have the option of using a unique email address per service, breaking that key link to their data profiles, wherever they are housed.
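The mechanics of that per-service aliasing are worth spelling out: because each service sees a different relay address, no two services ever share a common identifier to join their profiles on. Here is a toy Python sketch of the idea — to be clear, the function name and derivation scheme are invented for illustration; Apple’s actual relay service is a hosted system, not a hash function:

```python
import hashlib

def relay_address(user_secret: str, service: str,
                  domain: str = "relay.example.com") -> str:
    """Derive a stable, per-service pseudonymous email alias.

    Each service gets a different address, so addresses cannot be
    joined across services; the relay operator (which knows
    user_secret) can still map any alias back to the real inbox
    and forward mail.
    """
    digest = hashlib.sha256(f"{user_secret}:{service}".encode()).hexdigest()[:12]
    return f"{digest}@{domain}"

alias_news = relay_address("user-secret-123", "news-app")
alias_fitness = relay_address("user-secret-123", "fitness-app")

# Different services see different identifiers...
assert alias_news != alias_fitness
# ...but each alias is stable, so the service can still email its user.
assert alias_news == relay_address("user-secret-123", "news-app")
```

The key property is the asymmetry: the mapping is deterministic for the relay but opaque to everyone else, which is exactly what breaks the email-address-as-house-address linkage described above.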

This was certainly an interesting announcement in its own right: identity management is one of the single most powerful tools in technology. Owning identity was and is a critical part of Microsoft’s dominance in enterprise, and the same could be said of Facebook in particular in the consumer space. Apple making a similar push — or even simply weakening the position of others — is noteworthy.

Privacy and Power

Still, Sign In with Apple is a hard sell for most developers who have already aligned themselves with Facebook or Google or have rolled their own solution. And, given that developers want to make money, is it really worth adding on an identity manager that would likely interfere with that?

Then came the bombshell in Apple’s Updates to the App Store Guidelines:

Sign In with Apple will be available for beta testing this summer. It will be required as an option for users in apps that support third-party sign-in when it is commercially available later this year.

Apple is going to leverage its monopoly position as app provider on the iPhone to force developers (who use 3rd-party solutions)2 to use Sign In with Apple. Keep in mind, that also means building Sign In with Apple into related websites, and even Android apps, at least if you want users to be able to log in anywhere other than their iPhones. It was quite the announcement, particularly on a day when it became clear that Apple was a potential target of U.S. antitrust investigators.

It is also the starkest example yet of how the push for privacy and the push for competition are, as I wrote a year ago, often at odds. Apple is without question proposing an identity solution that is better for privacy, and they are going to ensure that solution gets traction by leveraging their control of the App Store.

Note, though, that even this fits in the broader theme of Apple regaining its mojo: complaints about the App Store have been in part about data but mostly about Apple’s commission and refusal to allow alternative payment methods. It is a tactic that very much fits into the “get more revenue from existing customers” approach that characterized the last few years.

Sign In with Apple, though, is much more aggressive and strategic in nature: it is a new capability that could both hurt competitors and attract new users. It is the move of a company looking outwards for opportunity, and motivated by something more intrinsic than revenue. It is very Apple-like, and while there will be a lot of debate about whether leveraging the App Store in this way is illegal or not, it is a lot more interesting for the industry to have Apple off the iPhone plateau.

I wrote a follow-up to this article in this Daily Update.

  1. The Watch will now be able to update itself as well.
  2. One unanswered question: what about enterprise single sign-on offerings like Okta?

China, Leverage, and Values

Tim Culpan declared at Bloomberg that The Tech Cold War Has Begun after the Trump administration barred companies viewed as national security threats from selling to the U.S., and blocked U.S. companies from selling to Huawei specifically without explicit permission. Culpan writes:

The prospect that the U.S. government would cut off the supply of components to Huawei was precisely what management had been anticipating for close to a year, Bloomberg News reported Friday. Huawei has at least three months of supplies stockpiled. That’s not a lot, but it speaks to the seriousness with which the Shenzhen-based company took the threat.

There’s hope that this latest escalation is just part of the U.S.’s trade-war posturing and will be resolved as part of broader negotiations. Huawei, or Chinese leaders, are unlikely to be so naive as to share that hope. Even the briefest of bans will be proof to them that China can no longer rely on outsiders.

We can now expect China to redouble efforts to roll out a homegrown smartphone operating system, design its own chips, develop its own semiconductor technology (including design tools and manufacturing equipment), and implement its own technology standards. This can only accelerate the process of creating a digital iron curtain that separates the world into two distinct, mutually exclusive technological spheres.

I agree with Culpan’s overall conclusions, and dived into some of the short and medium-term implications of the Trump administration’s action in yesterday’s Daily Update. However, to the extent that a “tech Cold War has begun”, that is only because a war takes two.

The ZTE Ban

Huawei’s preparation for this moment likely started last year when a similar ban was placed on the sale of American components to ZTE; as I explained at the time:

Obviously the United States government cannot tell a Chinese company what to do. However, the U.S. government can tell American companies what to do, and that includes determining what technology can be exported, and to whom. To that end, countries like Iran and North Korea have long been subject to U.S. sanctions, which means that it is illegal for U.S. companies in many sectors, particularly technology-related ones, to export products to those countries (including digital products like licensed software). And, by extension, U.S. companies cannot knowingly export embargoed products to companies that then sell those products to countries covered by those sanctions.

That ZTE was flouting those sanctions was well-known, and the company settled with the U.S. government in 2017. The action last year, then, was in response to ZTE allegedly violating that settlement; at the same time, it was hard not to wonder whether there was any relation to the ongoing trade dispute with China.

Similar questions surround this Huawei action: the Trump administration ultimately made a deal to spare ZTE, and a waiver has already been granted allowing Huawei to service existing networks and phones in the U.S.

Still, if you’re looking for the start of this “tech cold war”, the move against ZTE was arguably the bigger deal: for the first time the extreme vulnerability China’s tech giants have to U.S. action was laid bare.

The U.S. Advantage

While tech devices like iPhones are “Made in China”, it is important to note that little of the technology originates there — less than $10 worth, in fact. Much more goes to component suppliers in the United States, South Korea, Taiwan, and Japan1 (and obviously, even more goes to Apple itself).

The reality is that China is still relatively far behind when it comes to the manufacture of most advanced components, and very far behind when it comes to both advanced processing chips and also the equipment that goes into designing and fabricating them. Yes, Huawei has its own system-on-a-chip, but it is a relatively bog-standard ARM design that even then relies heavily on U.S. software. China may very well be committed to becoming technologically independent, but that is an effort that will take years.

That is why the best that Huawei could do over the last year was stockpile supplies: the U.S. retains a significant upper-hand in this “war”. At the same time, cutting off Chinese customers like Huawei will cost U.S. suppliers dearly: high-value components by definition entail very large research and development costs and significant capital outlays for their manufacture; that means that profit comes from volume, and losing a massive customer like Huawei would be costly.

China has one other card to play: rare earth elements. These 17 elements2 are essential for electronic components, and China dominates their production, accounting for over 90% of the world’s supply. The country flexed its power in 2010, imposing export quotas (which were later ruled illegal by the WTO) that caused prices to skyrocket, at least for non-Chinese companies, giving Chinese companies an advantage. To that end, it is certainly not a coincidence that Chinese President Xi Jinping toured a rare earth mining and processing center yesterday, accompanied by China’s top trade negotiator.

China’s Protectionism

China’s 2010 rare earth export reduction wasn’t the only shot the country has taken: in January of that year Google announced that its network had been hacked by China, resulting in the theft of intellectual property, and that the company was reevaluating its approach to the Chinese market. Soon after Google closed down its China operations and directed users to its Hong Kong site, which was summarily blocked by the Great Firewall.

Google was hardly alone in this regard: YouTube, Twitter, and Facebook were all blocked in 2009, and since Google’s block, sites like Instagram, Dropbox, Pinterest, Reddit, and Discord have been as well, along with a whole host of media sites like the Wall Street Journal, New York Times, and Wikipedia.

Indeed, this is where I take the biggest issue with Culpan labeling this past week’s actions as the start of a tech cold war: China took the first shots, and they took them a long time ago. For over a decade U.S. services companies have been unilaterally shut out of the China market, even as Chinese alternatives had free rein, running on servers built with U.S. components (and likely using U.S. intellectual property).3

To be sure, China’s motivation was not necessarily protectionism, at least in the economic sense: what mattered most to the country’s ruling Communist Party was control of the flow of information. At the same time, from a narrow economic perspective, the truth is that China has been limiting the economic upside of U.S. companies far longer than the U.S. has tried to limit China’s.

Not that the U.S. investor class cared: for U.S. component suppliers China provided not only revenue but scale; for hardware manufacturers like Apple China provided low labor costs and an increasingly sophisticated base of manufacturing expertise, and full-on design services for more commoditized OEMs like PC makers. And while U.S. services may not have been allowed in China, U.S. venture capital money was certainly allowed to invest in Chinese startups.

The truth is that the U.S.-China relationship has been extremely one-sided for a very long time now: China buys the hardware it needs, and keeps all of the software opportunities for itself — and, of course, pursues software opportunities abroad. At the same time, U.S. acquiescence to this state of affairs has denied China the necessary motivation to actually make the investments necessary to replace U.S. hardware completely, leading to this specific moment in time.

A Question of Leverage

To that end, and leaving aside broader questions about the Trump administration’s approach to trade with China, when it comes to a “tech cold war” I think the U.S. has the most leverage it ever will have: the U.S. advantage in advanced components, particularly processors and their fabrication, is massive, and will only grow if the U.S. is able to gain the support of countries like South Korea, Japan, and Taiwan. Yes, China will spend whatever is necessary to catch up, but it will take a lot of time.

The primary potential pain points for the U.S., meanwhile, are those same component manufacturers that China needs, whose revenue and profits will be hurt, rare earths, and Apple. The latter is more exposed to China than anyone, on two fronts: first, the company’s massive manufacturing facilities in China, and second, the importance of the Chinese consumer market to the iPhone in particular.

This does not guarantee that Apple will be retaliated against: Apple’s operations employ millions of Chinese, both directly in manufacturing and also at component suppliers, and the Chinese leadership will be loath to leave any of them unemployed. And, on the flipside, Chinese consumers, particularly those in influential first-tier cities, like Apple products. I do think the latter is more likely to be impacted than the former: China can do a lot to disrupt Apple’s consumer-facing operations in China, as they already have in both services and iPhone sales.

The other big question is whether the Trump administration will levy tariffs on iPhones for U.S. consumers: to date Apple has been largely spared, but the U.S. is running out of goods to slap tariffs on; again the company benefits from its popularity with end users, who would be much more sensitive to a rise in iPhone prices than just about anything else.

A Question of Values

For obvious reasons, I think most people in tech are opposed to the Trump administration’s approach: not only is Trump unpopular in Silicon Valley generally (which means his policies are), but the near-term damage to U.S. tech companies could be significant.

At the same time, as someone who has argued that technology is an amoral force, China gives me significant pause. On one hand, while the shift of manufacturing to China has hurt the industrial heartlands of both the U.S. and Europe, nothing in history has had a greater impact on the alleviation of poverty and suffering of humanity generally than China’s embrace of capitalism and globalization, protectionist though it may have been. Technology, particularly improvements in global communication and transportation capabilities, played a major role in that.

On the other hand, for all of the praise that is heaped on Chinese service companies like Tencent for their innovation, the fact that everything on Tencent is monitored and censored is chilling, particularly when people disappear. The possibilities of a central government creating the conditions for, say, self-driving cars or some other top-down application of technology is appealing, but turning a city into a prison through surveillance is terrifying. And while it is tempting to fantasize about removing “fake news” and hateful content with an iron fist, it is a step down the road to removing everything that is objectionable to an unaccountable authority with little more than an adjustment to a configuration file.

This is the true war when it comes to technology: censorship versus openness, control versus creativity, and centralization versus competition. These are, of course, connected: China’s censorship is about control facilitated by centralization. That, though, should not only give Western tech companies and investors pause about China generally, but should also lead to serious introspection about the appropriate policies towards our own tech industry. Openness, creativity, and competition are just as related as their counterparts, and infringement on any one of them should be taken as a threat to all three.

I wrote a follow-up to this article in this Daily Update.

  1. The relative order varies based on the iPhone model; the iPhone XS, for example, gets its very expensive OLED screen from Samsung in South Korea, and its processor from TSMC in Taiwan. Previous iPhone models sourced screens from Japan and processors from Samsung.
  2. The elements are cerium (Ce), dysprosium (Dy), erbium (Er), europium (Eu), gadolinium (Gd), holmium (Ho), lanthanum (La), lutetium (Lu), neodymium (Nd), praseodymium (Pr), promethium (Pm), samarium (Sm), scandium (Sc), terbium (Tb), thulium (Tm), ytterbium (Yb), and yttrium (Y).
  3. This doesn’t even address rampant piracy in China: the country was one of Microsoft’s largest markets by usage, but drove revenue equivalent to the Netherlands.