On Exponent, the weekly podcast I host with James Allworth, we discuss Cloudflare and what it is like going through an IPO, as well as what has gone wrong with WeWork.
Listen to it here.
Jeff Bezos opened his 2016 letter to Amazon shareholders like this:
“Jeff, what does Day 2 look like?”
That’s a question I just got at our most recent all-hands meeting. I’ve been reminding people that it’s Day 1 for a couple of decades. I work in an Amazon building named Day 1, and when I moved buildings, I took the name with me. I spend time thinking about this topic.
“Day 2 is stasis. Followed by irrelevance. Followed by excruciating, painful decline. Followed by death. And that is why it is always Day 1.”
To be sure, this kind of decline would happen in extreme slow motion. An established company might harvest Day 2 for decades, but the final result would still come.
Bezos went on to give advice about how to avoid Day 2, including “True Customer Obsession”, “Resist Proxies”, “Embrace External Trends”, and “High-Velocity Decision Making”. The company he manages then spent the next several years looking like it was in fact Day 2.
Consider this story in the Wall Street Journal about how Amazon reportedly adjusted its search algorithm to favor its own products:
Amazon.com Inc. has adjusted its product-search system to more prominently feature listings that are more profitable for the company, said people who worked on the project—a move, contested internally, that could favor Amazon’s own brands…The adjustment, which the world’s biggest online retailer hasn’t publicized, followed a yearslong battle between executives who run Amazon’s retail businesses in Seattle and the company’s search team, dubbed A9, in Palo Alto, Calif., which opposed the move, the people said.
Note how badly this decision fares relative to Bezos’ advice.
To be fair, Amazon is embracing the external trend of rising antitrust concerns, which are probably overblown given the company’s single-digit share of retail in the United States. There are no objections to Walmart, for example, having store brands or pay-for-placement programs, despite the fact that Walmart’s share of U.S. retail is roughly 40% larger than Amazon’s (8.9% of consumer retail spending versus 6.4%), so it’s not clear on what basis the digital equivalents of these programs would be prosecuted.
With regard to Bezos’ warning, though, the antitrust discussion is a moot point: companies that spend months or years arguing about the legality and customer friendliness of tilting the scales are usually well into Day 2.
This is hardly the only example of Amazon becoming obsessed with profitability on the margins in its retail operation over the last few years. For example, from Recode last November:
Over the past few months, Amazon has applied intense pressure to consumer brands across different product categories — seizing more control over what, where and how they can sell their goods on the so-called everything store, these people say. One apparent goal: To take more control over the price of goods on Amazon so the company can better compete with retailers. The power moves are also believed to be a prelude to a new internal system that Amazon has yet to launch called One Vendor. The new initiative will essentially funnel big brands and independent sellers alike through the same back-end system in a supposed effort to improve the uniformity of the shopping experience across Amazon on the public-facing side.
From the Wall Street Journal in December:
As Amazon focuses more on its bottom line in addition to its rapid growth, it is increasingly taking aim at CRaP products [“Can’t Realize a Profit”], according to major brand executives and people familiar with the company’s thinking. In recent months, it has been eliminating unprofitable items and pressing manufacturers to change their packaging to better sell online, according to brands that sell on Amazon and consultants who work with them.
From CNBC in March:
In recent months, Amazon has been telling more vendors, or brand owners who sell their goods wholesale, that if Amazon can’t sell those products to consumers at a profit, it won’t let them pay to promote the items. For example, if a $5 water bottle costs Amazon that amount to store, pack and ship, the maker of the water bottle won’t be allowed to advertise it.
From Bloomberg, also in March:
Amazon.com Inc. has abruptly stopped buying products from many of its wholesalers, sowing panic. The company is encouraging vendors to instead sell directly to consumers on its marketplace. Amazon makes more money that way by offloading the cost of purchasing, storing and shipping products. Meanwhile, Amazon can charge suppliers for these services and take a commission on each transaction, which is much less risky than buying goods outright.
From Bloomberg in May:
In the next few months, bulk orders will dry up for thousands of mostly smaller suppliers, according to three people familiar with the plan. Amazon’s aim is to cut costs and focus wholesale purchasing on major brands like Procter & Gamble, Sony and Lego, the people said. That will ensure the company has adequate supplies of must-have merchandise and help it compete with the likes of Walmart, Target and Best Buy.
The vendor purge is the latest step in Amazon’s “hands off the wheel” initiative, an effort to keep expanding product selection on its website without spending more money on managers to oversee it all. The project entails automating tasks like forecasting demand and negotiating prices which were predominantly done by Amazon employees. It also involves pushing more Amazon suppliers to sell goods themselves so Amazon doesn’t have to pay people to do it for them.
None of these decisions are necessarily wrong in a vacuum; what has been striking, though, is the drumbeat of Amazon Retail changes that seem primarily concerned with Amazon’s profitability. And, for the record, it has worked:
Throughout the second half of 2018 and the first part of 2019, Amazon flipped revenue and expense growth by just a smidge, which caused income to skyrocket on a year-over-year basis. It may have been Day 2 as far as Amazon’s prioritization of profitability above everything was concerned, but at least the company was, in Bezos’ words, “harvesting”.
Note, though, that the chart above is missing last quarter’s results, and for good reason:
It’s a bit hard to make out, particularly because I am using trailing twelve-month averages (because of Amazon’s high seasonality), but expenses increased far more than revenue last quarter; indeed, year-over-year income growth on a quarterly basis was -15%.
What changed is that Amazon decided to travel back in time — to Day 1 — and invest in what the company does best: massively difficult logistical problems that customers love having solved. Originally that was access to any book, then access to anything period, then access in two days, and now Amazon is committed to one.
First, from the company’s Q1 2019 earnings call announcing Amazon’s ambition:
We’re currently working on evolving our Prime free Two-Day Shipping program to be a free One-Day Shipping program. We’re able to do this, because we spent 20 plus years expanding our fulfillment and logistics network, but this is still a big investment and a lot of work to do ahead of us.
For Q2 guidance, we’ve included approximately $800 million of incremental spend related to this investment. And just to clarify, to give a little more information, we have been offering, obviously, faster than Two-Day Shipping for Prime members for years, one day, same day, even down to one to two hour delivery for Prime Now. So we’re going to continue to offer same day and Prime Now selection in an accelerated basis.
But this is all about the core free Two-Day offer evolving into a free One-Day offer. We’ve already started down this path. We’ve in the past months significantly expanded our one-day eligible selection and also expanded the number of zip codes eligible for one-day shipping.
The costs started to show up last quarter. From the company’s Q2 2019 earnings call, a quarter where Amazon missed on profits for the first time in several years:
In Q2, we had a meaningful step up in the one-day shipments, primarily in North America, and one-day volume was accelerating throughout the quarter…On the cost side, we talked last time about $800 million estimate of transportation cost to supply one day, the additional one day in Q2. We were a little bit higher than that number in total cost.
We saw some additional transition costs in our warehouses. We saw some lower productivity as we were expanding rather quickly, both local capacity in the off-season also in our delivery networks. We also saw some costs were moving: buying more inventory and moving inventory around in our network to have it be closer to customers. And we built not only that cost structure, but an accelerating cost penalty into our Q3 guidance that was released with our earnings today.
This is an initiative that clearly passes Bezos’ test.
It is also the opposite of harvesting: it is investing, and it seems more likely than not that Amazon’s upcoming results will look much more like the “Day One” company it was for years, with rapidly growing revenue and costs to match.
This article comes at a bit of a weird time: in truth I had been considering writing a bearish Amazon article for several months as the penny-pinching anecdotes started to pile up. Tech companies rarely find sustainable growth by focusing on costs; if anything they find antitrust violations.
That announcement about one-day shipping, though, made me hold my fire. Spending a lot of resources on incredibly difficult logistical problems is precisely what makes Amazon so valuable, which means that the commitment to do just that — even with higher costs — is a reason to be bullish. The only problem is that the revenues I anticipate have not yet appeared in the quarterly results.
Still, this search news made me revisit the issue a bit early: tilting the field to favor the bottom line instead of doing what is best for customers is the surest sign of harvesting instead of investing, and it reminded me of my bearish thesis. I wonder if Amazon might not reconsider their approach to search now that the company is demonstrating a recommitment to growing the top line instead of the bottom.
That’s also why last week’s Apple event was encouraging: Apple may have its work cut out for it to be an effective services company, but by cutting iPhone prices and pricing its services offerings aggressively it is making its own moves towards investing, not simply harvesting. This is also why Facebook’s commitment to Stories was a good sign even if it entailed an earnings hit; its various attempts to wring engagement out of its core app through things like forced Instagram integrations and dating services run in the opposite direction. For Microsoft, meanwhile, The End of Windows meant the end of harvesting and a return to investing, much to investors’ benefit.
Perhaps the biggest question mark, though, is around Google: the company has gotten far more mileage than I ever expected out of mobile generally and cramming more ads into mobile search results specifically; both, though, particularly the latter, seem more like harvesting than investing. And, even when Google does invest, it is too often in projects far removed from customers and the forcing function that going to market entails.
This also may be why Google is the most susceptible to antitrust action of all the major consumer tech companies; the question as to what comes first, harvesting instead of investing or behaving anticompetitively, ceases to matter when you are operating at the scale of any of these companies. And, on the flipside, it strongly suggests that antitrust actions are a trailing indicator of a company that has peaked, not a causal force of decline.
Editor’s Note: Stratechery was referenced in yesterday’s keynote. I had no advance knowledge of this reference, and have no relationship with Apple, up to and including not owning their stock individually, as explained in my ethics policy.
It is the normal course for Apple events to come and go and people to complain about how boring it all was, particularly when the company announces said event like this:
Apple reporter extraordinaire Mark Gurman was not impressed:
Nothing shown today really qualifies as meeting high “innovation only” expectations: Apple delivered the smallest Watch update ever, an iPad with a slightly bigger screen and nothing more, and iPhones with cameras equal to or less than many other devices. Apple needs a big 2020. https://t.co/jGhKcYHQSU
— Mark Gurman (@markgurman) September 11, 2019
Gurman isn’t necessarily wrong about the highly iterative nature of the hardware announcements (although I think that an always-on Apple Watch is a big deal), but that doesn’t necessarily mean he is right about the innovation question. To figure that out we need to first define what exactly innovation is.
Another Apple keynote that was greeted with a similar collective yawn was in 2016, when the company announced the iPhone 7 and Series 2 Apple Watch. Farhad Manjoo wrote at the time in the New York Times:
Apple has squandered its once-commanding lead in hardware and software design. Though the new iPhones include several new features, including water resistance and upgraded cameras, they look pretty much the same as the old ones. The new Apple Watch does too. And as competitors have borrowed and even begun to surpass Apple’s best designs, what was iconic about the company’s phones, computers, tablets and other products has come to seem generic…
I quoted Manjoo’s piece at the time and went on to explain why I thought that year’s keynote was more meaningful than it seemed, particularly because of the AirPods introduction:
What is most intriguing, though, is that “truly wireless future” Ive talked about. What happens if we presume that the same sort of advancement that led from Touch ID to Apple Pay will apply to the AirPods? Remember, one of the devices that pairs with AirPods is the Apple Watch, which received its own update, including GPS. The GPS addition was part of a heavy focus on health-and-fitness, but it is also another step down the road towards a Watch that has its own cellular connection, and when that future arrives the iPhone will quite suddenly shift from indispensable to optional. Simply strap on your Watch, put in your AirPods, and, thanks to Siri, you have everything you need.
That future is here, although the edges are still rough (particularly Siri, which was a major focus of that article); Apple’s financial results have certainly benefited. Over the last three years the company’s “Wearables, Home and Accessories” category, which is dominated by the Apple Watch and AirPods, has nearly doubled from $11.8 billion on a trailing twelve-month (TTM) basis to $22.2 billion over the last twelve months. In other words, according to the metric that all businesses are ultimately measured on, that 2016 keynote and the future it pointed to were very innovative indeed.
Wearables have not been Apple’s only growth area: over the same three-year span Services revenue has increased by almost the exact same rate — 89% versus 88% — from $23.1 billion TTM to $43.8 billion TTM. At the same time, it feels a bit icky to call that innovation, particularly given the anticompetitive nature of the App Store.
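Those growth rates are easy to sanity-check with a back-of-the-envelope calculation; a minimal sketch using the TTM revenue figures above, in billions of dollars (the `growth_pct` helper is my own illustrative name):

```python
# Sanity check of the three-year growth rates cited above,
# using trailing-twelve-month revenue in billions of dollars.
def growth_pct(start, end):
    """Percentage growth from start to end."""
    return (end - start) / start * 100

wearables = growth_pct(11.8, 22.2)  # Wearables, Home and Accessories
services = growth_pct(23.1, 43.8)   # Services
print(f"Wearables: {wearables:.1f}%, Services: {services:.1f}%")
# prints: Wearables: 88.1%, Services: 89.6%
```

The two categories really did grow at almost exactly the same rate over the period.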
That’s not totally fair of course: the App Store was one of the most innovative things that Apple ever created from a product perspective; that the company has positioned itself to profit from that innovation indefinitely is innovative in its own right, at least if you go back to measuring via revenue and profits.
Still, the idea of Apple being a Services company is one that has long been hard to grok. When the company first started pushing the “Services Narrative” I declared that Apple is not a Services Company:
Services (horizontal) and hardware (vertical) companies have very different strategic priorities: the former ought to maximize their addressable market (by, say, making a cheaper iPhone), while the latter ought to maximize their differentiation. And, Cook’s answer made clear what Apple’s focus remains.
That answer was about continuing Apple’s pricing approach, which at that time was $649+ for new iPhones, with old iPhones discounted by $100 for every year they were on the market, and Cook’s specific words were “I don’t see us deviating from that approach.”
In fact, Apple did deviate, but in the opposite direction: in 2017 the company launched the $999+ iPhone X at the high end and bumped the price of the now mid-tier iPhone 8 to $699+. I wrote at the time:
The iPhone X sells to two of the markets I identified above:
- Customers who want the best possible phone
- Customers who want the prestige of owning the highest-status phone on the market
Note that both of these markets are relatively price-insensitive; to that end, $999 (or, more realistically, $1149 for the 256 GB model), isn’t really an obstacle. For the latter market, it’s arguably a positive.
What this strategy was absolutely not about was expanding the addressable market for Services. Apple was definitely not a Services company when it came to their strategic direction (even if, as I conceded in 2017, it was increasingly fair to evaluate the financial results in that way).
This leads to what is in my mind the biggest news from yesterday’s event: Apple cut prices.
It was easy to miss, given that the iPhone 11 Pro, the successor to the iPhone X and then XS, hasn’t changed in price: it still starts at $999 ($1,099 for the larger model), and tops out at $1,449; if you want the best you are going to pay for it.
Perhaps the most interesting aside in the keynote, though, is that for the first time a majority of Apple’s customers weren’t willing to pay for the best. Tim Cook said:
Last year we launched three incredible iPhones. The iPhone XR became the most popular iPhone and the most popular smartphone in the world. We also launched the iPhone XS and iPhone XS Max, the most advanced iPhones we have ever created.
In a vacuum there is nothing surprising about this. The iPhone XR was an extremely capable phone, with the same industrial design, the same Face ID, and the same processor as the iPhone XS; the primary differences were an in-between size, one less camera, and an LCD screen instead of OLED. That doesn’t seem like much of a sacrifice for a savings of $250.
And yet, even while I said Apple’s strategy “bordered on over-confidence”, I still fully expected the iPhone XS to be the best-selling phone like the iPhone X before it; that is how committed Apple’s customers have been to buying the flagship iPhone. Even Apple, though, can’t escape the gravitational pull of “good enough”, which is why the price cuts, which happened further down the line, were so important.
There are two ways to see Apple’s price cuts. First, by iPhone model:
(Table: iPhone prices by model, at launch, 1 year old, and 2 years old)
Secondly by year:
(Table: iPhone prices by year, for the flagship, mid-tier, 1-year-old, and 2-year-old models)
In the second chart you can see how Apple in 2017 not only raised prices dramatically on its flagship models, but also on the mid-tier model relative to previous flagships. This was important because it was these mid-tier models that replaced previous flagships in Apple’s usual “sell the old flagship for $100 less per year” approach. That meant that 2017’s price hike filtered through to 2018’s 1-year-old model, which increased from $549 to $599.
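Apple’s traditional approach can be expressed as a simple rule of thumb; here is a minimal sketch (the `ladder_price` function is my own illustrative name, not anything Apple publishes):

```python
# Sketch of Apple's traditional pricing ladder, described above:
# a model's price drops by $100 for each year it stays on the market.
def ladder_price(launch_price, years_old):
    return launch_price - 100 * years_old

# The 2017 mid-tier iPhone 8 launched at $699, so as 2018's
# 1-year-old model it sold for $599 (up from $549 the year before).
print(ladder_price(699, 1))  # → 599
```

This is exactly how the 2017 price hike on the mid-tier model propagated into higher prices for older models the following year.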
That means that this year actually saw three price cuts.
To be fair, this doesn’t necessarily mean the line looks much different today than it did yesterday: the only price point that is different is the iPhone 11 relative to the XR. That, though, is because it will take time for those previous price hikes to work their way out of the system, presuming Apple wants to stay on this path in the future.
They should. The success of the iPhone XR strongly suggests that there is more elasticity in the iPhone market than ever before. Apple also cut prices in China earlier this year with great success; I wrote after Apple’s FY2019 Q2 earnings:
The available evidence strongly suggests that iPhone demand in China is very elastic: if the iPhone is cheaper, Apple sells more; if it is more expensive, Apple sells less. This is, of course, unsurprising, at least for a commodity, and right there is Apple’s issue in China: the iPhone is simply less differentiated in China than it is elsewhere, leaving it more sensitive to factors like new designs and price than it is elsewhere.
As I note in that excerpt, China is unique, but the commodity argument is a variant of the “good-enough” argument I made above: while Apple doesn’t necessarily need to worry about iPhone customers outside of China switching to Android, they are very much competing with the iPhones people already have, and, as the XR demonstrated, their own new, cheaper phones.
That’s ok, though, and the final step in Apple truly becoming a Services company, not just in its financial results but also in its strategic thinking. More phones sold, no matter their price point, means more Services revenue in the long run (and Wearables revenue too).
Apple’s two service-related announcements are also good reasons to pursue this strategy. Perhaps the most compelling from a financial perspective is Apple Arcade. For $4.99/month a family gets access to a collection of games featured on their own tab in the App Store:
What makes this compelling from Apple’s perspective is that the company is paying a fixed amount for those games overall, which means that once the company covers the costs of those games, every incremental subscription is pure profit. Contrast this to something like Apple Music, where costs scale in line with revenue; no wonder the service is getting such prime real estate — and no wonder Apple suddenly seems interested in selling more iPhones, even if they earn less revenue up-front.
Similar dynamics apply to Apple TV+: once content costs are covered, incremental customers are pure profit. That noted, I’m not convinced that Apple TV+’s ultimate purpose is to be a profit driver by itself; I explained after Apple’s services event earlier this year:
To be very clear about my analysis of Apple TV+, I don’t think it is a Netflix competitor. I see it as a customer acquisition cost for the Apple TV app; it is Apple TV Channels that will make the real money, and this is not an unreasonable expectation. Roku’s entire business is predicated on the same model; the hardware is basically sold at cost, while the “platform” last year had $417 million in revenue and $296 million in profit, which equates to a tidy 71% gross margin.
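That margin figure is straightforward to verify from the two numbers cited:

```python
# Roku platform segment, per the figures cited above (in $ millions)
revenue = 417
gross_profit = 296

margin = gross_profit / revenue * 100
print(f"{margin:.0f}%")  # prints: 71%
```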
Apple TV Channels is a means to buy subscriptions to other streaming services, which makes a lot of money for Roku and Amazon in particular; Apple TV+ content is a reason to make Apple TV the default interface for video leading to more subscriptions via Apple TV Channels. This view also explains why Apple is going to bundle a year of Apple TV+ with all new Apple device purchases (which is also very much in line with the idea of Apple giving up short-term revenue on its products — or incurring contra-revenue in this case — for long-term subscription revenue).
It does feel like there is one more shoe yet to drop when it comes to Apple’s strategic shift. The fact that Apple is bundling a for-pay service (Apple TV+) with a product purchase is interesting, but what if Apple started including products with paid subscriptions?
That may be closer than it seems. It seemed strange that yesterday’s keynote included an Apple Retail update at the very end, but I think this slide explained why:
Not only can you get a new iPhone for less if you trade in your old one, you can also pay for it on a monthly basis (this applies to phones without a trade-in as well). So, in the case of this slide, you can get an iPhone 11 and Apple TV+ for $17/month.
Apple also adjusted their AppleCare+ terms yesterday: now you can subscribe monthly and AppleCare+ will carry on until you cancel, just as other Apple services like Apple Music or Apple Arcade do. The company already has the iPhone Upgrade Program, which bundles a yearly iPhone and AppleCare+, but this shift for AppleCare+ purchased on its own is another step towards assuming that Apple’s relationship with its customers will be a subscription-based one.
To that end, how long until there is a variant of the iPhone Upgrade Program that is simply an all-up Apple subscription? Pay one monthly fee, and get everything Apple has to offer. Indeed, nothing would show that Apple is a Services company more than making the iPhone itself a service, at least as far as the customer relationship goes. You might even say it is innovative.
At first glance, WeWork and Peloton, which both released their S-1s in recent weeks, don’t have much in common: one company rents empty buildings and converts them into office space, and the other sells home fitness equipment and streaming classes. Both, though, have prompted the same question: is this a tech company?
Of course, it is fair to ask, “What isn’t a tech company?” Surely that is the endpoint of software eating the world; I think, though, that classifying a company as a tech company simply because it utilizes software is as unhelpful today as it would have been decades ago.
Fifty years ago, “What is a tech company?” was an easy question to answer: IBM was the tech company, and everyone else was an IBM customer. That may be a slight exaggeration, but not by much: IBM built the hardware (at that time the System/360), wrote the software, including the operating system and applications, and provided services, including training, ongoing maintenance, and custom line-of-business software.
All kinds of industries benefited from IBM’s technology, including financial services, large manufacturers, retailers, etc., and, of course, the military. Functions like accounting, resource management, and record-keeping automated and centralized activities that used to be done by hand, dramatically increasing the efficiency of existing activities and making new kinds of activities possible.
Increased efficiency and new business opportunities, though, didn’t make J.P. Morgan or General Electric or Sears tech companies. Technology simply became one piece of a greater whole. Yes, it was essential, but that essentialness exposed technology’s banality: companies were only differentiated to the extent they did not use computers, and then to the downside.
IBM, though, was different: every part of the company was about technology — indeed, IBM was an entire ecosystem unto itself: hardware, software, and services, all tied together with a subscription payment model strikingly similar to today’s dominant software-as-a-service approach. In short, being a tech company meant being IBM, which meant creating and participating in an ecosystem built around technology.
The story of IBM handing Microsoft the contract for the PC operating system and, by extension, the dominant position in computing for the next fifteen years, is a well-known one. The context for that decision, though, is best seen by the very different business model Microsoft pursued for its software.
What made subscriptions work for IBM was that the mainframe maker was offering the entire technological stack, and thus had reason to be in direct ongoing contact with its customers. In 1969, though, in an effort to escape an antitrust lawsuit from the federal government, IBM unbundled its hardware, software, and services. This created a new market for software, which was sold on a somewhat ad hoc basis; at the time software didn’t even have copyright protection.
Then, in 1980, Congress added “computer program” to the definitions in U.S. copyright law, and software licensing was born: now companies could maintain legal ownership of software and grant an effectively infinite number of licenses to individuals or corporations to use that software. Thus it was that Microsoft could charge for every copy of Windows or Visual Basic without needing to sell or service the underlying hardware it ran on.
This highlighted another critical factor that makes tech companies unique: the zero marginal cost nature of software. To be sure, this wasn’t a new concept: Silicon Valley received its name because silicon-based chips have similar characteristics; there are massive up-front costs to develop and build a working chip, but once built additional chips can be manufactured for basically nothing. It was this economic reality that gave rise to venture capital, which is about providing money ahead of a viable product for the chance at effectively infinite returns should the product and associated company be successful.
Indeed, this is why software companies have traditionally been so concentrated in Silicon Valley, and not, say, upstate New York, where IBM was located. William Shockley, one of the inventors of the transistor at Bell Labs, was originally from Palo Alto and wanted to take care of his ailing mother even as he was starting his own semiconductor company; eight of his researchers, known as the “traitorous eight”, would flee his tyrannical management to form Fairchild Semiconductor, the employees of which would go on to start over 65 new companies, including Intel.
It was Intel that set the model for venture capital in Silicon Valley, as Arthur Rock put in $10,000 of his own money and convinced his contacts to add an additional $2.5 million to get Intel off the ground; the company would IPO three years later for $8.225 million. Today the timelines are certainly longer but the idea is the same: raise money to start a company predicated on zero marginal costs, and, if you are successful, exit with an excellent return for shareholders. In other words, it is the venture capitalists that ensured software followed silicon, not the inherent nature of silicon itself.
To summarize: venture capitalists fund tech companies, which are characterized by a zero marginal cost component that allows for uncapped returns on investment.
Probably the most overlooked and underrated era of tech history was the on-premises era dominated by software companies like Microsoft, Oracle, and SAP, and hardware from not only IBM but also Sun, HP, and later Dell. This era was characterized by a mix of up-front revenue for the original installation of hardware or software, plus ongoing services revenue. This model is hardly unique to software: lots of large machinery is sold on a similar basis.
The zero marginal cost nature of software, however, made it possible to cut out the up-front cost completely; Microsoft started pushing this model heavily to large enterprises in 2001 with version 6 of its Enterprise Agreement. Instead of paying for perpetual licenses for software that inevitably needed to be upgraded in a few years, enterprises could pay a monthly fee; this had the advantage of not only operationalizing former capital costs but also increasing flexibility. No longer would enterprises have to negotiate expensive “true-up” agreements if they grew; they were also protected on the downside if their workforce shrank.
Microsoft, meanwhile, was able to convert its up-front software investment from a one-time payment to regular payments over time that were not only perpetual in nature (because to stop payment was to stop using the software, which wasn’t a viable option for most of Microsoft’s customers) but also more closely matched Microsoft’s own development schedule.
This wasn’t a new idea, as IBM had shown several decades earlier; moreover, it is worth pointing out that the entire function of depreciation when it comes to accounting is to properly attribute capital expenditures across the time periods those expenditures are leveraged. What made Microsoft’s approach unique, though, is that over time the product enterprises were paying for was improving. This is in direct contrast to a physical asset that deteriorates, or a traditional software support contract that is limited to a specific version.
Today this is the expectation for software generally: whatever you pay for today will be better in the future, not worse, and tech companies are increasingly organized around this idea of both constant improvements and constant revenue streams.
Still, Microsoft products had to actually be installed in the first place: much of the benefit of Enterprise Agreements accrued to companies that had already gone through that pain.
Salesforce, founded in 1999, sought to extend that same convenience to all companies: instead of having to go through long and painful installation processes that were inevitably buggy and over-budget, customers could simply access Salesforce on Salesforce’s own servers. The company branded it “No Software”, because software installations had such negative connotations, but in fact this was the ultimate expression of software. Now, instead of one copy of software replicated endlessly and distributed anywhere, Salesforce would simply run one piece of software and give anyone anywhere access to it. This did increase fixed costs — running servers and paying for bandwidth is expensive — but the increase was more than made up for by the decrease in upfront costs for customers.
This also increased the importance of scale for tech companies: now not only did the cost of software development need to be spread out over the greatest number of customers, so did the ongoing costs of building and running large centralized servers (of course Amazon operationalized these costs as well with AWS). That, though, became another characteristic of tech companies: scale not only pays the bills, it actually improves the service as large expenditures are leveraged across that many more customers.
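The underlying economics can be sketched with a toy calculation (all numbers are hypothetical, for illustration only): large fixed costs amortize across the entire customer base, so the average cost of serving each customer collapses as scale grows, while the marginal cost stays near zero.

```python
def cost_per_customer(fixed_costs: float, marginal_cost: float, customers: int) -> float:
    """Average annual cost to serve one customer: amortized fixed
    costs plus the (near-zero) marginal cost of one more user."""
    return fixed_costs / customers + marginal_cost


# Hypothetical provider: $100M/year in development and data center
# costs, $0.10 in marginal cost per additional customer served.
for n in (1_000_000, 10_000_000, 100_000_000):
    print(f"{n:>11,} customers -> ${cost_per_customer(100_000_000, 0.10, n):,.2f} each")
```

At a hundred million customers the same fixed expenditure that was prohibitive at small scale becomes a rounding error per user — which is why scale both pays the bills and funds a better service.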
Salesforce, though, was still selling to large corporations. What has changed over the last ten years in particular is the rise of freemium and self-serve, but the origins of this model go back a decade earlier.
The early 2000s were a dire time in tech: the bubble had burst, and it was nearly impossible to raise money in Silicon Valley, much less anywhere else in the world — including Sydney, Australia. So, in 2001, when Scott Farquhar and Mike Cannon-Brookes, whose only goals were to make $35,000 a year and not have to wear a suit, couldn’t afford a sales force for the collaboration software they had developed called Jira, they simply put it on the web for anyone to trial, with a payment form to unlock the full program.
This wasn’t necessarily new: “shareware” and “trialware” had existed since the 1980s, and were particularly popular for games, but Atlassian, thanks to being in the right place (selling Agile project management software) at the right time (the explosion of Agile as a development methodology) was using essentially the same model to sell into enterprise.
What made this possible was the combination of zero marginal costs (which meant that distributing software didn’t cost anything) and zero transaction costs: thanks to the web and rudimentary payment processors it was possible for Atlassian to sell to companies without ever talking to them. Indeed, for many years the only sales people Atlassian had were those tasked with reducing churn: all in-bound sales were self-serve.
This model, when combined with Salesforce’s cloud-based model (which Atlassian eventually moved to), is the foundation of today’s SaaS companies: customers can try out software with nothing more than an email address, and pay for it with nothing more than a credit card. This too is a characteristic of tech companies: free-to-try, and easy-to-buy, by anyone, from anywhere.
So what about companies like WeWork and Peloton that interact with the real world? Note the centrality of software in all of these characteristics:
The question of whether companies are tech companies, then, depends on how much of their business is governed by software’s unique characteristics, and how much is limited by real world factors. Consider Netflix, a company that both competes with traditional television and movie companies yet is also considered a tech company:
Netflix checks four of the five boxes.
Airbnb, which has yet to go public, is also often thought of as a tech company, even though they deal with lodging:
Uber, meanwhile, has long been mentioned in the same breath as Airbnb, and for good reason: it checks most of the same boxes:
A major question about Uber concerns transaction costs: bringing and keeping drivers on the platform is very expensive. This doesn’t mean that Uber isn’t a tech company, but it does underscore the degree to which its model is dependent on factors that don’t have zero costs attached to them.
Frankly, it is hard to see how WeWork is a tech company in any way.
Finally Peloton (which I wrote about here):
Peloton is also iffy as far as these five factors go, but then again, so is Apple: software-differentiated hardware is in many respects its own category. And, there is one more definition that is worth highlighting.
The term “technology” is an old one, far older than Silicon Valley. It means anything that helps us produce things more efficiently, and it is what drives human progress. In that respect, all successful companies, at least in a free market, are tech companies: they do something more efficiently than anyone else, on whatever product vector matters to their customers.
To that end, technology is best understood with qualifiers, and one of the most useful sets comes from Clayton Christensen and The Innovator’s Dilemma:
Most new technologies foster improved product performance. I call these sustaining technologies. Some sustaining technologies can be discontinuous or radical in character, while others are of an incremental nature. What all sustaining technologies have in common is that they improve the performance of established products, along the dimensions of performance that mainstream customers in major markets have historically valued. Most technological advances in a given industry are sustaining in character…
Disruptive technologies bring to a market a very different value proposition than had been available previously. Generally, disruptive technologies underperform established products in mainstream markets. But they have other features that a few fringe (and generally new) customers value. Products based on disruptive technologies are typically cheaper, simpler, smaller, and, frequently, more convenient to use.
Sustaining technologies make existing firms better, but they don’t change the competitive landscape. By extension, if adopting technology simply strengthens your current business, as opposed to making it uniquely possible, you are not a tech company. That, for example, is why IBM’s customers were no more tech companies than are users of the most modern SaaS applications.
Disruptive technologies, though, make something possible that wasn’t previously, or at a price point that wasn’t viable. This is where Peloton earns the “tech company” label from me: compared to spin classes at a dedicated gym, Peloton is cheap, and it scales far better. Sure, looking at a screen isn’t as good as being in the same room with an instructor and other cyclists, but it is massively more convenient and opens the market to a completely new customer base. Moreover, it scales in a way a gym never could: classes are held once and available forever on-demand; the company has not only digitized space but also time, thanks to technology. This is a tech company.
This definition also applies to Netflix, Airbnb, and Uber; all digitized something essential to their competitors, whether it be time or trust. I’m not sure, though, that it applies to WeWork: to the extent the company is unique it seems to rely primarily on unprecedented access to capital. That may be enough, but it does not mean WeWork is a tech company.
And, on the flipside, being a tech company does not guarantee success: the curse of tech companies is that while they generate massive value, capturing that value is extremely difficult. Here Peloton’s hardware is, like Apple’s, a significant advantage.
On the other hand, asset-lite models, like ride-sharing, are very attractive, but can Uber capture sufficient value to make a profit? What will Airbnb’s numbers look like when it finally IPOs? Indeed, the primary reason Peloton’s numbers look good is because they are selling physical products, differentiated by software, at a massive profit!
Still, definitions are helpful, even if they are not predictive. Software is used by all companies, but it completely transforms tech companies and should reshape consideration of their long-term upside — and downside.
I wrote a follow-up to this article in this Daily Update.
Farhad Manjoo, in the New York Times, ran an experiment on themself:
Earlier this year, an editor working on The Times’s Privacy Project asked me whether I’d be interested in having all my digital activity tracked, examined in meticulous detail and then published — you know, for journalism…I had to install a version of the Firefox web browser that was created by privacy researchers to monitor how websites track users’ data. For several days this spring, I lived my life through this Invasive Firefox, which logged every site I visited, all the advertising tracking servers that were watching my surfing and all the data they obtained. Then I uploaded the data to my colleagues at The Times, who reconstructed my web sessions into the gloriously invasive picture of my digital life you see here. (The project brought us all very close; among other things, they could see my physical location and my passwords, which I’ve since changed.)
What did we find? The big story is as you’d expect: that everything you do online is logged in obscene detail, that you have no privacy. And yet, even expecting this, I was bowled over by the scale and detail of the tracking; even for short stints on the web, when I logged into Invasive Firefox just to check facts and catch up on the news, the amount of information collected about my endeavors was staggering.
Here is a shrunk-down version of the graphic that resulted (click it to see the whole thing on the New York Times site):
Notably — at least from my perspective! — Stratechery is on the graphic:
Wow, it sure looks like I am up to some devious behavior! I guess it is all of the advertising trackers on my site, which doesn’t have any advertising…or perhaps Manjoo, as seems to so often be the case with privacy scare pieces, has overstated their case by a massive degree.
The narrow problem with Manjoo’s piece is a definitional one. This is what it says at the top of the graphic:
This strikes me as an overly broad definition of tracking; as best I can tell, Manjoo and their team counted every single script, image, or cookie that was loaded from a 3rd-party domain, no matter its function.
Consider Stratechery: the page in question, given the timeframe of Manjoo’s research and the apparent link from Techmeme, is probably The First Post-iPhone Keynote. On that page I count 31 scripts, images, fonts, and XMLHttpRequests (XHR for short, which can be used to set or update cookies) that were loaded from a 3rd-party domain.1 The sources are as follows (in decreasing number by 3rd-party service):
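As best I can tell, that counting methodology amounts to flagging any resource whose domain differs from the page’s. A minimal sketch shows why this conflates infrastructure with tracking (the resource URLs below are hypothetical, and the domain comparison is deliberately simplified — real tooling would consult the Public Suffix List rather than taking the last two hostname labels):

```python
from urllib.parse import urlparse


def third_party_resources(page_url: str, resource_urls: list[str]) -> list[str]:
    """Return resources served from a different registrable domain
    than the page itself — the apparent definition of a "tracker"."""

    def site(url: str) -> str:
        # Simplified eTLD+1: last two labels of the hostname.
        host = urlparse(url).hostname or ""
        return ".".join(host.split(".")[-2:])

    first_party = site(page_url)
    return [u for u in resource_urls if site(u) != first_party]


resources = [
    "https://stratechery.com/style.css",              # first party
    "https://fonts.gstatic.com/s/font.woff2",         # web font host
    "https://www.google-analytics.com/analytics.js",  # analytics
]
# Both non-Stratechery hosts get flagged, tracker or not.
print(third_party_resources("https://stratechery.com/", resources))
```

By this test a web font, a CDN-hosted script, and an ad tracker are indistinguishable — which is precisely the definitional problem.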
The only service here remotely connected to advertising is Google Analytics, but I have chosen to not share that information with Google (there is no need, as I don’t use Google’s advertising tools); the truth is that all of these “trackers” make Stratechery possible.2
This narrow critique of Manjoo’s article — wrongly characterizing multiple resources as “trackers” — gets at a broader philosophical shortcoming: technology can be used for both good things and bad things, but in the haste to highlight the bad, it is easy to be oblivious to the good. Manjoo, for example, works for the New York Times, which makes most of its revenue from subscriptions;3 given that, I’m going to assume they do not object to my including 3rd-party resources on Stratechery that support my own subscription business?
This applies to every part of my stack: because information is so easily spread across the Internet via infrastructure maintained by countless companies for their own positive economic outcome, I can write this Article from my home and you can read it in yours. That this isn’t even surprising is a testament to the degree to which we take the Internet for granted: any site in the world is accessible by anyone from anywhere, because the Internet makes moving data free and easy.
Indeed, that is why my critique of Manjoo’s article specifically and the ongoing privacy hysteria broadly is not simply about definitions or philosophy. It’s about fundamental assumptions. The default state of the Internet is the endless propagation and collection of data: you have to do work to not collect data on one hand, or leave a data trail on the other. This is the exact opposite of how things work in the physical world: there data collection is an explicit positive action, and anonymity the default.
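One concrete illustration of that default: a stock web server logs every visitor without anyone opting in to anything. nginx’s default “combined” access log format, for example, writes a line like the following for each request (the values here are illustrative) — IP address, timestamp, page requested, referring page, and browser details, all collected by doing nothing at all:

```
203.0.113.7 - - [16/Sep/2019:08:12:45 +0000] "GET /article/ HTTP/1.1" 200 51234 "https://news.example.com/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/12.1.2 Safari/605.1.15"
```

Turning this collection off requires deliberately editing the configuration; leaving it on requires nothing.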
That is not to say that there shouldn’t be a debate about this data collection, and how it is used. Even that latter question, though, requires an appreciation of just how different the digital world is from the analog one. Consider one of the most fearsome surveillance entities of all time, the East German Stasi. From Wired:
The German Democratic Republic dissolved in 1990 with the fall of communism, but the documents assembled by the Ministry for State Security, or Stasi, remain. This massive archive includes 69 miles of shelved documents, 1.8 million images, and 30,300 video and audio recordings housed in 13 offices throughout Germany. Canadian photographer Adrian Fish got a rare peek at the archives and meeting rooms of the Berlin office for his series Deutsche Demokratische Republik: The Stasi Archives. “The archives look very banal, just like a bunch of boring file holders with a bunch of paper,” he says. “But what they contain are the everyday results of a people being spied upon.”
That the files are paper makes them terrifying, because anyone can read them individually; that they are paper, though, also limits their reach. Contrast this to Google or Facebook: that they are digital means they reach everywhere; that, though, means they are read in aggregate, and stored in a way that is only decipherable by machines.
To be sure, a Stasi compare and contrast is hardly doing Google or Facebook any favors in this debate: the popular imagination about the danger this data collection poses, though, too often seems derived from the former, instead of the fundamentally different assumptions of the latter. This, by extension, leads to privacy demands that exacerbate some of the Internet’s worst problems.
Again, this is not to say that privacy isn’t important: it is one of many things that are important. That, though, means that online privacy in particular should not be the end-all be-all but rather one part of a difficult set of trade-offs that need to be made when it comes to dealing with this new reality that is the Internet. Being an absolutist will lead to bad policy (although encryption may be the exception that proves the rule).
This doesn’t just apply to governments: consider Apple, a company which is staking its reputation on privacy. Last week the WebKit team released a new Tracking Prevention Policy that is taking clear aim at 3rd-party trackers:
We have implemented or intend to implement technical protections in WebKit to prevent all tracking practices included in this policy. If we discover additional tracking techniques, we may expand this policy to include the new techniques and we may implement technical measures to prevent those techniques.
Of particular interest to Stratechery — and, per the opening of this article, Manjoo — is this definition and declaration:
Cross-site tracking is tracking across multiple first party websites; tracking between websites and apps; or the retention, use, or sharing of data from that activity with parties other than the first party on which it was collected.
WebKit will do its best to prevent all covert tracking, and all cross-site tracking (even when it’s not covert). These goals apply to all types of tracking listed above, as well as tracking techniques currently unknown to us.
In case you were wondering,4 yes, this will affect sites like Stratechery, and the WebKit team knows it (emphasis mine to highlight potential impacts on Stratechery):
There are practices on the web that we do not intend to disrupt, but which may be inadvertently affected because they rely on techniques that can also be used for tracking. We consider this to be unintended impact. These practices include:
- Funding websites using targeted or personalized advertising (see Private Click Measurement below).
- Measuring the effectiveness of advertising.
- Federated login using a third-party login provider.
- Single sign-on to multiple websites controlled by the same organization.
- Embedded media that uses the user’s identity to respect their preferences.
- “Like” buttons, federated comments, or other social widgets.
- Fraud prevention.
- Bot detection.
- Improving the security of client authentication.
- Analytics in the scope of a single website.
- Audience measurement.
When faced with a tradeoff, we will typically prioritize user benefits over preserving current website practices. We believe that that is the role of a web browser, also known as the user agent.
Don’t worry, Stratechery is not going out of business (although there may be a fair bit of impact on the user experience, particularly around subscribing or logging in). It is disappointing, though, that the maker of one of the most important and most unavoidable browser technologies in the world (WebKit is the only option on iOS) has decided that what is best for everyone is an absolutist approach — one that will ultimately improve the competitive position of massive first-party advertisers like Google and Facebook, even as it harms smaller sites that rely on 3rd-party providers for not just ads but all aspects of their business.
What makes this particularly striking is that it was only a month ago that Apple was revealed to be hiring contractors to listen to random Siri recordings; unlike Amazon (but like Google), Apple didn’t disclose that fact to users. Furthermore, unlike both Amazon and Google, Apple didn’t give users any way to see what recordings Apple had or delete them after-the-fact. Many commentators have seized on the irony of Apple having the worst privacy practices for voice recordings given their rhetoric around being a privacy champion, but I think the more interesting insight is twofold.
First, this was, in my estimation, a far worse privacy violation than the sort of online tracking the WebKit team is determined to stamp out, for the simple reason that the Siri violation crossed the line between the physical and digital world. As I noted above, the digital world is inherently transparent when it comes to data; the physical world, though — particularly somewhere like your home — is inherently private.
Second, I do understand why Apple has humans listening to Siri recordings: anyone that has used Siri can appreciate that the service needs to accelerate its feedback loop and improve more quickly. What happens, though, when improving the product means invading privacy? Do you look for good trade-offs, like explicit consent and user control, or do you, fearing a fundamentalist attitude that declares privacy more important than anything, try to sneak a true privacy violation behind everyone’s back like some sort of rebellious youth fleeing religion? Being an absolutist also leads to bad behavior, because after all, everyone is already a criminal.
The point of this article is not to argue that companies like Google and Facebook are in the right, and Apple in the wrong — or, for that matter, to argue my self-interest. The truth, as is so often the case, is somewhere in the middle, in the gray.5 To that end, I believe the privacy debate needs to be reset around these three assumptions:
This is where the Stasi example truly resonates: imagine all of those files, filled with all manner of physical movements and meetings and utterings, digitized and thus searchable, shareable, inescapable. That goes beyond a new medium lacking privacy from the get-go: it is taking privacy away from a world that previously had it. And yet the proliferation of cameras, speakers, location data, etc. goes on with a fraction of the criticism levied at big tech companies. Like too many fundamentalists, we are in danger of missing the point.
I wrote a follow-up to this article in this Daily Update.
On Sunday night, when Cloudflare CEO Matthew Prince announced in a blog post that the company was terminating service for 8chan, the response was nearly universal: Finally.
It was hard to disagree: it was on 8chan — which was created after complaints that the extremely lightly-moderated anonymous-based forum 4chan was too heavy-handed — that a suspected terrorist gunman posted a rant explaining his actions before killing 20 people in El Paso. This was the third such incident this year: the terrorist gunmen in Christchurch, New Zealand and Poway, California did the same; 8chan celebrated all of them.
To state the obvious, it is hard to think of a more reprehensible community than 8chan. And, as many were quick to point out, it was hardly the sort of site that Cloudflare wanted to be associated with as they prepared for a reported IPO. Which again raises the question: what took Cloudflare so long?
The question of when and why to moderate or ban has been an increasingly frequent one for tech companies, although the circumstances and content to be banned have often varied greatly. Some examples from the last several years:
These may seem unrelated, but in fact all are questions about what should (or should not) be moderated, who should (or should not) moderate, when should (or should not) they moderate, where should (or should not) they moderate, and why? At the same time, each of these examples is clearly different, and those differences can help build a framework for companies to make decisions when similar questions arise in the future — including Cloudflare.
The first and most obvious question when it comes to content is whether or not it is legal. If it is illegal, the content should be removed.
And indeed it is: service providers remove illegal content as soon as they are made aware of it.
Note, though, that service providers are generally not required to actively search for illegal content, which gets into Section 230 of the Communications Decency Act, a law that is continuously misunderstood and/or misrepresented.1
To understand Section 230 you need to go back to 1991 and the court case Cubby v CompuServe. CompuServe hosted a number of forums; a member of one of those forums made allegedly defamatory remarks about a company named Cubby, Inc. Cubby sued CompuServe for defamation, but a federal court judge ruled that CompuServe was a mere “distributor” of the content, not its publisher. The judge noted:
The requirement that a distributor must have knowledge of the contents of a publication before liability can be imposed for distributing that publication is deeply rooted in the First Amendment…CompuServe has no more editorial control over such a publication than does a public library, book store, or newsstand, and it would be no more feasible for CompuServe to examine every publication it carries for potentially defamatory statements than it would be for any other distributor to do so.
Four years later, though, Stratton Oakmont, a securities investment banking firm, sued Prodigy for libel, in a case that seemed remarkably similar to Cubby v. CompuServe; this time, though, Prodigy lost. From the opinion:
The key distinction between CompuServe and Prodigy is two fold. First, Prodigy held itself out to the public and its members as controlling the content of its computer bulletin boards. Second, Prodigy implemented this control through its automatic software screening program, and the Guidelines which Board Leaders are required to enforce. By actively utilizing technology and manpower to delete notes from its computer bulletin boards on the basis of offensiveness and “bad taste”, for example, Prodigy is clearly making decisions as to content, and such decisions constitute editorial control…Based on the foregoing, this Court is compelled to conclude that for the purposes of Plaintiffs’ claims in this action, Prodigy is a publisher rather than a distributor.
In other words, the act of moderating any of the user-generated content on its forums made Prodigy liable for all of the user-generated content on its forums — in this case to the tune of $200 million. This left services that hosted user-generated content with only one option: zero moderation. That was the only way to be classified as a distributor with the associated shield from liability, and not as a publisher.
The point of Section 230, then, was to make moderation legally viable; this came via the “Good Samaritan” provision. From the statute:
(c) Protection for “Good Samaritan” blocking and screening of offensive material
(1) Treatment of publisher or speaker
No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
(2) Civil liability No provider or user of an interactive computer service shall be held liable on account of—
(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or
(B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).
In short, Section 230 doesn’t shield platforms from the responsibility to moderate; it in fact makes moderation possible in the first place. Nor does Section 230 require neutrality: the entire reason it exists was because true neutrality — that is, zero moderation beyond what is illegal — was undesirable to Congress.
Keep in mind that Congress is extremely limited in what it can make illegal because of the First Amendment. Indeed, the vast majority of the Communications Decency Act was ruled unconstitutional a year after it was passed in a unanimous Supreme Court decision. This is how we have arrived at the uneasy space that Cloudflare and others occupy: it is the will of the democratically elected Congress that companies moderate content above-and-beyond what is illegal, but Congress can not tell them exactly what content should be moderated.
The one tool that Congress does have is changing Section 230; for example, 2018’s SESTA/FOSTA act made platforms liable for any activity related to sex trafficking. In response platforms removed all content remotely connected to sex work of any kind — Cloudflare, for example, dropped support for the Switter social media network for sex workers — in a way that likely caused more harm than good. This is the problem with using liability to police content: it is always in the interest of service providers to censor too much, because the downside of censoring too little is massive.
If the question of what content should be moderated or banned is one left to the service providers themselves, it is worth considering exactly what service providers we are talking about.
At the top of the stack are the service providers that people publish to directly; this includes Facebook, YouTube, Reddit, 8chan and other social networks. These platforms have absolute discretion in their moderation policies, and rightly so. First, because of Section 230, they can moderate anything they want. Second, none of these platforms have a monopoly on online expression; someone who is banned from Facebook can publish on Twitter, or set up their own website. Third, these platforms, particularly those with algorithmic timelines or recommendation engines, have an obligation to moderate more aggressively because they are not simply distributors but also amplifiers.
Internet service providers (ISPs), on the other hand, have very different obligations. While ISPs are no longer covered under Title II of the Communications Act, which barred them from discriminating data on the basis of content, it is the expectation of consumers and generally the policy of ISPs to not block any data because of its content (although ISPs have agreed to block child pornography websites in the past).
It makes sense to think about these positions in the stack very differently: the top of the stack is about broadcasting — reaching as many people as possible — and while you may have the right to say anything you want, there is no right to be heard. Internet service providers, though, are about access — having the opportunity to speak or hear in the first place. In other words, the further down the stack, the more legality should be the sole criterion for moderation; the further up, the more discretion and even responsibility there should be for content:
Note the implications for Facebook and YouTube in particular: their moderation decisions should not be viewed in the context of free speech, but rather as discretionary decisions made by managers seeking to attract the broadest customer base; the appropriate regulatory response, if one is appropriate, should be to push for more competition so that those dissatisfied with Facebook or Google’s moderation policies can go elsewhere.
Three factors made Cloudflare’s decision more challenging.
First, while Cloudflare is not an ISP, they are much more akin to infrastructure than they are to user-facing platforms. In the case of 8chan, Cloudflare provided a service that shielded the site from Distributed Denial-of-Service (DDoS) attacks; without a service like Cloudflare, 8chan would almost assuredly be taken offline by Internet vigilantes using botnets to launch such an attack. In other words, the question wasn’t whether or not 8chan was going to be promoted or have easy access to large social networks, but whether it would even exist at all.
To be perfectly clear, I would prefer that 8chan did not exist. At the same time, many of those arguing that 8chan should be erased from the Internet were insisting not too long ago that the U.S. needed to apply Title II regulation (i.e. net neutrality) to infrastructure companies to ensure they were not discriminating based on content. While Title II would not have applied to Cloudflare, it is worth keeping in mind that at some point or another nearly everyone reading this article has expressed concern about infrastructure companies making content decisions.
And rightly so! The difference between an infrastructure company and a customer-facing platform like Facebook is that the former is not accountable to end users in any way. Cloudflare CEO Matthew Prince made this point in an interview with Stratechery:
We get labeled as being free speech absolutists, but I think that has absolutely nothing to do with this case. There is a different area of the law that matters: in the U.S. it is the idea of due process, the Aristotelian idea is that of the rule of law. Those principles are set down in order to give governments legitimacy: transparency, consistency, accountability…if you go to Germany and say “The First Amendment” everyone rolls their eyes, but if you talk about the rule of law, everyone agrees with you…
It felt like people were acknowledging that the deeper you were in the stack the more problematic it was [to take down content], because you couldn’t be transparent, because you couldn’t be judged as to whether you’re consistent or not, because you weren’t fundamentally accountable. It became really difficult to make that determination.
Moreover, Cloudflare is an essential piece of the Facebook and YouTube competitive set: it is hard to argue that Facebook and YouTube should be able to moderate at will because people can go elsewhere if "elsewhere" does not have the scale to functionally exist.
Second, the nature of the medium means that all Internet companies have to be concerned about the precedent their actions in one country will have in different countries with different laws. One country’s terrorist is another country’s freedom fighter; a third country’s government acting according to the will of the people is a fourth’s tyrannically oppressing the minority. In this case, to drop support for 8chan — a site that was legal — is to admit that the delivery of Cloudflare’s services is up for negotiation.
Third, it is likely that at some point 8chan will come back, thanks to the help of a less scrupulous service, just as the Daily Stormer did when Cloudflare kicked it off two years ago. What, ultimately, is the point? In fact, might there be harm, since tracking these sites may end up being more difficult the further underground they go?
This third point is a valid concern, but one I, after long deliberation, ultimately reject. First, convenience matters. The truly committed may find 8chan when and if it pops up again, but there is real value in requiring that level of commitment in the first place, given said commitment is likely nurtured on 8chan itself. Second, I ultimately reject the idea that publishing on the Internet is a right that must be guaranteed by 3rd parties. Stand on the street corner all you like; at least there your terrible ideas will be limited by the physical world. The Internet, though, with its inherent ability to broadcast and congregate globally, is a fundamentally more dangerous medium that is by and large facilitated by third parties who have rights of their own. Running a website on a cloud service provider means piggy-backing off of your ISP, backbone providers, server providers, etc., and, if you are controversial, services like Cloudflare to protect you. It is magnanimous in a way for Cloudflare to commit to serving everyone, but at the end of the day Cloudflare does have a choice.
To that end I find Cloudflare’s rationale for acting compelling. Prince told me:
If this were a normal circumstance we would say “Yes, it’s really horrendous content, but we’re not in a position to decide what content is bad or not.” But in this case, we saw repeated consistent harm where you had three mass shootings that were directly inspired by and gave credit to this platform. You saw the platform not act on any of that and in fact promote it internally. So then what is the obligation that we have? While we think it’s really important that we are not the ones being the arbiter of what is good or bad, if at the end of the day content platforms aren’t taking any responsibility, or in some cases actively thwarting it, and we see that there is real harm that those platforms are doing, then maybe that is the time that we cut people off.
User-facing platforms are the ones that should make these calls, not infrastructure providers. But if they won’t, someone needs to. So Cloudflare did.
I promised, with this title, a framework for moderation, and frankly, I under-delivered. What everyone wants is a clear line about what should or should not be moderated, who should or should not be banned. The truth, though, is that those bright lines do not exist, particularly in the United States.
What is possible, though, is to define the boundaries of the gray areas. In the case of user-facing platforms, their discretion is vast, and their responsibility, which covers not simply moderation but also promotion, is significantly greater. A heavier hand is justified, as is external pressure on decision-makers; the most important regulatory response is to ensure there is competition.
Infrastructure companies, meanwhile, should primarily default to legality, but also, as Cloudflare did, recognize that they are the backstop to user-facing platforms that refuse to do their job.
Governments, meanwhile, beyond encouraging competition, should avoid using liability as a lever, and instead stick to clearly defining what is legal and what isn’t. I think it is legitimate for Germany, for example, to ban pro-Nazi websites, or the European Union to enforce the “Right to be Forgotten” within E.U. borders; like most Americans, I lean towards more free speech, not less, but governments, particularly democratically elected ones, get to make the laws.
What is much more problematic are initiatives like the European Copyright Directive, which makes platforms liable for copyright infringement. This inevitably leads to massive overreach and clumsy filtering, and favors large platforms that can pay for both filters and lawyers over smaller ones that cannot.
None of this is easy. I am firmly in the camp that argues that the Internet is something fundamentally different than what came before, making analog examples less relevant than they seem. The risks and opportunities of the Internet are both different and greater than anything we have experienced previously, and perhaps the biggest mistake we can make is being too sure about what is the right thing to do. Gray is uncomfortable, but it may be the best place to be.
I wrote a follow-up to this article in this Daily Update.
While I am (rightfully) teased about how often I discuss Aggregation Theory, there is a method to my madness, particularly over the last year: more and more attention is being paid to the power wielded by Aggregators like Google and Facebook, but to my mind the language is all wrong.
I discussed this at length last year:
This is ultimately the most important distinction between platforms and Aggregators: platforms are powerful because they facilitate a relationship between 3rd-party suppliers and end users; Aggregators, on the other hand, intermediate and control it.
It follows, then, that debates around companies like Google that use the word “platform” and, unsurprisingly, draw comparisons to Microsoft twenty years ago, misunderstand what is happening and, inevitably, result in prescriptions that would exacerbate problems that exist instead of solving them.
There is, though, another reason to understand the difference between platforms and Aggregators: platforms are Aggregators’ most effective competition.
Earlier this week I wrote about Walmart’s failure to compete with Amazon head-on; after years of trying to leverage its stores in e-commerce, Walmart realized that Amazon was winning because e-commerce required a fundamentally different value chain than retail stores. The point of my Daily Update was that the proper response to that recognition was not to try to imitate Amazon, but rather to focus on areas where the stores actually were an advantage, like groceries, but it’s worth understanding exactly why attacking Amazon head-on was a losing proposition.
When Amazon started, the company followed a traditional retail model, just online. That is, Amazon bought products at wholesale, then sold them to customers:
Amazon’s sales proceeded to grow rapidly, not just of books, but also in other media products with large selections like DVDs and CDs that benefitted from Amazon’s effectively unlimited shelf-space. This growth allowed Amazon to build out its fulfillment network, and by 1999 the company had seven fulfillment centers across the U.S. and three more in Europe.
Ten may not seem like a lot — Amazon has well over 300 fulfillment centers today, plus many more distribution and sortation centers — but for reference Walmart has only 20. In other words, at least when it came to fulfillment centers, Amazon was halfway to Walmart’s current scale 20 years ago.
It would ultimately take Amazon another nine years to reach twenty fulfillment centers (this was the time for Walmart to respond), but in the meantime came a critical announcement that changed what those fulfillment centers represented. In 2006 Amazon announced Fulfillment by Amazon, wherein 3rd-party merchants could use those fulfillment centers too. Their products would not only be listed on Amazon.com, they would also be held, packaged, and shipped by Amazon.
In short, Amazon.com effectively bifurcated itself into a retail unit and a fulfillment unit:
The old value chain is still there — nearly half of the products on Amazon.com are still bought by Amazon at wholesale and sold to customers — but 3rd parties can sell directly to consumers as well, bypassing Amazon’s retail arm and leveraging only Amazon’s fulfillment arm, which was growing rapidly:
Walmart and its 20 distribution centers don’t stand a chance, particularly since catching up means competing for consumers not only with Amazon but with all of those 3rd-party merchants filling up all of those fulfillment centers.
There is one more critical part of the drawing I made above:
Despite the fact that Amazon had effectively split itself in two in order to incorporate 3rd-party merchants, this division is barely noticeable to customers. They still go to Amazon.com, they still use the same shopping cart, they still get the boxes with the smile logo. Basically, Amazon has managed to incorporate 3rd-party merchants while still owning the entire experience from an end-user perspective.
This should sound familiar: as I noted at the top, Aggregators tend to internalize their network effects and commoditize their suppliers, which is exactly what Amazon has done.1 Amazon benefits from more 3rd-party merchants being on its platform because it can offer more products to consumers and justify the buildout of that extensive fulfillment network; 3rd-party merchants are mostly reduced to competing on price.
That, though, suggests there is a platform alternative — that is, a company that succeeds by enabling its suppliers to differentiate and externalizing network effects to create a mutually beneficial ecosystem. That alternative is Shopify.
At first glance, Shopify isn’t an Amazon competitor at all: after all, there is nothing to buy on Shopify.com. And yet, 218 million people bought products from Shopify merchants without even knowing the company existed.
The difference is that Shopify is a platform: instead of interfacing with customers directly, 820,000 3rd-party merchants sit on top of Shopify and are responsible for acquiring all of those customers on their own.
This means they have to stand out not in a search result on Amazon.com, or simply offer the lowest price, but rather earn customers’ attention through differentiated product, social media advertising, etc. Many, to be sure, will fail at this: Shopify does not break out merchant churn specifically, but it is almost certainly extremely high.
That, though, is the point.
Unlike Walmart, which is currently weighing whether to spend additional billions on top of the billions it has already spent attacking Amazon head-on (a bet with a binary outcome of success or failure), Shopify is massively diversified. That is the beauty of being a platform: you succeed (or fail) in the aggregate.
To that end, I would argue that for Shopify a high churn rate is just as much a positive signal as it is a negative one: the easier it is to start an e-commerce business on the platform, the more failures there will be. And, at the same time, the greater likelihood there will be of capturing and supporting successes.
This is how Shopify can both in the long run be the biggest competitor to Amazon even as it is a company that Amazon can’t compete with: Amazon is pursuing customers and bringing suppliers and merchants onto its platform on its own terms; Shopify is giving merchants an opportunity to differentiate themselves while bearing no risk if they fail.
This is the context for one of the most interesting announcements from Shopify’s recent partner conference, Shopify Unite. The name should ring familiar: the Shopify Fulfillment Network.
From the company’s blog:
Customers want their online purchases fast, with free shipping. It’s now expected, thanks to the recent standard set by the largest companies in the world. Working with third-party logistics companies can be tedious. And finding a partner that won’t obscure your customer data or hide your brand with packaging is a challenge.
This is why we’re building Shopify Fulfillment Network—a geographically dispersed network of fulfillment centers with smart inventory-allocation technology. We use machine learning to predict the best places to store and ship your products, so they can get to your customers as fast as possible.
We’ve negotiated low rates with a growing network of warehouse and logistic providers, and then passed on those savings to you. We support multiple channels, custom packaging and branding, and returns and exchanges. And it’s all managed in Shopify.
The first paragraph explains why the Shopify Fulfillment Network was a necessary step for Shopify: Amazon may commoditize suppliers, hiding their brand from website to box, but if its offering is truly superior, suppliers don’t have much choice. That was increasingly the case with regards to fulfillment, particularly for the small-scale sellers that are important to Shopify not necessarily for short-term revenue generation but for long-run upside. Amazon was simply easier for merchants and more reliable for customers.
Notice, though, that Shopify is not doing everything on their own: there is an entire world of third-party logistics companies (known as “3PLs”) that offer warehousing and shipping services. What Shopify is doing is what platforms do best: act as an interface between two modularized pieces of a value chain.
On one side are all of Shopify’s hundreds of thousands of merchants: interfacing with all of them on an individual basis is not scalable for those 3PL companies; now, though, they only need to interface with Shopify.
The same benefit applies in the opposite direction: merchants don’t have the means to negotiate with multiple 3PLs such that their inventory is optimally placed to offer fast and inexpensive delivery to customers; worse, the small-scale sellers I discussed above often can’t even get an audience with these logistics companies. Now, though, Shopify customers need only interface with Shopify.
Moreover, this is what Shopify has already accomplished when it comes to referral partners (who drive new merchants onto the platform), developers (who build apps for managing Shopify stores) and theme designers (who sell themes to customize the look-and-feel of stores). COO Harley Finkelstein said at Unite:
You’ve often heard me say that we at Shopify want to create more value for your partners than we capture for ourselves, and I find the best way to demonstrate this is by looking at what I call the “Partner Economy”. The “Partner Economy” is the amount of revenue that flows to all of you our partners…in 2018 Shopify made about a billion dollars [Editor: in revenue]. We estimate that you, our partners, made more than $1.2 billion.
In other words, Shopify clears the Bill Gates Line — it captures a minority of the value in the ecosystem it has created — and the Shopify Fulfillment Network should fit right in:
What is powerful about this model is that it leverages the best parts of modularity — diversity and competition at different parts of the value chain — and aligns the incentives of all of them. Every referral partner, developer, theme designer, and now 3PL provider is simultaneously incentivized to compete with each other narrowly and ensure that Shopify succeeds broadly, because that means the pie is bigger for everyone.
This is the only way to take on an integrated Aggregator like Amazon: trying to replicate what Amazon has built up over decades, as Walmart has attempted, is madness. Amazon has the advantage in every part of the stack, from more customers to more suppliers to lower fulfillment costs to faster delivery.
The only way out of that competition is differentiation; granted, Walmart has tried buying and launching new brands exclusive to its store, but differentiation when it comes to e-commerce goods doesn’t arise from top down planning. Rather, it bubbles up from widespread opportunity (and churn!), like that created by Shopify, supported by an entire aligned ecosystem.
When I get things wrong — and I was very much wrong about Facebook’s blockchain plans — the reason is usually a predictable one: confirmation bias. That is, I already have an idea of what a company’s motivations are, and then view news through that lens, failing to think critically about what parts of that news might actually disconfirm my assumptions.
Start with the obvious: this isn’t a Bitcoin competitor. And why would it be? The entire point of Bitcoin is to be distributed; Facebook’s power comes from its centralization. Indeed, this is probably the single most important prism through which to examine whatever it is that Facebook does in the space: the company is not going to betray its dominant position, but rather seek to strengthen it. That is why I am not too concerned about not knowing the implementation details: take it as a given that whatever role users have to play in this network, Facebook will have final control.
I stand by the first part of that excerpt: for all of the positive attributes Facebook is highlighting about Project Libra — which Facebook, in conjunction with the newly formed Libra Association, announced last week — it is unreasonable to expect that Facebook would invest significant resources in something that would weaken its position. What I got wrong was presuming that meant overt Facebook control. Frustratingly, it was an error that should have both been obvious in my original analysis and also clear in the broader view of the Internet I have explained through Aggregation Theory.
Libra is being presented as a cryptocurrency based on a blockchain: transactions are recorded on a shared ledger and verified by “miners” independently solving cryptographic problems and arriving at a consensus that the transaction is legitimate and should be added to the ledger permanently.
In practice, it is much more complicated: while a limited set of “validators” — aka miners — share a history of transactions in (individual) blocks that are chained together (i.e. a blockchain), what Libra actually exposes is the current state of the ledger. In practice this means that adding new transactions can be much quicker and more efficient — more akin to adding a line to a spreadsheet than rebuilding the entire spreadsheet from scratch.
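The distinction can be sketched in a few lines of Python. This is not Libra’s actual data model (the white paper defines its own storage scheme); it is a toy contrast, with hypothetical helper names, between a Bitcoin-style chain that anyone can re-verify from the beginning and a state-based ledger that is cheap to update but must be trusted:

```python
import hashlib
import json

def block_hash(prev: str, transactions: list) -> str:
    """Deterministic hash committing to a block's contents and its predecessor."""
    payload = json.dumps({"prev": prev, "transactions": transactions}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def append_block(chain: list, transactions: list) -> None:
    """Chain a new block to the previous one via its hash."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"prev": prev, "transactions": transactions,
                  "hash": block_hash(prev, transactions)})

def verify_chain(chain: list) -> bool:
    """Bitcoin-style trustlessness: anyone can re-check every link from genesis."""
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev or \
           block["hash"] != block_hash(block["prev"], block["transactions"]):
            return False
        prev = block["hash"]
    return True

chain = []
append_block(chain, [["alice", "bob", 5]])
append_block(chain, [["bob", "carol", 2]])
assert verify_chain(chain)   # full-history verification: slow but trustless

# A state-based ledger instead exposes only current balances: appending a
# transaction is a constant-time update ("adding a line to the spreadsheet"),
# but users must trust the validators who maintain the state.
balances = {"alice": 5, "bob": 10}
balances["alice"] -= 3
balances["bob"] += 3
```

Tampering with any historical block breaks every subsequent hash link, which is why full-chain verification is trustless; the state-based ledger has no such self-evidence, hence the reliance on validators.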
In other words, there is a trade-off between trust and efficiency: whereas anyone can “rebuild the spreadsheet” in the case of a cryptocurrency like Bitcoin, where the blockchain is fully exposed, normal users have to trust Libra’s validators.1 On the other hand, Bitcoin, thanks to the overhead of communicating and verifying every transaction, can only manage around 7 transactions a second; Libra is promising 1,000 transactions per second.
Who, then, are the validators? Well, Facebook is one, but only one: currently there are 28 “Founding Members”, including merchants, venture capitalists, and payment networks, that meet two of the following three criteria:
These “Founding Members” are required to make a minimum investment of $10 million and provide computing power to the network. In addition, there are separate requirements for non-profit organizations and academic institutions that rely on a mixture of budget, track record, and rankings; a minimum investment may not be necessary. Libra intends to have 100 Founding Members by the time it launches next year.
Here is the important thing to understand about the Libra Association: while its members — who again, are the validators — do control the Libra protocol, Facebook does not control the validators. Which, by extension, means that Facebook will not control Libra.
To understand the distinction, consider an alternative route that Facebook could have taken: a so-called “Facebook Coin”. In that case Facebook would have had total control over the protocol, and to be sure, this would have distinct advantages for Facebook specifically and the usability of a “Facebook Coin” generally:
This is the Trust-Efficiency tradeoff taken to the opposite extreme from Bitcoin:
With Bitcoin, there is no need to trust anyone — you can verify the entire blockchain yourself — but at the cost of efficiency of transactions. A Facebook Coin, on the other hand, would require complete trust of Facebook, but transactions would be far more efficient as a result.
The most obvious example of this is WeChat Pay: WeChat handles the transactions, stores the money, and is the sole source of authority about who owns what, and thanks to the ubiquity of WeChat and the efficiency of this model, WeChat Pay (along with Alipay) has become the default payment mechanism in China.
Unsurprisingly, WeChat doesn’t use any sort of blockchain-based technology. Why would it? The entire point of a blockchain is to distribute a ledger across multiple parties, which is fundamentally less efficient than simply storing the entire ledger in a single database managed by one party.
This gets at the error in analysis I referenced above: because I was anchored on the idea of Facebook capturing transaction data, I missed that when the Wall Street Journal reported last month that Facebook was using some sort of blockchain technology (leaving aside the quibble on the definition noted above) it was an obvious signal that whatever Facebook was announcing would not be completely controlled by Facebook, because if the goal were Facebook control of a Facebook Coin then a blockchain would be a silly way to implement it.
The best way to understand Libra, then, is as a sort of distributed ledger that is a compromise between a fully public blockchain and an internal database:
This means that the overall system is much more efficient than Bitcoin, while the necessary level of trust is spread out to multiple entities, not one single company:
The trade-off is that Libra is not fully permissionless, although the Libra White Paper does say that is the long-term goal:
To ensure that Libra is truly open and always operates in the best interest of its users, our ambition is for the Libra network to become permissionless. The challenge is that as of today we do not believe that there is a proven solution that can deliver the scale, stability, and security needed to support billions of people and transactions across the globe through a permissionless network. One of the association’s directives will be to work with the community to research and implement this transition, which will begin within five years of the public launch of the Libra Blockchain and ecosystem.
Time will tell if this is possible: if you flip the “trust” axis in the above graphs the current state of affairs looks like this:
It may very well prove to be the case that there is a sort of efficient frontier when it comes to “no-trust” versus “efficiency”: that is, any decrease in necessary trust requires a corresponding decrease in efficiency. From my perspective the safest assumption about Libra’s future is that efficiency will be the ultimate priority, which means that the more that Libra is used the more difficult it will be to ever transition to a permissionless model.
Still, even if Libra remains controlled by an ever-expanding-but-still-limited set of validators, that is likely to be a far easier “sale” than a Facebook Coin controlled by a single company. Leaving aside the fact that Facebook is not exactly swimming in trust these days when it comes to users, why would any other large company want to adopt a currency with a single point of corporate control?
Keep in mind that the situation in the United States and other developed countries is much different than in China: credit cards have their flaws, particularly in terms of fees, but they are widely accepted by merchants and widely used by consumers. China, on the other hand, mostly leapfrogged credit cards entirely; this meant that WeChat Pay’s (and Alipay’s) competition was cash, and the advantages of WeChat Pay over cash (which are massive) could overcome any concerns around centralized control.
A theoretical Facebook Coin’s relative advantage to credit cards, on the other hand, would be massively smaller, which means obstacles to widespread adoption — like trusting Facebook exclusively — would likely be insurmountable:
Thus the federation of trust inherent in Libra, despite the loss of efficiency that entails: by not being in control, and by actively including corporations like Spotify and Uber that will provide places to use Libra outside of Facebook, and payment networks like Visa and PayPal that will facilitate such usage, Facebook is increasing the chances that Libra will actually be used instead of credit cards.
I do think it is overly cynical to completely dismiss the advertised benefits of Libra: remittances, for example, have long been the go-to example of how cryptocurrencies can have societal benefit, for a very good reason — the current system exacts major fees from the population that can least afford to bear them. And, while I just spent an entire section on credit cards, the reality is that credit card penetration is much lower amongst the poor in developed countries and in developing countries generally: a digital currency ultimately premised on owning a smartphone has the potential to significantly expand markets to the benefit of both consumers and service providers.
To put it another way, Libra has the potential to significantly decrease friction when it comes to the movement of money; of course this potential is hardly limited to Libra — the reduction in friction is one of the selling points of digital currencies generally — but by virtue of being supported by Facebook, particularly the Calibra wallet that will be both a standalone app and also built into Facebook Messenger and WhatsApp, accessing Libra will likely be much simpler than accessing other cryptocurrencies. When it comes to decreasing friction, simplifying the user experience matters just as much as eliminating intermediary institutions.
There is also another component of trust beyond caring about who is verifying transactions: confidence that the value of Libra will be stable. This is the reason why Libra will have a fully-funded reserve denominated in a basket of currencies. This does not foreclose Libra becoming a fully standalone currency in the long run, but for now both users and merchants will be able to trust that the value of Libra will be sufficiently stable to use it for transactions.
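How a basket peg stabilizes value can be shown with a back-of-the-envelope calculation. The weights and exchange rates below are entirely hypothetical (the actual reserve composition was not public at announcement); the point is only that a move in any single currency shifts the unit’s value in proportion to that currency’s share of the basket:

```python
# Hypothetical basket weights and USD exchange rates, for illustration only.
weights  = {"USD": 0.50, "EUR": 0.20, "JPY": 0.15, "GBP": 0.15}
usd_rate = {"USD": 1.00, "EUR": 1.12, "JPY": 0.0092, "GBP": 1.27}

def unit_value_usd(weights: dict, usd_rate: dict) -> float:
    """USD value of one basket-backed unit: weighted sum over the basket."""
    return sum(w * usd_rate[ccy] for ccy, w in weights.items())

base = unit_value_usd(weights, usd_rate)

# A 10% drop in the euro moves the unit down far less than 10%,
# because EUR is only one part of the basket.
shocked = dict(usd_rate, EUR=1.12 * 0.9)
change = unit_value_usd(weights, shocked) / base - 1
```

The diversification is the whole point: no single central bank’s policy can swing the unit’s value the way it would a single-currency peg.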
If all of these bets pay off — that users and merchants will trust a consortium more than Facebook; that Libra will be cheaper and easier to use, more accessible, and more flexible than credit cards; and that Libra itself will be a reliable store of value — then that decrease in friction will be realized at scale.
And this is when this bet would pay off for Facebook (and the second point I missed in my earlier analysis): the implication that digital currencies will do for money what the Internet did for information is that the very long-term trend will be towards centralization around Aggregators. When there is no friction, control shifts from gatekeepers controlling supply to Aggregators controlling demand. To that end, by pioneering Libra, building what will almost certainly be the first wallet for the currency, and bringing to bear its unmatched network for facilitating payments, Facebook is betting it will offer the best experience for digital currency flows, giving it power not by controlling Libra but rather by controlling the most users of Libra.
Libra’s success, if it comes, will likely proceed in stages, with different challenges and competitors at each stage:
It is easy to see how Facebook, given its size, would thrive in that final state, for the reasons I detailed above. Just as Google long boasted that the more people use the Internet the more revenue Google generates, it stands to reason that the more people use digital money the more it would benefit dominant digital companies like Facebook, whether that be through advertising, transactions, or simply making networks that much more valuable.
That, though, is also a reason to be skeptical: the idea of Google making more money by people using the Internet more was once viewed as a happy alignment of incentives that justified Google’s services being free; today the centralization — and thus money-making potential — that follows a reduction in friction is much better understood, and there is much more concern about just how much power these Aggregators have.
This is particularly the case with Facebook: despite all of the company’s efforts to design a system that does not entail trusting Facebook exclusively — again, this is not a Facebook Coin — Libra is already widely known as a Facebook initiative. Unless the consumer benefits are truly extraordinary, that may be enough to prevent Libra from ever gaining escape velocity. This applies even more to the Calibra wallet: Facebook promises not to mix transaction data with profile data, but that entails, well, trust that Facebook may have already lost.
Still, that doesn’t mean digital currencies will never make it: I do think that Libra gets closer to a workable balance between trust and efficiency than Bitcoin, at least when it comes to being usable for transactions and not simply a store of value; the question is who can actually get such a currency off the ground. Certainly Facebook’s audacity and ambition should not be underestimated, and the company’s network is the biggest reason to believe Libra will work; Facebook’s brand is the biggest reason to believe it will not.
Validator APIs to support full nodes (nodes that have a full replica of the blockchain but do not participate in consensus). This feature allows for the creation of replicas that can support scaling access to the blockchain and the auditing of the correct execution of transactions.
However, only validators can actually validate transactions (unlike Bitcoin, where anyone can be a miner/validator)
Four years ago I wrote Aggregation Theory, which argued that technology companies, uniquely enabled by zero marginal costs, were dominant by virtue of user preference driving suppliers onto their platforms, creating a virtuous cycle. Then, one month later, I predicted that the end state of Aggregation Theory would be increased demands for antitrust action. From Aggregation and the New Regulation:
This last point is key: under Aggregation Theory the winning aggregators have strong winner-take-all characteristics. In other words, they tend towards monopolies. Google is perhaps the best Aggregation Theory example of all — the company modularized individual pages from the publications that housed them even as it became the gateway to said pages for the vast majority of people — and so, given their success, perhaps it shouldn’t be a surprise that the company is under formal investigation by the European Union.
There was a second more subtle point in that article, though:
In other words, the regulation situation for these massive winner-take-all companies is not hopeless, but it has changed: their strength derives from the customer relationships they own, which means quiet backroom deals and straight-up arm wrestling of the Google and Uber varieties are liable to backfire in the face of overwhelming public opinion; it is in shaping that public opinion that the real battle will be fought. And while it’s true that the direct relationship aggregation companies have with their users is an advantage in this fight, the overwhelming power of social media is the new counterweight: it is easier than ever to reach said users with a report or column that resonates deeply. Your average writer or reporter has more (potential) power, not less.
This seems like the best explanation for how we have arrived at the current moment; Reuters reported last week that the U.S. Department of Justice and the Federal Trade Commission were divvying up tech companies for potential antitrust investigations — Google and Apple to the former, and Facebook and Amazon to the latter — a seemingly natural endpoint to what has been a mounting drumbeat for regulatory action against tech.
There’s just one problem: it’s not clear what there is to investigate.
I should state an obligatory caveat: I am not a lawyer or economist, which is relevant given that U.S. antitrust cases are adjudicated in court and largely driven by expert testimony. That reality, though, only underscores the point: any case against these four companies (with possibly one exception, which I will get to momentarily) will be extremely difficult to win.1 To explain why, it is worth examining all four companies with regard to:
In addition, for comparison’s sake, I will evaluate late-1990s Microsoft, the subject of the last major tech antitrust case in the United States, along the same dimensions.
The FTC defines monopolization as follows:
Courts do not require a literal monopoly before applying rules for single firm conduct; that term is used as shorthand for a firm with significant and durable market power — that is, the long term ability to raise price or exclude competitors. That is how that term is used here: a “monopolist” is a firm with significant and durable market power. Courts look at the firm’s market share, but typically do not find monopoly power if the firm (or a group of firms acting in concert) has less than 50 percent of the sales of a particular product or service within a certain geographic area. Some courts have required much higher percentages. In addition, that leading position must be sustainable over time: if competitive forces or the entry of new firms could discipline the conduct of the leading firm, courts are unlikely to find that the firm has lasting market power.
There are (at least) two major questions that arise from this: how is the relevant market defined, and what does it mean for market power to be sustainable over time?
1990s Microsoft: Microsoft was found to have a monopoly on operating systems for personal computers, and that advantage was found to be durable because of the lock-in created by the network effects between developers using the Windows API and users. Both conclusions were reasonable.
Google: Google certainly has a dominant position in search, but the real question is around durability. Google has long argued that “Competition is only a click away”, which has the welcome benefit of being true.
The European Commission handled this objection by arguing that Google also enjoys network effects:
There are also high barriers to entry in these markets, in part because of network effects: the more consumers use a search engine, the more attractive it becomes to advertisers. The profits generated can then be used to attract even more consumers. Similarly, the data a search engine gathers about consumers can in turn be used to improve results.
This is certainly a much more tenuous lock-in than the Windows API, but I think it is a plausible one.
Apple: There is no company for which the question of market definition matters more than Apple. The company is eager to point out that the iPhone has a minority smartphone share in every market in which it competes; even in the U.S., Apple’s best market, the iPhone has 45% share, less than the 50 percent of sales the FTC suggests as a cut-off.
In Europe, Apple is likely in trouble when it comes to the European Commission’s investigation of the App Store, prompted by Spotify’s complaints. In the Google Android case the European Commission determined that “Google is dominant in the markets for general internet search services, licensable smart mobile operating systems and app stores for the Android mobile operating system.” That last clause leaves room for Apple to be found dominant in app stores for the iOS mobile operating system, at which point taking 30% of Spotify’s revenue (or else forbidding Spotify from even linking to a web page with a sign-up form) will almost certainly be ruled illegal.
I strongly suspect the Department of Justice will have a much more difficult time convincing a federal court that such a narrow definition is appropriate, but at the same time, I’m not certain that “smartphones” are the correct market definition either. Suggesting that users changing ecosystems is a sufficient antidote to Apple’s behavior is like suggesting that users subject to a hospital monopoly in their city should simply move elsewhere; asking a third party to remedy anticompetitive behavior by incurring massive inconvenience with zero immediate gain is just as problematic as making up market definitions to achieve a desired result.
Facebook: Here again market definitions are very fuzzy. Most people have multiple social media accounts across both Facebook and non-Facebook services, which means any sort of workable market share definition would have to rely on “time-spent” or some other zero-sum metric. Moreover, it’s not clear what is or is not a social network: does iMessage count? What about text messaging generally? What about email?
There certainly is an argument that Google and Facebook are a duopoly when it comes to digital advertising, but it is not as if either has the power to foreclose supply: there is effectively infinite advertising inventory on the Internet, which suggests that Google and Facebook earn more advertising dollars because they are better at advertising, not because they foreclose competition.
Amazon: There really is no plausible argument that Amazon has a monopoly. Yes, the company has around 37% of e-commerce sales, but (1) that is obviously less than 50% and (2) the competition is only a click away! Moreover, it’s not clear why “e-commerce” is the relevant market, and in terms of retail Amazon has low single-digits market share.
But for a few exceptions, everything that follows is moot if the company in question is not found to have a durable monopoly. After all, “anticompetitive behavior” is simply another name for “driving differentiation”, which no one should want to be illegal for any company that is not in a dominant position; it is the potential to make outsized profits that drives innovation.
Still, it is worth examining what, if anything, these companies do that might be considered problematic.
1990s Microsoft: Microsoft was found guilty of illegally bundling Internet Explorer with Windows and unfairly restricting OEMs from shipping computers with alternative browsers (or alternative operating systems). The first objection is particularly interesting in 2019, given that it is unimaginable that any operating system would ship without web browser functionality (which, at a minimum, would obviate an essential distribution channel for 3rd-party software). The second is much more problematic: as I wrote in Where Warren’s Wrong, competition-constraining contracts from dominant players should be viewed with extreme skepticism, as their purpose is almost always to extend dominance, not increase consumer welfare.
Google: Again — and note a developing theme here — Google’s anticompetitive behavior is relatively clear. First, the company consistently favors its own properties in search results, particularly “above-the-fold” — that is, results that are not actually search results but which seek to answer the user’s query directly. A partial list:
Of these, local is probably the most open-and-shut case (although Google’s efforts around travel and hospitality are on the same track): Google Maps results were worse, got better when Google scraped data from competitors (a practice it stopped after an FTC investigation), and are now somewhat competitive by sheer force of exposure to customers defaulting to Google search.
Then, of course, there is Android, where Google leveraged the Play Store to force Android OEMs to feature Search and Chrome, and further forbade said OEMs from shipping any phones with open-source Android alternatives (a la Microsoft). This is one case the European Commission got exactly right.
Apple: As I argued in Antitrust, the App Store, and Apple, Apple is leveraging its position in the smartphone market to earn rents in the market for digital goods:
To put it another way, Apple profits handsomely from having a monopoly on iOS: if you want the Apple software experience, you have no choice but to buy Apple hardware. That is perfectly legitimate. The company, though, is leveraging that monopoly into an adjacent market — the digital content market — and rent-seeking. Apple does nothing to increase the value of Netflix shows or Spotify music or Amazon books or any number of digital services from any number of app providers; they simply skim off 30% because they can.
For this to be illegal does not necessarily require that Apple have a monopoly: tying (i.e. iOS users must use the App Store) is per se illegal in theory, but in practice the Supreme Court has dramatically constricted the definition of tying to include a requirement that the tie-er have market dominance; the Supreme Court also declined to review the Court of Appeals decision in the Microsoft case, which held that courts should use a rule of reason test for software specifically that also considers the benefits of tying, not simply the downsides.
I would certainly argue that the requirement that digital content use Apple’s payment processor (and thus give up 30%) has downsides that outweigh the benefits, but the truth is that this is a case that, under U.S. antitrust law, is harder to make than it was 20 years ago.
Facebook: There are certainly plenty of reasons to be upset with Facebook when it comes to issues of privacy, but the company has not done anything illegal from an antitrust perspective.
I am, to be clear, distinguishing anti-competitive behavior from anti-competitive mergers. I have made the case as to why Facebook’s acquisition of Instagram was so problematic, and this is the area that needs the most urgent attention from anyone who cares about competition. The single best way to maintain a dominant position in a market as dynamic as technology is to use the outsized profits that come from winning in one market to buy the winner in another; it follows, then, that the best way to spur competition in the long run is to force companies to compete with new entrants, not buy them out.
Amazon: Make no mistake, Amazon drives a very hard bargain with its suppliers. Those suppliers, though, have a whole host of alternatives through which to sell their product. Meanwhile, those hard bargains accrue to consumers’ benefit.
Similarly, it is very hard to see why Amazon can’t offer its own branded goods; this practice is widespread in retail, and for good reason: consumers get a better price, not only on the store-branded goods, but also on 3rd-party goods that can be priced more competitively since the retailer is making its margin on its own goods.
In short, more than any company on this list, the arguments against Amazon fall apart on the first point: Amazon simply isn’t a monopoly.
Remedies by definition come last: there has to be something worth remedying! Still, it is interesting to consider what the appropriate remedy for each company would be if they were indeed found to be a monopoly engaged in illegal anticompetitive behavior.
1990s Microsoft: Microsoft was originally ordered to be broken up, although this remedy was overturned on appeal. The idea was that Windows would better serve all 3rd-party software suppliers if it weren’t incentivized to favor its own offerings. Ultimately, though, the company agreed to open up its APIs, although critics argued that the specifics simply cemented Windows’ dominance, instead of making it possible to build a Windows alternative that could run 3rd-party Windows applications.
The European Commission went further, both requiring interoperability and presenting users with a choice of browsers and media players. In both cases 3rd-party competitors actually won in the long run, but they won because they were clearly better (first Firefox and then Chrome for browsers, and iTunes for media players).
Google: An effective Google remedy would likely be more about constraining Google behavior than it would be about restructuring Google itself. Google might be forbidden from offering its own results for things like local search, or be forced to feature results from competitors according to an algorithm overseen by a court observer. There would also likely be a large fine.
Apple: The obvious remedy for Apple would be allowing 3rd-party payment processors for apps; frankly, I think this might go too far, as there are real benefits to Apple controlling everything API-related on the iOS platform. I would be satisfied with Apple allowing apps to launch web views for payment processing that is clearly handled on the app’s own webpage.
Alternatively, Apple could be forced to significantly reduce its App Store take rate, but I would prefer that Apple be forced to compete for payment processing business, which would achieve a similar result.
Facebook: Facebook, fascinatingly enough, given its lack of anticompetitive behavior, has the most obvious remedy: break apart Facebook, Instagram, and WhatsApp. I do believe this would be beneficial for competition: Instagram being an independent company would not only add another competitor for digital advertising, but would also make other companies like Snapchat more competitive by virtue of forcing advertisers to diversify. Again, though, this is more about a failure in merger review.
Amazon: Amazon has made anti-competitive acquisitions of its own, like Zappos and Diapers.com. Those platforms are gone, though, making any sort of breakup unrealistic (this is likely at least one factor in Facebook’s plans to integrate messaging across its platforms — that will make a breakup that much more difficult). And as far as selling its own products goes, not only is that probably not a problem, but there is little evidence 3rd-party sellers are being hurt by Amazon’s policies, and plenty of evidence that they are helped by having access to Amazon’s customers. Moreover, highly differentiated suppliers have found success prioritizing other retailers when Amazon squeezes too hard.
Ideally, an antitrust action is not simply about punishing bad behavior in the past, but also about ensuring competition going forward. To that end, it is worth considering whether the upheaval that would result from any sort of investigation would actually make a long-term difference.
1990s Microsoft: Here the Microsoft case is particularly pressing. It is my contention that Microsoft failed to compete on the Internet and in mobile because the company was fundamentally unsuited to do so, both in terms of culture and capability.
The implication of this conclusion is that the antitrust case against Microsoft was largely a waste of time: the company would have been surpassed by Google and Apple regardless (and the company only returned to prominence when it embraced a market that suited its capabilities and transformed its culture).
Many disagree, to be sure, arguing that the antitrust case prevented Microsoft from foreclosing Google, although it is never clear how Microsoft would have done so (nor is there any explanation of why Microsoft failed in mobile, where it was not constrained). A better analogy is IBM: the government may have ultimately failed in its antitrust case against the mainframe behemoth, but IBM did voluntarily separate its software sales from its hardware sales, setting the stage for its own disruption; then again, the bigger factor was that IBM simply didn’t care enough about PCs to lock them down effectively.
Google: I wrote that we had reached Peak Google in 2014; clearly I was wrong, at least as far as the company’s results and stock price were concerned, but notably the company is ever more dependent on search advertising. One of my biggest mistakes was underestimating the degree to which Google could monetize mobile, not simply through increased adoption but also stuffing results with ever more ads (which, in the limited viewport of smartphones, are even easier to tap on).
That, though, is also an argument that my mistake was one of timing, not thesis (still a mistake, to be clear). For all of Google’s seeming advantages in machine learning, the company has yet to come up with a true second act in terms of driving revenue and profits (with the notable exception of YouTube, an acquisition).
Frankly, I suspect this is why Google is the most at-risk in this analysis: when a company is growing, it has no need to engage in anti-competitive behavior; it is only when the low-hanging fruit is gone that the risk of leveraging one market into another becomes worth it.
Apple: That analysis applies to Apple as well: the company introduced the “Services Narrative” in the 6S cycle, which in retrospect was when iPhone growth plateaued. Suddenly the rent Apple collected from apps was not simply an added bonus to a thriving iPhone business but a core driver of the company’s stock price.
At the same time, it is not as if iPhones are disappearing: there is still an argument to act for the sake of all of the businesses that will be hurt in the meantime. The same argument applies to Google: just because antitrust action isn’t necessarily causal when it comes to a company being eclipsed doesn’t mean it can’t be an important tool to maintain competition in the meantime.
Facebook: As I noted above, Instagram bought Facebook another five-to-ten years of dominance. That, though, is itself evidence that social networks are not forever. Each generation has its own preferences, and as long as acquisition rules around network-based companies are significantly beefed up, the best solution for Facebook, at least from an antitrust perspective, is simply time.
Amazon: This probably deserves a longer article at some point, but I think there is reason to believe that Amazon’s consumer business has also slowed considerably. The company is pushing more into ads, squeezing its suppliers, and driving customers to 3rd-party merchants with their attendant higher margins (for Amazon). This makes sense: there are certain categories of products that make sense for e-commerce, and Amazon does very well there, but will — and perhaps has — hit a ceiling as far as overall retail share is concerned.
Indeed, a mistake many tech company critics make is assuming that graphs that are up-and-to-the-right continue indefinitely; nearly all of those graphs are S-curves that will flatten out, and it is dangerous to make regulatory decisions without some sort of insight into when that flattening will occur.
Ultimately, when it comes to antitrust actions against tech companies in the U.S., there really isn’t nearly as much there as all of the attendant fervor would suggest. Google is absolutely vulnerable, Apple somewhat less so, and it is very hard to see any sort of case against Facebook or Amazon.
And again, this is probably a trailing indicator: Google and Apple have maximized their gains from their most important products, while Facebook and Amazon (particularly AWS) still have growth potential. I don’t think this alignment is a coincidence.
That is not to say that tech deserves no regulation: questions of privacy, for example, are something else entirely. Nor, for that matter, is antitrust irrelevant in the United States generally: concentration has increased dramatically throughout the economy.
What is driving that concentration matters, though: at the end of the day tech companies are powerful because consumers like them, not because they are the only option. Consumer welfare still matters, both in a court of law and in the court of public opinion.