Stitch Fix and the Senate

There was an interesting line of commentary around the news that Stitch Fix, the personalized clothing e-commerce company, was going to IPO: these numbers are incredible! Take this article in TechCrunch as an example (emphasis mine):

Stitch Fix has filed to go public, finally revealing the financial guts of the startup which will be a test of modern e-commerce businesses that are looking to hit the market — and the numbers look pretty great!

Let’s start off really quick with profits: aside from the last two quarters, Stitch Fix posted a six-quarter streak of positive net income. We talk a lot about companies that are planning to go public that show pretty consistent (or even increasing) losses, but Stitch Fix looks like a company that has actually managed to build a healthy business. The company finally lost money in the last two quarters, but even then, its losses decreased quarter-over-quarter — with the company only losing around $4.5 million in the second quarter this year.

Compare this to the TechCrunch article written when Box, a company that ultimately IPO’d at a market cap (~$2 billion) similar to the one Stitch Fix will, first filed its IPO:

Box has long been rumored to have quickly growing revenues and large losses, which has proven to be the case. For the full-year period that ended January 2014, Box’s revenues grew to $124 million, up from $58.8 million the year prior. However, the company’s net loss also expanded in the period, with Box posting losses of $168 million for the full-year period that ended January 2014, more than its total top line for the period. In the period prior, Box lost a more modest $112 million.

What is driving Box’s yawning losses? Sales and marketing. The company’s line item for those expenses expanded from $99.2 million for the year ending January 2013, to $171 million for the year ending January 31, 2014. That was the lion’s share of Box’s $100 million increase in operating costs during the period. Or, put more simply, Box spent more dollars on selling its products in the year than it brought in revenue during the period. This could indicate customer churn, or merely a tough market for cloud products.

In fact, both explanations were completely wrong: Box’s losses were due to the company investing in future growth; a detailed look at cohorts revealed that Box was increasing profitability over time because churn was negative (existing customers were increasing their spend by more than the revenue lost to those leaving), and the share of cloud spending amongst enterprises broadly is only going in one direction — up. To that end, investing more in future growth — even though it made the company unprofitable in the short-term — was an obviously correct decision.
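The mechanics of negative churn can be made concrete with a small sketch; the figures below are purely illustrative, not Box’s actual numbers. A cohort of existing customers exhibits “negative churn” when expansion revenue from the customers who stay exceeds the revenue lost to the customers who leave — that is, when net revenue retention is above 100%:

```python
# Illustrative sketch of "negative churn" (net revenue retention > 100%).
# All figures below are hypothetical, not taken from Box's S-1.

def net_revenue_retention(starting_revenue, expansion, churned):
    """Fraction of a cohort's revenue retained after one period.

    A result above 1.0 means expansion from remaining customers more
    than offset the revenue lost to departing ones: negative churn.
    """
    return (starting_revenue + expansion - churned) / starting_revenue

# A cohort starts the year at $10M; remaining customers expand their
# spend by $2.5M while departing customers take $1.5M with them.
nrr = net_revenue_retention(10.0, 2.5, 1.5)
print(f"Net revenue retention: {nrr:.0%}")  # 110% -- churn is "negative"
```

With retention like this, every cohort becomes more valuable over time, which is why front-loading sales and marketing spend (and the accompanying losses) was rational for Box.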

Stitch Fix Concerns

To that end, I find Stitch Fix’s numbers more concerning than I did Box’s:

  • First, the average revenue per client has decreased over time: according to the numbers provided in Stitch Fix’s S-1, the average client in 2016 generated $335 in revenue in the first six months, and $489 in revenue for the first 12 months; there is not a comparable set of numbers for earlier cohorts (itself a red flag), but the average 2015 client generated $718 in revenue over two years. To the extent these cohorts can be compared, that means $335 in the first six months, $154 in the second six months, and an average of $115 in the third and fourth six-month periods.
  • Second, these clients are increasingly expensive to acquire. Stitch Fix increased its ‘Selling, General and Administrative Expenses’ by 56% last year, but revenue increased by only 34%; advertising spend specifically increased from $25.0 million to $70.5 million (182%), vastly outpacing revenue growth.
  • Third, revenue growth is slowing substantially, despite the fact that Stitch Fix has expanded its product offerings, both within its core women’s market and into Petite, Maternity, Men’s, and Plus apparel. I noted last year’s revenue growth was 34%; the previous year’s growth was 113%, and the year before that 368%.
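The cohort arithmetic in the first bullet can be unpacked with a quick back-of-envelope calculation using the S-1 figures cited above. Note the same caveat as the text: the final step mixes the 2016 cohort’s 12-month figure with the 2015 cohort’s 24-month figure, so it holds only to the extent the cohorts are comparable:

```python
# Back-of-envelope breakdown of the S-1 cohort figures cited above.
# Caveat (as in the text): the year-two estimate compares the 2016
# cohort's 12-month revenue against the 2015 cohort's 24-month revenue.

rev_6mo = 335    # 2016 cohort: average revenue in first six months
rev_12mo = 489   # 2016 cohort: average revenue in first twelve months
rev_24mo = 718   # 2015 cohort: average revenue over two years

second_half = rev_12mo - rev_6mo        # revenue in months 7-12
later_avg = (rev_24mo - rev_12mo) / 2   # avg per six-month period in year two

print(second_half)  # 154
print(later_avg)    # 114.5, i.e. roughly $115
```

The declining sequence — $335, then $154, then roughly $115 per six-month period — is what makes the cohort trend concerning: each client generates meaningfully less revenue as time goes on.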

The problem for Stitch Fix is the same bugaboo encountered by the majority of consumer companies: the lack of a scalable advantage in customer acquisition costs. I wrote about this earlier this year in the context of Uber:

Uber’s strength — and its sky-high valuation — comes from the company’s ability to acquire customers cheaply thanks to a combination of the service’s usefulness and the effects of aggregation theory: as the company acquires users (and as users increase their usage) Uber attracts more drivers, which makes the service better, which makes it easier to acquire marginal users (not by lowering the price but rather by offering a better service for the same price). The single biggest factor that differentiates multi-billion dollar companies is a scalable advantage in customer acquisition costs; Uber has that.

On the other hand, it seems likely Stitch Fix does not, even though the company did argue in its S-1 that it benefited from network effects:

We believe we are the only company that has successfully combined rich client data with detailed merchandise data to provide a personalized shopping experience for consumers. Clients directly provide us with meaningful data about themselves, such as style, size, fit and price preferences, when they complete their initial style profile and provide additional rich data about themselves and the merchandise they receive through the feedback they provide after receiving a Fix. Our clients are motivated to provide us with this detailed information because they recognize that doing so will result in a more personalized and successful experience. This perpetual feedback loop drives important network effects, as our client-provided data informs not only our personalization capabilities for the specific client, but also helps us better serve other clients.

This may be true — it makes sense that it would be — but while it may help Stitch Fix better serve new customers it is not clear how it helps the company acquire said customers in the first place. Instead, like most consumer companies, it seems likely that Stitch Fix leveraged word-of-mouth to own its core market of women who value convenience in shopping for clothing, but struggled to break beyond that segment — and to the extent it did, found consumers who spend less and churn more. In other words, unlike successful aggregators, the improvement generated by the network effect, such as it was, was less than the increase in acquisition cost:

The key to truly break-out consumer companies is having these lines reversed: the network should be generating an improvement in benefits that exceeds the cost of acquiring customers, fueling a virtuous cycle.

Stitch Fix’s Success

That said, I fully expect Stitch Fix’s IPO to be successful: how could it not be? In stark contrast to many would-be aggregators, Stitch Fix has taken a shockingly small amount of venture capital — only $42.5 million. Instead the company has been profitable — on an absolute basis in 2015-2016, and quite clearly on a unit basis (including acquisition costs) throughout — and cash flow positive. That increased marketing expenditure is being paid by current customers, not venture capitalists. To that end, a $2 billion IPO would be a massive win for Stitch Fix’s investors and Katrina Lake, the founder.

Moreover, while Stitch Fix’s growth may be slowing, that is by no means fatal: the company is a perfectly valid business to own exactly as it is. Indeed, I am in fact deeply impressed by Stitch Fix: it seems quite clear that early on Lake realized that the company was not an aggregator, which meant building a business, well, normally. That means making real profits, particularly on a unit basis. Even then, though, the company was clearly worthy of venture capital: Baseline Ventures and Benchmark will see a 10x+ return.

To that end, Stitch Fix is a more important company than it may seem at first glance: it proves there is a way to build a venture capital-backed company that is not an aggregator, but still a generator of outsized returns. The keys, though, are positive unit economics from the get-go, and careful attention to profitability. The reason this matters is that these sorts of companies are far more likely to be built: Google and Facebook are dominating digital advertising, Amazon is dominating undifferentiated e-commerce, Microsoft and Amazon are dominating enterprise, and Apple is dominating devices. To compete with any of them is an incredibly difficult proposition; better to build a real differentiated business from the get-go, and that is exactly what Stitch Fix did.

RSUs, Options, and Taxes

The other winners in Stitch Fix’s IPO are all of its employees that hold stock options and restricted stock units (RSUs): they too have benefited from the company’s restraint in raising money; those options and RSUs are priced significantly below the company’s IPO price. And, when that IPO happens later this week, said employees will benefit tremendously — rightly alongside the IRS. When the IPO happens those stock options and RSUs will become taxable, and the majority of Stitch Fix employees will have to sell some portion of their holdings to cover their bill. This is entirely reasonable: they will have earned their reward for building Stitch Fix into the impressive company it has become, and they will pay taxes on that reward.

What would not have been reasonable, though, would have been to pay those taxes before the IPO. After all, when Stitch Fix started it was not at all certain the company would reach this milestone: there are a whole host of companies that raised far more than Stitch Fix’s $42.5 million that ended up going out of business, or being sold off as an acquihire such that employees earned nothing.

Indeed, I suspect that startup employees are, on balance, terribly underpaid: most take on jobs with lower salaries relative to established companies simply for the chance of making an outsized return should the company they work for IPO; the odds of that happening mean the expected value of the options and RSUs they receive is quite low.

To that end, it is tempting to be skeptical of venture capitalists’ protestations about a Senate tax provision that would tax stock options and RSUs at the moment they vest; given the uncertain value, any startup seeking to attract employees would have to significantly up its cash compensation, which would be better for most employees (and thus worse for most venture capitalists) — in the short term, anyways.

The Startup Ecosystem

The problem with this point of view is that the startup employee frame is much too narrow: leaving aside the fact that anyone qualified to work at a startup is already far better off than nearly everyone on earth, the broader issue is that the scope for building successful venture-backed companies is narrowing.

Stitch Fix is a perfect example: I just explained that the company has uncertain growth prospects, but is still a big success thanks in large part to its disciplined approach to the bottom line. That should be a model for more companies: quickly determine if your business can be an aggregator with the scalable acquisition cost advantages that come with it, and if not, build a sustainable business sooner rather than later. That will allow everyone to benefit: founders, venture capitalists, employees, and most importantly, consumers.

A disciplined approach to the bottom line, though, means taking full advantage of a startup’s number one recruiting tool: stock options and RSUs, in lieu of fully competitive salaries. Had Stitch Fix had to pay its employees in cash the company would have likely had to raise more money, reducing the likelihood of a successful outcome for everyone — including the IRS.

The downside, though, is even more acute for the companies that might seek to become aggregators themselves and so challenge companies like Google, Facebook, or Amazon directly. Any such would-be disruptor — and keep in mind, disruption is the only means by which these companies might ever be threatened — would need to raise huge amounts of capital, likely over an extended period of time. Moreover, the odds of success would be commensurately lower, making it even more likely associated stock options and RSUs might be worthless. To that end, taxing said options and RSUs would make start-up jobs even less attractive, and any sort of alternative — including increased cash compensation — would not only reduce the likelihood of startup success but also deny employees the chance to share in a successful outcome.

Tech’s Constituencies

Like most commentators, I am often guilty of lumping all of technology into one broad bucket; that makes sense when considering the impact of technology on society broadly. This tax bill, though, is a reminder that tech has two distinct constituencies with concerns that don’t always align:

  • Incumbents have successful business models that throw off oodles of cash; their concern is about protecting those models, and they will spend to do so
  • Venture capitalists and founders are seeking to build new businesses that, more often than not, threaten those incumbents; their edge is the opportunity to build businesses perfectly aligned to the problem they are seeking to solve

Tech employees get different benefits from each camp: the former provides high salaries and great perks; you can have a very nice life working for Facebook or Google. Startups, on the other hand, offer a chance to own a (small) piece of something substantial, at the cost of short-term salary — and that is worth preserving. Not only is it important to offer an accessible route up the economic ladder; former startup employees are also a key part of the Silicon Valley ecosystem, often providing the initial funding for other new companies.

To that end, what is critical to understand about this proposed tax change is that incumbent companies won’t be hurt at all: sure, they may have to change their compensation to be more cash-rich and RSU-light, but cash isn’t really a constraint on their business. Higher salaries are a small price to pay if it means startups that might challenge them are handicapped; small wonder none of the big companies are lobbying against this provision.

Towards a Startup Lobby

This isn’t the first time the needs of the big tech companies have diverged from startups: net neutrality, for example, is much more important if you don’t have the means to pay to play. The same thing applies to tax laws more broadly, including corporate tax reform and offshore holdings.

Perhaps the most clear example, though, is antitrust: the companies that are hurt the most by Google, Facebook, and Amazon’s dominance are not analog publishers or retailers, but more direct competitors for digital advertising or e-commerce — mostly startups. Nearly all of the lobbying about this issue, though, is funded by the incumbents, for all of the reasons noted above: they have cash to burn, and business models to protect.

To that end it might behoove the startup community — and to be more specific, venture capitalists — to start building a counterweight. I am optimistic this Senate provision will ultimately be stripped from the proposed tax bill, but that the very foundation of startup compensation was so suddenly threatened should serve as a wake-up call that depending on Google or Apple largesse to represent the tech industry is ultimately self-defeating.

Apple at Its Best

The history of Apple being doomed doesn’t necessarily repeat, but it does rhyme.

Take the latest installment, from Professor Mohanbir Sawhney at the Kellogg School of Management (one of my former professors, incidentally):

Have we reached peak phone? That is, does the new iPhone X represent a plateau for hardware innovation in the smartphone product category? I would argue that we are indeed standing on the summit of peak “phone as hardware”: While Apple’s newest iPhone offers some impressive hardware features, it does not represent the beginning of the next 10 years of the smartphone, as Apple claims…

As we have seen, when the vector of differentiation shifts, market leaders tend to fall by the wayside. In the brave new world of AI, Google and Amazon have the clear edge over Apple. Consider Google’s Pixel 2 phone: Driven by AI-based technology, it offers unprecedented photo-enhancement features and deeper hardware-software integration, such as real-time language translation when used with Google’s special headphones…The shifting vector of differentiation to AI and agents does not bode well for Apple…

Sheets of glass are simply no longer the most fertile ground for innovation. That means Apple urgently needs to shift its focus and investment to AI-driven technologies, as part of a broader effort to create the kind of ecosystem Amazon and Google are building quickly. However, Apple is falling behind in the AI race, as it remains a hardware company at its core and it has not embraced the open-source and collaborative approach that Google and Amazon are pioneering in AI.

It is an entirely reasonable argument, particularly that last line: I myself have argued that Apple needs to rethink its organizational structure in order to build more competitive services. If the last ten years have shown us anything, though, it is that discounting truly great hardware — and the sort of company necessary to deliver that — is the surest way to be right in theory and wrong in reality.

The Samsung Doom

When Stratechery started in 2013, Samsung was ascendant, and the doomsayers were out in force. The arguments were, in broad strokes, the same: hardware innovation was over, and Android’s good enough features, broader hardware base, and lower prices would soon mean that the iPhone would go the way of the Mac relative to Windows.1

At that time the flagship iPhone was the iPhone 5; Apple was still only making one iPhone a year. That phone — the one that, many claimed, was the peak of hardware innovation — featured a larger (relative to previous iPhones) 4-inch LCD screen, an 8MP rear camera and 1.2MP front camera, and Apple’s A6 32-bit system-on-a-chip, the first from the company that was not simply a variation on a licensed ARM design. To be sure, the relatively small screen size was a readily apparent problem: one of my first articles argued that Samsung’s focus on larger screens was a meaningful advantage that Apple should copy.

Obviously Apple eventually did just that with the iPhones 6 and 6 Plus, but screen size is hardly the only thing that changed: later that year Apple introduced the iPhone 5S, which included the A7 chip that blew away the industry by going 64-bit years ahead of schedule; Apple has enjoyed a massive performance advantage relative to the rest of the industry ever since. The iPhone 5S also included Touch ID, the first biometric authentication method that worked flawlessly at scale (and enabled Apple Pay), the usual camera improvements, as well as a new ‘M7’ motion chip that laid the groundwork for Apple’s fitness focus (and the Apple Watch).

And, even as critics claimed that the pricing of the iPhone 5C, launched alongside the 5S, meant the company was going to be disrupted, the iPhone 5S sold in record numbers — just like every previous iPhone had.

The iPhone X

I’m spoiled, I know: gifted with the rationalization of being a technology analyst, I buy an iPhone every year. Even so, I thought the iPhone 7 was a solid upgrade: it was noticeably faster, had an excellent screen, and the camera was great; small wonder it sold in record numbers everywhere but China.2 What it lacked, though — and I didn’t fully appreciate this until I got an iPhone X — was delight:

Face ID isn’t perfect: there are a lot of edge cases where having Touch ID would be preferable. By its fourth iteration in the iPhone 7, Touch ID was utterly dependable and, like the best sort of technology, barely noticeable.

Face ID takes this a step further: while it takes a bit of time to change ingrained habits, I’m already at the point where I simply pick up the phone and swipe up without much thought;3 authenticating in apps like 1Password is even more of a revelation — you don’t have to actually do anything.

In these instances the iPhone X is reaching the very pinnacle of computing: doing a necessary job, in this case security, better than humans can.4 The fact that this case is security is particularly noteworthy: it has long been taken as a matter of fact that there is an inescapable trade-off between security and ease-of-use; Touch ID made it far easier to have effective security for the vast majority of situations, and Face ID makes it invisible.

The trick Apple pulled, though, was going beyond that: the first time I saw notifications be hidden and then revealed (as in the GIF above) through simply a glance produced the sort of surprise-and-delight that has traditionally characterized Apple’s best products. And, to be sure, surprise-and-delight is particularly important to the iPhone X: so much is new, particularly in terms of the interaction model, that frustrations are inevitable; in that respect, Apple’s attempt to analogize the iPhone X to the original iPhone is more about contrasts than comparisons.

The Original iPhone and Overshooting

While the iPod wheel may be the most memorable hardware interface in modern computing, and the mouse the most important, touch is, for obvious reasons, the most natural. That, though, only elevates the original iPhone’s single button: everything about touch interfaces needed to be invented, discovered, and figured out; it was that button that made it accessible to everyone — when in trouble, hit the button to escape.

Over the years that button became laden with ever more functionality: app-switching, Siri, Touch ID, reachability. It was the physical manifestation of another one of those seemingly intractable trade-offs: functionality and ease-of-use. Sure, the iPhone 5 I referenced earlier was massively more capable than the original iPhone, and the iPhone X vastly more capable still, but in fact an argument based on specifications makes the critics’ point: the more technology that gets ladled on top, the more inaccessible it is to normal users. Clayton Christensen, in The Innovator’s Dilemma, called this “overshooting”:

Disruptive technologies, though they initially can only be used in small markets remote from the mainstream, are disruptive because they subsequently can become fully performance-competitive within the mainstream market against established products. This happens because the pace of technological progress in products frequently exceeds the rate of performance improvement that mainstream customers demand or can absorb. As a consequence, products whose features and functionality closely match market needs today often follow a trajectory of improvement by which they overshoot mainstream market needs tomorrow. And products that seriously underperform today, relative to customer expectations in mainstream markets, may become directly performance-competitive tomorrow.

This was the reason all of those iPhone critics were so certain that Apple’s days were numbered. “Good-enough” Android phones, sold for far less than an iPhone, would surely result in low-end disruption. Here’s Christensen in an interview with Horace Dediu:

The transition from proprietary architecture to open modular architecture just happens over and over again. It happened in the personal computer. Although it didn’t kill Apple’s computer business, it relegated Apple to the status of a minor player. The iPod is a proprietary integrated product, although that is becoming quite modular. You can download your music from Amazon as easily as you can from iTunes. You also see modularity organized around the Android operating system that is growing much faster than the iPhone. So I worry that modularity will do its work on Apple.

Shortly after the iPhone 5S/5C launch, I made the case that Christensen was wrong:

Modularization incurs costs in the design and experience of using products that cannot be overcome, yet cannot be measured. Business buyers — and the analysts who study them — simply ignore them, but consumers don’t. Some consumers inherently know and value quality, look-and-feel, and attention to detail, and are willing to pay a premium that far exceeds the financial costs of being vertically integrated…

Not all consumers value — or can afford — what Apple has to offer. A large majority, in fact. But the idea that Apple is going to start losing consumers because Android is “good enough” and cheaper to boot flies in the face of consumer behavior in every other market. Moreover, in absolute terms, the iPhone is significantly less expensive relative to a good-enough Android phone than BMW is to Toyota, or a high-end bag to one you’d find in a department store…

Apple is — and, for at least the last 15 years, has been — focused exactly on the blind spot in the theory of low-end disruption: differentiation based on design which, while it can’t be measured, can certainly be felt by consumers who are both buyers and users.

Needless to say, in 2013 we weren’t anywhere close to peak iPhone: in the quarter I wrote that article — 4Q 2013, according to Apple’s fiscal calendar, the weakest quarter of the year — the company sold 34 million iPhones; the next quarter Apple booked $58 billion in revenue. We are now four years on, and last quarter — 4Q 2017, again according to Apple’s fiscal calendar — the company sold 47 million iPhones; next quarter Apple is forecasting between $84 and $87 billion in revenue.

More importantly, the experience of using an iPhone X, at least in these first few days, has that feeling: consideration, invention, and yes, as the company is fond to note, the integration of hardware and software. Look again at that GIF above: not only does Face ID depend on deep integration between the camera system, system-on-a-chip, and operating system, but the small touch of displaying notifications only when the right person is looking at them depends on one company doing everything. That still matters.

Moreover, it’s worth noting that the iPhone X is launching into a far different market than the original iPhone did: touch is not new, but rather the familiar; changing many button paradigms into gestures certainly presents a steeper learning curve for first-time smartphone users, but for how many users will the iPhone X be their first smartphone?

Artificial Intelligence and New Market Disruption

Still, I noted that while Apple doom-sayers rhyme, they don’t repeat. The past four years may have thoroughly validated my critique of low-end disruption and the iPhone, but there is another kind of disruption: new market disruption. Christensen explains the difference in The Innovator’s Solution:

Different value networks can emerge at differing distances from the original one along the third dimension of the disruption diagram. In the following discussion, we will refer to disruptions that create a new value network on the third axis as new-market disruptions. In contrast, low-end disruptions are those that attack the least-profitable and most overserved customers at the low end of the original value network.

Christensen ultimately concluded that the iPhone was a new market disruptor of the PC: it was seemingly less capable yet simpler to use, and thus attracted non-consumption, and eventually gained sufficient capabilities to attract PC users as well. This is certainly true as far as it goes;5 there are, after all, an order of magnitude more smartphone users than there ever were PC users.

And, to that end, Sawhney’s argument is in this way different from the doomsayers of old: it’s not that Apple will be disrupted by “good-enough” cheap Android, but rather because a new vector is emerging — artificial intelligence:

The vector of differentiation is shifting yet again, away from hardware altogether. We are on the verge of a major shift in the phone and device space, from hardware as the focus to artificial intelligence (AI) and AI-based software and agents.

This means nothing short of redefinition of the personal electronics that matter most to us. As AI-driven phones like Google’s Pixel 2 and virtual agents like Amazon Echo proliferate, smart devices that understand and interact with us and offer a virtual and/or augmented reality will become a larger part of our environment. Today’s smartphones will likely recede into the background.

Makes perfect sense, but for one critical error: consumer usage is not, at least in this case, a zero sum game. This is the mistake many make when thinking about the way in which orthogonal businesses compete:

The presumption is that the usage of Technology B necessitates no longer using Technology A; it follows, then, that once Technology B becomes more important, Technology A is doomed.

In fact, though, most paradigm shifts are layered on top of what came before. The Internet was used on PCs, social networks are used alongside search engines. Granted, as I just noted, smartphones are increasingly replacing PCs, but even then most use is additive, not substitutive. In other words, there is no reason to expect that the arrival of artificial intelligence means that people will no longer care about what smartphone they use. Sure, the latter may “recede into the background” in the minds of pundits, but they will still be in consumers’ pockets for a long time to come.

There’s a second error, though, that flows from this presumption of zero-summedness: it ignores the near-term business imperatives of the various parties. Google is the best example: were the company to restrict its services to its own smartphone platform the company would be financially decimated. The most attractive customers to Google’s advertisers are on the iPhone — just look at how much Google is willing to pay to acquire them6 — and while Google could in theory convince them to switch by keeping its superior services exclusive, in reality such an approach is untenable. In other words, Google is heavily incentivized to preserve the iPhone as a competitive platform in terms of Google’s own services; granted, Android is still better in terms of easy access and defaults, but the advantage is far smaller than it could be.

Apple, meanwhile, is busy building competing services of its own, and while it’s easy — and correct — to argue that they aren’t really competitive with Google’s, that doesn’t really matter because competition isn’t happening in a vacuum. Rather, Apple not only enjoys the cost of switching advantage inherent to all incumbents, but also is, as the iPhone X shows, maintaining if not extending the user experience advantage that comes from its integrated model. That, by extension, means that Apple’s services need only be “good enough” — there’s that phrase! — to let the company’s other strengths shine.

This results in a far different picture: the “hurdle rate” for meaningful Android adoption by Apple’s customer base is far greater than the doom-sayers would have you think.

Apple’s Durable Advantage

I am no Apple Pollyanna: I first made the argument years ago that the ultimate Apple bear case is the disappearance of hardware that you touch (which remains the case); I also complimented the company for having the courage to push towards that future.

Indeed, Apple’s aggressiveness in areas like wearables and, at least from a software perspective, augmented reality, suggests the company will press its hardware advantage to get to the future before its rivals, establishing a beachhead that will be that much more difficult for superior services offerings to dislodge. Moreover, there is evidence that Google sees the value in Apple’s approach: the company’s push into hardware may in part be an attempt to find a new business model, but establishing the capabilities to compete in hardware beyond the smartphone is surely a goal as well.

What is fascinating to consider is just how far Apple might go if it decided to do nothing but hardware and its associated software: if Google Assistant could be the iPhone default, why would any iPhone user even give a second thought to Android? I certainly don’t expect this to happen, but the fact that giving away control of what seems so important might, in fact, secure Apple’s future more strongly than anything else is the most powerful signal possible that the integration of hardware and software — and the organizational knowledge, structure, and incentives that come from that being a company’s primary business model — remains a far more durable competitive advantage than many theorists would have you think.

  1. Which, for the record, is a misreading of history []
  2. Speaking of China, the point of that article was that hardware differentiation mattered more there than anywhere else; I expect the iPhone X to sell very well indeed []
  3. Many of those edge cases are in cases where you are not picking up the phone and thus triggering wake-on-rise; the car, for example, or the desk []
  4. To be clear, this is all relative; in fact, Face ID is arguably even less secure than Touch ID. Sure, 1 in a million chances of a match are better than 1 in 50,000 if the sample is fully random, but given that close siblings, for example, can overcome it in theory is a reminder that relevant samples are not always random. The broader point, though, is that security people use is better than superior solutions they do not. []
  5. Christensen never did explain why the iPhone defeated Nokia et al, who he originally expected to overcome the iPhone; I put forward my theory in Obsoletive []
  6. What I wrote in this Daily Update about Google’s acquisition costs almost certainly explains the bump in Apple’s services revenue last quarter; more on this in tomorrow’s Daily Update []

Tech Goes to Washington

There was a striking moment during the Senate hearing about Facebook, Twitter, and Google’s role in the 2016 U.S. election that suggested the entire endeavor would be a bit of a farce, marked by out-of-touch Senators oblivious to how the Internet actually works. The three companies’ home-state Senator, Dianne Feinstein, had just finished asking about the ability to target custom audiences (including a request that Sean Edgett, Twitter’s acting general counsel, explain what ‘impressions’ were), and handed the floor to Nebraska Senator Ben Sasse:

Did you catch Feinstein in the background asking “Did he say 330 million?” with surprise in her voice? What might she have thought had it been noted that Facebook has 2 billion users! At that moment it was hard to see this hearing amounting to anything; the next Senator, Dick Durbin of Illinois, asked why Facebook didn’t, and I quote, “hold the phone” when a Russian intelligence agency took out the ads. A few Senators later Richard Blumenthal demanded Twitter determine how many people declined to vote after seeing tweets suggesting voters could text their choice, and that Facebook reveal who may have taught the Russian intelligence agency how to do targeting; both requests are, quite obviously, unknowable by the companies in question.

Meanwhile, the tech companies were declaring at every possible opportunity that they understood that the Russian problem was serious, and that they were committed to fixing it. “We do believe these tools are powerful, and yet we have a responsibility to make sure they’re not used to inflame division,” said Colin Stretch, Facebook’s general counsel. He later stated, “We want our ad tools to be used for political discourse, certainly. But we do not want our ad tools to be used to inflame and divide.” It seemed like a concerted PR effort designed to soothe Senators animated more by scoring political points than by actually understanding the issues at hand: yes, the problem is serious, yes, we are committed to fixing it, and of course, it is so complicated that only we can.

What made the hearing worth watching, though, were three lines of questioning that blew this position apart.

Senator John Kennedy on Facebook’s Power

The single most compelling line of questioning came from Louisiana junior Senator John Kennedy;1 first, he exposed as a sham the companies’ implied claims that they could actually fix the problem:

This is the exact issue I discussed in The Super Aggregators and the Russians:

Super-aggregators not only have zero transaction costs when it comes to users and content, but also when it comes to making money. This is at the very core of why Google and Facebook are so much more powerful than any of the other purely information-centric networks. The vast majority of advertisers on both networks never deal with a human (and if they do, it’s in customer support functionality, not sales and account management): they simply use the self-serve ad products like the one pictured above (or a more comprehensive tool built on the companies’ self-serve API).

I added up the numbers in Trustworthy Networking, estimating that Facebook served 276 million unique ads per quarter, and my entire point was the same as Kennedy’s: there is no way that Facebook could ever review every ad, much less investigate who is behind them, without completely ruining their revenue model.

Kennedy wasn’t done, though: he went on to press Stretch in particular about just how much data Facebook has about, well, everyone:

Stretch was insistent that Facebook would never look up the data on any one individual, both because of internal policies as well as the way the company’s data store was engineered. What Kennedy was driving at, though, is that Facebook could; here is the transcription:

Kennedy: Let’s suppose your CEO came to you — not you, but somebody who could do it in your company — maybe you could — and said, “I want to know everything we can find out about Senator Graham. I want to know the movies he likes, I want to know the bars he goes to. I want to know who his friends are. I want to know what schools he goes — went to.” You could do that, couldn’t you?

Stretch: The answer is absolutely not. We have limitations in place on our ability to —

Kennedy: No, no, I’m not asking about your rules. I’m saying you have the ability to do that. Don’t you?

Stretch: Again, Senator, the answer is no. We’re not —

Kennedy: You can’t put a name to a face to a piece of data? You’re telling me that?

Stretch: So we have designed our systems to prevent exactly that, to protect the privacy of our users.

Kennedy: I understand. But you can get around that to find that identity, can’t you?

Stretch: No, Senator. I cannot.

Kennedy: That’s your testimony under oath.

Stretch: Yes, it is.

Senator Kennedy is an interesting character. He speaks with a Southern drawl — the contrast to Stretch, a Harvard-law educated former Supreme Court clerk who sounded exactly like his biography, was stark. Kennedy, though, is impressive in his own right: after graduating magna cum laude from Vanderbilt he received his J.D. from Virginia, and then was a Rhodes Scholar, receiving a Bachelor of Civil Law with first class honours from Oxford. After a winding career in politics, including a switch from the Democratic party to the Republicans, he was elected to the Senate last fall.

What Kennedy surely realized — and what Stretch, apparently, did not — is that Facebook had already effectively answered Kennedy’s question: the very act of investigating the accounts used by Russian intelligence entailed doing the sort of sleuthing that Kennedy wanted Stretch to say was possible. Facebook dived deep into an account by choice, came to understand everything about it, and then shut it down and delivered the results to Congress. It follows that Facebook could — not would, but could — do that to Senator Graham or anyone else.

To be clear, Stretch noted that Facebook did this because the accounts in question had been deemed inauthentic; that removed all of the external legal, internal policy, and business model limitations that would otherwise prevent Facebook from doing such forensic work on an individual account.

Still, Kennedy’s two lines of questions combined revealed the tech companies’ testimony for the paradox it was: on the one hand, their sheer scale means it is impossible to fully stamp out activities like Russian meddling; on the other, that same scale means they all have the most intimate information on nearly everyone.

Senator Ted Cruz on Bias

Senator Ted Cruz’s line of questioning highlighted just how problematic this power is:

Try, just for a moment, to set your personal politics aside; Cruz is driving at a very fundamental question: is what is acceptable driven by what is right, or by what is collectively decided? The temptation is surely to choose the former: right is right! And indeed, I suspect that most of my readers believe that Cruz is wrong about most political questions. It is worth considering the alternative, though: what if the powers that be decide unilaterally?

This line of questioning highlights the problems raised by Kennedy: if the powers that be also happen to have massive investigatory power over basically everyone, then at what point do the internal rules and norms against utilizing that power become overwhelmed by the demand that right thinking be enforced? The tech companies argued throughout this testimony that they took their responsibility seriously, and would snuff out bad actors. Who, though, decides who those bad actors are?

The point of democracy has never been about having the most efficient form of government; no company, for example, would make decisions in such a manner. The best companies are in many respects totalitarian: CEOs have the final say, and employees either get on board or get out. That, though, is only viable because the downside is merely financial; when governments go wrong, on the other hand, far worse can result. That is democracy’s upside: it may not get the most done, but that applies to good outcomes as well as bad.

This also highlights the absurdity in Stretch’s declaration that “We want our ad tools to be used for political discourse, certainly. But we do not want our ad tools to be used to inflame and divide.” Politics is inflammatory, and it does divide. To endeavor to stamp out inflammatory and divisive statements is, by definition, to exercise a degree of power that is clearly latent in Facebook et al, and clearly corrosive to the democratic process.

Senator Al Franken on Tech’s Vulnerability

Cruz’s statement acknowledged that the junior Senator from Minnesota had been very critical of his presidential candidacy; indeed, it is hard to imagine two politicians that fall further apart on the political spectrum. To that end, Franken took a very different tack than Cruz: while the latter was concerned with the tech companies’ lack of neutrality, Franken was disgusted by their lack of action:

There are two levels to this exchange. Technically, Franken is off the mark: to reduce Russian interference to buying political ads with rubles is to skate over the complexity of the issue. How do you know what is a political ad, for one? And simply looking at currency is almost certainly a useless signal, for another.

Rhetorically, though, Franken is devastating. Befitting his background as a comedian, Franken has a knack for framing the question at hand in a way that is easy for laypeople to understand, and all but impossible for Facebook to answer. Stretch looks like a fool, not because he is wrong, but because he is right.

This matters; the biggest thing the tech companies have going for them is that they are popular, and this controversy is largely centered within the coastal tech-media bubble. What Franken demonstrated, though, is that this position is potentially more fragile than it seems.


I still believe that, on balance, blaming tech companies for the last election is, more than anything, a convenient way to avoid larger questions about what drove the outcome. And, as I noted, the fact is that tech companies remain popular with the broader public.

What this hearing highlighted, though, is the degree to which the position of Facebook in particular has become more tenuous. The fact of the matter is that Facebook (and Google) is more powerful than any entity we have seen before. Magnifying the problem is that, over the last year, Facebook has decided to “take responsibility”, and what is that but a commitment to exercise their control over what people see?

Indeed, this is where Facebook’s inescapable internal bias surely played a role: the “safest” position for the company to take would be the sort of neutrality demanded by Cruz — a refusal to do any sort of explicit policing of content, no matter how objectionable. That, though, was unacceptable to the company’s employee base specifically, and Silicon Valley broadly: traumatized by the election of a candidate deemed unacceptable, Facebook has committed itself to exercising its power, and that is in itself a cause for alarm.

More broadly, it is hard to escape the conclusion that tech companies have been unable to resist the ring of power: the end game of aggregation is unprecedented control over what people see; the only way to handle that power without risking the abuse of it is a commitment to true neutrality. That Facebook, Twitter, and Google — which, by the way, holds just as much if not more power than Facebook, but without the attendant media scrutiny — have committed to fixing the Russian problem is itself more problematic than those urging they do just that may realize.

  1. No relation to the Massachusetts political family []

Why Facebook Shouldn’t Be Allowed to Buy tbh

There was one line in TechCrunch’s report about Facebook’s purchase of social app tbh [sic] that made me raise my eyebrows (emphasis mine):

Facebook announced it’s acquiring positivity-focused polling startup tbh and will allow it to operate somewhat independently with its own brand.

tbh had scored 5 million downloads and 2.5 million daily active users in the past nine weeks with its app that lets people anonymously answer kind-hearted multiple-choice questions about friends who then receive the poll results as compliments. You see questions like “Best to bring to a party?,” “Their perseverance is admirable?” and “Could see becoming a poet?” with your uploaded contacts on the app as answer choices.

tbh has racked up more than 1 billion poll answers since officially launching in limited states in August, mostly from teens and high school students, and spent weeks topping the free app charts. When we profiled tbh last month in the company’s first big interview, co-creator Nikita Bier told us, “If we’re improving the mental health of millions of teens, that’s a success to us.”

Financial terms of the deal weren’t disclosed, but TechCrunch has heard the price paid was less than $100 million and won’t require any regulatory approval. As part of the deal, tbh’s four co-creators — Bier, Erik Hazzard, Kyle Zaragoza and Nicolas Ducdodon — will join Facebook’s Menlo Park headquarters while continuing to grow their app with Facebook’s cash, engineering, anti-spam, moderation and localization resources.

This isn’t quite right. I suspect TechCrunch, and whatever source they “heard” from, is referencing the Hart-Scott-Rodino Antitrust Improvements Act. In order to reduce the burden on the Federal Trade Commission and the Antitrust Division of the Department of Justice, an acquirer only needs to report acquisitions (and wait for a specified period to allow time for review) whose total value exceeds a specified threshold; for 2017, that threshold is $80.8 million. To that end, I wouldn’t be surprised if this deal is worth approximately $80.7 million; that would mean Facebook doesn’t have to submit this acquisition for review.
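
To make the reporting mechanics concrete, here is a minimal sketch in Python; the threshold is the 2017 figure cited above, but note this is a simplification, since the actual Hart-Scott-Rodino test also involves size-of-person criteria and annually adjusted thresholds.

```python
# Simplified sketch of the HSR "size of transaction" test described above.
# The real rules include size-of-person tests and yearly threshold updates.

HSR_THRESHOLD_2017 = 80_800_000  # 2017 reporting threshold, in dollars


def must_report(deal_value: int, threshold: int = HSR_THRESHOLD_2017) -> bool:
    """A deal valued above the threshold must be reported to the FTC/DOJ
    and observe the statutory waiting period before closing."""
    return deal_value > threshold


print(must_report(80_700_000))   # False: just under the line, no filing
print(must_report(100_000_000))  # True: filing and waiting period required
```

The point is simply that a deal priced at $80.7 million falls just below the line and so avoids the mandatory pre-merger filing, without in any way precluding a later FTC investigation.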

However, just because Facebook doesn’t have to submit this acquisition for review doesn’t mean it can’t be reviewed; indeed, in a closely-watched case from 2014, the FTC successfully sued to undo a $28 million acquisition that had already been consummated. That was only one of many acquisitions the FTC has investigated that didn’t cross the Hart-Scott-Rodino threshold; in most cases the FTC acted in response to complaints from customers or competitors.

Might an analyst’s complaint count as well? The FTC can, and should, investigate this acquisition.

The Social-Communications Map

In late 2013, Facebook made their most concerted effort to buy Snapchat (for $3 billion); that was when I made the Social-Communications Map:

The goal of this map was to show that there was no single social app that covered all of humanity’s social needs: there were critical differences in how people perceived1 different social apps, and that no one app could fill every part of this map.

Facebook, for its part, had, for better or worse, transitioned to a public app that not only handled symmetric relationships, but, at least according to perception, asymmetric broadcast as well; that, though, left an opening for an app like Snapchat. Thus Facebook’s acquisition drive: the company had already secured Instagram, giving it a position in asymmetric ephemeral broadcast apps; Snapchat rebuffed advances, so the company soon moved on to WhatsApp.

The importance of these two acquisitions cannot be overstated: Facebook has always been secure in its dominance of permanent social relationships, a position that has given the company a dominant position in digital advertising. However, while everyone may need a permanent place on the Internet (all of those teenagers people say Facebook needs to reach have Facebook accounts), the ultimate currency is attention, and much like real life, it is ephemeral conversation that dominates. Facebook, by virtue of early decisions around privacy and significant bad press about the dangers of revealing too much, was locked out of this sphere, so it bought in.

The FTC’s Failure

Those acquisitions, by the way, were, per the Hart-Scott-Rodino Act, submitted to the FTC; in the case of Instagram the agency sent what sure seems like a form letter; I’ll quote it in full:

The Commission has been conducting an investigation to determine whether the proposed acquisition of Instagram, Inc. by Facebook, Inc. may violate Section 7 of the Clayton Act or Section 5 of the Federal Trade Commission Act.

Upon further review of this matter, it now appears that no further action is warranted by the Commission at this time. Accordingly, the investigation has been closed. This action is not to be construed as a determination that a violation may not have occurred, just as the pendency of an investigation should not be construed as a determination that a violation has occurred. The Commission reserves the right to take such further action as the public interest may require.

And so the single most injurious acquisition with regards to competition in not just social networking specifically but digital advertising broadly was approved. Section 7 of the Clayton Act (post its 1950 amendment), states:

No person shall acquire, directly or indirectly, the whole or any part of the stock or other share capital and no person subject to the jurisdiction of the Federal Trade Commission shall acquire the whole or any part of the assets of one or more persons engaged in commerce or in any activity affecting commerce, where in any line of commerce or in any activity affecting commerce in any section of the country, the effect of such acquisition, of such stocks or assets, or of the use of such stock by the voting or granting of proxies or otherwise, may be substantially to lessen competition, or to tend to create a monopoly.

“Lessen competition” is exactly what happened. Instagram, super-charged both with the Facebook social graph and the Facebook ad machine, is not only dominating its native ephemeral asymmetric broadcasting space but increasingly preventing Snapchat from expanding. WhatsApp, meanwhile, dominates the messaging space across most of the world,2 and is the most prominent arrow in Facebook’s “future growth” quiver.

The consolidation of attention has translated into dominance in digital advertising. Facebook accounted for 77% of revenue growth in digital advertising in the United States in 2016; add in Google and the duopoly’s share of growth was 99%. Even Snapchat, which, after rightly rebuffing Facebook’s acquisition offers, IPO’d earlier this year at a $24 billion valuation,3 has seen revenue declines, all while Facebook ever more blatantly rips off the product.

The Privacy Red Herring

The FTC’s response to the WhatsApp acquisition is more interesting: there the agency’s focus was privacy, specifically insisting that Facebook not change WhatsApp’s more stringent promises around user data without affirmative consent from users. This followed a few years after Facebook’s consent decree with the FTC that demanded the company not share user data without their permission.

There’s just one problem: whatever limitations this consent decree may have placed upon Facebook, the reality is that the company is a self-contained ecosystem: prohibiting the permissionless sharing of personal information in fact entrenches Facebook’s position. Take, for example, Europe’s vaunted GDPR law: as I explained in the Daily Update, data portability that, for privacy reasons, excludes the social graph (because your friends didn’t give you permission to share their information with other services) makes it that much harder for competition to arise.

So it was with the FTC’s restrictions around the WhatsApp deal: the agency reiterated that Facebook couldn’t violate users’ privacy, and completely ignored that the easiest way around privacy restrictions is to simply own all of a user’s social interactions.

Understanding Social Networks

Perhaps the most fanciful regulatory document of all, though, is not from the FTC, but rather the United Kingdom’s Office of Fair Trading. Its review of the Instagram deal rested on its analysis of Facebook Camera, an app that no longer exists.

There are several relatively strong competitors to Instagram in the supply of camera and photo editing apps, and those competitors appear at present to be a stronger constraint on Instagram than Facebook’s new app. The majority of third parties did not believe that photo apps are attractive to advertisers on a stand-alone basis, but that they are complementary to social networks. The OFT therefore does not believe that the transaction gives rise to a realistic prospect of a substantial lessening of competition in the supply of photo apps.

“The supply of photo apps.” What a stunningly ignorant evaluation of what Instagram already was: not simply a photo filter app but a social network in its own right. The part about revenue generation, though, was even more amazing:

The parties’ revenue models are also very different. While Facebook generates revenue from advertising and users purchasing virtual and digital goods via Facebook, Instagram does not generate any revenue.

This bit, five years on, still leaves me speechless: Instagram didn’t generate advertising revenue because that’s not how social networks work. As Mark Zuckerberg frequently explains, there is a formula for monetization: first grow users, then increase engagement, next attract businesses, and finally sell ads. Just because Instagram, at the time of this acquisition, was still in Stage 1 did not preclude the possibility of Stage 4; the problem is that the Office of Fair Trading simply had no idea how this world worked.

The issue is straightforward: networks are the monopoly makers of the Internet era. To build one is extremely difficult, but, once built, nearly impregnable. The only possible antidote is another network that draws away the one scarce resource: attention. To that end, when it comes to the Internet, the single most effective tool in antitrust regulation is keeping social networks in separate competitive companies. That the FTC and Office of Fair Trading failed to do so in the case of Instagram and WhatsApp is to the detriment of everyone.

Facebook and tbh

This is the context for Facebook’s tbh acquisition. The app, new as it is, is attacking green space in the Social-Communications Map:

tbh is hardly the only contender: Secret and Yik Yak were others. Secret failed due to negativity and the lack of an organizing mechanic; Yik Yak fixed the latter by utilizing location, but suffered from the same negativity problem. tbh has clearly learned lessons from both: the app leverages both location and your address book as an organizing mechanic, and is engineered from the ground up to be focused on positivity.

Moreover, it’s easy to see how it could be super-charged by Facebook: the social graph is probably even more powerful than the address book in terms of building a network, and provides multiple outlets for connections established on tbh. Just as importantly, Facebook can in the short term fund tbh and, in the long run, simply graft the service onto its cross-app sales engine. It’s a great move for both parties.

What is much more questionable, though, is whether this is a great deal for society. tbh is, by definition, winning share in the zero sum competition for attention in the ultra-desirable teenage demographic in particular, and that’s good news for any would-be Facebook competitors. Why should it be ok for Facebook to simply swallow up another app, small though it may currently be? Again, simply looking at narrowly-defined market share estimations or non-existent revenue streams is to fundamentally misunderstand how social networks work.

Indeed, I’ve already made my position clear — social networks should not be allowed to acquire other social networks:

Facebook should not be allowed to buy another network-based app; I would go further and make it prima facie anticompetitive for one social network to buy another. Network effects are just too powerful to allow them to be combined. For example, the current environment would look a lot different if Facebook didn’t own Instagram or WhatsApp (and, should Facebook ever lose an antitrust lawsuit, the remedy would almost certainly be spinning off Instagram and WhatsApp).

The FTC dropped the ball with Instagram and WhatsApp; absent a time machine, the best time to do the right thing is right now.

Or, perhaps, Facebook should be allowed to proceed — but with conditions. My second demand is about the social graph:

All social networks should be required to enable social graph portability — the ability to export your lists of friends from one network to another. Again Instagram is the perfect example: the one-time photo-filtering app launched its network off the back of Twitter by enabling the wholesale import of your Twitter social graph. And, after it was acquired by Facebook, Instagram has only accelerated its growth by continually importing your Facebook network. Today all social networks have long since made this impossible, making it that much more difficult for competitors to arise.

Requiring Facebook to offer its social graph to any would-be competitor as a condition of acquiring tbh would be a good outcome; unfortunately, it is perhaps the most unlikely, given the FTC’s commitment to unfettered privacy (without a consideration of the impact on competition).

What shouldn’t be allowed is what Facebook clearly hopes — and suggests — will happen: no regulatory review at all. The FTC has the power, and it’s time to use it.

  1. Perceived is a critical point: Twitter and Instagram, for example, are permanent, but are perceived by most as being ephemeral (arguably Twitter has shifted in the public consciousness toward being something more permanent) []
  2. The most noteworthy exceptions being the United States (mixed), China (WeChat), South Korea (Kakao), Japan, Thailand, and Taiwan (LINE) []
  3. As an aside, for all of Snapchat’s troubles to justify its $24 billion IPO, keep in mind that the vast majority of the commentariat insisted Spiegel was irrational to turn down $3 billion; it’s a reminder that few understand exponential growth curves []

Goodbye Gatekeepers

I’d be remiss in not stating the obvious: Harvey Weinstein is a despicable human being, who did evil things. It’s worth noting, though, the structure of Hollywood that made it possible for him to do so much evil with such frequency for so long.

The Structure of Hollywood


There has always been a large “supply” of movie actors, directors, script writers, etc.; Los Angeles is famous for being a city of transplants, particularly young men and women eager to make a go of it in show business, certain their breakthrough opportunity is the next audition, the next script, the next movie pitch.

The supply of movies, though, is limited. These two charts from Stephen Follows tell the story. First, the number of feature films:

Then, the number of studio versus non-studio films:

Back in 1980, shortly after the creation of Weinstein’s Miramax production company, there were just over 100 movies released in US cinemas a year; in 2016, there were 736, but for “wide Studio releases” — Weinstein’s territory — there were only 93. Suppose there are five meaningful acting jobs per movie: that means there are only about 500 meaningful acting jobs a year. And Weinstein not only decided who filled many of those 500 roles, he had an outsized ability to affect who filled the rest by making or breaking reputations.
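
The back-of-the-envelope arithmetic above can be made explicit; the five-roles-per-film figure is, as noted, an assumption:

```python
# Rough estimate of meaningful acting jobs per year in wide studio releases.
wide_studio_releases_2016 = 93  # from the Stephen Follows data cited above
meaningful_roles_per_film = 5   # assumption made in the text

meaningful_acting_jobs = wide_studio_releases_2016 * meaningful_roles_per_film
print(meaningful_acting_jobs)  # 465, i.e. "only about 500" jobs a year
```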

Weinstein was a gatekeeper, presented with virtually unlimited supply while controlling limited distribution: those that wished to reach consumers had to accede to his demands, no matter how criminally perverse they may have been. Lauren O’Connor, an employee at the Weinstein Company, summed up the power differential that resulted in an internal memo uncovered by the New York Times:

I am a 28 year old woman trying to make a living and a career. Harvey Weinstein is a 64 year old, world famous man and this is his company. The balance of power is me: 0, Harvey Weinstein: 10.

What made Hollywood’s structure particularly nefarious was the fact that selecting actors is such a subjective process. Movies are art — what appeals to one person may not appeal to another — making people like Weinstein cultural curators. If he were to pass on an actor, or purposely damage their reputation through his extensive contacts with the press, they wouldn’t have a chance in Hollywood. After all, there were many others to choose from, and no other routes to making movies.

All the News That’s Fit to Print

Jim Rutenberg, the New York Times’ media columnist, highlighted Weinstein’s press contacts in a follow-up piece entitled Harvey Weinstein’s Media Enablers:

The real story didn’t surface until now because too many people in the intertwined news and entertainment industries had too much to gain from Mr. Weinstein for too long. Across a run of more than 30 years, he had the power to mint stars, to launch careers, to feed the ever-famished content beast. And he did so with quality films that won statuettes and made a whole lot of money for a whole lot of people.

Sharon Waxman, a former reporter for the New York Times, said on The Wrap that the New York Times itself belonged on that list:

I simply gagged when I read Jim Rutenberg’s sanctimonious piece on Saturday about the “media enablers” who kept this story from the public for decades…That’s right, Jim. No one — including The New York Times. In 2004, I was still a fairly new reporter at The New York Times when I got the green light to look into oft-repeated allegations of sexual misconduct by Weinstein…The story I reported never ran.

After intense pressure from Weinstein, which included having Matt Damon and Russell Crowe call me directly to vouch for Lombardo and unknown discussions well above my head at the Times, the story was gutted. I was told at the time that Weinstein had visited the newsroom in person to make his displeasure known. I knew he was a major advertiser in the Times, and that he was a powerful person overall.

Weinstein’s alleged pressuring of the New York Times — and his ability to influence the media generally — rested on the fact that the media is also a gatekeeper. The New York Times still boasts as much in its print edition:

“All the News That’s Fit to Print” is rather clear about how the New York Times views itself: the arbiter — that is, the gatekeeper — of what news ought to be consumed by the public. In truth, though, by 2004 that gatekeeper role was already breaking down; perhaps the most famous example involved another set of allegations of sexual misconduct, when in 1998 the Drudge Report reported the news that Newsweek wouldn’t:

The gate could not hold.

The Structure of Newspapers

After Waxman’s post, New York Times’ editor-in-chief Dean Baquet argued that “it is unimaginable” that her story was killed due to pressure from Weinstein; in fact, though, an examination of the structure of the newspaper business suggests it is quite imaginable.

In 2004, the New York Times had $3.3 billion in revenue, up 2.4% year-over-year. That increase, though, belied deeper problems: circulation had dropped a percentage point year-over-year; revenue growth came from a 6% increase in advertising rates. Advertising was the New York Times’ primary revenue source, accounting for 66% of total revenue, and given that in 2003 the average Hollywood movie spent $34.8 million on advertising, some portion of that undoubtedly came from Weinstein specifically.

The reason that circulation decline suggested a problem is that the ability of the New York Times and other newspapers to command advertising depended on being a gatekeeper: advertisers didn’t take out newspaper ads because they loved newspapers, they took out newspaper ads because it was an effective way to reach potential customers:

“Gatekeeper” is another way to say “integrator”, and as I have explained previously, the key to the newspaper business model was controlling distribution and integrating editorial content and ads. In 2004, though, that integration was on the verge of falling apart; the Internet meant advertisers could reach customers directly. It had already happened with Craigslist and classifieds, and first ad networks and then social networks would do the same to display ads, causing newspaper advertising revenue to plummet to levels not seen since the 1950s:

2004 came after that first Craigslist-inspired decline, and it’s all too easy to imagine Weinstein’s threats having their intended effect.

Journalism Worth Paying For

The ultimate credit for the New York Times story goes first and foremost to the women willing to go on the record, and then to Jodi Kantor and Megan Twohey, the reporters who investigated and wrote it. If Waxman’s allegations are true, though, then it’s worth pointing out that the New York Times is in a very different place than it was in 2004.

Last year the New York Times had $1.6 billion in revenue, a 53% decrease from 2004. Critically, though, the source of that revenue had flipped on its head: advertising accounted for only 37% of revenue, while circulation was 57%, up from 54% in 2015 and only 27% in 2004; by all accounts circulation is up significantly more in 2017.

That image is from the company’s 2020 strategy report, which declared that the editorial product should align with the company’s focus on subscriptions; Baquet told Recode that it was his job “to do as many ‘Amazons’ as possible”, referring to the paper’s investigative report on Amazon’s working conditions. Certainly this Weinstein piece fits: whatever expenses the New York Times spent reporting this story will be more than made up in the burnishing of the company’s reputation for journalism that is worth paying for.

Admittedly, “Journalism worth paying for” doesn’t have the same ring as “All the News That’s Fit to Print”, but it is a far better descriptor of the New York Times’ new business model:

In a world where the default news source is the Facebook News Feed, the New York Times is breaking out of the inevitable modularization and commodification entailed in supplying the “news” to the feed. That, in turn, requires building a direct relationship with customers: they are the ones in charge, not the gatekeepers of old — even they must now go direct.

YouTube and the Movies

In the aftermath of the New York Times report (and another from The New Yorker), various stories have alluded to the fact that Weinstein has less power than he used to. I can’t say I know enough about the particulars of Hollywood to know whether that is true in a relative sense, but there’s no question movies are less important than ever before. Indeed, the industry looks a lot like newspapers in 2004: revenue is increasing due to higher prices, even as the number of movie-goers steadily declines (graph from The Numbers):

Meanwhile, more and more cachet — and star power — is flowing to serialized television, particularly distributors like Netflix and HBO that go directly to customers. And don’t forget YouTube: video is a zero-sum activity — time spent watching one source of video is time not spent watching another — and YouTube showed over a billion hours of video worldwide every day in 2016.

YouTube represents something else that is just as important: the complete lack of gatekeepers. Google CEO Sundar Pichai said on an earnings call earlier this year that “Every single day, over 1,000 creators reached the milestone of having 1,000 channel subscribers.” That is an astounding number in its own right; what is even more remarkable is that while Hollywood has only ~3,500 acting slots a year (including all movies, not just major studios), YouTube creates 100 times as many “stars” over the same time period.
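The arithmetic behind that 100x figure is straightforward; a quick sketch (both inputs are the rough, publicly cited figures above, not precise counts):

```python
# Rough comparison of new YouTube "stars" per year vs. Hollywood acting slots,
# using the two approximate figures cited above.

creators_per_day = 1_000          # creators crossing 1,000 subscribers daily
youtube_stars_per_year = creators_per_day * 365

hollywood_slots_per_year = 3_500  # approximate acting slots across all movies

ratio = youtube_stars_per_year / hollywood_slots_per_year
print(f"YouTube mints roughly {ratio:.0f}x as many stars per year")
```

The unrounded ratio is ~104x, which rounds to the 100x cited above.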

The End of Gatekeepers

It is easy to see the downsides of the destruction of gatekeepers; in 2016, before the election, I explained how the collapse of media gatekeepers meant the collapse of political gatekeepers. From The Voters Decide:

There is no one dominant force when it comes to the dispersal of political information, and that includes the parties described in the previous section. Remember, in a Facebook world, information suppliers are modularized and commoditized as most people get their news from their feed. This has two implications:

  • All news sources are competing on an equal footing; those controlled or bought by a party are not inherently privileged
  • The likelihood any particular message will “break out” is based not on who is propagating said message but on how many users are receptive to hearing it. The power has shifted from the supply side to the demand side

This is a big problem for the parties as described in The Party Decides. Remember, in Noel and company’s description, party actors care more about their policy preferences than they do voter preferences, but in an aggregated world it is voters — aka users — who decide which issues get traction and which don’t. And, by extension, the most successful politicians in an aggregated world are not those who serve the party but rather those who tell voters what they most want to hear.

I can imagine there are many that long for the days when the media — and by extension the parties — could effectively determine presidential nominees. The Weinstein case, though, is a reminder of just how rotten gatekeepers can be. Their very structure is ripe for abuse by those in power, and suppression of those wishing to break through; consumers, meanwhile, are taken for granted.

For my part, I’m thankful such structures are increasingly untenable: perhaps the New York Times didn’t spike that 2004 story because of pressure from Weinstein, but there’s no doubt that for decades “All the News That’s Fit to Print” was shamefully deficient in reporting about news and groups that weren’t on the radar of New York newspaper editors. And, selfishly, I wouldn’t have the career I do without the absence of gatekeepers: anyone can set up a website and send an email and instantly compete with the New York Times and everyone else for attention and subscription dollars.

Most importantly, though, the end of gatekeepers is inevitable: the Internet provides abundance, not scarcity, and power flows from discovery, not distribution.1 We can regret the change or relish it, but we cannot halt it: best to get on with making it work for far more people than gatekeepers ever helped — or harassed.

  1. And fortunately, to date, those that own distribution — the aggregators — have tried to be neutral; that’s a good thing

Google’s Search for the Sweet Spot

This was my favorite slide from yesterday’s Google hardware event:

Oh, sorry, wrong picture. Here you go:

For Google, a cloud on the slide is not necessary: it is the very essence of the company. Hardware, well, perhaps a little over-compensation is in order.

Apple’s Sweet Spot

Steve Jobs, in his last keynote, framed that slide as a new direction for Apple after the company’s brilliant digital hub strategy, introduced ten years prior:

In fact, though, Apple was building Digital Hub 2.0, with the iPhone at the center:

Sure, iCloud kept files in sync (usually), but the iPhone was the juggernaut it was because it hit the perfect sweet spot of company, market, and value chain:

Company: Apple from the very beginning has been premised on the idea of integrating hardware and software, and the iPhone was the ultimate expression of that premise.

Market: The smartphone market was the best market technology has ever seen: not only did everyone need a phone, but in developed countries carriers subsidized top-end models because they drove higher average revenue per subscriber. Moreover, because a phone was something you took with you everywhere, there was far more value placed on non-technical attributes like fit-and-finish and brand.

Value Chain: Apple’s integration delivered sustainable differentiation in the smartphone value chain, forcing every other element, from suppliers to network providers to app makers to modularize themselves around Apple’s integration.

The result was the most successful product ever.

Google Search’s Sweet Spot

Given that Google is the second most valuable company in the world (after Apple), it is quite clear the company has found a sweet spot of its own. Indeed, Google Search ticks the same boxes as the iPhone:

Company: Google is built around the idea that superior technology is all that matters; that was certainly the case with search, which brilliantly leveraged the connectivity inherent to the web to make itself better; unlike its competitors, the bigger the web became, the better Google itself became.

Market: The truth is that the best technology does not always win; what made Google search the dominant force that it was and remains was the openness of the web. The less friction there was in the traversal of information, the more sheer technological prowess mattered.

Value Chain: Google is the king of aggregators because, when information shifted from scarcity to abundance, discovery became the point of leverage, and Google was better at discovery than anyone. That allowed the company to integrate end users and discovery, making search the single best place to advertise for all kinds of industries.

Building truly transformative products requires all three: a company that is the best at serving a market at the point in the value chain where integration can drive sustainable profits.

Google’s Differentiator

Last year, after the company’s first ‘Made By Google’ event, I framed the company’s hardware efforts in the context of the search business model. Specifically:

A business, though, is about more than technology, and Google has two significant shortcomings when it comes to assistants in particular. First, as I explained after this year’s Google I/O, the company has a go-to-market gap: assistants are only useful if they are available, which in the case of hundreds of millions of iOS users means downloading and using a separate app (or building the sort of experience that, like Facebook, users will willingly spend extensive amounts of time in).

Secondly, though, Google has a business-model problem: the “I’m Feeling Lucky” button guaranteed that the search in question would not make Google any money. After all, if a user doesn’t have to choose from search results, said user also doesn’t have the opportunity to click an ad, thus choosing the winner of the competition Google created between its advertisers for user attention. Google Assistant has the exact same problem: where do the ads go?

It seemed at the time that Google planned to use the Google Assistant as a differentiator to drive Pixel sales; a few months later, the company either backed down or admitted to what was the better course of action: making Assistant a part of Android. It remains unclear whether the pursuit of alternatives by Android OEMs was the cause or result of Assistant’s temporary Pixel-exclusivity.

What seems truer than ever, though, is this:

Google has adopted Alan Kay’s maxim that “People who are really serious about software should make their own hardware.” To that end the company introduced multiple hardware devices, including a new phone, the previously-announced Google Home device, new Chromecasts, and a new VR headset. Needless to say, all make it far easier to use Google services than any 3rd-party OEM does, much less Apple’s iPhone.

Yesterday the company doubled down: there were two new Google Home devices, including a competitor for Amazon’s Echo Dot and Apple’s HomePod, an updated Pixel phone, a new laptop with Google Assistant built-in (including into the optional stylus), headphones, and even a standalone camera.

What was most striking, though, was Google CEO Sundar Pichai’s opening:

We’re excited by the shift from a mobile-first to an AI-first world. It is not just about applying machine learning in our products, but it’s radically re-thinking how computing should work…We’re really excited by this shift, and that’s why we’re here today. We’ve been working on software and hardware together because that’s the best way to drive the shifts in computing forward. But we think we’re in the unique moment in time where we think we can bring the unique combination of AI, and software, and hardware to bring the different perspective to solving problems for users. We’re very confident about our approach here because we’re at the forefront of driving the shifts to AI.

Note that last line: Google’s confidence comes from the Company perspective: artificial intelligence, at least its machine learning manifestation, fits perfectly in Google’s wheelhouse of acting on massive amounts of data. Simply being good at something, though, is not enough: you need the market and value chain fit as well.

Indeed, this is why startups often beat incumbents: incumbents have resource advantages, but everything about their company is focused on a solved problem — the one that propelled them from startup to incumbent in the first place. Startups, on the other hand, have the luxury of conforming every aspect of their company around the problem to be solved, perfectly serving the market and capturing the point of integration/differentiation in the value chain.

Google’s Commitment

So how does Google fare? Start with the value chain: I actually found the breadth of Google products to be impressive, both proof that my suspicion — that hardware is the best way to monetize Google software — was correct, and evidence of a real commitment on Google’s part to realizing that opportunity. The idea is straight from Apple’s playbook: monetize software by selling integrated hardware at a healthy margin (products competing directly with Amazon excepted).

The breadth is also necessary: for Assistant to reach its potential it must be available everywhere, and “everywhere” happens to hit both Apple’s and Amazon’s Achilles’ heels — for Apple, the devotion to the phone as the center of everything; for Amazon, the lack of a phone platform of its own.

The question, though, is the market, and this is where I appreciate Pichai’s perhaps inadvertent honesty: Google is excited about AI because Google is good at AI; the success of AI as a differentiator, though, depends on whether or not there is a market for it. That remains to be seen.

That said, perhaps the most surprising news from yesterday came from an interview Pichai did with The Verge’s Dieter Bohn:

As ambitious as Google is with its own hardware, it’s still a tiny drop in the bucket compared to the company’s online business. Pichai won’t say when we can expect to see hardware sales become a big, broken-out part of its financial calls, outside of saying it’ll definitely happen in the next five years.

That’s no small thing: I have hammered the company repeatedly for its failure to break out different business units, particularly YouTube, so a pledge to disclose hardware sales is significant. It also hints at a deeper commitment: specifically, I wouldn’t be surprised if Google announced dedicated retail stores sooner rather than later. AI may be the future, and Google may be the best at it, but sometimes markets need to be made, not simply seized.

Trustworthy Networking

Fifteen years on, this paragraph from a Bill Gates memo is a bit cringe-inducing:

The events of last year — from September’s terrorist attacks to a number of malicious and highly publicized computer viruses — reminded every one of us how important it is to ensure the integrity and security of our critical infrastructure, whether it’s the airlines or computer systems.

Equating computer viruses with the worst terrorist attack in U.S. history may be a bit over-the-top, but for Microsoft, anyway, 2001 was a period of real crisis: the company’s software was hit by seven different worms,1 all following on the heels of the previous year’s massively damaging ILOVEYOU worm. More and more consumers were scared to even use their computers.

That was the context for perhaps the second-most famous Gates memo — Trustworthy Computing — from which the above excerpt was taken. This was the core takeaway:

There are many changes Microsoft needs to make as a company to ensure and keep our customers’ trust at every level – from the way we develop software, to our support efforts, to our operational and business practices. As software has become ever more complex, interdependent and interconnected, our reputation as a company has in turn become more vulnerable. Flaws in a single Microsoft product, service or policy not only affect the quality of our platform and services overall, but also our customers’ view of us as a company…

In the past, we’ve made our software and services more compelling for users by adding new features and functionality, and by making our platform richly extensible. We’ve done a terrific job at that, but all those great features won’t matter unless customers trust our software. So now, when we face a choice between adding features and resolving security issues, we need to choose security. Our products should emphasize security right out of the box, and we must constantly refine and improve that security as threats evolve.

‘Trustworthy Computing’ was in many respects the inevitable counterpart to Gates’ most famous memo: 1995’s The Internet Tidal Wave:

In this memo I want to make clear that our focus on the Internet is crucial to every part of our business. The Internet is the most important single development to come along since the IBM PC was introduced in 1981. It is even more important than the arrival of the graphical user interface (GUI).

Obviously Gates was right, but the memo went further: it is packed with ideas about how Microsoft can “superset the Web” in order to “make it clear that Windows machines are the best choice for the Internet”; to that end Gates wrote, “I want every product plan to try and go overboard on Internet features.” And, when Microsoft did exactly that, the result was a set of products with massive security holes, resulting in a crisis. The faster you move towards the future, the more unintended consequences — security debt, if you will — there inevitably will be.

The analogy to Facebook is straightforward: operating with the motto of “Move Fast and Break Things” the company has spent the last decade going overboard, as it were, on connecting everyone and everything. And then, to handle the deluge of information that resulted, the company helpfully presents an algorithmically curated News Feed that shows exactly what it thinks its users will enjoy seeing the most (engagement being a necessary proxy for enjoyment). It is truly a marvel: individual customization at global scale.

There have, though, been side effects.

Russian Ads

I wrote about Russian political ads on Facebook two weeks ago, explaining how the ads were bought through Facebook’s self-serve ad model; this allows the company’s five million advertisers — given that number, by definition the vast majority are small and medium-sized businesses — to run ads without having to interact with another human.

This, I argued, was a good thing, and I absolutely stand by it. From that article:

The biggest beneficiaries of zero transaction costs on the super-aggregators are not traditional advertisers, whether that be companies like CPG conglomerates or presidential campaigns. Both have the resources to advertise anywhere and everywhere, and indeed, often find that the fine-tooth targeting on super-aggregators isn’t worth the effort required. The folks that do benefit, though, are those that wouldn’t have a voice otherwise: startups and niche offerings, both in terms of business and politics. Google and Facebook have opened the field to far more entrants, and while that means there are more folks with bad intentions, there are also a whole lot more folks with ideas that were shut out by the significant transaction costs inherent in pre-Internet platforms.

That line, “folks with bad intentions”, should sound familiar: that is exactly what led to Microsoft’s crisis in 2001. Instead of building for local networks that were protected by the fact that access was non-scalable (i.e. physical access was required), Microsoft products were now on the Internet where they could be attacked from anywhere by anyone. And, when you have to defend against anyone, the likelihood of facing “folks with bad intentions” becomes a certainty. So it is with Facebook self-serve ads.

What is just as important to note, though, is that a scalable solution is also required. In the case of Microsoft, it obviously wasn’t viable to simply rip out Internet connectivity from its products; it is similarly foolhardy to suggest that Facebook abandon all of the benefits of the self-serve model by, for example, reviewing every ad.

To reiterate the point, this is impossible. To use the Russian ad numbers as a proxy, consider the math:

  • The $100,000 spent by the 470 inauthentic accounts identified by Facebook was good for 3,000 ads, which means each ad cost an average of $33.
  • As a quick but essential aside, this exercise is going to be a very rough approximation, because the price paid for an ad varies hugely depending on how finely targeted it is, and how competitive said targeting opportunities are. In the case of these ads, Facebook revealed yesterday that for 50% of the ads less than $3 was spent, and for 99% of the ads less than $1,000 was spent (and 25% weren’t even shown because they failed to win the auction for the audience they targeted). However, given that Facebook only reveals the percentage change in its average price per ad, not the actual amount, $33 is the best we can do.
  • Last quarter Facebook had $9.2 billion in ad revenue, an increase of 47% over the year prior. Using that $33/ad number, that means last quarter there were approximately 276 million unique ads on Facebook (each of which could be shown multiple times, of course).
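That back-of-the-envelope estimate is simple division; a quick sketch using only the figures cited above (a very rough approximation — note that it is the unrounded average of ~$33 per ad that yields the 276 million figure):

```python
# Back-of-the-envelope estimate of unique ads per quarter on Facebook,
# using only the figures cited above; a rough approximation, not a real metric.

russian_spend = 100_000   # dollars spent by the 470 inauthentic accounts
russian_ads = 3_000       # number of ads that spend bought

avg_cost_per_ad = russian_spend / russian_ads   # ~$33.33 per ad

quarterly_ad_revenue = 9.2e9   # Facebook ad revenue last quarter, in dollars
unique_ads = quarterly_ad_revenue / avg_cost_per_ad

print(f"average cost per ad: ${avg_cost_per_ad:.0f}")
print(f"estimated unique ads per quarter: {unique_ads / 1e6:.0f} million")
```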

Again, the actual number could be different than this by a huge margin — it is very likely that this Russian ad buy is not at all representative — but that margin could go in either direction. The important takeaway is that looking at every ad means effectively killing self-serve, which not only kills Facebook’s revenue model, but, far more importantly, removes a truly accessible and disruptive advertising channel for small and medium businesses, particularly those uniquely enabled by the Internet.

Fixing Facebook Ads

What makes far more sense is for Facebook to find a point of leverage; for Microsoft, this was relatively easy — harden the operating system, which the company did with XP Service Pack 2. Facebook’s challenge is harder, but the point of leverage seems clear: advertisers themselves, not advertisements; after all, 5 million all-time is a much more manageable number than 276 million a quarter. To that end, the company also announced yesterday a change in how it handled U.S. political advertisers:

Increasing requirements for authenticity. We’re updating our policies to require more thorough documentation from advertisers who want to run US federal election-related ads. Potential advertisers will have to confirm the business or organization they represent before they can buy ads. As Mark said, we won’t catch everyone immediately, but we can make it harder to try to interfere.

This is the right point of leverage, but this policy change is inadequate. The only advertisers affected here are those that explicitly declare they are running ads for US federal elections; what about state elections, or other countries, or, pertinent to this case, bad actors?

Facebook should increase requirements for authenticity from all advertisers, at least those that spend significant amounts of money or place a large number of ads. I do believe it is important to make it easy for small companies to come online as advertisers, so perhaps documentation could be required for a $1,000+ ad buy, or a cumulative $5,000, or after 10 ads (these are just guesses; Facebook should have a much clearer idea what levels will increase the hassle for bad actors yet keep the platform accessible to small businesses). This will make it more difficult for bad actors in elections of all kinds, or those pushing scummy advertising generally.

Secondly, the most scalable counterweight to bad ads is massively increased transparency. Facebook took steps in this regard as well; from the same post:

Making advertising more transparent. We believe that when you see an ad, you should know who ran it and what other ads they’re running — which is why we show you the Page name for any ads that run in your feed. To provide even greater transparency for people and accountability for advertisers, we’re now building new tools that will allow you to see the other ads a Page is running as well — including ads that aren’t targeted to you directly. We hope that this will establish a new standard for our industry in ad transparency. We try to catch content that shouldn’t be on Facebook before it’s even posted — but because this is not always possible, we also take action when people report ads that violate our policies. We’re grateful to our community for this support, and hope that more transparency will mean more people can report inappropriate ads.

This will eliminate so-called Dark Ads which could only be seen by those targeted; again, though, Facebook didn’t go far enough. These ads can still only be seen by going to the actual pages, which are impossible to know about unless you are shown an ad; the company should have a central, searchable, repository of all those hundreds of millions of ads. Again, it is worth pointing out that this will hurt some small businesses (larger competitors can easily pick up on their marketing strategies), but the tradeoff when it comes to oversight of not just political ads but ads of all types is worth it.

What Facebook has to realize is that while both of these proposals are likely to hurt the bottom line — the first will increase friction in advertisers coming on board (or ramping up spend), while the second will have a commodification effect on ads — this scandal is, to use Gates’ words, “not only affect[ing] the quality of our platform and services overall, but also [their] customers’ view of [them] as a company.” This matters because Facebook’s biggest risk is government regulation, and that is ultimately a political question, where the opinion of the body politic matters greatly.

Filter Bubbles

All that said, it’s worth stepping back for a moment and putting this scandal in context. I gently mocked Gates for equating computer viruses with terrorist attacks, but the suggestion that $100,000 in Facebook ads — of which only 46% ran before the election — swung the presidential results is just as questionable. Frankly, if spending $100,000 on Facebook had that level of return, the company would be worth many multiples of the $492 billion it is today! It is concerning and frustrating to me as a citizen to see so many spend far more time prosecuting these ads at the expense of a broader reflection on the state of the country.

That includes Facebook, by the way. I actually tend to agree with Zuckerberg’s post-election comment — which he has since apologized for — that it was “crazy” to think that ‘Fake News’ influenced the election; my view is that Fake News is a symptom of a far more serious problem: filter bubbles.

To that end, the Zuckerberg statement that truly concerned me was on the company’s Q2 2016 earnings call; this was a few months after the brouhaha over alleged bias in the Trending Topics module, and Zuckerberg was asked about the filter bubble problem:

So we have studied the effect that you’re talking about, and published the results of our research that show that Facebook is actually, and social media in general, are the most diverse forms of media that are out there. And basically what — the way to think about this is that, even if a lot of your friends come from the same kind of background or have the same political or religious beliefs, if you know a couple of hundred people, there’s a good chance that even maybe a small percent, maybe 5% or 10% or 15% of them will have different viewpoints, which means that their perspectives are now going to be shown in your News Feed.

And if you compare that to traditional media where people will typically pick a newspaper or a TV station that they want to watch and just get 100% of the view from that, people are actually getting exposed to much more different kinds of content through social media than they would have otherwise or have been in the past. So it’s a good sounding theory, and I can get why people repeat it, but it’s not true. So I think that that’s something that if folks read the research that we put out there, then they’ll see that.

Actually, this is…questionable news (I can’t quite bring myself to use the obvious term). The Facebook-commissioned study Zuckerberg referenced had massive problems, including a non-representative sample and a proprietary data set (making the study non-reviewable); beyond that, the study’s results actually did support the idea of filter bubbles.2

It’s rather a meta problem: I suspect Zuckerberg’s own bubble makes him inclined to dismiss the possibility of filter bubbles, while the bubble Facebook’s most strident critics live in means they too are focusing on the wrong thing. Certainly this is a conversation where everyone has more to lose; those scapegoating Facebook probably don’t want to think about their own responsibility, such as it may be, for an election result they disagree with, and the stakes are even higher for Facebook: giving people what they want to see is far more important to the company’s business model than $100,000 in illegal ads, unintended consequences or not.

  1. The Anna Kournikova, Sadmind, Sircam, Code Red, Code Red II, Nimda, and Klez worms, respectively
  2. Further references here, here, here, and here

Defining Aggregators

(Note: this is not a typical Stratechery article; there is no over-arching narrative or reference to current news. Rather, the primary goal is to provide a future point of reference)

Aggregation Theory describes how platforms (i.e. aggregators) come to dominate the industries in which they compete in a systematic and predictable way. Aggregation Theory should serve as a guidebook for aspiring platform companies, a warning for industries predicated on controlling distribution, and a primer for regulators addressing the inevitable antitrust concerns that are the endgame of Aggregation Theory.

Aggregation Theory was first coined in this eponymously-titled 2015 article. That article followed on the heels of a series of posts about Airbnb, Netflix, and web publishing that, I realized, fit together into a broader framework that was applicable to a range of Internet-enabled companies. Over the ensuing two years I have significantly fleshed out the ideas in that original article, yet subsequent articles necessarily link to an article that marked the beginning of Aggregation Theory, not the current state.

That noted, the original article is very much worth reading, particularly its description of how value has shifted away from companies that control the distribution of scarce resources to those that control demand for abundant ones; the purpose of this article is to catalog exactly what the latter look like.

The Characteristics of Aggregators

Aggregators have all three of the following characteristics; the absence of any one of them can result in a very successful business (in the case of Apple, arguably the most successful business in history), but it means said company is not an aggregator.

Direct Relationship with Users

This point is straightforward, yet the linchpin on which everything else rests: aggregators have a direct relationship with users. This may be a payment-based relationship, an account-based one, or simply one based on regular usage (think Google and non-logged-in users).

Zero Marginal Costs For Serving Users

Companies traditionally have had to incur (up to) three types of marginal costs when it comes to serving users/customers directly.

  • The cost of goods sold (COGS), that is, the cost of producing an item or providing a service
  • Distribution costs, that is, the cost of getting an item to the customer (usually via retail) or facilitating the provision of a service (usually via real estate)
  • Transaction costs, that is, the cost of executing a transaction for a good or service, providing customer service, etc.

Aggregators incur none of these costs:

  • The goods “sold” by an aggregator are digital and thus have zero marginal costs (they may, of course, have significant fixed costs)1
  • These digital goods are delivered via the Internet, which results in zero distribution costs2
  • Transactions are handled automatically through automatic account management, credit card payments, etc.3

This characteristic means that businesses like Apple hardware and Amazon’s traditional retail operations are not aggregators; both bear significant costs in serving the marginal customer (and, in the case of Amazon in particular, have achieved such scale that the service’s relative cost of distribution is actually a moat).
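The cost structure described above can be made concrete with a bit of arithmetic; the following sketch uses entirely hypothetical numbers to contrast how fixed costs amortize for an aggregator while a traditional business pays all three cost types for every marginal customer:

```python
# Toy model (all numbers hypothetical): cost to serve the marginal user for an
# aggregator (fixed costs only) versus a traditional business (COGS +
# distribution + transaction costs per unit).

def aggregator_cost_per_user(fixed_costs: float, users: int) -> float:
    """Fixed costs amortize across the user base; the marginal user is free."""
    return fixed_costs / users

def traditional_cost_per_user(cogs: float, distribution: float,
                              transaction: float) -> float:
    """Each marginal customer incurs all three cost types."""
    return cogs + distribution + transaction

for users in (1_000_000, 10_000_000, 100_000_000):
    print(f"{users:>11,} users: ${aggregator_cost_per_user(50_000_000, users):.2f}/user")

# The aggregator's per-user cost falls 100x as the user base grows 100x,
# while the traditional business pays the same amount for every new customer.
print(f"traditional: ${traditional_cost_per_user(8.0, 5.0, 2.0):.2f}/user")
```

The point is not the specific figures but the shape of the curves: one falls toward zero with scale, the other never does.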

Demand-driven Multi-sided Networks with Decreasing Acquisition Costs

Because aggregators deal with digital goods, there is an abundance of supply; that means users reap value through discovery and curation, and most aggregators get started by delivering superior discovery.

Then, once an aggregator has gained some number of end users, suppliers will come onto the aggregator’s platform on the aggregator’s terms, effectively commoditizing and modularizing themselves. Those additional suppliers then make the aggregator more attractive to more users, which in turn draws more suppliers, in a virtuous cycle.

This means that for aggregators, customer acquisition costs decrease over time; marginal customers are attracted to the platform by virtue of the increasing number of suppliers. This further means that aggregators enjoy winner-take-all effects: since the value of an aggregator to end users is continually increasing it is exceedingly difficult for competitors to take away users or win new ones.

This is in contrast to non-platform companies that face increasing customer acquisition costs as their user base grows. That is because initial customers are often a perfect product-market fit; however, as that fit decreases, the surplus value from the product decreases as well and quickly turns negative. Generally speaking, any business that creates its customer value in-house is not an aggregator because eventually its customer acquisition costs will limit its growth potential.
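The virtuous cycle — suppliers attract users, users attract suppliers — can be illustrated with a toy simulation (all coefficients are invented for illustration, not drawn from any real aggregator):

```python
# Minimal sketch of the aggregator virtuous cycle: more suppliers make the
# platform more valuable, so each marginal user costs less to acquire.
# All starting values and coefficients are invented.

def simulate(periods: int, users: float = 1_000, suppliers: float = 100):
    cacs = []
    for _ in range(periods):
        # Acquisition cost falls as supply (and thus user value) grows
        cac = 100.0 / (1 + suppliers / 100)
        users += 10_000 / cac          # cheaper acquisition -> faster growth
        suppliers += users * 0.001     # suppliers follow users onto the platform
        cacs.append(cac)
    return cacs

cacs = simulate(10)
print([round(c, 2) for c in cacs])  # customer acquisition cost falls every period
```

In the non-platform case the feedback loop runs the other way: each marginal customer is a worse fit, so acquisition costs rise rather than fall.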

One additional note: the aforementioned Apple and Amazon do have businesses that qualify as aggregators, at least to a degree: for Apple, it is the App Store (as the Google Play Store is for Google). Apple owns the user relationship, incurs zero marginal costs in serving that user, and has a network of app developers continually improving supply in response to demand. Amazon, meanwhile, has Amazon Merchant Services, which is a two-sided network where Amazon owns the end user and passes all marginal costs to merchants (i.e. suppliers).

Classifying Aggregators

Aggregation is fundamentally about owning the user relationship and being able to scale that relationship; that said, there are different levels of aggregation based on the aggregator’s relationship to suppliers:

Level 1 Aggregators: Supply Acquisition

Level 1 Aggregators acquire their supply; their market power springs from their relationship with users, but is primarily manifested through superior buying power. That means these aggregators take longer to build and are more precarious in the short-term.

The best example of a Level 1 Aggregator is Netflix. Netflix owns the user relationship and bears no marginal costs in terms of COGS, distribution costs,4 or transaction costs.5 Moreover, Netflix does not create shows, but it does acquire them (increasingly on an exclusive basis); the more content Netflix acquires, the more its value grows to potential users. And, the more users Netflix gains, the more it can spend on acquiring content, in a virtuous cycle.

Level 1 aggregators typically operate in industries where supply is highly differentiated, and are susceptible to competitors with deeper pockets or orthogonal business models.

Level 2 Aggregators: Supply Transaction Costs

Level 2 Aggregators do not own their supply; however, they do incur transaction costs in bringing suppliers onto their platform. That limits the growth rate of Level 2 Aggregators unless they incur significant supplier acquisition costs.

Uber is a Level 2 Aggregator (as is Airbnb in some jurisdictions, due to local regulations). Uber owns the user relationship and bears no marginal costs in terms of COGS, distribution costs, or transaction costs. Moreover, Uber does not own cars; those are supplied by drivers who sign up for the platform directly. At that point, though, Uber needs to undertake steps like background checks, vehicle verification, etc., which incur transaction costs in terms of both money and time. This limits supply growth, which ultimately limits demand growth.

Level 2 aggregators typically operate in industries with significant regulatory concerns that apply to the quality and safety of suppliers.

Level 3 Aggregators: Zero Supply Costs

Level 3 Aggregators do not own their supply and incur no supplier acquisition costs (either in terms of attracting suppliers or on-boarding them).

Google is the prototypical Level 3 Aggregator: suppliers (that is, websites) are not only accessible to Google by default, but in fact actively make themselves more easily searchable and discoverable (indeed, there is an entire industry — search engine optimization (SEO) — predicated on suppliers paying to present themselves to Google more effectively).

Social networks are also Level 3 Aggregators: initial supply is provided by users (who are both users and suppliers); over time, as more and more attention is given to the social networks, professional content creators add their content to the social network for free.

Level 3 aggregators are predicated on massive numbers of users, which means they are usually advertising-based (and thus free to users). An interesting exception is the aforementioned App Stores: in this case the relatively limited market size is made up for by the significantly increased revenue-per-customer available to app developers with suitable business models (primarily consumable in-app purchases).

The Super-Aggregators

Super-Aggregators operate multi-sided markets with at least three sides — users, suppliers, and advertisers — and have zero marginal costs on all of them. The only two examples are Facebook and Google, which in addition to attracting users and suppliers for free, also have self-serve advertising models that generate revenue without corresponding variable costs (other social networks like Twitter and Snapchat rely to a much greater degree on sales-force driven ad sales).

For more about Super-Aggregators see this article.

Regulating Aggregators

Given the winner-take-all nature of Aggregators, there is, at least in theory, a clear relationship between Antitrust and Aggregation. However, traditional jurisprudence is limited by three factors:

  • The key characteristic of Aggregators is that they own the user relationship. Critically, the user chooses this relationship because the aggregator offers a superior service. This makes it difficult to make antitrust arguments based on consumer welfare (the standard for U.S. jurisprudence for the last 35 years).
  • The nature of digital markets is such that aggregators may be inevitable; traditional regulatory relief, like breaking companies up or limiting their addressable markets, will likely result in a new aggregator simply taking their place.
  • Aggregators make it dramatically simpler and cheaper for suppliers to reach customers (which is why suppliers work so hard to be on their platform). This increases the types of new businesses that can be created by virtue of the aggregators existing (YouTube creators, Amazon merchants, small publications, etc.); regulators should take care to preserve these new opportunities (and even protect them).

These are guidelines for regulation; determining specifics is an ongoing project for Stratechery, as are the definitions in this article.

  1. And yes, in the very long run, all fixed costs are marginal costs; that said, while the amount of capital costs for aggregators is massive, their userbase is so large that even over the long run the fixed costs per user are infinitesimal, particularly relative to revenue generated []
  2. In terms of the marginal customer; in aggregate there are of course significant bandwidth costs, but see the previous footnote []
  3. Credit card fees are a significant transaction cost that do limit some types of businesses, but will generally be ignored in this analysis []
  4. Obviously bandwidth in the aggregate is a particularly large cost of Netflix []
  5. In all cases, credit card fees excepted []

Books and Blogs

A book, at least a successful one, has a great business model: spend a lot of time and effort writing, editing, and revising it up front, and then make money selling as many identical copies as you can. The more you sell the more you profit, because the work has already been done. Of course if you are successful, the pressure is immense to write another; the payoff, though, is usually greater as well: it is much easier to sell to customers you have already sold to before than it is to find customers for the very first time.

There is, though, at least from my perspective, a downside to this model: a book, by necessity, is a finished object; that is why it can be printed and distributed at scale. The problem is that one’s thoughts may not be final; indeed, the more vital the subject, the more likely a book, with its many-month production process, is to be obsolete the moment it enters its final state of permanence.

When I started Stratechery four years ago, with my 384 Twitter followers and little else, the thought of writing a book never crossed my mind; not only did I not have a contract, I didn’t even have a topic beyond the business and strategy of technology, a niche I thought was under-served and on which I had the inklings of a point of view.

Since then it has been an incredible journey, especially intellectually: instead of writing with a final goal in mind — a manuscript that can be printed at scale — Stratechery has become in many respects a journal of my own attempts to understand technology specifically and the way in which it is changing every aspect of society broadly. And, it turns out, the business model is even better: instead of taking on the risk of writing a book in hopes of a one-time payment from customers at the end, Stratechery subscribers fund that intellectual exploration directly and on an ongoing basis; all they ask is that I send them my journals of said exploration every day in email form.

To put it another way, at least in my experience, the lowly blog has fully disrupted the mighty book: the former was long thought to be an inferior alternative, or at best, a complementary piece for an author looking to drum up an audience; slowly but surely, though, the tools have gotten better, everything from social media for marketing to Stripe for payments to WordPress for publishing to tools like Memberful for subscriber management. It became increasingly apparent, to me anyways, that while books remained a fantastic medium for stories, both fiction and non, blogs were not only good enough, they were actually better for ideas closely tied to a world changing far more quickly than any book-related editorial process can keep up with.

To be sure, I had discovered in 2015 what might have been a worthy book topic: Aggregation Theory. That, though, makes my point: the biggest problem I have with Aggregation Theory is that that old article I keep linking to is incomplete. My thinking on what Aggregation Theory is, what its implications are, and how that should affect strategy both inside and outside of technology and, particularly over the last year, potential regulation, has evolved considerably.

To that end, it is with relief I write the following article: Defining Aggregators. I’ll be honest: it’s more for me than for you; my thinking has evolved, and clarified, and I want to link to something that represents my point of view in 2017, not just 2015. That I can do so by merely hitting ‘Publish’ is a great thing: these ideas are very much alive, and I don’t really see the point of trees that are dead, literally or virtually.

Note: This article is meant as an introduction to Defining Aggregators; it is posted as a separate article as I plan to link to that article many times in the future

The Super-Aggregators and the Russians

In August 2011, just a day or two into my career at Microsoft, I sat in on a monthly review meeting for Hotmail (now known as Outlook.com); the product manager running the meeting was going through the various geographies and their relevant metrics — new users, churn, revenue, etc. — and it was, well, pretty boring. It was only later that I realized just how astounding “boring” was; a small group of people in a conference room going over numbers that represented hundreds of millions of people and dollars in revenue, and most of us cared far more about what was on the menu for lunch.

I’ve reflected on that meeting often over the years, particularly when it comes to Facebook and controversies like censoring too much, censoring too little, or “fake news”, and I was reminded of it again with this tweet:

Mark Warner, the senior Senator from Virginia, is referring to a Russian company, thought to be linked to the Kremlin’s propaganda efforts, having bought $100,000 worth of political ads on Facebook, some number of which directly mentioned 2016 presidential candidates Donald Trump and Hillary Clinton. Facebook has released limited details about the ads, likely due to its 2012 consent decree with the FTC, which bars the company from unilaterally making private information public, as well as the problematic precedent of releasing information without a clear order compelling said release. To that end, it was reported over the weekend that special counsel Robert Mueller received a much more comprehensive set of data from Facebook after obtaining a search warrant.

Even with all that context, though, I found Senator Warner’s tweet puzzling: how else would the propaganda group have paid? Facebook’s self-service ad portal lets you buy ads in 55 different currencies, including the Russian Ruble.1

That, though, brought me back to that Hotmail meeting: that I, and probably many more in the tech industry, find the idea of Facebook selling ads in rubles to strangers to be utterly unremarkable, even as thousands find it equally outrageous and damning, is a reminder of just how unprecedented and misunderstood aggregators like Facebook continue to be, and what a challenge it will be to regulate them.

The Cellular Network Company

Senator Warner, it should be noted, is considered one of the most technologically literate people in the entire Senate — and the richest. Warner originally made his fortune by facilitating the sale of cellular phone licenses; he then co-founded Columbia Capital, a venture capital firm which specialized in cellular businesses: the firm’s early investments included Nextel, BroadSoft, and MetroPCS.

A cellular network company is certainly a new kind of business that is similar to today’s tech giants in many respects:

  • At a fundamental level, cellular network companies are about the movement of information — voice and text, in Warner’s era — not physical goods. Moreover, because this information is digital, there are no marginal distribution costs in its transfer. Companies like Google and Facebook share this characteristic.
  • A cellular network company has massive fixed costs and minimal marginal costs; one more minute of talk time costs practically nothing to provide, unless the network is saturated, at which point significant capital investment is necessary. Today’s internet services are similar: marginal usage is effectively free, although significant capital investments in data centers are necessary (as well as significant ongoing bandwidth costs, which are effectively zero to serve any one individual but huge in aggregate).
  • A cellular network company is, quite obviously, a network. That means the value of the service increases as the number of customers increases. This produces a powerful virtuous cycle in which new customers increase the value of the network such that it becomes attractive to new marginal customers, further increasing the value of the network for the next set of marginal customers; this “network effect” is the most common driver of the sort of “scalable advantage in customer acquisition costs” that I discussed in the case of Uber, and is a hallmark of Facebook in particular (but also Google and all of the aggregators).
  • Cellular network companies have direct relationships with their customers.
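The network effect in the third bullet above is often approximated by Metcalfe’s law, which values a network by its number of possible connections; a quick illustration (a rough approximation, not a precise valuation method):

```python
# Toy illustration of the network effect: if each pair of connected users
# creates a unit of value (Metcalfe's law, a rough approximation), total
# value grows quadratically while the network's fixed costs do not.

def network_value(users: int) -> int:
    """Number of possible connections between users."""
    return users * (users - 1) // 2

# Doubling the user base roughly quadruples the value of the network,
# which is what makes it attractive to the next marginal customer.
print(network_value(1_000))   # 499500
print(network_value(2_000))   # 1999000
```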

These four characteristics may seem familiar: they are all parts of Aggregation Theory, and I’ve written about each of those components in the two years since I first wrote about the theory.2 There is one more piece, though, that I have only mentioned in passing: zero transaction costs. This is the piece that apparently sets Facebook beyond Senator Warner’s understanding,3 and it is perhaps the key reason why Facebook and other aggregators are unlike any other company we have seen before; oh, and it explains this Russian ad buy.

Transaction Costs

Go back to the generic cellular network company I discussed above, and think about what is entailed in adding a new customer (and leaving aside the marketing expenditure to make them aware of and desirous of the service in the first place):

  • Talk with the customer on the phone or in person
  • Collect identifying details and run a credit check
  • Provision a SIM card and/or a phone
  • Receive payment
  • Manage contract renewals and cancellations and other customer service

While some of these activities could be automated, the reality is that the cost of customer management scaled linearly: more customers meant more costs. Moreover, these costs accumulated, limiting the natural size of any company; at some point the complexity of managing some finite number of customers across some finite number of geographic areas cost more than the marginal profit of adding one more customer, and that limited how big a company could grow (which, to be clear, could be very large indeed!).

What makes aggregators unique, though, is that thanks to the Internet they have zero transaction costs: for Google, or Airbnb, or Uber, or Netflix, or Amazon, or the online travel agents, adding one more customer is as simple as adding one more row in a database. Everything else is automated, from sign-up to billing to the delivery of the service in question. This is why all of these companies are global, often from day one, and, as I explained in Beyond Disruption, why they start at the high end of a market and work their way down.
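The “one more row in a database” claim is nearly literal; here is a minimal sketch (the schema and helper function are invented for illustration, not any actual aggregator’s system):

```python
# Hedged sketch: for an aggregator, onboarding a customer is a single
# automated write; no human is in the loop. The schema is hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE customers (
        id INTEGER PRIMARY KEY,
        email TEXT NOT NULL,
        currency TEXT NOT NULL,   -- billing is automated, in any currency
        signed_up TEXT DEFAULT CURRENT_TIMESTAMP
    )
""")

def sign_up(email: str, currency: str) -> int:
    """The entire marginal 'transaction cost' of one new customer."""
    cur = conn.execute(
        "INSERT INTO customers (email, currency) VALUES (?, ?)",
        (email, currency),
    )
    return cur.lastrowid

# Customers from anywhere, in any currency, at zero marginal cost:
sign_up("alice@example.com", "USD")
sign_up("boris@example.com", "RUB")
print(conn.execute("SELECT COUNT(*) FROM customers").fetchone()[0])  # 2
```

Everything a cellular carrier once did with phone calls, credit checks, and contracts collapses into that one automated write.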

Note that aggregators can deal with the physical world and still have zero transaction costs, at least on the consumer side: Airbnb deals with rooms, but bears no transaction costs when it comes to signing up new customers; Amazon and Uber are similar with regards to e-commerce and transportation, respectively. Netflix doesn’t deal in physical goods (beyond its old DVD business), although it does bear significant transaction costs when it comes to sourcing content (in addition to actually paying for the content), but when it comes to customers there are no marginal costs at all.

Facebook and Google, though, are a special case: they are (and yes, I know this is the least imaginative term ever) super-aggregators.

Super-Aggregators

What makes Facebook and Google unique is that not only do they have zero transaction costs when it comes to serving end users, they also have zero transaction costs when it comes to both suppliers and advertisers.

Start with supply: not only is the vast majority of online content accessible to Google’s search engine (unsurprisingly, the biggest exception is Facebook), but in fact that content wants to be discovered by Google. Nearly every site on the web has a sitemap that is intended not for humans but for web crawlers, Google’s in particular, and there is an entire industry dedicated to search engine optimization (SEO). Netflix is on the opposite side of the spectrum here (unlike YouTube, it should be noted): the company has to actively source content and pay for it. Uber and Airbnb and Amazon are in the middle: theoretically there is an open platform for suppliers, but there are costs involved in bringing them online.

Facebook takes this to another level: its users are its most important content providers, and they do it for free. Professional content providers aren’t far behind, not only linking to all of their content but increasingly putting said content on Facebook directly (to the extent Facebook is paying for content it is to juice this cycle of self-interested content production on Facebook).

That said, there are a few more companies that have a similar content model: Twitter, Snapchat, LinkedIn, Yelp, etc. All run on user-generated content augmented by professional content placing links or original material on their services. However, there is still one more thing that separates Facebook and Google from the rest: advertisers.

Super-aggregators not only have zero transaction costs when it comes to users and content, but also when it comes to making money. This is at the very core of why Google and Facebook are so much more powerful than any of the other purely information-centric networks. The vast majority of advertisers on both networks never deal with a human (and if they do, it’s in a customer support capacity, not sales and account management): they simply use the self-serve ad products (or a more comprehensive tool built on the companies’ self-serve APIs).

This is the level that the other social networks have not reached: Twitter grew revenue, but primarily through its sales team, which meant that costs increased in line with revenue; the company never gained the leverage that comes from having a self-serve ad platform (specifically, the self-serve platform’s costs are fixed while its revenue is marginal).

Snap is following in Twitter’s footsteps: to date the vast majority of the company’s revenue has come from its sales team; the company has a perfunctory API for self-serve ads, but most of the volume springs from the aforementioned deals made by its sales team. Similar stories can be told about LinkedIn, Yelp, and other advertising-based businesses.

This, then, is a super-aggregator: zero transaction costs not just in terms of user acquisition, but also supply acquisition, and most importantly, revenue acquisition, and Google and Facebook are the ultimate examples.

Facebook and the Russians

This is why I was confused that Senator Warner made a big deal out of the fact Facebook was paid in Russian Rubles: the entire premise of the company’s revenue model is that anyone can run an ad without having to talk to another human, and obviously a key component of such a model is supporting multiple currencies.

Again, though, this is the first such model in economic history: it seems I am the one who was blinded by my having experienced the meaning of scale. In that Hotmail meeting everyone and everything was reduced to a number on a spreadsheet: the United States, Japan, Brazil, Russia, all were simply another row. So I naturally assume it is with Facebook ads: that some advertisers buy in dollars, some in Yen, some in Real, others in Rubles is unremarkable to me, and, I suspect, to many of the folks working at these companies.

And yet, it is not at all unreasonable that this seems very remarkable to everyone else, even someone with the technical and business background of Senator Warner. It would immediately raise eyebrows were any of the companies he managed or invested in to suddenly start transacting in Russian Rubles! For a super-aggregator, though, it is not only unremarkable, it is the system working as designed.

This applies to the content of those ads, too: last week, when ProPublica reported that Facebook enabled anti-Semitic targeting, I told a friend that a similar story would come out about Google within a week; it only took one day. When you make something frictionless — which is another way of describing zero transaction costs — it becomes easier to do everything, both good and evil.

Regulating the Super-Aggregators

This should probably be another article — indeed, it’s an article I’ve been working towards for a long time now — but this appreciation of what Super-Aggregators are, and how it is that a Russian propaganda outfit could buy Facebook ads that likely broke the law, gives insight into a number of principles that should guide people like Senator Warner as they consider potential regulation:

  • Don’t Force the Super-Aggregators to Make Editorial Decisions: It has been distressing to see how quickly some folks have resorted to insisting that Google and Facebook start having a point-of-view on content on their platforms. The problem is not that they might be effective, but rather that it is inevitable that they will be. I wrote in Manifestos and Monopolies:

    My deep-rooted suspicion of Zuckerberg’s manifesto has nothing to do with Facebook or Zuckerberg; I suspect that we agree on more political goals than not. Rather, my discomfort arises from my strong belief that centralized power is both inefficient and dangerous: no one person, or company, can figure out optimal solutions for everyone on their own, and history is riddled with examples of central planners ostensibly acting with the best of intentions — at least in their own minds — resulting in the most horrific of consequences; those consequences sometimes take the form of overt costs, both economic and humanitarian, and sometimes those costs are foregone opportunities and innovations. Usually it’s both.

    The best solution in my estimation is enforced neutrality; to the extent limitations are put in place they should be enforced by another entity with far more accountability to the people than either of these Super-Aggregators. That probably means the government (with the obvious caveat that authoritarian governments would certainly prefer to use Facebook for their own ends).

  • Focus on Transparency: The personalization afforded by Super-Aggregators means their advertising is simply not comparable to anything that has come before: television commercials, radio jingles, newspaper ads, all are publicly disseminated and thus can be tracked (the one possible exception is direct mail, which, unsurprisingly, has been the home of the foulest sort of political advertising in particular). Digital ads, on the other hand, can be shown to a designated audience without anyone else knowing. It is worth debating whether this level of secrecy should be allowed in general; it seems without question, in my mind, that it should not be allowed for political ads. Of course, that raises the question of what counts as a political ad, which again points towards regulation (which, per point one, is preferable to the unaccountable Google and Facebook deciding).

  • Remember the Benefits of Zero Transaction Costs: The biggest beneficiaries of zero transaction costs on the super-aggregators are not traditional advertisers, whether that be CPG conglomerates or presidential campaigns. Both have the resources to advertise anywhere and everywhere, and indeed, often find that the fine-grained targeting on super-aggregators isn’t worth the effort required. The folks that do benefit, though, are those that wouldn’t have a voice otherwise: startups and niche offerings, both in business and in politics. Google and Facebook have opened the field to far more entrants, and while that means there are more folks with bad intentions, there are also a whole lot more folks with ideas that were shut out by the significant transaction costs inherent in pre-Internet platforms.

There’s one final consideration that should apply to regulation, broadly: given that Google and Facebook are already well-established with businesses that serve users, suppliers, and advertisers in a virtuous cycle, it is unlikely that regulation of any kind will have meaningful effects on their bottom lines. Indeed, I expect Google and Facebook to be mostly cooperative with whatever regulation comes from these recent revelations.

Rather, the companies that will be hurt are those seeking to knock Google and Facebook off their perch; given that they are not yet super-aggregators, they will not have the feedback loops in place to overcome overly prescriptive regulation such that they can seriously challenge Google and Facebook.

For example, consider the much-touted General Data Protection Regulation (GDPR) set to take effect in the European Union next year. There is a lot of excitement about how this regulation will limit Google and Facebook in particular by, for example, limiting the use of personal data and enforcing data portability (and not just a PDF of your data — services will be required to build API access for easy export).

The reality, though, is that given that Google and Facebook make most of their money on their own sites, they will be hurt far less than competitive ad networks that work across multiple sites; that means that even more digital advertising money — which will continue to grow, regardless of regulation — will flow to Google and Facebook. Similarly, given that the data portability provisions explicitly exclude your social network — exporting your friends requires explicit approval from your friends — it will be that much harder to bootstrap a competitor.

This is the reality of regulation: as much as the largest incumbents may moan and groan, they are, in nearly all cases, the biggest beneficiaries. To be sure, that doesn’t mean regulation isn’t appropriate — it should be far more obvious to everyone that Russians were purchasing election-related ads on Facebook — but rather that it be expressly designed to limit the worst abuses and enable meaningful competitors, even if they accept payment in Russian Rubles.

  1. For what it’s worth, Stratechery has never actually taken out a Facebook ad, or any ad for that matter []
  2. Yes, I’m writing about Aggregation Theory again; I explain why I do so often here []
  3. Presuming his tweet was not as cynical as it very well might have been []