Disney and Fox

It’s always a risk writing about a deal before it is official: CNBC reported a month ago that Disney was in talks to acquire many of 21st Century Fox’s assets, including its eponymous movie studio, TV production company, cable channels, and international assets (but not the Fox broadcast network, Fox News, FS1 — Fox’s sports channel — and Fox Business). All was quiet until last week, when CNBC again reported that the deal was on, and now included 21st Century Fox’s Regional Sports Networks.

As I write this it is now widely reported that the deal is imminent; most notably, Comcast has dropped out of the bidding, which means the only question is whether or not Disney can close the deal: they would be crazy not to.

The Logic of Acquisition

The standard reason given for most acquisitions is so-called “synergy”: the idea that the two firms together can generate more revenue with lower costs than they could independently; most managers point towards the second half of that equation, promising investors significant cuts through reducing the number of workers doing the same thing. Certainly that is an argument in Disney’s favor: nearly everything 21st Century Fox does Disney does as well.

Still, it’s not exactly a convincing argument; acquisitions also incur significant costs: the price of the acquired asset includes a premium that usually more than covers whatever cost savings might result, and there are significant additional costs that come from integrating two different companies. Absent additional justification, the cost-savings argument comes across as justification for management empire-building, not value creation.

That’s not always the reason though: the cost-savings argument is often a fig-leaf for an acquisition that reduces competition; better for management to claim synergies in costs than synergies that result in cornering a market. The result is managers who routinely make weak arguments in public and strong arguments in the boardroom.

The best sort of acquisitions, though, are best described by the famous Wayne Gretzky admonition, “Skate to where the puck is going, not where it has been”; these are acquisitions that don’t necessarily make perfect sense in the present but place the acquirer in a far better position going forward: think Google and YouTube, Facebook and Instagram, or Disney’s own acquisition of Capital Cities (which included ESPN).

What makes this potential acquisition so intriguing is that it is a mixture of all three — and which of the three you pick depends on the time frame within which you view the deal.

The Not-so-distant Past

Go back to that Capital Cities acquisition: the 1995 deal was, at the time, the second largest acquisition ever, and it primed Disney to dominate the burgeoning cable television era.

I sketched out the structure of cable TV earlier this year in The Great Unbundling:

The key takeaway should be a familiar one: economic power came from controlling distribution. The cable line was the only way for consumers to obtain the fullest possible array of in-home entertainment, which meant that distributors were able to charge consumers as much as they could bear.

The real negotiations in this value chain took place between content producers and distributors, which ultimately determined who made the most profit, and here Capital Cities + Disney was a powerful combination:

  • First and foremost, ESPN was well on the way to establishing itself as the most indispensable cable channel amongst consumers, allowing it to command carriage fees that were multiples higher than any other channel.
  • Secondly, Disney was able to feed content to ABC just as the FCC loosened regulations on broadcast networks producing their own content (instead of acquiring it).
  • Third, Disney built a bundle within the bundle: distributors had to not only pay for ESPN, but also for the Disney channel, A&E, Lifetime, and the host of spinoff channels that followed; any proper accounting for ESPN’s ultimate contribution to Disney’s bottom line should include the above-average carriage fees charged by all of Disney’s properties.

This all seems obvious in retrospect, but at the time most of the attention was on ABC, then the most-profitable broadcast channel. The puck, though, was already moving towards cable.

The Fading Present

The dominant story in media over the last few years has been the slow-but-steady breakdown of that cable TV model. In August 2015, now-Disney CEO Bob Iger — who joined the company in that Capital Cities deal — admitted on an earnings call that ESPN and the company’s other networks, which had previously generated 45% of Disney’s revenue and 68% of its profit, were losing customers:

We are realists about the business and about the impact technology has had on how product is distributed, marketed and consumed. We are also quite mindful of potential trends among younger audiences, in particular many of whom consume television in very different ways than the generations before them. Economics have also played a part in change and both cost and value are under a consumer microscope. All of this has and will continue to put pressure on the multichannel ecosystem, which has seen a decline in overall households as well as growth in so-called skinny or cable light packages.

In fact, as I detailed earlier this year, Disney had not been realistic at all about the “impact technology [had] had on how product is distributed, marketed and consumed”:

Back in 2012 the media company signed a deal with Netflix to stream many of the media conglomerate’s most popular titles…Iger’s excitement was straight out of the cable playbook: as long as Disney produced differentiated content, it could depend on distributors to do the hard work of getting customers to pay for it. That there was a new distributor in town with a new delivery method only mattered to Disney insomuch as it was another opportunity to monetize its content.

The problem now is obvious: Netflix wasn’t simply a customer for Disney’s content, the company was also a competitor for Disney’s far more important and lucrative customer — cable TV. And, over the next five years, as more and more cable TV customers either cut the cord or, more critically, never got cable in the first place, happy to let Netflix fulfill their TV needs, Disney was facing declines in a business it assumed would grow forever.

That business was predicated on cable’s monopoly on in-home entertainment; what Netflix offered was an alternative:

Netflix’s path to a full-blown cable TV competitor is one of the canonical examples of a ladder strategy:

Netflix started by using content that was freely available (DVDs) to offer a benefit — no due dates and a massive selection — that was orthogonal to the established incumbent (Blockbuster). This built up Netflix’s user base, brand recognition, and pocketbook

Netflix then leveraged their user base and pocketbook to acquire streaming rights in the service of a model that was, again, orthogonal to incumbents (linear television networks). This expanded Netflix’s user base, transformed their brand, and continued to increase their buying power

With an increasingly high-profile brand, large user base, and ever deeper pockets, Netflix moved into original programming that was orthogonal to traditional programming buyers: creators had full control and a guarantee that they could create entire seasons at a time

Each of these intermediary steps was a necessary prerequisite to everything that followed, culminating in yesterday’s announcement: Netflix can credibly offer a service worth paying for in any country on Earth, thanks to all of the IP it itself owns. This is how a company accomplishes what, at the beginning, may seem impossible: a series of steps from here to there that build on each other. Moreover, it is not only an impressive accomplishment, it is also a powerful moat; whoever wishes to compete has to follow the same time-consuming process.

Another way to characterize Netflix’s increasing power is Aggregation Theory: Netflix started out by delivering a superior user experience of an existing product (DVDs) to a dedicated set of customers, leveraged that customer base to gain new kinds of supply (streaming content), gaining more customers and more supply, and ultimately leveraged those customers to modularize supply such that the streaming service now makes an increasing amount of its content directly.

What Disney is seeking to prove, though, is that it can compete with Netflix directly by following a very different path.

The Onrushing Future

I’ve long argued that the only way to break away from the power of aggregators is through differentiation; it’s why I argued after that Iger earnings call that Disney would be OK — after all, differentiated content is Disney’s core competency, as demonstrated by its ability to extract profits from cable companies.

The implication of Netflix’s shift to original programming, though, isn’t simply the fact that the streaming company is a full-on competitor for cable TV: it is a competitor for differentiated content as well. That gives Netflix far more leverage over content suppliers like Disney than the cable companies ever had.

Consider the comparison in terms of BATNA (Best Alternative to a Negotiated Agreement): for distributors the alternative to carrying ESPN was losing a huge number of customers who cared about seeing live sports; that’s not much of an alternative! Netflix, on the other hand, can — and is! — going straight to creators for content that viewers can watch instead of whatever Disney may choose to withhold if Netflix’s price is unsatisfactory.[1] Clearly it’s working: Netflix isn’t simply adding customers, it is raising prices at the same time, the surest sign of market power.

Therefore, the only way for Disney to avoid commoditization is to itself go vertical and connect directly with customers: thus the upcoming streaming service, the removal of its content from Netflix, and, presuming it is announced, this deal. When the acquisition was rumored last month, I wrote in a Daily Update:

This gets at why this deal makes so much sense for Disney. The company already announced that Star Wars and Marvel content would indeed be a part of the streaming service (that is what was still up in the air when I wrote Disney’s Choice), but the company is absolutely right to not stop there: being a true Netflix competitor means having more content, not less — and that content doesn’t necessarily have to be fresh! Streaming shifted television from a world based on scarcity — there are only 24 hours in the day times however many channels there are, and a channel can only show one thing at a time — to one based on abundance: you can watch anything you want at anytime and it can be different from everyone else.

Moreover, not only does 21st Century Fox have a lot of content, it has content that is particularly great for filling out a streaming library: think The Simpsons, or Family Guy; according to estimates I’ve seen, in terms of external content Fox owns eight of Netflix’s most streamed shows — more than Disney’s six. This content is useful not only for driving sign-ups with certain audiences, but especially for reducing churn; the latter requires a different content strategy than the former.

Whereas Netflix laddered-up to its vertical model and used its power as an aggregator of demand to gain power over supply, Disney is seeking to leverage — and augment — its supply to gain demand. The end result, though, would look awfully similar: a vertically integrated streaming offering that attracts and keeps customers with exclusive content, augmented with licensing deals.

If Disney is successful, it will be a truly remarkable shift: away from a horizontal content company predicated on leveraging its investment in content across as many outlets as possible, to a vertical streaming company that uses its content to achieve higher average revenue from a smaller number of customers willing to pay directly — smaller in the United States, that is; as Netflix is demonstrating, owning it all means the ability to extend the model worldwide.[2]

The Antitrust Question

I suspect the final hangup in Disney and 21st Century Fox’s negotiations is termination fees: who pays whom if the deal falls through. There is an obvious reason for concern — antitrust. That, of course, gets at some of the reasons (but not all) as to why the deal makes sense in the first place. What is fascinating, though, is that the nature of the concern changes depending on the time frame through which one views this deal.

If one starts with a static view of the world as it is at the end of 2017, then there may be some minor antitrust concerns, but probably nothing that would stop the deal. Disney might have to divest a cable channel or two (the company’s power over distributors would be even stronger; basically the opposite of some of the concerns that halted the Comcast acquisition of Time Warner), and potentially be limited in its ability to make operational decisions about Hulu (Disney would have a controlling stake after the merger; Comcast was similarly restricted after acquiring NBC Universal, but there the concern was more about Comcast’s conflict of interest with regards to its cable TV business competing with Hulu). The Hulu point is interesting in its own right: Disney could choose to focus its streaming efforts there instead of building its own service, but I suspect it would rather own it all.

In addition, Disney and 21st Century Fox combined for 40% of U.S. box office revenue in 2016; that probably isn’t enough to stop the deal, and as silly as it sounds, don’t underestimate the clamoring of fans for the unification of the Marvel Cinematic Universe in swaying popular opinion!

The view changes, though, if you look only a year or two ahead: what I just described above — the “truly remarkable shift” in Disney’s business model — is a shift to vertical foreclosure. The entire point of Disney vastly increasing its content library is to offer that library exclusively on its own streaming service, not competitors’ — especially not on Netflix. Given the current state of antitrust law, which has ignored vertical mergers for years, this would normally be an academic point, except the current state was fundamentally shifted just a few weeks ago, when the Department of Justice sued to block AT&T’s acquisition of Time Warner due to vertical foreclosure concerns.

It’s not a perfect comparison: for one thing, AT&T’s distribution service (DirecTV) already exists, for another, it is impossible to see that acquisition as anything but a vertical one; as I just noted, though, today the Disney-Fox acquisition is a horizontal one. Would the Justice Department sue based on Disney’s potential, as opposed to its reality? And there’s a political angle too: if the AT&T-Time Warner acquisition were indeed blocked as retaliation by the Trump administration against CNN, then it would follow that the administration would be willing to accommodate 21st Century Fox Chairman Rupert Murdoch.

What is most interesting, though, is the long-term view: I have been writing for years that Netflix’s status as an aggregator was positioning the company to dominate entertainment, and it was only eight months ago that I despaired of Disney and the other entertainment companies ever figuring out how to fight back. What has been so impressive over the last few months is the extent and speed with which Disney has seemingly figured it out — and acted accordingly.

Is that a bad thing? Note how much the situation changed once Netflix became a viable competitor for cable TV: competition is a wonderful thing, most of all for consumers. To that end, might it be better for consumers, not so much today but ten years from now, if Disney were fully empowered to compete with Netflix? What is preferable: a dominant streaming company and a collection of content companies trying to escape the commoditization trap, or two dominant streaming companies that can at least try to hold each other accountable?

It’s not a great choice, to be honest; certainly Amazon Prime Video is a possible competitor, although the service is both empowered by its business model and also held back. Other tech companies are making noises in the area, but more tech company dominance hardly seems like an answer!

Frankly, I’m not sure of the answer: I am both innately suspicious of these huge mergers and also sympathetic because I see so clearly the centralizing power of the Internet. The big are combining because the giants are coming: if anything, they are already here.

  1. The primary reason Netflix doesn’t have sports content is that it is not evergreen and thus doesn’t provide a cumulative advantage in terms of lowering customer acquisition costs over time; however, not being subject to the one-sided negotiations inherent to sports rights is a nice side benefit
  2. Just as interesting is the prospective acquisition of regional sports networks and what that means for the future of ESPN; I will discuss this on tomorrow’s Daily Update

The Pollyannish Assumption

There was an interesting aside to Apple’s bad week, which I wrote about yesterday. It turns out that a user posted the macOS login-as-root bug to Apple’s support forums back on November 13:

On startup, click on “Other”

Enter username: root and leave the password empty. Press enter. (Try twice)
If you’re able to log in (hurray, you’re the admin now), then head over to System Preferences>Users & Groups and create a new Admin account.

Now restart and login to the new Admin Account (you may need a new Apple Id). Once you’re logged into this new Admin Id, you can again proceed to your System Preferences>Users & Groups. Open the Lock Icon with your new Admin ID/Password. Assign “Allow user to administer this computer” to your original Apple ID. Restart.

Most of the discussion about this tidbit has centered on the fact that this user later noted that they had found this solution on some other forum — they couldn’t remember which (this reply has now been hidden on the original thread, but Daring Fireball quoted it here); observers have largely given Apple a pass on having missed the posting on their own forums because those forums are mostly user-generated content (both questions and answers) and Apple explicitly asks posters to file bug reports with Apple directly. It’s understandable that the company missed this post two weeks ago.

For the record, I agree. Managing user-generated content is really hard.

The User-Generated Content Conundrum

Three recent bits of news bring this point about user-generated content home.

First, Twitter; from Bloomberg:

Twitter Inc. said it allowed anti-Muslim videos that were retweeted by President Donald Trump because they didn’t break rules on forbidden content, backtracking from an earlier rationale that newsworthiness justified the posts. On Thursday, a Twitter spokesperson said “there may be the rare occasion when we allow controversial content or behavior which may otherwise violate our rules to remain on our service because we believe there is a legitimate public interest in its availability.”

Second, Facebook; from The Daily Beast:

In the wake of the #MeToo movement, countless women have taken to Facebook to express their frustration and disappointment with men and have been promptly shut down or silenced, banned from the platform for periods ranging from one to seven days. Women have posted things as bland as “men ain’t shit,” “all men are ugly,” and even “all men are allegedly ugly” and had their posts removed…In late November, after the issue was raised in a private Facebook group of nearly 500 female comedians, women pledged to post some variation of “men are scum” to Facebook on Nov. 24 in order to stage a protest. Nearly every woman who carried out the pledge was banned…

When reached for comment a Facebook spokesperson said that the company is working hard to remedy any issues related to harassment on the platform and stipulated that all posts that violate community standards are removed. When asked why a statement such as “men are scum” would violate community standards, a Facebook spokesperson said that the statement was a threat and hate speech toward a protected group and so it would rightfully be taken down.

Third, YouTube. From BuzzFeed:

YouTube is adding more human moderators and increasing its machine learning in an attempt to curb its child exploitation problem, the company’s CEO, Susan Wojcicki, said in a blog post on Monday evening. The company plans to increase its content moderation workforce to more than 10,000 employees in 2018 in order to help screen videos and train the platform’s machine learning algorithms to spot and remove problematic children’s content. Sources familiar with YouTube’s workforce numbers say this represents a 25% increase from where the company is today.

In the last two weeks, YouTube has removed hundreds of thousands of videos featuring children in disturbing and possibly exploitative situations, including being duct-taped to walls, mock-abducted, and even forced into washing machines. The company said it will employ the same approach it used this summer as it worked to eradicate violent extremist content from the platform.

I’m going to be up front with you: I don’t have any clear cut answers here. One of the seminal Stratechery posts is called Friction, and while I’ve linked it many times this line is particularly apt:

Friction makes everything harder, both the good we can do, but also the unimaginably terrible. In our zeal to reduce friction and our eagerness to celebrate the good, we ought not lose sight of the potential bad.

This is exactly the root of the problem: I don’t believe these platforms so much drive this abhorrent content (the YouTube videos are just horrible) as they make it easier than ever before for humans to express themselves, and the reality of what we are is both more amazing and more awful than most anyone ever appreciated.

This is something I have started to come to grips with personally: the exact same lack of friction that results in an unprecedented explosion in culture, music, and art of all kinds, the telling of stories about underrepresented and ignored parts of the population, and yes, the very existence of a business like mine, also results in awful videos being produced and consumed in shocking numbers, abuse being widespread, and even the upheaval of our politics.

The problem is that the genie is out of the bottle: lamenting the loss of friction will not only not bring it back, it makes it harder to figure out what to do next. I think, though, the first place to start — for me anyways — is to acknowledge and fully internalize what I wrote back then: focusing on the upsides without acknowledging the downsides is to misevaluate risk and court disaster. And, for those inclined to see the negatives of the Internet, focusing on the downsides without acknowledging the upsides is to misevaluate reward and endanger massive future opportunities. We have to find a middle way, and neither side can do that without acknowledging and internalizing the inevitable truth of the other.

Content Policing

Go back to the Apple forum anecdote: policing millions of comments posted by hundreds of thousands of posters (I’m guesstimating on numbers here) is really hard, and it’s understandable that Apple missed the post in question; as bad as this bug was, it is still the case that the return on the investment that would have been required to catch this one comment simply doesn’t make sense.

Apple is the easy one, and I started with them on purpose: using a term like “return on investment” gets a whole lot more problematic when dealing with abuse and human exploitation. That doesn’t mean it isn’t a real calculation made by relevant executives though: in the case of Apple, I think most people would agree that whatever investment in forum moderation would be effective enough to catch this post before it was surfaced on Twitter a couple of weeks later would be far better spent buttressing the internal quality control teams that missed the bug in the first place.

That the post was surfaced on Twitter is relevant too; the developer who tweeted about the bug wrote a post on Medium explaining his tweet:

A week ago the infrastructure staff at the company I work for stumbled on the issue while trying to help one of my colleagues recover access to his local admin account. The staff noticed the issue and used the flaw to recover my colleague’s account. On Nov 23, the staff members informed Apple about it. They also searched online and saw the issue mentioned in a few places already, even in Apple Developer Forum from Nov 13. It seemed like the issue had been revealed, but Apple had not noticed yet.

The tweet certainly got noticed, and the bug was fixed within the day. Now to be clear, this isn’t the appropriate way to disclose a vulnerability (to that point, Apple should clarify what exactly happened around that November 23rd disclosure), but broadly speaking, the power of social media is what got this bug fixed as quickly as it was.

Outside visibility and public demands for accountability are what drove the YouTube changes as well: BuzzFeed reported on the child exploitation issue last month after being tipped off by an activist named Matan Uziel who had been rebuffed in his own efforts to contact YouTube. That YouTube was allegedly not receptive to his reach-outs is a bad thing; that there are plenty of ways to raise a ruckus such that they must respond is a good one.

It also gives some outline about how YouTube can better approach the problem in the future: yes, the company is building machine learning algorithms, and yes, the company provides an option for viewers to report content — although it is buried in a submenu.

The point of user reports is to leverage the scale of the Internet to police its own unfathomable scale: there are far more YouTube viewers than there could ever be moderators; meanwhile, there are 400 hours of video uploaded to YouTube every minute.
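To make that mismatch concrete, here is a back-of-the-envelope calculation (the upload rate comes from this article and the headcount from YouTube’s announcement; the shift length and review speed are my assumptions) showing why exhaustive human review is a non-starter:

```python
# Back-of-the-envelope: can human review keep up with YouTube uploads?
# The upload rate is stated in the article; the rest are assumptions.

UPLOAD_HOURS_PER_MINUTE = 400   # from the article
MODERATORS = 10_000             # YouTube's announced 2018 headcount
SHIFT_HOURS = 8                 # assumed workday per moderator
REVIEW_SPEED = 1.0              # assumed: video reviewed at 1x playback speed

uploaded_per_day = UPLOAD_HOURS_PER_MINUTE * 60 * 24       # hours of new video per day
reviewable_per_day = MODERATORS * SHIFT_HOURS * REVIEW_SPEED

coverage = reviewable_per_day / uploaded_per_day
print(f"Uploaded:   {uploaded_per_day:,} hours/day")       # 576,000 hours/day
print(f"Reviewable: {reviewable_per_day:,.0f} hours/day")  # 80,000 hours/day
print(f"Coverage:   {coverage:.1%}")                       # 13.9%
```

Even on these generous assumptions, direct review covers less than a seventh of daily uploads; any workable approach has to be targeted rather than exhaustive.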

That approach, though, clearly isn’t enough: it is rooted in the pollyannish view of the Internet I described above — the idea that everything is mostly good but for some bad apples. A more realistic view — that humanity is capable of both great beauty and tremendous evil, and that the Internet makes it easier to express both — demands a more proactive approach. And, conveniently, YouTube already has tools in place.

YouTube’s Flawed Approach

On Google’s last earnings call CEO Sundar Pichai said:

YouTube now has over 1.5 billion users. On average, these users spend 60 minutes a day on mobile. But this growth isn’t just happening on desktop and mobile. YouTube now gets over 100 million hours of watch time in the living room every day, and that’s up 70% in the past year alone.

A major factor driving this growth is YouTube’s machine-learning algorithm for watching more videos; as BuzzFeed noted:

Thanks to YouTube’s autoplay feature for recommended videos, when users watch one popular disturbing children’s video, they’re more likely to stumble down an algorithm-powered exploitative video rabbit hole. After BuzzFeed News screened a series of these videos, YouTube began recommending other disturbing videos from popular accounts like ToysToSee.

The recommendation algorithm works hand-in-hand with search, which — as you would expect given its parent company — YouTube is very good at. Individuals who want disturbing content can find what they’re looking for, and then, in the name of engagement and pushing up those viewing numbers, YouTube gives them more.

This should expose the obvious flaw in YouTube’s current reporting-based policing strategy: the nature of search and recommendation algorithms is such that most YouTube viewers, who would be rightly concerned and outraged about videos of child exploitation, never even see the videos that need to be reported. In other words, YouTube’s design makes its attempt to leverage the Internet broadly as moderator doomed to fail.

Those exact same search and algorithmic capabilities, though, made it trivial for Uziel and BuzzFeed to find a whole host of exploitative videos. The key difference between Uziel/BuzzFeed and generic YouTube viewers is that the former were looking for them.

Herein lies the fundamental failing of YouTube moderation: to date the video platform has operated under the assumption that 1) YouTube has too much content to review it all, and 2) the best way to moderate is to depend on its vast user base. It is a strategy that makes perfect sense with the pollyannish assumption that the Internet by default produces good outcomes with but random exceptions.

A far more realistic view — because again, the Internet is ultimately a reflection of humanity, full of both goodness and its opposite — would assume that of course there will be bad content on YouTube. Of course there will be extremist videos recruiting for terrorism, of course there will be child exploitation, of course there will be all manner of content deemed unacceptable by the vast majority of not just the United States but humanity generally.

Such a view would engender a far different approach to moderation. Consider this paragraph from YouTube CEO Susan Wojcicki about YouTube’s latest changes:

We understand that people want a clearer view of how we’re tackling problematic content. Our Community Guidelines give users notice about what we do not allow on our platforms and we want to share more information about how these are enforced. That’s why in 2018 we will be creating a regular report where we will provide more aggregate data about the flags we receive and the actions we take to remove videos and comments that violate our content policies. We are looking into developing additional tools to help bring even more transparency around flagged content.

Make no mistake, transparency is a very good thing (more on this in a moment). What is striking, though, is the reliance on flags: YouTube’s current moderation approach is inherently reactive, whether it be to viewer reports or, increasingly, to machine learning algorithms flagging content. Machine learning is a Google strength, without question, but ultimately the company is built on giving people what they want — including bad actors.

Understanding Demand

A core precept of Aggregation Theory is that digital markets are driven by demand, not supply. This, by extension, is why Google and Facebook in particular dominate: in a world of effectively infinite web pages, the search engine that can pick out the proverbial needle in a haystack is king. It follows, then, that a content moderation approach that starts with supply is inherently inferior to one that starts with demand.

This is why it is critical that YouTube lose its pollyannish assumptions: were the company’s moderation approach to start with the assumption of bad actors, then child exploitation would be perhaps the most obvious place to look for problematic videos. Moreover, we know it works: that is exactly what Uziel and BuzzFeed did. If you know what you are looking for, you will, thanks to Google/YouTube’s search capabilities and recommendation algorithms, find it.

And then you can delete it.

Moreover, you can delete it efficiently. Despite my lecture about humanity containing both good and evil, I strongly suspect that the vast majority of those 400 hours uploaded every minute contain unobjectionable — even beautiful, or educational, or entertaining — content. What is the point, then, of even trying to view it all, a Sisyphean task if there ever was one? Starting with the assumption of bad actors and actively looking for their output — using YouTube and Google’s capabilities as aggregators — makes far more sense.
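As a sketch of what “actively looking” might mean in practice, consider a simple breadth-first sweep that starts from known-bad search queries and follows the recommendation graph outward; the `search` and `recommend` functions here are hypothetical stand-ins for illustration, not YouTube’s actual APIs:

```python
# Hypothetical sketch of demand-driven moderation: start from known-bad
# search queries, then walk the recommendation graph outward from each hit,
# rather than waiting for viewer reports that may never come.
from collections import deque

def proactive_sweep(search, recommend, seed_queries, max_depth=2):
    """Return the set of video ids reachable from known-bad queries."""
    flagged = set()
    queue = deque((vid, 0) for q in seed_queries for vid in search(q))
    while queue:
        vid, depth = queue.popleft()
        if vid in flagged or depth > max_depth:
            continue
        flagged.add(vid)            # candidate for human review/removal
        for rec in recommend(vid):  # follow recommendations, one hop deeper
            queue.append((rec, depth + 1))
    return flagged

# Toy stand-ins for the platform's search and recommendation systems
index = {"bad query": ["v1"]}
recs = {"v1": ["v2"], "v2": ["v3"], "v3": ["v4"]}
search = lambda q: index.get(q, [])
recommend = lambda v: recs.get(v, [])

print(sorted(proactive_sweep(search, recommend, ["bad query"])))  # → ['v1', 'v2', 'v3']
```

The point of the sketch is the direction of travel: instead of sampling the undifferentiated firehose of uploads, reviewers spend their limited hours on the small neighborhood of videos the platform’s own search and recommendations already cluster together.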

That, though, means letting go of the convenient innocence inherent to the worldview of most tech executives. I know the feeling: I want to believe that the Internet’s removal of friction and enabling of anyone to publish content is an inherently good thing, because I personally am, like these executives, a massive beneficiary. Reality is far more complicated; accepting reality, though, is always the first step towards policies that actually work.

Facebook, Twitter, and Politics

I would like to end this essay here; alas, most content moderation questions are not as clean-cut as YouTube and child exploitation. That is why I included the Twitter and Facebook excerpts above. Both demonstrate the potential downside of the approach I am recommending for YouTube: being proactive is a sure recipe for false positives.

I am reminded, though, of the famous Walt Whitman quote:

Do I contradict myself?
Very well then I contradict myself,
(I am large, I contain multitudes.)

It is impossible to navigate the Internet — that is, to navigate humanity — without dealing in shades of gray. And the challenges faced by Twitter and Facebook are perfect examples. I, for one, found President Trump’s retweets disgusting and Facebook’s bans unreasonable. On the other hand, who is Twitter to define what the President of the United States can or cannot post? And Facebook, for its part, is at least acting consistently with its policies.

Indeed, these two examples are exactly why I have consistently called on these platforms to focus on being neutral. Taking political sides always sounds good to those who presume the platforms will adopt positions consistent with their own views; it turns out, though, that while most of us may agree that child exploitation is wrong, a great many other questions are unsettled.

That is why I think the line is clearer than it might otherwise appear: these platform companies should actively seek out and remove content that is widely considered objectionable, and they should take a strict hands-off policy to everything that isn’t (while — and I’m looking at you, Twitter — making it much easier to avoid unwanted abuse from people you don’t want to hear from). Moreover, this approach should be accompanied by far more transparency than currently exists: YouTube, Facebook, and Twitter should make explicitly clear what sort of content they are actively policing, and what they are not; I know this is complicated, and policies will change, but that is fine — those changes can be transparent too.

The phrase “With great power comes great responsibility” is commonly attributed to Spider-Man, but it in fact stems from the French Revolution:

Ils doivent envisager qu’une grande responsabilité est la suite inséparable d’un grand pouvoir.

English translation: They must consider that great responsibility follows inseparably from great power.

Documenting why and how these platforms have power has, in many respects, been the ultimate theme of Stratechery over the last four-and-a-half years: this is a call to exercise it, in part, and a request to not, in another. There is a line between what is broadly deemed unacceptable and what is still under dispute; the responsibility of these new powers that be is to actively search out the former, and keep their hands — and algorithms and policies — off the latter. That same French Revolution offers hints at the fates that await should this all go wrong.

[FREE Daily Update] “Light Touch”, Cable, and DSL; The Broadband Tradeoff; The Importance of Antitrust

Today’s Daily Update, which follows up on Pro-Neutrality, Anti-Title II, is free for everyone.

This update:

  • Discusses the two types of regulation (ex-ante and ex-post) and why it’s worth trying ex-post first
  • Explains what I meant by “light touch”, specifically the rollout of cable broadband and how it differed from DSL
  • Explores the tradeoffs necessary to encourage broadband investment, and why they are so critical to society
  • Ties together the need for stronger antitrust enforcement with the risk of letting ISPs be reclassified
  • Pleads for more consideration of trade-offs and less reduction of complex issues to good versus evil

You can read this Daily Update for free here.

Pro-Neutrality, Anti-Title II

Note: This post was previously titled “Why Ajit Pai is Right.” I have changed it to reflect my interest in a substantive debate, not flame-throwing. The article is unchanged (beyond normal edits).

Weirdly, this article about the American broadband market must start in Portugal.

Last week Federal Communications Commission (FCC) Chairman Ajit Pai circulated a draft order that would undo the 2015 reclassification of Internet Service Providers (ISPs) from what are known as “Title I information services” to “Title II telecommunication providers”; Title II of the Telecommunications Act, originally developed to regulate the AT&T monopoly, gives the FCC broad ability to regulate “common carriers” as utilities. Title I, on the other hand, hands off regulatory oversight to the Federal Trade Commission (FTC).

The net effect of this reclassification would be the elimination of FCC rules restricting the ability of ISPs to block or throttle sites or apps or offer paid prioritization of any Internet content. Those rules certainly serve a worthy goal! Who could possibly be in favor of ISPs picking-and-choosing what sites you can visit based on what you are willing to pay? Do we really want to be like Portugal?

There’s just one problem with the tweet I embedded above: Portugal uses Euros, and the language is Portuguese; the tweet above has dollars and English. The image is completely made-up.

Congressman Ro Khanna, who represents Silicon Valley, at least went to the trouble of getting an actual page from a Portuguese carrier:

There are the Euros and Portuguese you would expect, but in this case perhaps it was the language difference that introduced its own issues: Congressman Khanna seems to have missed the text at the top (under ‘+ Smart Net’) that clearly stated that the packages were for an additional 10GB/month of data; in addition to what, you may ask? Simply scroll down the page:

So to recap: one Portugal story is made up, and the other declared that a 10GB family plan with an extra 10GB for a collection of apps of your choosing for €25/month ($30/month) is a future to be feared; given that AT&T charges $65 for a single “Unlimited” plan that downscales video, bans tethering, and slows speeds after 22GB, one wonders if most Americans share that fear.
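
To make the comparison concrete, here is a back-of-the-envelope sketch of the per-gigabyte arithmetic; the exchange rate and the treatment of AT&T’s 22GB soft cap as the effective allotment are my own simplifying assumptions, not figures from either carrier.

```python
# Back-of-the-envelope cost comparison of the two plans described above.
# Assumptions (mine, for illustration): EUR->USD at roughly 1.2, and
# AT&T's 22GB soft cap treated as the effective data allotment.

EUR_TO_USD = 1.2

# Portuguese plan: 10GB family plan plus a 10GB app bundle for EUR 25/month
pt_gb = 10 + 10
pt_usd = 25 * EUR_TO_USD        # roughly $30/month
pt_per_gb = pt_usd / pt_gb      # $1.50/GB

# AT&T "Unlimited": $65/month, speeds slowed after 22GB
att_per_gb = 65 / 22            # roughly $2.95/GB

print(f"Portuguese plan: ${pt_usd:.0f}/mo, ${pt_per_gb:.2f}/GB")
print(f"AT&T Unlimited:  $65/mo, ${att_per_gb:.2f}/GB")
```

On these assumptions the feared Portuguese future is roughly half the per-gigabyte price of the American status quo.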

That, though, is the magic of the term “net neutrality”, the name — coined by the same Tim Wu whose tweet I embedded above — for those FCC rules that justified the original 2015 reclassification of ISPs to utility-like common carriers. Of course ISPs should be neutral — again, who could be against such a thing? What is missing in the ongoing debate, though, is the recognition that, ever since the demise of AOL, they have been. The FCC’s 2015 approach to net neutrality is solving problems as fake as the image in Wu’s tweet; unfortunately the costs are just as real as those in Congressman Khanna’s tweet, but massively more expensive.

The Cost of Regulation

Allow me to state this point plainly: I am absolutely in favor of net neutrality. Indeed, as I explained in 2014’s Netflix and Net Neutrality, I am willing to make trade-offs (specifically data caps) to achieve it. The question at hand, though, is what is the best way to achieve net neutrality? To believe that Chairman Pai is right is not to be against net neutrality; rather, it is to believe that the FCC’s 2015 approach was mistaken.

Any regulatory decision — indeed, any decision period — is about tradeoffs. To choose one course of action is to gain certain benefits and incur certain costs, and it is to forgo the benefits (and costs!) of alternative courses of action. What makes evaluating regulations so difficult is that the benefits are usually readily apparent — the bad behavior or outcome is, hopefully, eliminated — but the costs are much more difficult to quantify. Short-term implementation costs may be relatively straightforward, but future innovations and market entries that don’t happen by virtue of the regulation being in place are far more difficult to calculate. Equally difficult to measure is the inevitable rent-seeking that accompanies regulation, as incumbents find it easier to lobby regulators to foreclose competition instead of winning customers in an open market.

A classic example of this phenomenon is restaurants: who could possibly be against food safety? Then you read about how San Francisco requires 14 permits that take 9 months to issue (plus a separate alcohol permit) and you wonder why anyone opens a restaurant at all (compounded by the fact that already-permitted restaurants have a vested interest in making the process more onerous over time). Multiply that burden by all of the restaurants that never get created and the cost is very large indeed.

This argument certainly applies to net neutrality in a far more profound way: the Internet has been the single most important driver of not just economic growth but overall consumer welfare for the last two decades. Given that all of that dynamism has been achieved with minimal regulatory oversight, the default position of anyone concerned about future growth should be maintaining a light touch. After all, regulation always has a cost far greater than what we can see at the moment it is enacted, and given the importance of the Internet, those costs are massively more consequential than restaurants or just about anything else.

To put it another way, given the stakes, the benefit from regulation must be massive, which is why the “net neutrality” framing is so powerful: I’ll say it again — who can be against net neutrality? Telling stories about speech being restricted or new companies being unable to pay to access customers tap into both the Internet’s clear impact and the foregone opportunity cost I just described — businesses that are never built.

That, though, is exactly the problem: opportunity costs are a reason to not regulate; clear evidence of harm is the reason to do so despite the costs. What is so backwards about this entire debate is that those in favor of regulation are adopting the arguments of anti-regulators — postulating about future harms and foregone opportunities — while pursuing a regulatory approach that is only justified in the face of actual harm.

The fact of the matter is there is no evidence that harm exists in the sort of systematic way that justifies heavily regulating ISPs; the evidence that does exist suggests that current regulatory structures handle bad actors perfectly well. The only future to fear is the one we never discover because we gave up on the approach that has already brought us so far.

ISPs Acting Badly

The most famous example of an ISP acting badly was a company called Madison River Communications, which, in 2005, blocked ports used for Voice over Internet Protocol (VoIP) services, presumably to prop up their own alternative; it remains the canonical violation of net neutrality. It was also a short-lived one: Vonage quickly complained to the FCC, which quickly obtained a consent decree that included a nominal fine and a guarantee from Madison River Communications that they would not block such services again. They did not, and no other ISP has tried to do the same;1 the reasoning is straightforward: foreclosing a service that competes with an ISP’s own service is a clear antitrust violation. In other words, there are already regulations in place to deal with this behavior, and the limited evidence we have suggests they work.

Another popularly cited case is Comcast’s attempted throttling of BitTorrent in 2007. While the protocol has legitimate uses, by far the most popular application was piracy; notably, pirate networks typically required users to upload as much content as they downloaded, imposing significant burdens on Comcast’s network. The FCC ordered Comcast to stop in 2008, but a federal court ruled that the FCC lacked the statutory authority given that ISPs were Title I providers (not Title II).

What is important to note, though, is that even before the Court ruled, Comcast had already removed its restrictions, not for fear of regulatory oversight, but by making technical changes to its network to better handle BitTorrent traffic, lending credence to Comcast’s arguments that the initial restrictions were about network management, not content discrimination (and, to be clear, Comcast erred in not being transparent). It is worth noting, by the way, that BitTorrent users were free-loaders of a sort, using massively more bandwidth than the vast majority of Comcast’s customers; this is a case where what is best for end users is much murkier than net neutrality advocates would have you think. What is pertinent, though, is that it happened only once.

Perhaps the most misrepresented episode, though, is MetroPCS. Net neutrality advocates claim that the discount carrier (since bought by T-Mobile) “blocked all video sites except for YouTube”; the reality is that in 2011 MetroPCS unveiled a new pricing plan: $40 for unlimited webpages plus YouTube, $50 for several other additional services, and $60 for unrestricted data. In other words, it wasn’t a net neutrality issue at all: it was an early prototype of what is known as “zero-rating.”

T-Mobile, Zero-Rating, and Competition

Zero-rating means that a particular service does not count against a data cap; widely used all over the world, the practice was popularized in the United States by T-Mobile.

Back in 2011 T-Mobile was a distant fourth-place in the U.S. carrier market, with limited spectrum and a shrinking customer base. That year the company tried to sell itself to AT&T, the second-largest carrier, but that deal was (rightly) blocked by the U.S. Department of Justice for competition reasons. That’s when something amazing happened: T-Mobile decided to actually compete.

The company launched its “Un-carrier” campaign, featuring contract-free pricing with phone financing, data carryover, and, pertinent to this article, zero-rating on a host of music and video services (the video was downsampled). Customers loved it, leading T-Mobile to grow rapidly, soon overtaking Sprint to become the third-largest carrier in the United States. More importantly, T-Mobile forced the other national carriers to respond: now everyone has phone financing instead of lock-in subsidies, monthly plan prices are significantly lower for everyone, and even AT&T and Verizon, the two largest carriers by far, have returned to unlimited data plans.2

Again, zero-rating is not explicitly a net-neutrality issue: T-Mobile treats all data the same, some data just doesn’t cost money. Net neutrality advocates, though, have railed against zero-rating as violating the “spirit” of net neutrality, and shortly after that 2015 reclassification, the FCC launched an investigation into the practice. I’m sympathetic to the argument; I wrote in 2015:

The problem, though, is that regulations, by virtue of being words on a page, always contain loopholes that violate the “spirit” of the rules and more often than not end up favoring the incumbents, and that is precisely what is happening in the broadband war. Fast lanes would likely have only had an effect on the margins, and consumers would have been only affected indirectly; zero-rated data, though, appeals to consumer pocketbooks directly in a way that massively benefits whatever Internet companies are signed up to play ball with the ISP…

The FCC has signaled a hesitation to do anything about zero-rating plans given the fact they benefit the consumer, at least in the short term. After all, who doesn’t like free? The problem, though, is the effect on competition — particularly the Netflix and Spotify competitors who haven’t yet been born.

What has happened to the U.S. mobile industry has certainly made me reconsider: if competition and the positive outcomes it has for customers is the goal, then it is difficult to view T-Mobile’s approach as anything but a positive.

The Startups of the Future

Still, what of those companies that can’t afford to pay for zero rating — the future startups for which net neutrality advocates are willing to risk the costs of heavy-handed regulations? In fact, as I noted in that excerpt, zero rating is arguably a bigger threat to would-be startups than fast lanes, yet T-Mobile-style zero rating isn’t even covered by those regulations! This is part of the problem of regulating future harm: sometimes that harm isn’t what you expect, and you have regulated and borne the associated costs in vain.

That aside, the idea that ISPs would be able to successfully block sites and apps that don’t pay for delivery is flawed:

  • First, as noted above, there is no evidence of this happening on a wide-scale.
  • Second, should an ISP try, an increasing number of customers do have alternatives (not enough — more on this in a moment).
  • Third, if the furor over net neutrality has demonstrated anything, it is that the media is ready-and-willing to raise a ruckus if ISPs even attempt to do something untoward; relatedly, the common retort that ISPs have only behaved well to-date because they fear regulation is not an argument for regulation — it is an acknowledgment that ISPs can and will self-regulate.

Most importantly, the idea makes zero economic sense. Remember that ISPs bear massive fixed costs, which means they are motivated to maximize the number of end users. That means not cutting off sites and apps those customers want. Moreover, even in the worst case scenario where ISPs did decide to charge Google and Netflix and whatnot, they could price discriminate and charge the Netflix competitor nothing at all! That would be a far superior financial outcome to a “take-it-or-leave-it” price that would foreclose all future startups (and again, there is zero evidence that this scenario has happened or will happen at all).

What is worth noting, though, is that the current regulations do foreclose startups that rely on low latency levels that might be offered by ISPs at a premium. The most commonly cited example is remote medical care, but the nature of future innovation is that we don’t know what sort of services might be created with something like paid prioritization.

I’d also note that companies like Google and Netflix already have massive advantages along these lines: Netflix places its content on servers within ISPs, and Google has an entire worldwide private network to ensure its results are milliseconds faster than they might be otherwise. The startups that challenge them will do so by being different, which means keeping open the number of possible ways to differentiate is a good thing.

Competition and Neutrality

To recap, given that:

  • Regulation incurs significant costs, both in terms of foregone opportunities and regulatory capture
  • There is no evidence of systemic abuse by ISPs governed under Title I, which means there are no immediate benefits to regulation, only theoretical ones
  • There is evidence that pre-existing regulation and antitrust law, along with media pressure, are effective at policing bad behavior

I believe that Ajit Pai is right to return regulation to the same light touch under which the Internet developed and broadband grew for two decades. I am amenable to Congress passing a law specifically banning ISPs from blocking content, but believe that for everything else, including paid prioritization, we are better off taking a “wait-and-see” approach; after all, we are just as likely to “see” new products and services as we are to see startup foreclosure. And, to be sure, this is an issue that can — and should, if the evidence changes — be revisited.

What is worth far more attention is the state of competition in broadband generally: ISPs have lobbied for limits on public broadband in 25 states, and many local governments make it prohibitively expensive for new ISPs to challenge incumbents (and Title II requirements don’t help either).3 Increasing competition would not only have the same positive outcomes for customers that T-Mobile demonstrated, but would solve the (mostly theoretical) net neutrality issue at the same time: the greatest check on an ISP is the likelihood of an unsatisfied customer leaving.

And, I’d add, if neutrality and foreclosed competition are the issue net neutrality proponents say they are, then Google and Facebook are even bigger concerns than ISPs: both are super-aggregators with unprecedented power and the deepest moats ever seen in technology, and an increasing willingness to not be neutral.

It’s worth stating one last time: I believe deeply in neutrality and in fostering innovation. I think neutrality is the only way the Internet can function at scale, and innovation is how we will survive the Internet-driven transformation that is only just beginning. It is that belief, though, that compels me to push back against this specific regulation, no matter how much I agree with its associated catchphrase.

NOTE: I have written a follow-up to this piece that explains why I’m not making a “Market Will Fix It” argument, the trade-offs in terms of bandwidth investment, and why antitrust enforcement is so critical. You can read it here.

  1. AT&T and Apple did have an agreement to limit VoIP applications on the iPhone; it is difficult to separate Apple’s motivations from AT&T’s, as the former was leveraging the latter to break carrier control — another outcome that was better for customers in the long run even though it meant foreclosure in the short run
  2. Subject to restrictions similar to those I listed above
  3. Another option is local loop unbundling, which I have discussed in a Daily Update here


Stitch Fix and the Senate

There was an interesting line of commentary around the news that Stitch Fix, the personalized clothing e-commerce company, was going to IPO: these numbers are incredible! Take this article in TechCrunch as an example (emphasis mine):

Stitch Fix has filed to go public, finally revealing the financial guts of the startup which will be a test of modern e-commerce businesses that are looking to hit the market — and the numbers look pretty great!

Let’s start off really quick with profits: aside from the last two quarters, Stitch Fix posted a six-quarter streak of positive net income. We talk a lot about companies that are planning to go public that show pretty consistent (or even increasing) losses, but Stitch Fix looks like a company that has actually managed to build a healthy business. The company finally lost money in the last two quarters, but even then, its losses decreased quarter-over-quarter — with the company only losing around $4.5 million in the second quarter this year.

Compare this to the TechCrunch article written when Box, a company that ultimately IPO’d at a similar market-cap as Stitch Fix will (~$2 billion), first filed its IPO:

Box has long been rumored to have quickly growing revenues and large losses, which has proven to be the case. For the full-year period that ended January 2014, Box’s revenues grew to $124 million, up from $58.8 million the year prior. However, the company’s net loss also expanded in the period, with Box posting losses of $168 million for the full-year period that ended January 2014, more than its total top line for the period. In the period prior, Box lost a more modest $112 million.

What is driving Box’s yawning losses? Sales and marketing. The company’s line item for those expenses expanded from $99.2 million for the year ending January 2013, to $171 million for the year ending January 31, 2014. That was the lion’s share of Box’s $100 million increase in operating costs during the period. Or, put more simply, Box spent more dollars on selling its products in the year than it brought in revenue during the period. This could indicate customer churn, or merely a tough market for cloud products.

In fact, both explanations were completely wrong: Box’s losses were due to the company investing in future growth; a detailed look at cohorts revealed that Box was increasing profitability over time because churn was negative (because existing customers were increasing spend by more than the revenue lost by those leaving), and the share of cloud spending amongst enterprise broadly is only going in one direction — up. To that end, investing more in future growth — even though it made the company unprofitable in the short-term — was an obviously correct decision.
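
Negative churn can be counterintuitive, so here is a toy cohort illustration; the retention and expansion rates are invented for the example, not Box’s actual figures.

```python
# Toy illustration of negative (net-negative) revenue churn:
# a cohort loses some customers, but the survivors expand their spend
# by more than the revenue lost, so cohort revenue grows every year.

cohort_revenue = 100.0   # year-one revenue from a single customer cohort
gross_churn = 0.10       # 10% of revenue lost to departing customers
expansion = 0.25         # 25% revenue growth from remaining customers

for year in range(2, 5):
    cohort_revenue = cohort_revenue * (1 - gross_churn) * (1 + expansion)
    print(f"Year {year}: {cohort_revenue:.1f}")

# Net revenue retention is (1 - 0.10) * (1 + 0.25) = 112.5% per year:
# churn is effectively negative, and each cohort becomes more valuable
# over time, which is why up-front sales and marketing spend pays off.
```

With that dynamic in place, every dollar spent acquiring a customer buys a revenue stream that grows rather than decays, which is what made Box’s losses an investment rather than a red flag.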

Stitch Fix Concerns

To that end, I find Stitch Fix’s numbers more concerning than I did Box’s:

  • First, the average revenue per client has decreased over time: according to the numbers provided in Stitch Fix’s S-1, the average client in 2016 generated $335 in revenue in the first six months, and $489 in revenue for the first 12 months; there is not a comparable set of numbers for earlier cohorts (itself a red flag), but the average 2015 client generated $718 in revenue over two years. To the extent these cohorts can be compared, that means $335 in the first six months, $154 in the second six months, and an average of $115 in the third and fourth six-month periods.
  • Second, these clients are increasingly expensive to acquire. Stitch Fix increased its ‘Selling, General and Administrative Expenses’ by 56% last year, but revenue increased by only 34%; advertising spend specifically increased from $25.0 million to $70.5 million (182%), vastly outpacing revenue growth.
  • Third, revenue growth is slowing substantially, despite the fact Stitch Fix has expanded its product offerings, both within its core women’s market as well as expansions to Petite, Maternity, Men’s, and Plus apparel. I noted last year’s revenue growth was 34%; the previous year’s growth was 113%, and the year before that 368%.
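
The per-period figures in the first bullet can be derived from the cumulative S-1 numbers like so; this is my own arithmetic, with the caveat noted above that the 2015 and 2016 cohorts are not perfectly comparable.

```python
# Deriving per-period revenue from the cumulative figures cited above.
rev_6mo = 335    # 2016 cohort: cumulative revenue, months 0-6
rev_12mo = 489   # 2016 cohort: cumulative revenue, months 0-12
rev_24mo = 718   # 2015 cohort: cumulative revenue, months 0-24

# Revenue earned in the second six months of year one
second_half = rev_12mo - rev_6mo            # $154

# Months 13-24 span two six-month periods; averaging them assumes the
# 2015 cohort behaved like the 2016 cohort in its first year.
later_period_avg = (rev_24mo - rev_12mo) / 2   # about $115 per period

print(second_half, later_period_avg)
```

The pattern is the concerning part: each successive six-month period generates markedly less revenue than the one before it, which is the opposite of the negative-churn dynamic that justified Box’s spending.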

The problem for Stitch Fix is the same bugaboo encountered by the majority of consumer companies: the lack of a scalable advantage in customer acquisition costs. I wrote about this earlier this year in the context of Uber:

Uber’s strength — and its sky-high valuation — comes from the company’s ability to acquire customers cheaply thanks to a combination of the service’s usefulness and the effects of aggregation theory: as the company acquires users (and as users increase their usage) Uber attracts more drivers, which makes the service better, which makes it easier to acquire marginal users (not by lowering the price but rather by offering a better service for the same price). The single biggest factor that differentiates multi-billion dollar companies is a scalable advantage in customer acquisition costs; Uber has that.

On the other hand, it seems likely Stitch Fix does not, even though the company did argue in its S-1 that it benefited from network effects:

We believe we are the only company that has successfully combined rich client data with detailed merchandise data to provide a personalized shopping experience for consumers. Clients directly provide us with meaningful data about themselves, such as style, size, fit and price preferences, when they complete their initial style profile and provide additional rich data about themselves and the merchandise they receive through the feedback they provide after receiving a Fix. Our clients are motivated to provide us with this detailed information because they recognize that doing so will result in a more personalized and successful experience. This perpetual feedback loop drives important network effects, as our client-provided data informs not only our personalization capabilities for the specific client, but also helps us better serve other clients.

This may be true — it makes sense that it would be — but while it may help Stitch Fix better serve new customers, it is not clear how it helps the company acquire said customers in the first place. Instead, like most consumer companies, it seems likely that Stitch Fix leveraged word-of-mouth to own its core market of women who value convenience in shopping for clothing, but struggled to break beyond that segment — and to the extent it did, found consumers who spend less and churn more. In other words, unlike successful aggregators, the improvement generated by the network effect, such as it was, was less than the increase in acquisition cost.

The key to truly break-out consumer companies is having that relationship reversed: the network should generate an improvement in benefits that exceeds the cost of acquiring customers, fueling a virtuous cycle.

Stitch Fix’s Success

That said, I fully expect Stitch Fix’s IPO to be successful: how could it not be? In stark contrast to many would-be aggregators, Stitch Fix has taken a shockingly small amount of venture capital — only $42.5 million. Instead the company has been profitable — on an absolute basis in 2015-2016, and quite clearly on a unit basis (including acquisition costs) throughout — and cash flow positive. That increased marketing expenditure is being paid by current customers, not venture capitalists. To that end, a $2 billion IPO would be a massive win for Stitch Fix’s investors and Katrina Lake, the founder.

Moreover, while Stitch Fix’s growth may be slowing, that is by no means fatal: the company is a perfectly valid business to own exactly as it is. Indeed, I am in fact deeply impressed by Stitch Fix: it seems quite clear that early on Lake realized that the company was not an aggregator, which meant building a business, well, normally. That means making real profits, particularly on a unit basis. Even then, though, the company was clearly worthy of venture capital: Baseline Ventures and Benchmark will see a 10x+ return.

To that end, Stitch Fix is a more important company than it may seem at first glance: it proves there is a way to build a venture capital-backed company that is not an aggregator, but still a generator of outsized returns. The keys, though, are positive unit economics from the get-go, and careful attention to profitability. The reason this matters is that these sorts of companies are by far the more likely to be built: Google and Facebook are dominating digital advertising, Amazon is dominating undifferentiated e-commerce, Microsoft and Amazon are dominating enterprise, and Apple is dominating devices. To compete with any of them is an incredibly difficult proposition; better to build a real differentiated business from the get-go, and that is exactly what Stitch Fix did.

RSUs, Options, and Taxes

The other winners in Stitch Fix’s IPO are all of its employees who hold stock options and restricted stock units (RSUs): they too have benefited from the company’s restraint in raising money; those options and RSUs are priced significantly below the company’s IPO price. And, when that IPO happens later this week, said employees will benefit tremendously — rightly alongside the IRS. When the IPO happens those stock options and RSUs will become taxable, and the majority of Stitch Fix employees will have to sell some portion of their holdings to cover their bill. This is entirely reasonable: they will have earned their reward for building Stitch Fix into the impressive company it has become, and they will pay taxes on that reward.

What would not have been reasonable, though, would have been to pay those taxes before the IPO. After all, when Stitch Fix started it was not at all certain the company would reach this milestone: there are a whole host of companies that raised far more than Stitch Fix’s $42.5 million that ended up going out of business, or being sold off as an acquihire such that employees earned nothing.

Indeed, I suspect that startup employees are, on balance, terribly underpaid: most take on jobs with lower salaries relative to established companies simply for the chance of making an outsized return should the company they work for IPO; the odds of that happening mean the expected value of the options and RSUs they receive is quite low.

To that end, it is tempting to be skeptical about venture capital protestations against a Senate tax provision that would tax stock options and RSUs at the moment they vest; given the uncertain value, any startup seeking to attract employees would have to significantly increase its cash compensation, which would be better for most employees (and thus worse for most venture capitalists) — in the short term, anyways.

The Startup Ecosystem

The problem with this point of view is that the startup employee frame is much too narrow: leaving aside the fact that anyone qualified to work at a startup is already far better off than nearly everyone on earth, the broader issue is that the scope for building successful venture-backed companies is narrowing.

Stitch Fix is a perfect example: I just explained that the company has uncertain growth prospects, but is still a big success thanks in large part to its disciplined approach to the bottom line. That should be a model for more companies: quickly determine if your business can be an aggregator with the scalable acquisition cost advantages that come with it, and if not, build a sustainable business sooner rather than later. That will allow everyone to benefit: founders, venture capitalists, employees, and most importantly, consumers.

A disciplined approach to the bottom line, though, means taking full advantage of a start-up’s number one recruiting tool: stock options and RSUs, in lieu of fully competitive salaries. Had Stitch Fix had to pay its employees in cash the company would have likely had to raise more money, reducing the likelihood of a successful outcome for everyone — including the IRS.

The downside, though, is even more acute for the companies that might seek to become aggregators themselves and so challenge companies like Google, Facebook, or Amazon directly. Any such would-be disruptor — and keep in mind, disruption is the only means by which these companies might ever be threatened — would need to raise huge amounts of capital, likely over an extended period of time. Moreover, the odds of success would be commensurately lower, making it even more likely associated stock options and RSUs might be worthless. To that end, taxing said options and RSUs would make start-up jobs even less attractive, and any sort of alternative — including increased cash compensation — would not only reduce the likelihood of startup success but also deny employees the chance to share in a successful outcome.

Tech’s Constituencies

Like most commentators, I am often guilty of lumping all of technology into one broad bucket; that makes sense when considering the impact of technology on society broadly. This tax bill, though, is a reminder that tech has two distinct constituencies with concerns that don’t always align:

  • Incumbents have successful business models that throw off oodles of cash; their concern is about protecting those models, and they will spend to do so
  • Venture capitalists and founders are seeking to build new businesses that, more often than not, threaten those incumbents; their edge is the opportunity to build businesses perfectly aligned to the problem they are seeking to solve

Tech employees get different benefits from each camp: the former provides high salaries and great perks; you can have a very nice life working for Facebook or Google. Startups, on the other hand, offer a chance to own a (small) piece of something substantial, at the cost of short-term salary — and that is worth preserving. Not only is it important to offer an accessible route up the economic ladder, former startup employees are a key part of the Silicon Valley ecosystem, often providing the initial funding for other new companies.

To that end, what is critical to understand about this proposed tax change is that incumbent companies won’t be hurt at all: sure, they may have to change their compensation to be more cash-rich and RSU-light, but cash isn’t really a constraint on their business. Higher salaries are a small price to pay if it means startups that might challenge them are handicapped; small wonder none of the big companies are lobbying against this provision.

Towards a Startup Lobby

This isn’t the first time the needs of the big tech companies have diverged from those of startups: net neutrality, for example, is much more important if you don’t have the means to pay to play. The same thing applies to tax laws more broadly, including corporate tax reform and offshore holdings.

Perhaps the clearest example, though, is antitrust: the companies that are hurt the most by the dominance of Google, Facebook, and Amazon are not analog publishers or retailers, but more direct competitors for digital advertising or e-commerce — mostly startups. Nearly all of the lobbying about this issue, though, is funded by the incumbents, for all of the reasons noted above: they have cash to burn, and business models to protect.

To that end it might behoove the startup community — and to be more specific, venture capitalists — to start building a counterweight. I am optimistic this Senate provision will ultimately be stripped from the proposed tax bill, but that the very foundation of startup compensation was so suddenly threatened should serve as a wake-up call that depending on Google or Apple largesse to represent the tech industry is ultimately self-defeating.

Apple at Its Best

The history of Apple being doomed doesn’t necessarily repeat, but it does rhyme.

Take the latest installment, from Professor Mohanbir Sawhney at the Kellogg School of Management (one of my former professors, incidentally):

Have we reached peak phone? That is, does the new iPhone X represent a plateau for hardware innovation in the smartphone product category? I would argue that we are indeed standing on the summit of peak “phone as hardware”: While Apple’s newest iPhone offers some impressive hardware features, it does not represent the beginning of the next 10 years of the smartphone, as Apple claims…

As we have seen, when the vector of differentiation shifts, market leaders tend to fall by the wayside. In the brave new world of AI, Google and Amazon have the clear edge over Apple. Consider Google’s Pixel 2 phone: Driven by AI-based technology, it offers unprecedented photo-enhancement features and deeper hardware-software integration, such as real-time language translation when used with Google’s special headphones…The shifting vector of differentiation to AI and agents does not bode well for Apple…

Sheets of glass are simply no longer the most fertile ground for innovation. That means Apple urgently needs to shift its focus and investment to AI-driven technologies, as part of a broader effort to create the kind of ecosystem Amazon and Google are building quickly. However, Apple is falling behind in the AI race, as it remains a hardware company at its core and it has not embraced the open-source and collaborative approach that Google and Amazon are pioneering in AI.

It is an entirely reasonable argument, particularly that last line: I myself have argued that Apple needs to rethink its organizational structure in order to build more competitive services. If the last ten years have shown us anything, though, it is that discounting truly great hardware — and the sort of company necessary to deliver that — is the surest way to be right in theory and wrong in reality.

The Samsung Doom

When Stratechery started in 2013, Samsung was ascendant, and the doomsayers were out in force. The arguments were, in broad strokes, the same: hardware innovation was over, and Android’s good enough features, broader hardware base, and lower prices would soon mean that the iPhone would go the way of the Mac relative to Windows.1

At that time the flagship iPhone was the iPhone 5; Apple was still only making one iPhone a year. That phone — the one that, many claimed, was the peak of hardware innovation — featured a larger (relative to previous iPhones) 4-inch LCD screen, an 8MP rear camera and 1.2MP front camera, and Apple’s A6 32-bit system-on-a-chip, the first from the company that was not simply a variation on a licensed ARM design. To be sure, the relatively small screen size was a readily apparent problem: one of my first articles argued that Samsung’s focus on larger screens was a meaningful advantage that Apple should copy.

Obviously Apple eventually did just that with the iPhones 6 and 6 Plus, but screen size is hardly the only thing that changed: later that year Apple introduced the iPhone 5S, which included the A7 chip that blew away the industry by going 64-bit years ahead of schedule; Apple has enjoyed a massive performance advantage relative to the rest of the industry ever since. The iPhone 5S also included Touch ID, the first biometric authentication method that worked flawlessly at scale (and enabled Apple Pay), the usual camera improvements, as well as a new ‘M7’ motion chip that laid the groundwork for Apple’s fitness focus (and the Apple Watch).

And, even as critics insisted that the pricing of the iPhone 5C, launched alongside the 5S, meant the company was going to be disrupted, the iPhone 5S sold in record numbers — just like every previous iPhone had.

The iPhone X

I’m spoiled, I know: gifted with the rationalization of being a technology analyst, I buy an iPhone every year. Even so, I thought the iPhone 7 was a solid upgrade: it was noticeably faster, had an excellent screen, and the camera was great; small wonder it sold in record numbers everywhere but China.2 What it lacked, though — and I didn’t fully appreciate this until I got an iPhone X — was delight:

Face ID isn’t perfect: there are a lot of edge cases where having Touch ID would be preferable. By its fourth iteration in the iPhone 7, Touch ID was utterly dependable and, like the best sort of technology, barely noticeable.

Face ID takes this a step further: while it takes a bit of time to change ingrained habits, I’m already at the point where I simply pick up the phone and swipe up without much thought;3 authenticating in apps like 1Password is even more of a revelation — you don’t have to actually do anything.

In these instances the iPhone X is reaching the very pinnacle of computing: doing a necessary job, in this case security, better than humans can.4 The fact that this case is security is particularly noteworthy: it has long been taken as a matter of fact that there is an inescapable trade-off between security and ease-of-use; Touch ID made it far easier to have effective security for the vast majority of situations, and Face ID makes it invisible.

The trick Apple pulled, though, was going beyond that: the first time I saw notifications be hidden and then revealed (as in the GIF above) through simply a glance produced the sort of surprise-and-delight that has traditionally characterized Apple’s best products. And, to be sure, surprise-and-delight is particularly important to the iPhone X: so much is new, particularly in terms of the interaction model, that frustrations are inevitable; in that sense, Apple’s attempt to analogize the iPhone X to the original iPhone is more about contrasts than comparisons.

The Original iPhone and Overshooting

While the iPod wheel may be the most memorable hardware interface in modern computing, and the mouse the most important, touch is, for obvious reasons, the most natural. That, though, only elevates the original iPhone’s single button: everything about touch interfaces needed to be invented, discovered, and figured out; it was that button that made it accessible to everyone — when in trouble, hit the button to escape.

Over the years that button became laden with ever more functionality: app-switching, Siri, Touch ID, reachability. It was the physical manifestation of another one of those seemingly intractable trade-offs: functionality and ease-of-use. Sure, the iPhone 5 I referenced earlier was massively more capable than the original iPhone, and the iPhone X vastly more capable still, but in fact an argument based on specifications makes the critics’ point: the more technology that gets ladled on top, the more inaccessible it is to normal users. Clayton Christensen, in The Innovator’s Dilemma, called this “overshooting”:

Disruptive technologies, though they initially can only be used in small markets remote from the mainstream, are disruptive because they subsequently can become fully performance-competitive within the mainstream market against established products. This happens because the pace of technological progress in products frequently exceeds the rate of performance improvement that mainstream customers demand or can absorb. As a consequence, products whose features and functionality closely match market needs today often follow a trajectory of improvement by which they overshoot mainstream market needs tomorrow. And products that seriously underperform today, relative to customer expectations in mainstream markets, may become directly performance-competitive tomorrow.

This was the reason all of those iPhone critics were so certain that Apple’s days were numbered. “Good-enough” Android phones, sold for far less than an iPhone, would surely result in low-end disruption. Here’s Christensen in an interview with Horace Dediu:

The transition from proprietary architecture to open modular architecture just happens over and over again. It happened in the personal computer. Although it didn’t kill Apple’s computer business, it relegated Apple to the status of a minor player. The iPod is a proprietary integrated product, although that is becoming quite modular. You can download your music from Amazon as easily as you can from iTunes. You also see modularity organized around the Android operating system that is growing much faster than the iPhone. So I worry that modularity will do its work on Apple.

Shortly after the iPhone 5S/5C launch, I made the case that Christensen was wrong:

Modularization incurs costs in the design and experience of using products that cannot be overcome, yet cannot be measured. Business buyers — and the analysts who study them — simply ignore them, but consumers don’t. Some consumers inherently know and value quality, look-and-feel, and attention to detail, and are willing to pay a premium that far exceeds the financial costs of being vertically integrated…

Not all consumers value — or can afford — what Apple has to offer. A large majority, in fact. But the idea that Apple is going to start losing consumers because Android is “good enough” and cheaper to boot flies in the face of consumer behavior in every other market. Moreover, in absolute terms, the iPhone is significantly less expensive relative to a good-enough Android phone than BMW is to Toyota, or a high-end bag to one you’d find in a department store…

Apple is — and, for at least the last 15 years, has been — focused exactly on the blind spot in the theory of low-end disruption: differentiation based on design which, while it can’t be measured, can certainly be felt by consumers who are both buyers and users.

Needless to say, in 2013 we weren’t anywhere close to peak iPhone: in the quarter I wrote that article — 4Q 2013, according to Apple’s fiscal calendar, the weakest quarter of the year — the company sold 34 million iPhones; the next quarter Apple booked $58 billion in revenue. We are now four years on, and last quarter — 4Q 2017, again according to Apple’s fiscal calendar — the company sold 47 million iPhones; next quarter Apple is forecasting between $84 and $87 billion in revenue.

More importantly, the experience of using an iPhone X, at least in these first few days, has that feeling: consideration, invention, and yes, as the company is fond of noting, the integration of hardware and software. Look again at that GIF above: not only does Face ID depend on deep integration between the camera system, system-on-a-chip, and operating system, but the small touch of displaying notifications only when the right person is looking at them depends on one company doing everything. That still matters.

Moreover, it’s worth noting that the iPhone X is launching into a far different market than the original iPhone did: touch is not new, but rather the familiar; changing many button paradigms into gestures certainly presents a steeper learning curve for first-time smartphone users, but for how many users will the iPhone X be their first smartphone?

Artificial Intelligence and New Market Disruption

Still, I noted that while Apple doom-sayers rhyme, they don’t repeat. The past four years may have thoroughly validated my critique of low-end disruption and the iPhone, but there is another kind of disruption: new market disruption. Christensen explains the difference in The Innovator’s Solution:

Different value networks can emerge at differing distances from the original one along the third dimension of the disruption diagram. In the following discussion, we will refer to disruptions that create a new value network on the third axis as new-market disruptions. In contrast, low-end disruptions are those that attack the least-profitable and most overserved customers at the low end of the original value network.

Christensen ultimately concluded that the iPhone was a new market disruptor of the PC: it was seemingly less capable yet simpler to use, and thus attracted non-consumption, and eventually gained sufficient capabilities to attract PC users as well. This is certainly true as far as it goes;5 indeed, there are an order of magnitude more smartphone users than there ever were PC users.

And, to that end, Sawhney’s argument is in this way different from the doomsayers of old: it’s not that Apple will be disrupted by “good-enough” cheap Android, but rather because a new vector is emerging — artificial intelligence:

The vector of differentiation is shifting yet again, away from hardware altogether. We are on the verge of a major shift in the phone and device space, from hardware as the focus to artificial intelligence (AI) and AI-based software and agents.

This means nothing short of redefinition of the personal electronics that matter most to us. As AI-driven phones like Google’s Pixel 2 and virtual agents like Amazon Echo proliferate, smart devices that understand and interact with us and offer a virtual and/or augmented reality will become a larger part of our environment. Today’s smartphones will likely recede into the background.

Makes perfect sense, but for one critical error: consumer usage is not, at least in this case, a zero sum game. This is the mistake many make when thinking about the way in which orthogonal businesses compete:

The presumption is that the usage of Technology B necessitates no longer using Technology A; it follows, then, that once Technology B becomes more important, Technology A is doomed.

In fact, though, most paradigm shifts are layered on top of what came before. The Internet was used on PCs, social networks are used alongside search engines. Granted, as I just noted, smartphones are increasingly replacing PCs, but even then most use is additive, not substitutive. In other words, there is no reason to expect that the arrival of artificial intelligence means that people will no longer care about what smartphone they use. Sure, the latter may “recede into the background” in the minds of pundits, but they will still be in consumers’ pockets for a long time to come.

There’s a second error, though, that flows from this presumption of zero-summedness: it ignores the near-term business imperatives of the various parties. Google is the best example: were the company to restrict its services to its own smartphone platform the company would be financially decimated. The most attractive customers to Google’s advertisers are on the iPhone — just look at how much Google is willing to pay to acquire them6 — and while Google could in theory convince them to switch by keeping its superior services exclusive, in reality such an approach is untenable. In other words, Google is heavily incentivized to preserve the iPhone as a competitive platform in terms of Google’s own services; granted, Android is still better in terms of easy access and defaults, but the advantage is far smaller than it could be.

Apple, meanwhile, is busy building competing services of its own, and while it’s easy — and correct — to argue that they aren’t really competitive with Google’s, that doesn’t really matter because competition isn’t happening in a vacuum. Rather, Apple not only enjoys the switching-cost advantage inherent to all incumbents, but also is, as the iPhone X shows, maintaining if not extending the user experience advantage that comes from its integrated model. That, by extension, means that Apple’s services need only be “good enough” — there’s that phrase! — to let the company’s other strengths shine.

This results in a far different picture: the “hurdle rate” for meaningful Android adoption by Apple’s customer base is far greater than the doom-sayers would have you think.

Apple’s Durable Advantage

I am no Apple Pollyanna: I first made the argument years ago that the ultimate Apple bear case is the disappearance of hardware that you touch (which remains the case); I also complimented the company for having the courage to push towards that future.

Indeed, Apple’s aggressiveness in areas like wearables and, at least from a software perspective, augmented reality, suggests the company will press its hardware advantage to get to the future before its rivals, establishing a beachhead that will be that much more difficult for superior services offerings to dislodge. Moreover, there is evidence that Google sees the value in Apple’s approach: the company’s push into hardware may in part be an attempt to find a new business model, but establishing the capabilities to compete in hardware beyond the smartphone is surely a goal as well.

What is fascinating to consider is just how far Apple might go if it decided to do nothing but hardware and its associated software: if Google Assistant could be the iPhone default, why would any iPhone user even give a second thought to Android? I certainly don’t expect this to happen, but that giving away control of what seems so important might, in fact, secure Apple’s future more strongly than anything else, is the most powerful signal possible that the integration of hardware and software — and the organizational knowledge, structure, and incentives that come from that being a company’s primary business model — remains a far more durable competitive advantage than many theorists would have you think.

  1. Which, for the record, is a misreading of history []
  2. Speaking of China, the point of that article was that hardware differentiation mattered more there than anywhere else; I expect the iPhone X to sell very well indeed []
  3. Many of those edge cases arise when you are not picking up the phone and thus triggering wake-on-rise; the car, for example, or the desk []
  4. To be clear, this is all relative; in fact, Face ID is arguably even less secure than Touch ID. Sure, 1 in a million chances of a match are better than 1 in 50,000 if the sample is fully random, but given that close siblings, for example, can overcome it in theory is a reminder that relevant samples are not always random. The broader point, though, is that security people use is better than superior solutions they do not. []
  5. Christensen never did explain why the iPhone defeated Nokia et al, who he originally expected to overcome the iPhone; I put forward my theory in Obsoletive []
  6. What I wrote in this Daily Update about Google’s acquisition costs almost certainly explains the bump in Apple’s services revenue last quarter; more on this in tomorrow’s Daily Update []

Tech Goes to Washington

There was a striking moment during the Senate hearing about Facebook, Twitter, and Google’s role in the 2016 U.S. election, that suggested the entire endeavor would be a bit of a farce, marked by out-of-tech Senators oblivious to how the Internet actually works. The three companies’ home-state Senator, Dianne Feinstein, had just finished asking about the ability to target custom audiences (including a request that Sean Edgett, Twitter’s acting general counsel, explain what ‘impressions’ were), and handed the floor to Nebraska Senator Ben Sasse:

Did you catch Feinstein in the background asking “Did he say 330 million?” with surprise in her voice? What might she have thought had it been noted that Facebook has 2 billion users! At that moment it was hard to see this hearing amounting to anything; the next Senator, Dick Durbin of Illinois, asked why Facebook didn’t, and I quote, “hold the phone” when a Russian intelligence agency took out the ads. A few Senators later Richard Blumenthal demanded Twitter determine how many people declined to vote after seeing tweets suggesting voters could text their choice, and that Facebook reveal who may have taught the Russian intelligence agency how to do targeting; the answers to both requests are, quite obviously, unknowable by the companies in question.

Meanwhile, the tech companies were declaring at every possible opportunity that they understood that the Russian problem was serious, and that they were committed to fixing it. “We do believe these tools are powerful, and yet we have a responsibility to make sure they’re not used to inflame division,” said Colin Stretch, Facebook’s general counsel. He later stated, “We want our ad tools to be used for political discourse, certainly. But we do not want our ad tools to be used to inflame and divide.” It seemed like a concerted PR effort designed to soothe Senators animated more by scoring political points than by actually understanding the issues at hand: yes, the problem is serious, yes, we are committed to fixing it, and of course, it is so complicated that only we can.

What made the hearing worth watching, though, were three lines of questioning that blew this position apart.

Senator John Kennedy on Facebook’s Power

The single most compelling line of questioning came from Louisiana junior Senator John Kennedy;1 first, he exposed the companies’ implied claim that they could actually fix the problem as a sham:

This is the exact issue I discussed in The Super Aggregators and the Russians:

Super-aggregators not only have zero transaction costs when it comes to users and content, but also when it comes to making money. This is at the very core of why Google and Facebook are so much more powerful than any of the other purely information-centric networks. The vast majority of advertisers on both networks never deal with a human (and if they do, it’s in customer support functionality, not sales and account management): they simply use the self-serve ad products like the one pictured above (or a more comprehensive tool built on the companies’ self-serve API).

I added up the numbers in Trustworthy Networking, estimating that Facebook served 276 million unique ads per quarter, and my entire point was the same as Kennedy’s: there is no way that Facebook could ever review every ad, much less investigate who is behind them, without completely ruining their revenue model.
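The scale problem Kennedy identified can be made concrete with some back-of-the-envelope arithmetic. The ad volume comes from my estimate above; the per-ad review time and the hours in a full-time work quarter are purely illustrative assumptions, and the real numbers would only make the point stronger:

```python
# Back-of-the-envelope: what would manually reviewing every Facebook ad cost?
# The ad volume (276 million unique ads per quarter) is the estimate from
# Trustworthy Networking; the other two figures are illustrative assumptions.
ads_per_quarter = 276_000_000
minutes_per_review = 1            # assumption: one minute per ad, with no investigation
work_hours_per_quarter = 500      # assumption: ~40 hours/week over a 13-week quarter

review_hours = ads_per_quarter * minutes_per_review / 60
reviewers_needed = review_hours / work_hours_per_quarter

print(f"{review_hours:,.0f} person-hours of review per quarter")
print(f"≈ {reviewers_needed:,.0f} full-time reviewers")
```

Even under these generous assumptions — a single minute per ad, with no time spent investigating who is behind it — the result is thousands of full-time reviewers doing nothing else, which is precisely why a human-reviewed, self-serve ad model is a contradiction in terms.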

Kennedy wasn’t done, though: he went on to press Stretch in particular about just how much data Facebook has about, well, everyone:

Stretch was insistent that Facebook would never look up the data on any one individual, both because of internal policies as well as the way the company’s data store was engineered. What Kennedy was driving at, though, is that Facebook could; here is the transcription:

Kennedy: Let’s suppose your CEO came to you — not you, but somebody who could do it in your company — maybe you could — and said, “I want to know everything we can find out about Senator Graham. I want to know the movies he likes, I want to know the bars he goes to. I want to know who his friends are. I want to know what schools he goes — went to.” You could do that, couldn’t you?

Stretch: The answer is absolutely not. We have limitations in place on our ability to —

Kennedy: No, no, I’m not asking about your rules. I’m saying you have the ability to do that. Don’t you?

Stretch: Again, Senator, the answer is no. We’re not —

Kennedy: You can’t put a name to a face to a piece of data? You’re telling me that?

Stretch: So we have designed our systems to prevent exactly that, to protect the privacy of our users.

Kennedy: I understand. But you can get around that to find that identity, can’t you?

Stretch: No, Senator. I cannot.

Kennedy: That’s your testimony under oath.

Stretch: Yes, it is.

Senator Kennedy is an interesting character. He speaks with a Southern drawl — the contrast to Stretch, a Harvard-law educated former Supreme Court clerk who sounded exactly like his biography, was stark. Kennedy, though, is impressive in his own right: after graduating magna cum laude from Vanderbilt he received his J.D. from Virginia, and then was a Rhodes Scholar, receiving a Bachelor of Civil Law with first class honours from Oxford. After a winding career in politics, including a switch from the Democratic party to the Republicans, he was elected to the Senate last fall.

What Kennedy surely realized — and what Stretch, apparently, did not — is that Facebook had already effectively answered Kennedy’s question: the very act of investigating the accounts used by Russian intelligence entailed doing the sort of sleuthing that Kennedy wanted Stretch to say was possible. Facebook dived deep into an account by choice, came to understand everything about it, and then shut it down and delivered the results to Congress. It follows that Facebook could — not would, but could — do that to Senator Graham or anyone else.

To be sure, Stretch noted that Facebook did this because the accounts in question had been deemed inauthentic; that removed all of the external legal, internal policy, and business model limitations that would prevent Facebook from doing such forensic work to an individual account.

Still, Kennedy’s two lines of questioning combined revealed the tech companies’ testimony for the paradox it was: on the one hand, their sheer scale means it is impossible to fully stamp out activities like Russian meddling; on the other, that same scale means they all have the most intimate information on nearly everyone.

Senator Ted Cruz on Bias

Senator Ted Cruz’s line of questioning highlighted just how problematic this power is:

Try, for a moment, to set your personal politics aside; Cruz is driving at a very fundamental question: is what is acceptable driven by what is right or what is collectively decided? The temptation is surely to choose the former: right is right! And indeed, I suspect that most of my readers believe that Cruz is wrong about most political questions. It is worth, though, considering the alternative: what if the powers that be decide unilaterally?

This line of questioning highlights the problems raised by Kennedy: if the powers that be also happen to have massive investigatory power over basically everyone, then at what point do the internal rules and norms against utilizing that power become overwhelmed by the demand that right thinking be enforced? The tech companies argued throughout this testimony that they took their responsibility seriously, and would snuff out bad actors. Who, though, decides who those bad actors are?

The point of democracy has never been about having the most efficient form of government; no company, for example, would make decisions in such a manner. The best companies are in many respects totalitarian: CEOs have the final say, and employees either get on board or get out. That, though, is only viable because the downside is merely financial; when governments go wrong, on the other hand, far worse can result. That is democracy’s upside: it may not get the most done, but that applies to good outcomes as well as bad.

This also highlights the absurdity in Stretch’s declaration that “We want our ad tools to be used for political discourse, certainly. But we do not want our ad tools to be used to inflame and divide.” Politics is inflammatory, and it does divide. To endeavor to stamp out inflammatory and divisive statements is, by definition, to exercise a degree of power that is clearly latent in Facebook et al, and clearly corrosive to the democratic process.

Senator Al Franken on Tech’s Vulnerability

Cruz’s statement acknowledged that the junior Senator from Minnesota had been very critical of his presidential candidacy; indeed, it is hard to imagine two politicians that fall further apart on the political spectrum. To that end, Franken took a very different tack than Cruz: while the latter was concerned with the tech companies’ lack of neutrality, Franken was disgusted by their lack of action:

There are two levels to this exchange: technically, Franken is off the mark. To reduce Russian interference to buying political ads with rubles is to skate over the complexity of the issue: for one, how do you know what counts as a political ad? For another, simply looking at currency is almost certainly a useless signal.

Rhetorically, though, Franken is devastating. Befitting his background as a comedian, Franken has a knack for framing the question at hand in a way that is easy for laypeople to understand, and all but impossible for Facebook to answer. Stretch looks like a fool, not because he is wrong, but because he is right.

This matters; the biggest thing the tech companies have going for them is that they are popular, and this controversy is largely centered within the coastal tech-media bubble. What Franken demonstrated, though, is that this position is potentially more fragile than it seems.

I still believe that, on balance, blaming tech companies for the last election is, more than anything, a convenient way to avoid larger questions about what drove the outcome. And, as I noted, the fact is that tech companies remain popular with the broader public.

What this hearing highlighted, though, is the degree to which the position of Facebook in particular has become more tenuous. The fact of the matter is that Facebook (and Google) is more powerful than any entity we have seen before. Magnifying the problem is that, over the last year, Facebook has decided to “take responsibility”, and what is that but a commitment to exercise their control over what people see?

Indeed, this is where Facebook’s inescapable internal bias surely played a role: the “safest” position for the company to take would be the sort of neutrality demanded by Cruz — a refusal to do any sort of explicit policing of content, no matter how objectionable. That, though, was unacceptable to the company’s employee base specifically, and Silicon Valley broadly: traumatized by the election of a candidate deemed unacceptable, Facebook has committed itself to exercising its power, and that is in itself a cause for alarm.

More broadly, it is hard to escape the conclusion that tech companies have been unable to resist the ring of power: the end game of aggregation is unprecedented control over what people see; the only way to handle that power without risking the abuse of it is a commitment to true neutrality. That Facebook, Twitter, and Google — which, by the way, holds just as much if not more power than Facebook, but without the attendant media scrutiny — have committed to fixing the Russian problem is itself more problematic than those urging they do just that may realize.

  1. No relation to the Massachusetts political family

Why Facebook Shouldn’t Be Allowed to Buy tbh

There was one line in TechCrunch’s report about Facebook’s purchase of social app tbh [sic] that made me raise my eyebrows (emphasis mine):

Facebook announced it’s acquiring positivity-focused polling startup tbh and will allow it to operate somewhat independently with its own brand.

tbh had scored 5 million downloads and 2.5 million daily active users in the past nine weeks with its app that lets people anonymously answer kind-hearted multiple-choice questions about friends who then receive the poll results as compliments. You see questions like “Best to bring to a party?,” “Their perseverance is admirable?” and “Could see becoming a poet?” with your uploaded contacts on the app as answer choices.

tbh has racked up more than 1 billion poll answers since officially launching in limited states in August, mostly from teens and high school students, and spent weeks topping the free app charts. When we profiled tbh last month in the company’s first big interview, co-creator Nikita Bier told us, “If we’re improving the mental health of millions of teens, that’s a success to us.”

Financial terms of the deal weren’t disclosed, but TechCrunch has heard the price paid was less than $100 million and won’t require any regulatory approval. As part of the deal, tbh’s four co-creators — Bier, Erik Hazzard, Kyle Zaragoza and Nicolas Ducdodon — will join Facebook’s Menlo Park headquarters while continuing to grow their app with Facebook’s cash, engineering, anti-spam, moderation and localization resources.

This isn’t quite right. I suspect TechCrunch, and whatever source they “heard” from, is referencing the Hart-Scott-Rodino Antitrust Improvements Act. In order to reduce the burden on the Federal Trade Commission and the Antitrust Division of the Department of Justice, an acquirer only needs to report acquisitions (and wait for a specified period to allow for review) for which the total value is more than a specified threshold; for 2017, that threshold is $80.8 million. To that end, I wouldn’t be surprised if this deal is worth approximately $80.7 million; that would mean Facebook doesn’t have to submit this acquisition for review.
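To make the threshold logic concrete, here is a minimal sketch; the function name and structure are my own, and a real Hart-Scott-Rodino analysis also involves size-of-person tests and exemptions that this ignores:

```python
# Illustrative sketch of the Hart-Scott-Rodino "size of transaction"
# test described above, using the 2017 threshold cited in the text.
HSR_THRESHOLD_2017 = 80_800_000  # dollars

def requires_hsr_filing(deal_value: int, threshold: int = HSR_THRESHOLD_2017) -> bool:
    """Return True if the deal value exceeds the reporting threshold."""
    return deal_value > threshold

# A hypothetical ~$80.7M deal slips just under the line:
print(requires_hsr_filing(80_700_000))   # False: no pre-merger report required
print(requires_hsr_filing(100_000_000))  # True
```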

However, just because Facebook doesn’t have to submit this acquisition for review doesn’t mean it can’t be reviewed; indeed, in a closely-watched case from 2014, the FTC successfully sued to undo a $28 million acquisition that had already been consummated. That was only one of many acquisitions the FTC has investigated that didn’t cross the Hart-Scott-Rodino threshold; in most cases the FTC acted in response to complaints from customers or competitors.

Might an analyst complain as well? The FTC can, and should, investigate this acquisition.

The Social-Communications Map

In late 2013, Facebook made their most concerted effort to buy Snapchat (for $3 billion); that was when I made the Social-Communications Map:

The goal of this map was to show that there was no single social app that covered all of humanity’s social needs: there were critical differences in how people perceived1 different social apps, and that no one app could fill every part of this map.

Facebook, for its part, had, for better or worse, transitioned to a public app that not only handled symmetric relationships, but, at least according to perception, asymmetric broadcast as well; that, though, left an opening for an app like Snapchat. Thus Facebook’s acquisition drive: the company had already secured Instagram, giving it a position in ephemeral asymmetric broadcast apps; Snapchat rebuffed its advances, so the company soon moved on to WhatsApp.

The importance of these two acquisitions cannot be overstated: Facebook has always been secure in its dominance of permanent social relationships, a position that has given the company a dominant position in digital advertising. However, while everyone may need a permanent place on the Internet (all of those teenagers people say Facebook needs to reach have Facebook accounts), the ultimate currency is attention, and much like real life, it is ephemeral conversation that dominates. Facebook, by virtue of early decisions around privacy and significant bad press about the dangers of revealing too much, was locked out of this sphere, so it bought in.

The FTC’s Failure

Those acquisitions, by the way, were, per the Hart-Scott-Rodino Act, submitted to the FTC; in the case of Instagram the agency sent what sure seems like a form letter; I’ll quote it in full:

The Commission has been conducting an investigation to determine whether the proposed acquisition of Instagram, Inc. by Facebook, Inc. may violate Section 7 of the Clayton Act or Section 5 of the Federal Trade Commission Act.

Upon further review of this matter, it now appears that no further action is warranted by the Commission at this time. Accordingly, the investigation has been closed. This action is not to be construed as a determination that a violation may not have occurred, just as the pendency of an investigation should not be construed as a determination that a violation has occurred. The Commission reserves the right to take such further action as the public interest may require.

And so the single most injurious acquisition with regard to competition in not just social networking specifically but digital advertising broadly was approved. Section 7 of the Clayton Act (post its 1950 amendment) states:

No person shall acquire, directly or indirectly, the whole or any part of the stock or other share capital and no person subject to the jurisdiction of the Federal Trade Commission shall acquire the whole or any part of the assets of one or more persons engaged in commerce or in any activity affecting commerce, where in any line of commerce or in any activity affecting commerce in any section of the country, the effect of such acquisition, of such stocks or assets, or of the use of such stock by the voting or granting of proxies or otherwise, may be substantially to lessen competition, or to tend to create a monopoly.

“Lessen competition” is exactly what happened. Instagram, super-charged both with the Facebook social graph and the Facebook ad machine, is not only dominating its native ephemeral asymmetric broadcasting space but increasingly preventing Snapchat from expanding. WhatsApp, meanwhile, dominates the messaging space across most of the world,2 and is the most prominent arrow in Facebook’s “future growth” quiver.

The consolidation of attention has translated into dominance in digital advertising. Facebook accounted for 77% of revenue growth in digital advertising in the United States in 2016; add in Google and the duopoly’s share of growth was 99%. Even Snapchat, which, after rightly rebuffing Facebook’s acquisition offers, IPO’d earlier this year at a $24 billion valuation,3 has seen revenue declines, all while Facebook ever more blatantly rips off its product.

The Privacy Red Herring

The FTC’s response to the WhatsApp acquisition is more interesting: there the agency’s focus was privacy, specifically insisting that Facebook not change WhatsApp’s more stringent promises around user data without affirmative consent from users. This followed a few years after Facebook’s consent decree with the FTC that demanded the company not share user data without their permission.

There’s just one problem: whatever limitations this consent decree may have placed upon Facebook, the reality is that the company is a self-contained ecosystem: prohibiting the permissionless sharing of personal information in fact entrenches Facebook’s position. Take, for example, Europe’s vaunted GDPR law: as I explained in the Daily Update, data portability that, for privacy reasons, excludes the social graph (because your friends didn’t give you permission to share their information with other services) makes it that much harder for competition to arise.

So it was with the FTC’s restrictions around the WhatsApp deal: the agency reiterated that Facebook couldn’t violate users’ privacy, and completely ignored that the easiest way around privacy restrictions is to simply own all of a user’s social interactions.

Understanding Social Networks

Perhaps the most fanciful regulatory document of all, though, is not from the FTC, but rather the United Kingdom’s Office of Fair Trading. Its review of the Instagram deal rested on its analysis of Facebook Camera, an app that no longer exists.

There are several relatively strong competitors to Instagram in the supply of camera and photo editing apps, and those competitors appear at present to be a stronger constraint on Instagram than Facebook’s new app. The majority of third parties did not believe that photo apps are attractive to advertisers on a stand-alone basis, but that they are complementary to social networks. The OFT therefore does not believe that the transaction gives rise to a realistic prospect of a substantial lessening of competition in the supply of photo apps.

“The supply of photo apps.” What a stunningly ignorant evaluation of what Instagram already was: not simply a photo filter app but a social network in its own right. The part about revenue generation, though, was even more amazing:

The parties’ revenue models are also very different. While Facebook generates revenue from advertising and users purchasing virtual and digital goods via Facebook, Instagram does not generate any revenue.

This bit, five years on, still leaves me speechless: Instagram didn’t generate advertising revenue because that’s not how social networks work. As Mark Zuckerberg frequently explains, there is a formula for monetization: first grow users, then increase engagement, next attract businesses, and finally sell ads. Just because Instagram, at the time of this acquisition, was still in Stage 1 did not preclude the possibility of Stage 4; the problem is that the Office of Fair Trading simply had no idea how this world worked.

The issue is straightforward: networks are the monopoly makers of the Internet era. To build one is extremely difficult, but, once built, nearly impregnable. The only possible antidote is another network that draws away the one scarce resource: attention. To that end, when it comes to the Internet, the single most effective tool in antitrust regulation is keeping social networks in separate competitive companies. That the FTC and Office of Fair Trading failed to do so in the case of Instagram and WhatsApp is to the detriment of everyone.

Facebook and tbh

This is the context for Facebook’s tbh acquisition. The app, new as it is, is attacking greenspace in the Social-Communication Map:

tbh is hardly the only contender: Secret and Yik Yak were others. Secret failed due to the lack of an organizing mechanic and to rampant negativity; Yik Yak fixed the former by utilizing location, but suffered from the same negativity problem. tbh has clearly learned lessons from both: the app leverages both location and your address book as an organizing mechanic, and is engineered from the ground up to be focused on positivity.

Moreover, it’s easy to see how it could be super-charged by Facebook: the social graph is probably even more powerful than the address book in terms of building a network, and provides multiple outlets for connections established on tbh. Just as importantly, Facebook can in the short term fund tbh and, in the long run, simply graft the service onto its cross-app sales engine. It’s a great move for both parties.

What is much more questionable, though, is whether this is a great deal for society. tbh is, by definition, winning share in the zero-sum competition for attention, particularly in the ultra-desirable teenage demographic, and that’s good news for any would-be Facebook competitors. Why should it be OK for Facebook to simply swallow up another app, small though it may currently be? Again, simply looking at narrowly-defined market share estimations or non-existent revenue streams is to fundamentally misunderstand how social networks work.

Indeed, I’ve already made my position clear — social networks should not be allowed to acquire other social networks:

Facebook should not be allowed to buy another network-based app; I would go further and make it prima facie anticompetitive for one social network to buy another. Network effects are just too powerful to allow them to be combined. For example, the current environment would look a lot different if Facebook didn’t own Instagram or WhatsApp (and, should Facebook ever lose an antitrust lawsuit, the remedy would almost certainly be spinning off Instagram and WhatsApp).

The FTC dropped the ball with Instagram and WhatsApp; absent a time machine, the best time to do the right thing is right now.

Or, perhaps, Facebook should be allowed to proceed — but with conditions. My second demand is about the social graph:

All social networks should be required to enable social graph portability — the ability to export your lists of friends from one network to another. Again Instagram is the perfect example: the one-time photo-filtering app launched its network off the back of Twitter by enabling the wholesale import of your Twitter social graph. And, after it was acquired by Facebook, Instagram has only accelerated its growth by continually importing your Facebook network. Today all social networks have long since made this impossible, making it that much more difficult for competitors to arise.

Requiring Facebook to offer its social graph to any would-be competitor as a condition of acquiring tbh would be a good outcome; unfortunately, it is perhaps the most unlikely, given the FTC’s commitment to unfettered privacy (without a consideration of the impact on competition).

What shouldn’t be allowed is what Facebook clearly hopes — and suggests — will happen: no regulatory review at all. The FTC has the power, and it’s time to use it.

  1. Perceived is a critical point: Twitter and Instagram, for example, are permanent, but are perceived by most as being ephemeral (arguably Twitter has shifted in the public consciousness to being something more permanent)
  2. The most noteworthy exceptions being the United States (mixed), China (WeChat), South Korea (Kakao), and Japan, Thailand, and Taiwan (LINE)
  3. As an aside, for all of Snapchat’s troubles justifying its $24 billion IPO, keep in mind that the vast majority of the commentariat insisted Spiegel was irrational to turn down $3 billion; it’s a reminder that few understand exponential growth curves

Goodbye Gatekeepers

I’d be remiss in not stating the obvious: Harvey Weinstein is a despicable human being, who did evil things. It’s worth noting, though, the structure of Hollywood that made it possible for him to do so much evil with such frequency for so long.

The Structure of Hollywood

There has always been a large “supply” of movie actors, directors, script writers, etc.; Los Angeles is famous for being a city of transplants, particularly young men and women eager to make a go of it in show business, certain their breakthrough opportunity is the next audition, the next script, the next movie pitch.

The supply of movies, though, is limited. These two charts from Stephen Follows tell the story. First, the number of feature films:

Then, the number of studio versus non-studio films:

Back in 1980, shortly after the creation of Weinstein’s Miramax production company, there were just over 100 movies released in US cinemas a year; in 2016, there were 736, but for “wide studio releases” — Weinstein’s territory — there were only 93. Suppose there are five meaningful acting jobs per movie: that means there are only about 500 meaningful acting jobs a year. And Weinstein not only decided who filled many of those 500 roles, he had an outsized ability to affect who filled the rest by making or breaking reputations.
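That estimate is simple enough to check; a quick sketch using the figures above (the five-roles-per-film number is the assumption stated in the text):

```python
# Back-of-the-envelope estimate from the paragraph above: wide studio
# releases multiplied by an assumed five meaningful acting jobs per film.
wide_studio_releases_2016 = 93
meaningful_roles_per_film = 5  # assumption stated in the text

meaningful_jobs = wide_studio_releases_2016 * meaningful_roles_per_film
print(meaningful_jobs)  # 465, i.e. the "about 500" figure cited above
```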

Weinstein was a gatekeeper, presented with virtually unlimited supply while controlling limited distribution: those that wished to reach consumers had to accede to his demands, no matter how criminally perverse they may have been. Lauren O’Connor, an employee at the Weinstein Company, summed up the power differential that resulted in an internal memo uncovered by the New York Times:

I am a 28 year old woman trying to make a living and a career. Harvey Weinstein is a 64 year old, world famous man and this is his company. The balance of power is me: 0, Harvey Weinstein: 10.

What made Hollywood’s structure particularly nefarious was the fact that selecting actors is such a subjective process. Movies are art — what appeals to one person may not appeal to another — making people like Weinstein cultural curators. If he declined to select an actor, or purposely damaged their reputation through his extensive contacts with the press, they wouldn’t have a chance in Hollywood. After all, there were many others to choose from, and no other routes to making movies.

All the News That’s Fit to Print

Jim Rutenberg, the New York Times’ media columnist, highlighted Weinstein’s press contacts in a follow-up piece entitled Harvey Weinstein’s Media Enablers:

The real story didn’t surface until now because too many people in the intertwined news and entertainment industries had too much to gain from Mr. Weinstein for too long. Across a run of more than 30 years, he had the power to mint stars, to launch careers, to feed the ever-famished content beast. And he did so with quality films that won statuettes and made a whole lot of money for a whole lot of people.

Sharon Waxman, a former reporter for the New York Times, said on The Wrap that the New York Times itself belonged on that list:

I simply gagged when I read Jim Rutenberg’s sanctimonious piece on Saturday about the “media enablers” who kept this story from the public for decades…That’s right, Jim. No one — including The New York Times. In 2004, I was still a fairly new reporter at The New York Times when I got the green light to look into oft-repeated allegations of sexual misconduct by Weinstein…The story I reported never ran.

After intense pressure from Weinstein, which included having Matt Damon and Russell Crowe call me directly to vouch for Lombardo and unknown discussions well above my head at the Times, the story was gutted. I was told at the time that Weinstein had visited the newsroom in person to make his displeasure known. I knew he was a major advertiser in the Times, and that he was a powerful person overall.

Weinstein’s alleged pressuring of the New York Times — and his ability to influence the media generally — rested on the fact that the media is also a gatekeeper. The New York Times still brags as such in its print edition:

“All the News That’s Fit to Print” is rather clear about how the New York Times views itself: the arbiter — that is, the gatekeeper — of what news ought to be consumed by the public. In truth, though, by 2004 that gatekeeper role was already breaking down; perhaps the most famous example involved another set of allegations of sexual misconduct, when in 1998 the Drudge Report reported the news that Newsweek wouldn’t:

The gate could not hold.

The Structure of Newspapers

After Waxman’s post, New York Times executive editor Dean Baquet argued that “it is unimaginable” that her story was killed due to pressure from Weinstein; in fact, though, an examination of the structure of the newspaper business suggests it is quite imaginable.

In 2004, the New York Times had $3.3 billion in revenue, up 2.4% year-over-year. That increase, though, belied deeper problems: circulation had dropped a percentage point year-over-year; revenue growth came from a 6% increase in advertising rates. Advertising was the New York Times’ primary revenue source, accounting for 66% of total revenue, and given that in 2003 the average Hollywood movie spent $34.8 million on advertising, some portion of that undoubtedly came from Weinstein specifically.

The reason that circulation decline suggested a problem is that the ability of the New York Times and other newspapers to command advertising depended on being a gatekeeper: advertisers didn’t take out newspaper ads because they loved newspapers, they took out newspaper ads because it was an effective way to reach potential customers:

“Gatekeeper” is another way to say “integrator”, and as I have explained previously, the key to the newspaper business model was controlling distribution and integrating editorial content and ads. In 2004, though, that integration was on the verge of falling apart; the Internet meant advertisers could reach customers directly. It had already happened with Craigslist and classifieds, and first ad networks and then social networks would do the same to display ads, causing newspaper advertising revenue to plummet to levels not seen since the 1950s:

2004 came after that first Craigslist-inspired decline, and it’s all too easy to imagine Weinstein’s threats having their intended effect.

Journalism Worth Paying For

The ultimate credit for the New York Times story goes first and foremost to the women willing to go on the record, and then to Jodi Kantor and Megan Twohey, the reporters who investigated and wrote it. If Waxman’s allegations are true, though, then it’s worth pointing out that the New York Times is in a very different place than it was in 2004.

Last year the New York Times had $1.6 billion in revenue, a 53% decrease from 2004. Critically, though, the source of that revenue had flipped on its head: advertising accounted for only 37% of revenue, while circulation was 57%, up from 54% in 2015, and only 27% in 2004; by all accounts circulation is up significantly more in 2017.

That image is from the company’s 2020 strategy report, which declared that the editorial product should align with the company’s focus on subscriptions; Baquet told Recode that it was his job “to do as many ‘Amazons’ as possible”, referring to the paper’s investigative report on Amazon’s working conditions. Certainly this Weinstein piece fits: whatever expenses the New York Times spent reporting this story will be more than made up in the burnishing of the company’s reputation for journalism that is worth paying for.

Admittedly, “Journalism worth paying for” doesn’t have the same ring as “All the News That’s Fit to Print”, but it is a far better descriptor of the New York Times’ new business model:

In a world where the default news source is the Facebook News Feed, the New York Times is breaking out of the inevitable modularization and commodification entailed in supplying the “news” to the feed. That, in turn, requires building a direct relationship with customers: they are the ones in charge, not the gatekeepers of old — even they must now go direct.

YouTube and the Movies

In the aftermath of the New York Times report (and another from The New Yorker), various stories have alluded to the fact that Weinstein has less power than he used to. I can’t say I know enough about the particulars of Hollywood to know whether that is true in a relative sense, but there’s no question movies are less important than ever before. Indeed, the industry looks a lot like newspapers in 2004: revenue is increasing due to higher prices, even as the number of movie-goers steadily declines (graph from The Numbers):

Meanwhile, more and more cachet — and star power — is flowing to serialized television, particularly distributors like Netflix and HBO that go directly to customers. And don’t forget YouTube: video is a zero sum activity — time spent watching one source of video is time not spent watching another — and YouTube showed over a billion hours of video worldwide every day in 2016.

YouTube represents something else that is just as important: the complete lack of gatekeepers. Google CEO Sundar Pichai said on an earnings call earlier this year that “Every single day, over 1,000 creators reached the milestone of having 1,000 channel subscribers.” That is an astounding number in its own right; what is even more remarkable is that while Hollywood has only ~3,500 acting slots a year (including all movies, not just major studios), YouTube creates 100 times as many “stars” over the same time period.
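A minimal sketch of that comparison, using only the figures cited above (the ~3,500-slots number assumes roughly 700 films at five meaningful roles each, per the earlier estimate):

```python
# Back-of-the-envelope comparison: ~1,000 YouTube creators per day
# crossing 1,000 subscribers, versus ~3,500 meaningful Hollywood
# acting slots per year.
youtube_new_stars_per_year = 1_000 * 365
hollywood_acting_slots_per_year = 3_500

ratio = youtube_new_stars_per_year / hollywood_acting_slots_per_year
print(round(ratio))  # 104, i.e. roughly the "100 times" cited above
```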

The End of Gatekeepers

It is easy to see the downsides of the destruction of gatekeepers; in 2016, before the election, I explained how the collapse of media gatekeepers meant the collapse of political gatekeepers. From The Voters Decide:

There is no one dominant force when it comes to the dispersal of political information, and that includes the parties described in the previous section. Remember, in a Facebook world, information suppliers are modularized and commoditized as most people get their news from their feed. This has two implications:

  • All news sources are competing on an equal footing; those controlled or bought by a party are not inherently privileged
  • The likelihood any particular message will “break out” is based not on who is propagating said message but on how many users are receptive to hearing it. The power has shifted from the supply side to the demand side

This is a big problem for the parties as described in The Party Decides. Remember, in Noel and company’s description party actors care more about their policy preferences than they do voter preferences, but in an aggregated world it is voters aka users who decide which issues get traction and which don’t. And, by extension, the most successful politicians in an aggregated world are not those who serve the party but rather those who tell voters what they most want to hear.

I can imagine there are many that long for the days when the media — and by extension the parties — could effectively determine presidential nominees. The Weinstein case, though, is a reminder of just how rotten gatekeepers can be. Their very structure is ripe for abuse by those in power, and for the suppression of those wishing to break through; consumers, meanwhile, are taken for granted.

For my part, I’m thankful such structures are increasingly untenable: perhaps the New York Times didn’t spike that 2004 story because of pressure from Weinstein, but there’s no doubt that for decades “All the News That’s Fit to Print” was shamefully deficient in reporting about news and groups that weren’t on the radar of New York newspaper editors. And, selfishly, I wouldn’t have the career I do without the absence of gatekeepers: anyone can set up a website and send an email and instantly compete with the New York Times and everyone else for attention and subscription dollars.

Most importantly, though, the end of gatekeepers is inevitable: the Internet provides abundance, not scarcity, and power flows from discovery, not distribution.1 We can regret the change or relish it, but we cannot halt it: best to get on with making it work for far more people than gatekeepers ever helped — or harassed.

  1. And fortunately, to date, those that own distribution — the aggregators — have tried to be neutral; that’s a good thing