Facebook and the Cost of Monopoly

The shamelessness was breathtaking.

Having told a few jokes, summarized his manifesto, and acknowledged the victim of the so-called “Facebook-killer” in Cleveland, Facebook founder and CEO Mark Zuckerberg got to the heart of his keynote presentation at the company’s F8 developer conference:

You may have noticed that we rolled out some cameras across our apps recently. That was Act One. Photos and videos are becoming more central to how we share than text. So the camera needs to be more central than the text box in all of our apps. Today we’re going to talk about Act Two, and where we go from here, and it’s tied to this broader technological trend that we’ve talked about before: augmented reality.

If that seems familiar, it’s because it is the Explain Like I’m Five summary of Snap’s S-1:

In the way that the flashing cursor became the starting point for most products on desktop computers, we believe that the camera screen will be the starting point for most products on smartphones. This is because images created by smartphone cameras contain more context and richer information than other forms of input like text entered on a keyboard. This means that we are willing to take risks in an attempt to create innovative and different camera products that are better able to reflect and improve our life experiences.

Snap may have declared itself a camera company; Zuckerberg dismissed it as “Act One”, making it clear that Facebook intended to adopt not simply one of Snapchat’s headline features but its entire vision.

Facebook and Microsoft

Shortly after Snap’s S-1 came out, I wrote in Snap’s Apple Strategy that the company was like Apple; unfortunately, the Apple I was referring to was not the iPhone-making juggernaut we are familiar with today, but rather the Macintosh-creating weakling that was smushed by Microsoft, which is where Facebook comes in.

Today, if Snap is Apple, then Facebook is Microsoft. Just as Microsoft succeeded not because of product superiority but by leveraging the opportunity presented by the IBM PC, riding Big Blue’s coattails to ecosystem dominance, Facebook has succeeded not just on product features but by digitizing offline relationships, leveraging the desire of people everywhere to connect with friends and family. And, much like Microsoft vis-à-vis Apple, Facebook has had The Audacity of Copying Well.

I wrote The Audacity of Copying Well when Instagram launched Instagram Stories; what was brilliant about the product was that Facebook didn’t try to re-invent the wheel. Instagram Stories — and now Facebook Stories and WhatsApp Stories and Messenger Day — are straight rip-offs of Snapchat Stories, which is not only not a problem, it is exactly the optimal strategy: Instagram’s point of differentiation was not features, but rather its network. By making Instagram Stories identical to Snapchat Stories, Facebook reduced the competition to who had the stronger network, and it worked.

Microsoft and Monopoly

Microsoft, of course, was found to be a monopoly, and, as I wrote a couple of months ago in Manifestos and Monopolies, it is increasingly difficult not to think the same about Facebook. That, though, is exactly what you would expect for an aggregator. From Antitrust and Aggregation:

The first key antitrust implication of Aggregation Theory is that, thanks to these virtuous cycles, the big get bigger; indeed, all things being equal the equilibrium state in a market covered by Aggregation Theory is monopoly: one aggregator that has captured all of the consumers and all of the suppliers. This monopoly, though, is a lot different than the monopolies of yesteryear: aggregators aren’t limiting consumer choice by controlling supply (like oil) or distribution (like railroads) or infrastructure (like telephone wires); rather, consumers are self-selecting onto the Aggregator’s platform because it’s a better experience.

This self-selection, particularly onto a “free” platform, makes it very difficult to calculate what cost, if any, Facebook’s seeming monopoly exacts on society. Consider the Econ 101 explanation of why monopolies are problematic:

  • In a perfectly competitive market the price of a good is set at the intersection of demand and supply, the latter being determined by the marginal cost of producing that good:1

    [Chart: price set at the intersection of the supply and demand curves in a perfectly competitive market]

  • The “Consumer Surplus”, what consumers would have paid for a product minus what they actually paid, is the area that is under the demand curve but over the price point; the “Producer Surplus”, what producers sold a product for minus the marginal cost of producing that product, is the area above the marginal cost/supply curve and below the price point:

    [Chart: consumer surplus above the price point and producer surplus below it]

  • In a monopoly situation, there is no competition; therefore, the monopoly provider makes decisions based on profit maximization. That means that instead of considering only the demand curve, the monopoly provider considers marginal revenue (the additional revenue gained from selling one more item), and chooses the quantity where marginal revenue equals marginal cost. Crucially, though, the price is still set according to the demand curve:

    [Chart: the monopoly quantity set where marginal revenue equals marginal cost, with the price read off the demand curve]

  • The result of monopoly pricing is that consumer surplus is reduced and producer surplus is increased; the reason we care as a society, though, is the part in brown: that is deadweight loss. Some amount of demand that would be served by a competitive market is being ignored, which means there is no surplus of any kind being generated:

    [Chart: monopoly pricing reduces consumer surplus, increases producer surplus, and creates deadweight loss]

The problem with using this sort of analysis for Facebook should be obvious: the marginal cost for Facebook of serving an additional customer is zero! That means the graph looks like this:

[Chart: demand with zero marginal cost, leaving no deadweight loss]
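To make the charts concrete, here is a minimal numeric sketch in Python; the linear demand curve and all of the numbers are invented purely for illustration, not an attempt to model Facebook’s actual economics. It computes the competitive and monopoly outcomes described above, and then the case where the marginal cost (and the price) is zero.

```python
# Minimal sketch of the Econ 101 story above, assuming a linear inverse
# demand curve P = a - b*Q and a constant marginal cost; illustrative only.

def outcomes(a, b, mc):
    """Return (quantity, price, consumer surplus, producer surplus, deadweight loss)
    for a competitive market and for a profit-maximizing monopolist."""
    # Competitive market: price equals marginal cost, all willing demand is served
    q_c = (a - mc) / b
    p_c = mc
    competitive = (q_c, p_c, 0.5 * (a - p_c) * q_c, 0.0, 0.0)

    # Monopoly: choose the quantity where marginal revenue (a - 2*b*Q) equals
    # marginal cost, then read the price off the demand curve
    q_m = (a - mc) / (2 * b)
    p_m = a - b * q_m
    cs_m = 0.5 * (a - p_m) * q_m
    ps_m = (p_m - mc) * q_m
    dwl = 0.5 * (q_c - q_m) * (p_m - mc)  # surplus lost on demand left unserved
    return competitive, (q_m, p_m, cs_m, ps_m, dwl)

print(outcomes(a=100, b=1, mc=20))
# Competitive: Q=80, P=20, CS=3200, PS=0,    DWL=0
# Monopoly:    Q=40, P=60, CS=800,  PS=1600, DWL=800

# The Facebook case: the marginal cost is zero and the price to users is zero,
# so every unit of demand is served and there is no deadweight loss.
print(outcomes(a=100, b=1, mc=0)[0])
# Q=100, P=0, CS=5000, PS=0, DWL=0
```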

So sure, Facebook may have a monopoly in social networking, and while that may be a problem for Snap or any other would-be networks, Facebook would surely argue that the lack of deadweight loss means that society as a whole shouldn’t be too bothered.

Facebook and Content Providers

The problem is that Facebook isn’t simply a social network: the service is a three-sided market — users, content providers, and advertisers — and while the basis of Facebook’s dominance is in the network effects that come from connecting all of those users, said dominance has seeped to those other sides.

Content providers are an obvious example: Facebook passed Google as the top traffic driver back in 2015, and as of last fall drove over 40% of traffic for the average news site, even after an algorithm change that reduced publisher reach.

So is that a monopoly when it comes to the content provider market? I would argue yes, thanks to the monopoly framework above.

Note that once again we are in a situation where there is not a clear price: no content provider pays Facebook to post a link (although they can obviously make said link into an advertisement). However, Facebook does, at least indirectly, make money from that content: the more users find said content engaging, the more time they will spend on Facebook, which means the more ads they will see.

This is why Facebook Instant Articles seemed like such a brilliant idea: on the one side, readers would have a better experience reading content, which would keep them on Facebook longer. On the other side, Facebook’s proposal to help publishers monetize — publishers could sell their own ads or, enticingly, Facebook could sell them for a 30% commission — would not only support the content providers that are one side of Facebook’s three-sided market, but also lock them into Facebook with revenue they couldn’t get elsewhere. The market I envisioned would have looked something like this:

[Chart: the envisioned Instant Articles market, with surplus shared between Facebook and publishers]

However, Instant Articles haven’t turned out the way I expected: the consumer benefits are there, but Facebook has completely dropped the ball when it comes to monetizing the publishers using them. That is not to say that Facebook isn’t monetizing as a whole, thanks in part to that content, but rather that the company wasn’t motivated to share. Or, to put it another way, Facebook kept most of the surplus for itself:

[Chart: Facebook keeping most of the surplus from Instant Articles for itself]

In this case, it’s not that Facebook is setting a higher price to maximize their profits; rather, they are sharing less of their revenue; the outcome, though, is the same — maximized profits. Keep in mind this approach isn’t possible in competitive markets: were there truly competitors for Facebook when it came to placing content, Facebook would have to share more revenue to ensure said content was on its platform. In truth, though, Facebook is so dominant when it comes to attention that it doesn’t have to do anything for publishers at all (and, if said publishers leave Instant Articles, well, they will still place links, and the users aren’t going anywhere regardless).

Facebook and Advertisers

There may be similar evidence — that Facebook is able to reduce supply in a way that increases price and thus profits — emerging in advertising. In a perfectly competitive market the cost of advertising would look like this:

[Chart: the price of advertising in a perfectly competitive market]

Facebook, though, will soon be limiting quantity, or at least limiting its growth. On last November’s earnings call CFO Dave Wehner said that Facebook would stop increasing ad load in the summer of 2017 (i.e. Facebook has been increasing the number of ads relative to content in the News Feed for a long time, but would stop doing so). What was unclear — and as I noted at the time, Wehner was quite evasive in answering this — was whether or not that would cause the price per ad to rise.

There are two possible reasons for Wehner to have been evasive:

  • Prices will not rise, which would be a bad sign for Facebook: it would mean that despite all of Facebook’s data, their ads are not differentiated, and that money that would have been spent on Facebook will simply be spent elsewhere
  • Prices will rise, which would mean that Facebook’s ads are differentiated such that Facebook can potentially increase profits by restricting supply

To put the second possibility in graph form:

[Chart: restricting the supply of ads raises the price per ad]

Note that Facebook has already said that revenue growth will slow because of this change; that, though, is not inconsistent with having monopoly power. Monopolists seek to maximize profit, not revenue. Alternately, it could simply be that Facebook is worried about the user experience; it will be fascinating to see how the company’s bottom line shifts with these changes.
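To put those two possibilities in code as well as in graph form, here is a small, hedged sketch; the demand curves and numbers are invented for illustration, not estimates of Facebook’s actual ad market.

```python
# Two hypothetical scenarios for capping ad load, with made-up numbers.

def revenue_undifferentiated(impressions, market_price=2.0):
    # Undifferentiated ads fetch the going market rate no matter how many
    # Facebook sells, so showing fewer ads simply forgoes revenue.
    return impressions * market_price

def revenue_differentiated(impressions, a=10.0, b=0.004):
    # Differentiated ads face a downward-sloping demand curve, so showing
    # fewer ads pushes the price per ad up.
    price_per_ad = a - b * impressions
    return impressions * price_per_ad

current_load, capped_load = 2000, 1500  # impressions, arbitrary units

for label, revenue in [("undifferentiated", revenue_undifferentiated),
                       ("differentiated", revenue_differentiated)]:
    print(label, revenue(current_load), "->", revenue(capped_load))
# undifferentiated 4000.0 -> 3000.0  (capping ad load simply loses revenue)
# differentiated   4000.0 -> 6000.0  (the higher price per ad more than compensates)
```

Since the marginal cost of showing one more ad is effectively zero, revenue in this sketch is a reasonable proxy for profit; the trade-off Facebook is actually weighing is against the user experience, which the sketch ignores.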

Monopolies and Innovation

Still, even if Facebook does have monopoly power when it comes to content discovery and distribution and in digital advertising, is that really a problem for users? Might it even be a good thing?

Facebook board member Peter Thiel certainly thinks so. In Zero to One Thiel not only makes the obvious point that businesses that are monopolies are ideal, but says that models like the ones I used above aren’t useful because they presume a static world.

In a static world, a monopolist is just a rent collector. If you corner the market for something, you can jack up the price; others will have no choice but to buy from you…But the world we live in is dynamic: it’s possible to invent new and better things. Creative monopolists give customers more choices by adding entirely new categories of abundance to the world. Creative monopolies aren’t just good for the rest of society; they’re powerful engines for making it better.

The dynamism of new monopolies itself explains why old monopolies don’t strangle innovation. With Apple’s iOS at the forefront, the rise of mobile computing has dramatically reduced Microsoft’s decades-long operating system dominance. Before that, IBM’s hardware monopoly of the ’60s and ’70s was overtaken by Microsoft’s software monopoly. AT&T had a monopoly on telephone service for most of the 20th century, but now anyone can get a cheap cell phone plan from any number of providers. If the tendency of monopoly businesses were to hold back progress, they would be dangerous and we’d be right to oppose them. But the history of progress is a history of better monopoly businesses replacing incumbents. Monopolies drive progress because the promise of years or even decades of monopoly profits provides a powerful incentive to innovate. Then monopolies can keep innovating because profits enable them to make the long-term plans and to finance the ambitious research projects that firms locked in competition can’t dream of.

The problem is that Thiel’s examples refute his own case: decades-long monopolies like those of AT&T, IBM, and Microsoft sure seem like a bad thing to me! Sure, they were eventually toppled, but not before extracting rents and, more distressingly, stifling innovation for years. Think about Microsoft: the company spent billions of dollars on R&D and gave endless demos of futuristic tech; the most successful product that actually shipped (Kinect) ended up harming the product it was supposed to help.2

Indeed, it’s hard to think of any examples where established monopolies produced technology that wouldn’t have been produced by the free market; Thiel wrongly conflates the drive of new companies to create new monopolies with the right of old monopolies to do as they please.

That is why Facebook’s theft of not just Snapchat features but its entire vision bums me out, even if it makes good business sense. I do think leveraging the company’s network monopoly in this way hurts innovation, and the same monopoly graphs explain why. In a competitive market the return from innovation meets the demand of customers to determine how much innovation happens — and who reaps its benefits:

[Chart: the returns from innovation in a competitive market]

A monopoly, though, doesn’t need that drive to innovate — or, more accurately, doesn’t need to derive a profit from innovation, which leads to lazy spending and prioritizing tech demos over shipping products. After all, the monopoly can simply take others’ innovation and earn even more profit than they would otherwise:

[Chart: a monopoly capturing the returns from others’ innovation]

This, ultimately, is why yesterday’s keynote was so disappointing. Last year, before Facebook realized it could just leverage its network to squash Snap, Mark Zuckerberg spent most of his presentation laying out a long-term vision for all the areas in which Facebook wanted to innovate. This year couldn’t have been more different: there was no vision, just the wholesale adoption of Snap’s, plus a whole bunch of tech demos that never bothered to tell a story of why they actually mattered for Facebook’s users. It will work, at least for a while, but make no mistake, Facebook is the only winner.

  1. If any individual firm’s marginal costs are higher, they will go out of business; if they are lower they will temporarily dominate the market until new competitors enter. Yes, this is all theoretical!
  2. I’m referring to the fact that the Xbox One had a higher price and lower specs than the PS4, thanks in large part to the bundled Kinect

The Walt Mossberg Brand

It is a momentous day not just for those of us who write about the tech industry, but for anyone who has paid any attention at all to consumer products for the last 26 years. From Walt Mossberg, at The Verge:

It was a June day when I began my career as a national journalist. I stepped into the Detroit Bureau of The Wall Street Journal and started on what would be a long, varied, rewarding career. I was 23 years old, and the year was 1970. That’s not a typo.

So it seems fitting to me that I’ll be retiring this coming June, almost exactly 47 years later. I’ll be hanging it up shortly after the 2017 edition of the Code Conference, a wonderful event I co-founded in 2003 and which I could never have imagined back then in Detroit…

In the best professional decision of my life, I converted myself into a tech columnist in 1991. As a result, I got to bear witness to a historic parade of exciting, revolutionary innovation — from slow, clumsy, ancient PCs to sleek, speedy smartphones; from CompuServe and early AOL to the mobile web, apps, and social media. My column has run weekly in a variety of places over the years, most recently on The Verge and Recode under the Vox Media umbrella, where I’ve been quite happy and have added a podcast of which I’m proud.

So I see retirement as just another of these reinventions, another chance to do new things and be a new version of myself.

Mossberg undersells himself: a necessary prerequisite to “convert[ing him]self into a tech columnist” was inventing the very concept. That I had to make such an observation — was there really a time in recent history in which major publications did not have someone focused on technology? — is itself a testament to Mossberg’s vision.

Mossberg and the Birth of Consumer Technology

What made Mossberg unique was tied up into his job description: there were certainly tech journalists and reviewers and publications like PC Magazine, but they were writing for people who already cared about technology, whether because they worked in the industry or because it was their hobby (like it had been Mossberg’s in his time as a reporter for the Wall Street Journal). Mossberg, as he told The New Yorker for this 2007 profile, had a different audience in mind:

The Journal may have an élite business audience, but, as Mossberg puts it, “I write my column for the average person.” He adds, “That’s one of the reasons I write about it as a class war” — techies vs. consumers…

In a seven-page, single-spaced prospectus that Mossberg sent to [then-Wall Street Journal managing editor Norman] Pearlstine on May 1, 1991, he wrote:

If it works as I envision it, this column…would be the voice, the champion, of the individual person actually faced with buying and using the core hi-tech devices — the customer whom industry calls the “end user.”

When the new job was settled, and Mossberg told [Secretary of State James A.] Baker and [Assistant Secretary of State Margaret] Tutwiler that he was leaving the national-security beat, Baker was baffled. “To this day,” Tutwiler told me, “Jim Baker has never owned or operated a computer, or a BlackBerry, or a cell phone.”

In fact, Baker’s obliviousness to technology, at least in 1991, was pretty normal: computers were increasingly prevalent in businesses, but still, there were only 18 million personal computers sold that year (as a point of comparison, there were about 18 million smartphones sold every four days in 2016), and the majority didn’t even have a graphical user interface (Windows 3.0 had come out the year before, but DOS was dominant until Windows 95).

Moreover, there wasn’t much of a consumer market at all, in part because many of the apps we associate with consumer usage barely existed: Microsoft Word and Excel had launched, but trailed the market leaders — WordPerfect and Lotus 1-2-3, respectively — while Adobe Photoshop had launched the year before. id Software, perhaps the company most responsible for making the PC into a gaming device, was founded in 1991, but its breakthrough game, Wolfenstein 3D, wouldn’t come out until the following year.

However, it turned out that Mossberg’s timing was far more momentous than he probably knew when he sent that prospectus to Pearlstine: it was around the same time, in January 1991, that Tim Berners-Lee switched on the servers that hosted the first ever web page; in other words, Mossberg invented the position of technology columnist right when the technology that would ensure the industry’s impact was felt by every single person on earth was invented.

Mossberg and the Evolution of Media

As I’ve noted on multiple occasions, including in a recent appearance at the Code Media conference (itself a creation of Mossberg and Kara Swisher), the industry that has most dramatically felt the impact of the Internet is the media, and the arc of Mossberg’s career as a technology columnist reflects that.

Mossberg’s first column, How to Stop Worrying And Get the Most From Your PC, was only available in print — remember, the World Wide Web had only been invented a few months prior.

[Image: Mossberg’s first Personal Technology column in print]

The role of the Wall Street Journal in Mossberg’s rise to prominence, though, went deeper than just owning printing presses and delivery trucks: it was the Journal’s brand name and status in the world that gave Mossberg credibility right off the bat. From a 2004 Mossberg profile in Wired:

When Mossberg launched “Personal Technology,” Pearlstine wanted him to move to Silicon Valley. Mossberg refused to uproot his family. “How will you see all the new products?” Pearlstine asked. “I’ll go there a few times a year,” Mossberg responded, “but they’ll come to me whether I’m in Juneau or Fargo, because I’m The Wall Street Journal.”

Indeed, a big reason Pearlstine even gave Mossberg the opportunity to launch his Personal Technology column, over the objections of many at Dow Jones, was that Mossberg had long since proven his value over the two decades he had spent at the Journal as a reporter: at least when Personal Technology started, the power flowed from the masthead, and it took Mossberg two decades to earn the right to wield it.

It didn’t take long, though, for Mossberg to make that power his own; Personal Technology was immediately a hit, both amongst Journal subscribers and, just as importantly, the company’s ad salespeople. And, five years later, when “The Wall Street Journal Interactive Edition” (i.e. the online version of the Wall Street Journal1) launched, Mossberg’s influence only increased as his column was now available to everyone in the world. Along the way, as noted in The New Yorker profile, something interesting happened:

Eric Schmidt suggests that, while the Internet may yield enormous amounts of information, it is easy to drown in it. So consumers, Schmidt says, “go to brands they trust.” He adds, “Walt is a brand.”

For the next decade Mossberg was, as that Wired profile is titled, the “Kingmaker.” Mossberg is credited with helping AOL overtake Prodigy, with killing Microsoft’s abusive and intrusive Smart Tags, and, perhaps most of all, with chronicling the rise of Apple.

Mossberg and Apple

There have always been grumblings that Mossberg is “biased” towards Apple. In fact, though, while Mossberg did by and large favor Apple products — Apple made five of Mossberg’s 12 most influential products — the bias, such as it was, was right there in his first column:

Personal computers are just too hard to use, and it’s not your fault.

Mossberg was Steve Jobs’ favorite columnist — and Mossberg a frequent admirer of Apple’s products — because both had the same vision: bringing these geeky, impenetrable, and rather ugly boxes of wires and chips and disks called personal computers to normal people, convinced said computers could, if only made accessible, fundamentally transform a user’s life.

What always made Apple different from other PC manufacturers, to its detriment in the 80s and 90s, and its tremendous benefit this century, was its resolute focus on the user experience, even at the expense of business-focused priorities like compatibility or extensibility. The payoff was a computer that was actually approachable for normal people, which is what always mattered to Mossberg. Mossberg wrote in a lovely column after Jobs’ death:

This quality was on display when Apple opened its first retail store. It happened to be in the Washington, D.C., suburbs, near my home. He conducted a press tour for journalists, as proud of the store as a father is of his first child. I commented that, surely, there’d only be a few stores, and asked what Apple knew about retailing. He looked at me like I was crazy, said there’d be many, many stores, and that the company had spent a year tweaking the layout of the stores, using a mockup at a secret location. I teased him by asking if he, personally, despite his hard duties as CEO, had approved tiny details like the translucency of the glass and the color of the wood. He said he had, of course.

That mattered to Mossberg just as much as it did to Jobs, and if caring about the entire experience meant he was biased towards Apple, then I rather wish not just every tech writer but also every product manager and CEO would be biased as well.

Mossberg and the Internet

Probably the ultimate manifestation of the Mossberg brand was the “D: All Things Digital” conference he started in partnership with Kara Swisher, then a reporter for the Wall Street Journal covering the tech industry. Steve Jobs was an annual guest, from the first iteration of the conference in 2003 on, and nearly every major executive in the industry sat in those famous red chairs at one time or another, including a memorable joint appearance by Steve Jobs and Bill Gates:

[Photo: Steve Jobs and Bill Gates on stage together]

Ten years later Mossberg and Swisher would take the conference and the tech-focused news website they had built around it independent, rebranding it from All Things D to Recode. By all accounts the conference (and its various offshoots, including the aforementioned Code Media conference) continue to be a success, but the website struggled, drawing only around 2.5 million unique visitors a month 18 months after launch, leading Mossberg and Swisher to sell their new company to Vox Media.

It was tempting after the sale to presume that an individual brand can only take you so far, that you need a big media company behind you, but I think that’s a mistake; as a counter-example, consider John Gruber’s Daring Fireball, which as of 2011 had over 4 million visits a month. Granted, many of those are repeat visitors — Daring Fireball’s unique visitors were about a fifth of that — but that’s kind of the point: if you care about Apple, for example, would you rather read Mossberg once a week or Gruber once a day?

Obviously this point is personal: three months after Recode launched, Stratechery added the Daily Update, a subscription offering for people who wanted daily content about the business and strategy of technology. Note the narrowing: Mossberg was the arbiter of all consumer products; my goal is to not really cover products at all. Not only do I not have Mossberg’s eye, I am also cognizant of the fact that there are a multitude of sites and hundreds if not thousands of writers focused on nothing else but doing what Mossberg did alone way back in 1991. Consumer technology used to be niche, and on the Internet, niche is powerful; now it’s a commodity, and the economics reflect that.

Mossberg and Trailblazing

Still, that shouldn’t take away from the fact that Mossberg is just as much of a trailblazer as the companies and products he covered: that writers like myself can build businesses and brands independent of established publications is simply the natural evolution of how Mossberg built a brand bigger than the Wall Street Journal, fueled by the Internet and its atomization of media. Mossberg told Wired in that profile:

As PC sales skyrocketed in the early ’90s, he sensed a historic shift: “I believed that the tech market was about to broaden and democratize, and the column could catch the wave.”

Catch the wave Mossberg did, and in the process, created the blueprint for another. That’s a pretty good career.

  1. My favorite tidbit from a New York Times story about the launch:

    The news will be posted on the Web site about midnight, said Neil Budde, the Interactive Edition’s editor in chief. A few articles will be withheld until 2 or 3 A.M., he said, so that competing newspapers will not be able to see them until the day’s editions are printed.

    Apparently it didn’t occur to anyone that the stories could be posted to other websites!

The Arrival of Artificial Intelligence

Chris Dixon opened a truly wonderful piece in the Atlantic entitled How Aristotle Created the Computer like this:

The history of computers is often told as a history of objects, from the abacus to the Babbage engine up through the code-breaking machines of World War II. In fact, it is better understood as a history of ideas, mainly ideas that emerged from mathematical logic, an obscure and cult-like discipline that first developed in the 19th century. Mathematical logic was pioneered by philosopher-mathematicians, most notably George Boole and Gottlob Frege, who were themselves inspired by Leibniz’s dream of a universal “concept language,” and the ancient logical system of Aristotle.

Dixon goes on to describe the creation of Boolean logic (which has only two values: TRUE and FALSE, represented as 1 and 0 respectively), and the insight by Claude E. Shannon that those two variables could be represented by a circuit, which itself has only two states: open and closed.1 Dixon writes:

Another way to characterize Shannon’s achievement is that he was first to distinguish between the logical and the physical layer of computers. (This distinction has become so fundamental to computer science that it might seem surprising to modern readers how insightful it was at the time—a reminder of the adage that “the philosophy of one century is the common sense of the next.”)

Dixon is being modest: the distinction may be obvious to computer scientists, but it is precisely the clear articulation of said distinction that undergirds Dixon’s remarkable essay; obviously “computers” as popularly conceptualized were not invented by Aristotle, but he created the means by which they would work (or, more accurately, set humanity down that path).

Moreover, you could characterize Shannon’s insight in the opposite direction: distinguishing the logical and the physical layers depends on the realization that they can be two pieces of a whole. That is, Shannon identified how the logical and the physical could be fused into what we now know as a computer.

To that end, the dramatic improvement in the physical design of circuits (first and foremost the invention of the transistor and the subsequent application of Moore’s Law) by definition meant a dramatic increase in the speed with which logic could be applied. Or, to put it in human terms, how quickly computers could think.
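As a minimal sketch of what that fusion looks like from the logical side, the snippet below (in Python, purely for illustration) pretends a circuit is nothing more than switches that are either open (0) or closed (1); wiring switches in series or in parallel reproduces Boolean AND and OR, which is the mapping Shannon identified.

```python
# Toy model of Shannon's insight: a switch is either open (0) or closed (1),
# and simple wiring patterns reproduce Boolean logic.

OPEN, CLOSED = 0, 1

def series(a, b):
    """Current flows only if both switches are closed: Boolean AND."""
    return a & b

def parallel(a, b):
    """Current flows if either switch is closed: Boolean OR."""
    return a | b

for a in (OPEN, CLOSED):
    for b in (OPEN, CLOSED):
        print(f"a={a} b={b}  series (AND)={series(a, b)}  parallel (OR)={parallel(a, b)}")
# a=0 b=0  series (AND)=0  parallel (OR)=0
# a=0 b=1  series (AND)=0  parallel (OR)=1
# a=1 b=0  series (AND)=0  parallel (OR)=1
# a=1 b=1  series (AND)=1  parallel (OR)=1
```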

50 Years of AI

Earlier this week U.S. Treasury Secretary Steve Mnuchin, in the words of Dan Primack, “breezily dismissed the notion that AI and machine learning will soon replace wide swathes of workers, saying that ‘it’s not even on our radar screen’ because it’s an issue that is ’50 or 100 years’ away.”

Naturally most of the tech industry was aghast: doesn’t Mnuchin read the seemingly endless announcements of artificial intelligence initiatives and startups on TechCrunch?

Then again, maybe Mnuchin’s view makes more sense than you might think; just read this piece by Maureen Dowd in Vanity Fair entitled Elon Musk’s Billion-Dollar Crusade to Stop the A.I. Apocalypse:

In a startling public reproach to his friends and fellow techies, Musk warned that they could be creating the means of their own destruction. He told Bloomberg’s Ashlee Vance, the author of the biography Elon Musk, that he was afraid that his friend Larry Page, a co-founder of Google and now the C.E.O. of its parent company, Alphabet, could have perfectly good intentions but still “produce something evil by accident”—including, possibly, “a fleet of artificial intelligence-enhanced robots capable of destroying mankind.”

The rest of the article is preoccupied with the question of what might happen if computers are smarter than humans; Dowd quotes Stuart Russell to explain why she is documenting the debate now:

“In 50 years, this 18-month period we’re in now will be seen as being crucial for the future of the A.I. community,” Russell told me. “It’s when the A.I. community finally woke up and took itself seriously and thought about what to do to make the future better.”

50 years: that’s the same timeline as Mnuchin; perhaps he is worried about the same things as Elon Musk? And, frankly, should the Treasury Secretary concern himself with such things?

The problem is obvious: it’s not clear what “artificial intelligence” means.

Defining Artificial Intelligence

Artificial intelligence is very difficult to define for a few reasons. First, there are two types of artificial intelligence: the artificial intelligence described in that Vanity Fair article is Artificial General Intelligence, that is, a computer capable of doing anything a human can. That is in contrast to Artificial Narrow Intelligence, in which a computer does what a human can do, but only within narrow bounds. For example, specialized AI can play chess, while a different specialized AI can play Go.

What is kind of amusing — and telling — is that as John McCarthy, who invented the name “Artificial Intelligence”, noted, the definition of specialized AI is changing all of the time. Specifically, once a task formerly thought to characterize artificial intelligence becomes routine — like the aforementioned chess-playing, or Go, or a myriad of other taken-for-granted computer abilities — we no longer call it artificial intelligence.

That makes it especially hard to tell where computers end and artificial intelligence begins. After all, accounting used to be done by hand:

[Photo: accounting being done by hand, before computerization]

Within a decade this picture was obsolete, replaced by an IBM mainframe. A computer was doing what a human could do, albeit within narrow bounds. Was it artificial intelligence?

Technology and Humanity

In fact, we already have a better word for this kind of innovation: technology. Technology, to use Merriam-Webster’s definition, is “the practical application of knowledge especially in a particular area.” The story of technology is the story of humanity: the ability to control fire, the wheel, clubs for fighting — all are technology. All transformed the human race, thanks to our ability to learn and transmit knowledge; once one human could control fire, it was only a matter of time until all humans could.

It is technology that transformed homo sapiens from hunter-gatherers to farmers, and it was technology that transformed farming such that an ever smaller percentage of the population could support the rest. Many millennia later, it was technology that led to the creation of tools like the flying shuttle, which doubled the output of weavers, driving up the demand for spinners, which in turn drove innovations like the roller spinning frame, powered by water. For the first time humans were leveraging non-human and non-animal forms of energy to drive their technological inventions, setting off the industrial revolution.

You can see the parallels between the industrial revolution and the invention of the computer: the former brought external energy to bear in a systematic way on physical activities formerly done by humans; the latter brings external energy to bear in a systematic way on mental activities formerly done by humans. Recall the analogy made by Steve Jobs:

I remember reading an article when I was about 12 years old, I think it might have been in Scientific American, where they measured the efficiency of locomotion for all these species on planet Earth, how many kilocalories did they expend to get from point A to point B. And the condor came in at the top of the list, it surpassed everything else, and humans came in about a third of the way down the list, which was not such a great showing for the crown of creation.

But somebody there had the imagination to test the efficiency of a human riding a bicycle. The human riding a bicycle blew away the condor, all the way off the top of the list, and it made a really big impression on me that we humans are tool builders, and we can fashion tools that amplify these inherent abilities that we have to spectacular magnitudes. And so for me, a computer has always been a bicycle of the mind.

In short, while Dixon traced the logic of computers back to Aristotle, the very idea of technology — of which, without question, computers are a part — goes back even further. Creating tools that do what we could do ourselves, but better and more efficiently, is what makes us human.

Machine Learning

That definition, you’ll note, is remarkably similar to that of artificial intelligence; indeed, it’s tempting to argue that artificial intelligence, at least the narrow variety, is simply technology by a different name. Just as we designed the cotton gin, so we designed accounting software, and automated manufacturing. And, in fact, those are all related: all involved overt design, in which a human anticipated the functionality and built a machine that could execute that functionality on a repeatable basis.

That, though, is why today is different.

Recall that while logic was developed over thousands of years, it was only part way through the 20th century that said logic was fused with physical circuits. Once that happened the application of that logic progressed unbelievably quickly.

Technology, meanwhile, has been developed even longer than logic has. However, just as the application of logic was long bound by the human mind, the development of technology has had the same limitations, and that includes the first half-century of the computer era. Accounting software is in the same genre as the spinning frame: deliberately designed by humans to solve a specific problem.

Machine learning is different.2 Now, instead of humans designing algorithms to be executed by a computer, the computer is designing the algorithms.3 It is still Artificial Narrow Intelligence — the computer is bound by the data and goal given to it by humans — but machine learning is, in my mind, meaningfully different from what has come before. Just as Shannon fused the physical with the logical to make the computer, machine learning fuses the development of tools with computers themselves to make (narrow) artificial intelligence.
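To make that difference concrete, here is a deliberately trivial sketch in Python. It is nothing like a production machine learning system, and the unit conversion is just hypothetical filler, but it shows the shift in who writes the rule: in the first function a human specifies the algorithm; in the second, the computer derives the coefficients from nothing but example data and a goal.

```python
# The "technology" way: a human writes the rule directly.
def human_designed(celsius):
    return celsius * 9 / 5 + 32

# The "machine learning" way: the computer is given only examples (data) and a
# goal (minimize error) and finds the rule itself via gradient descent.
examples = [(0.0, 32.0), (10.0, 50.0), (25.0, 77.0), (100.0, 212.0)]

w, b = 0.0, 0.0  # the learned "algorithm" is nothing more than these two numbers
for _ in range(200_000):
    grad_w = grad_b = 0.0
    for x, y in examples:
        error = (w * x + b) - y
        grad_w += 2 * error * x
        grad_b += 2 * error
    w -= 1e-5 * grad_w
    b -= 1e-5 * grad_b

print(human_designed(30))  # 86.0
print(w * 30 + b)          # roughly 86, learned from the data rather than specified
```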

This is not to overhype machine learning: the applications are still highly bound and often worse than human-designed systems, and we are far, far away from Artificial General Intelligence. It seems clear to me, though, that we are firmly in Artificial Narrow Intelligence territory: the truth is that humans have made machines to replace their own labor from the beginning of time; it is only now that the machines are creating themselves, at least to a degree.4

Life and Meaning

The reason this matters is that pure technology is hard enough to manage: the price we pay for technological progress is all of the humans who are no longer necessary. The Industrial Revolution benefitted humanity in the long run, but in the short run there was tremendous suffering, interspersed with wars that were far more destructive thanks to technology.

What then are the implications of machine learning, that is, the (relatively speaking) fantastically fast creation of algorithms that can replace a huge number of jobs that generate data (data being the key ingredient to creating said algorithms)? To date automation has displaced blue collar workers; are we prepared for machine learning to displace huge numbers of white collar ones?

This is why Mnuchin’s comment was so disturbing; it also, though, is why the obsession of so many technologists with Artificial General Intelligence is just as frustrating. I get the worry that computers far more intelligent than any human will kill us all; more people, though, should be concerned about the imminent creation of a world that makes huge swathes of people redundant. How many will care if artificial intelligence destroys life if it has already destroyed meaning?

  1. This is only Part 1! Definitely read the whole thing
  2. Not, to be clear, re-named analytics software
  3. Albeit guided by human-devised algorithms
  4. And, by extension, there is at least a plausible path to general intelligence

Ad Agencies and Accountability

It’s never a good thing when a news story begins with the phrase “summoned before the government.” That, though, is exactly what happened to Google last week in a case of what most seem to presume is the latest episode of tech companies behaving badly.

From The Times:

Google is to be summoned before the government to explain why taxpayers are unwittingly funding extremists through advertising, The Times can reveal. The Cabinet Office joined some of the world’s largest brands last night in pulling millions of pounds in marketing from YouTube after an investigation showed that rape apologists, anti-Semites and banned hate preachers were receiving payouts from publicly subsidised adverts on the internet company’s video platform.

David Duke, the American white nationalist, Michael Savage, a homophobic “shock-jock”, and Steven Anderson, a pastor who praised the killing of 49 people in a gay nightclub, all have videos variously carrying advertising from the Home Office, the Royal Navy, the Royal Air Force, Transport For London and the BBC.

Mr Anderson, who was banned from entering Britain last year after repeatedly calling homosexuals “sodomites, queers and faggots”, has YouTube videos with adverts for Channel 4, Visit Scotland, the Financial Conduct Authority (FCA), Argos, Honda, Sandals, The Guardian and Sainsbury’s.

Let me start out with what I hope is an obvious caveat:

  • I believe that free speech is a critical right, and that includes speech with which I strongly disagree (that’s the entire point)
  • That said, a right to free speech does not include a right to be heard, much less a right to monetize; anyone can host their own site and sell their own ads, but there is no right to Google’s or Facebook’s platforms or ad networks
  • To that end, it is perfectly legitimate to be upset at the fact that proponents of hate speech or fake news or any other type of objectionable content are monetizing that content on YouTube or through DoubleClick (Google’s ad display network)

What is more interesting, in my opinion, is this: with whom should you be upset?

Google’s Responsibility

At first glance this seems like a natural place to extend my criticism of Google from two weeks ago after The Outline detailed how some of Google’s “featured snippets” contained blatantly wrong and often harmful information:

The reality of Internet services is such that Google will never become an effective answer machine without going through this messy phase. The company, though, should take more responsibility; Google told The Outline:

“The Featured Snippets feature is an automatic and algorithmic match to the search query, and the content comes from third-party sites. We’re always working to improve our algorithms, and we welcome feedback on incorrect information, which users may share through the ‘Feedback’ button at the bottom right of the Featured Snippet.”

Frankly, that’s not good enough. Algorithms have consequences, particularly when giving answers to those actually searching for the truth. I grant that Google needs the space to iterate, but said space does not entail the abandonment of responsibility; indeed, the exact opposite is the case: Google should be investing far more in catching its own shortcomings, not relying on a barely visible link that fails to even cover their own rear end.

Algorithms are certainly responsible for what is reported in The Times: ads are purchased on one side, and algorithmically placed against content on the other. So, bad Google, right?

To a degree, yes, but not completely; consider this paragraph at the end of The Times’ article:

The brands contacted by The Times all said that they had no idea that their adverts were placed next to extremist content. Those that did not immediately pull their advertising implemented an immediate review after expressing serious concern.

Were I one of these brands I would be concerned too; in fact, my concern would extend far beyond a few extremist videos to the entire way in which my ads are placed in the first place.

Ad Agencies and the Internet

Few advertisers actually buy ads, at least not directly. Way back in 1841, Volney B. Palmer opened the first ad agency in Philadelphia. In place of having to take out ads with multiple newspapers, an advertiser could deal directly with the ad agency, vastly simplifying the process of taking out ads. The ad agency, meanwhile, could leverage its relationships with all of those newspapers by serving multiple clients:

[Diagram: the ad agency sitting between advertisers and multiple newspapers]

It’s a classic example of how being in the middle can be a really great business opportunity, and the utility of ad agencies only increased as more advertising formats like radio and TV became available. Particularly in the case of TV, advertisers not only needed to place ads, but also needed a lot more help in making ads; ad agencies invested in ad-making expertise because they could scale said expertise across multiple clients.

At the same time, the advertisers were rapidly expanding their geographic footprints, particularly after the Second World War; naturally, ad agencies increased their footprint at the same time, often through M&A. The overarching business opportunity, though, was the same: give advertisers a one-stop shop for all of their advertising needs.

When the Internet came along, the ad agencies presumed this would simply be another justification for the commission they kept on their clients’ ad spend: more channels is more complexity that the ad agencies could abstract away for their clients, and the Internet has an effectively infinite number of channels!

That abundance of channels, though, meant that discovery was far more important than distribution. Increasingly users congregated on two discovery platforms: Google for things for which they were actively looking, and Facebook for something to fill the time. I described the impact this had on publishers in Popping the Publishing Bubble:

  • Editorial and ads were unbundled; the latter was replaced by ad networks that targeted users across multiple sites
  • However, this model makes for a terrible user experience and, more pertinently, it doesn’t work nearly as well on mobile, in part because the ads are worse, but also because it’s hard to track users via cookies
  • Google and Facebook, on the other hand, track users via identity, have superior ad units (especially Facebook on mobile), and have invested heavily in advertiser tools that are far superior to anyone else’s

This is why I wrote in The Reality of Missing Out that Google and Facebook would take all of the digital advertising dollars:

Both companies, particularly Facebook, have dominant strategic positions; they are superior to other digital platforms on every single vector: effectiveness, reach, and ROI. Small wonder that the smaller players I listed above — LinkedIn, Yelp, Yahoo, Twitter — are all struggling…

Digital is subject to the effects of Aggregation Theory, a key component of which is winner-take-all dynamics, and Facebook and Google are indeed taking it all. I expect this trend to accelerate: first, in digital advertising, it is exceptionally difficult to see anyone outside of Facebook and Google achieving meaningful growth…Everyone else will have an uphill battle to show why they are worth advertisers’ time.

This is exactly what has happened. Just last week the Wall Street Journal reported on eMarketer’s forecast on digital advertising:

Total digital ad spending in the U.S. will increase 16% this year to $83 billion, led by Google’s continued dominance of the search ad market and Facebook’s growing share of display and mobile ads, according to eMarketer’s latest forecast. Google’s U.S. revenue from digital ads is expected to increase about 15% this year, while Facebook’s will jump 32%, more than previously expected, according to the market research company’s latest forecast report.

Snapchat is expected to grow from its small base, but everyone else will shrink: in other words, there are really only two options for the sort of digital advertising that reaches every person an advertiser might want to reach:

[Chart: digital advertising growth accruing almost entirely to Google and Facebook]

That’s a problem for the ad agencies: when there are only two places an advertiser might want to buy ads, the fees paid to agencies to abstract complexity become a lot harder to justify.

Accountability and Logistics

Again, as I noted above, there are reasonable debates that can be had about hate speech being on Google’s and Facebook’s platforms at all; what is indisputable, though, is that the logistics of policing this content are mind-boggling.

Take YouTube as the most obvious example: there are 400 hours of video uploaded to YouTube every minute; that’s 24,000 hours an hour, 576,000 hours a day, over 4 million hours a week, and over 210 million hours a year — and the rate is accelerating. To watch every minute of every video uploaded in a week would require over 100,000 people working full-time (40 hours). The exact same logistical problem applies to ads served by DoubleClick as well as the massive amount of content uploaded to Facebook’s various properties; when both companies state they are working on using machine learning to police content it’s not an excuse: it’s the only viable approach.
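The arithmetic is worth spelling out, if only to appreciate the scale; the only input is the publicly reported upload rate, and a standard 40-hour work week is assumed.

```python
# Back-of-the-envelope version of the figures above.
HOURS_UPLOADED_PER_MINUTE = 400

per_hour = HOURS_UPLOADED_PER_MINUTE * 60  # 24,000 hours of video per hour
per_day = per_hour * 24                    # 576,000 hours per day
per_week = per_day * 7                     # just over 4 million hours per week
per_year = per_day * 365                   # roughly 210 million hours per year

# Full-time reviewers (40-hour weeks) needed just to watch one week's uploads
reviewers_needed = per_week / 40

print(per_week, per_year, reviewers_needed)  # 4032000 210240000 100800.0
```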

Don’t tell that to the ad agencies though. WPP Group CEO Martin Sorrell told CNBC:

“They can’t just say look we’re a technology company, we have nothing to do with the content that is appearing on our digital pages,” Sorrell said. He added that, as far as placing advertisements was concerned, they have to be held to the same standards as traditional media organizations…

“The big issue for Google and Facebook is whether they are going to have human editing at this point … of course they have the profitability. They have the margins to enable them to do it. And this is going to be the big issue — how far are they prepared to go?” Sorrell said, adding they needed to go “significantly far” to arrest these concerns.

It really is a quite convenient framing for Sorrell (then again, he is the advertising expert): if only Google and Facebook wouldn’t be greedy and just spend a tiny bit of their cash windfall to make sure ads are in the right spot, why, everything would be just the way it used to be! What is convenient is that this excuses WPP from any responsibility: it’s all Google’s and Facebook’s fault.

Here’s the question, though: if Google and Facebook have all of the responsibility, then shouldn’t they also be getting all of the money? What exactly are WPP’s fees being used for? There are only two places to buy ads, so it’s not as if agencies are helping advertisers purchase across multiple outlets as they did in the past. And while there is certainly an art to digital ads, the cost and complexity is also less than TV, with the added benefit that it is far easier to use a scalable scientific approach to figuring out what works (as opposed to relying on Don Draper-like creative geniuses). Policing the placement of a specific advertising buy is also a much more human-scale problem than analyzing the entire corpus of content monetized by Google and Facebook.

Google Versus the Ad Agencies

It’s clear that Google knows that it is the agencies who are actually implicated by The Times’ report. In a blog post entitled Improving Our Brand Safety Controls the managing director of Google U.K. writes (emphasis mine):

We’ve heard from our advertisers and agencies loud and clear that we can provide simpler, more robust ways to stop their ads from showing against controversial content. While we have a wide variety of tools to give advertisers and agencies control over where their ads appear, such as topic exclusions and site category exclusions, we can do a better job of addressing the small number of inappropriately monetized videos and content. We’ve begun a thorough review of our ads policies and brand controls, and we will be making changes in the coming weeks to give brands more control over where their ads appear across YouTube and the Google Display Network.

The message is loud-and-clear: brands, if you don’t want your ads to appear against objectionable content, then get your agencies to actually do their job.

Make no mistake, the agencies know it too: there has been a lot of talk about a boycott of Google, but read between the lines about what is actually going on. For example, from Bloomberg:

France’s Havas SA, the world’s sixth-largest advertising and marketing company, pulled its U.K. clients’ ads from Google and YouTube on Friday after failing to get assurances from Google that the ads wouldn’t appear next to offensive material. Those clients include wireless carrier O2, Royal Mail Plc, government-owned British Broadcasting Corp., Domino’s Pizza and Hyundai Kia, Havas said in a statement.

“Our position will remain until we are confident in the YouTube platform and Google Display Network’s ability to deliver the standards we and our clients expect,” said Paul Frampton, chief executive officer and country manager for Havas Media Group UK.

Later, the parent company Havas said it would not take any action outside the U.K., and called its U.K. unit’s decision “a temporary move.”

“The Havas Group will not be undertaking such measures on a global basis,” a Havas spokeswoman wrote in an email. “We are working with Google to resolve the issues so that we can return to using this valuable platform in the U.K.”

This boycott is not about hurting Google, because the reality is that the ad agencies can do no such thing: Google and Facebook control the users, and that means the advertisers have no choice but to be on their platforms. If Havas actually had power they would pull their ads globally, and make it clear that the boycott was permanent absent significant changes; the reality is that the ad agencies are running a PR campaign for the benefit of their clients who are rightly upset — and, as noted above, were until now completely oblivious.

Taking Responsibility

To be clear, I’m not seeking to absolve Google and Facebook of responsibility, even as I recognize the complexities of the challenges they face. Moreover, one could very easily use this article to make an argument about monopoly power, which is another reason for Google and Facebook to address this problem before governments do more than summon them.

Advertisers and ad agencies, though, should be accountable as well. If ad agencies want to be relevant in digital advertising, then they need to generate value independent of managing creative and ad placement: policing their clients’ ads would be an excellent place to start. If The Times can do it so can WPP and the Havas Group.

Big brands, meanwhile, should expect more from their agencies: paying fees so an agency can take out an ad on Google or Facebook without taking the time to do it right is a waste of money — and, when agencies are asleep at the wheel as The Times demonstrated, said spend is actually harmful.

Above all, what Sorrell and so many others get so wrong is this: the Internet is nothing like traditional media. The scale is different, the opportunities are different, and the threats are different. Demanding that Google (and Facebook) act like traditional media companies is at root nostalgia for a world drifting away like the smoke from a Don Draper cigarette, and it is just as deadly.

Editor’s Note: In the original version of this article I conflated media buying and creative agencies and their associated fees; the article has been amended. AdAge has a useful overview of commissions and fees here

Intel, Mobileye, and Smiling Curves

There is a fascinating paragraph in this Wall Street Journal article about Intel’s purchase of Mobileye NV,1 an Advanced Driver Assistance Systems company primarily focused on camera-based collision avoidance and, going forward, autonomous driving:

The considerable premium Intel is willing to pay for Mobileye reflects the enormous value tech companies see in the automation of cars and trucks, said Mike Ramsay, research director at Gartner Inc. It would have been inconceivable a few years ago — it is more than double what the private-equity firm Cerberus Capital Management LLC paid for Chrysler LLC in 2007.

Chrysler LLC is, of course, a car manufacturer, since merged with Fiat; the price it was sold for is about as relevant to Mobileye as the value of a computer OEM to, well, Intel.

The Smiling Curve and PCs

I wrote about The Smiling Curve back in 2014; it is a concept that was coined by one of those computer OEMs, Stan Shih of Acer, in the early 1990s, as a means of explaining why Acer needed to change its business model.

[Chart: the Smiling Curve, with value-added highest at the component and distribution ends and lowest in assembly]

Shih explained the curve in his book Me-Too Is Not My Style:

The basic structure of the value-added curve, from left to right on the horizontal axis, are the upper, middle and down stream of an industry, that is, the component production, product assembly and distribution. The vertical axis represents the level of value-added. In terms of market competition type, the left side of the curve is worldwide competition whose success depends on technology, manufacturing and economy of scales. On the right side of the curve is regional competition. Its success depends on brand name, marketing channel and logistic capability.

Every industry has its own value-added curve. Different curves are derived according to different levels of value-added. The major factors in determining the level of value-added are entry barrier and accumulation of capability. In other words, with a higher entry barrier and greater accumulation of capabilities, the value-added will be higher.

Take the computer industry as an example. Microprocessor manufacturing or the establishment of brand name business comes with a higher entry barrier, and requires many years of strength accumulation to achieve progress. However, computer assembly is very easy. That is why no-brand computers are found everywhere in electronic shopping malls.

When Shih coined the concept in 1992, “microprocessor manufacturing” meant Intel: outside of the occasional challenge from AMD, Intel provided one of two parts (along with Windows) of the personal computer that couldn’t be bought elsewhere; the result was one of the most profitable companies of all time.

Note, though, that while a core piece of Intel’s competitive advantage (particularly relative to AMD) was, as Shih noted, the entry barrier of fabrication, Intel’s close connection to Windows — to software — was just as critical. It is operating systems that provide network effects and the tremendous profitability that follows, and operating systems are based on software. In other words, the PC smiling curve looked more like this:

[Figure: The PC smiling curve]

Windows and x86 processors were effectively a bundle, and Microsoft and Intel split the profits. Remember, bundling is how you make money, and in this case Intel-based hardware provided Microsoft a vehicle to profit from licensing Windows, while Windows built an unassailable moat for both — at least in PCs.

The Smiling Curve and Phones

Obviously things went much differently for both Microsoft and Intel — and Acer — when it came to smartphones. The overall structure of the industry still fit the smiling curve, but the software was layered on completely differently:

[Figure: The smartphone smiling curve]

Apple used software to bundle together manufacturing (done under contract) and the final product marketed to consumers; over time the company also added components, specifically microprocessors, to the bundle (also built under contract). The result was the most successful product of all time.

Google, meanwhile, made Android free; the bundle, such as there was, was between the operating system and Google’s cloud. The rest of the ecosystem was left to fight it out: distribution and marketing helped Samsung profit on the right, while R&D and manufacturing prowess meant profits for ARM, Samsung, and Qualcomm, along with a host of specialized component suppliers, on the left. Still, no one was as profitable as Intel was in the PC era, because no one had the software bundle.

That said, the role of software was critical: Intel, for example, started out the smartphone race at a performance disadvantage; by the time the company caught up, the ecosystems had already moved on, because too much software was incompatible with x86.

The Smiling Curve and Servers

Intel has done better in the cloud:

[Figure: The server smiling curve]

The cloud took the commodification wave that hit PCs to a new extreme: major cloud providers, armed with massive scale and their own reference designs, hired Asian manufacturers directly. The one exception was Intel, whose Xeon chips (which themselves undercut purpose-built server processors from companies like Sun and IBM) continue to be the most important contributors to Intel’s bottom line.2 Still, the real value in the cloud is on the right, where the software is: Facebook, Google, AWS.

Cars and the Future

A little over a year ago I explained in Cars and the Future that there were three changes happening in the personal transportation industry simultaneously:

  • Drivetrains were changing from the internal combustion engine to electric
  • Car operation was moving from human-based to computer-based (i.e. self-driving cars)
  • Ownership was shifting from individuals to fleets, dispatched by ride-sharing services

As I noted in that piece, each of these developments is in some respects independent of the others:

  • Tesla has led the way in electric vehicles, building an amazing brand along the way, but traditional car companies are not far behind. That’s because the drivetrain is a sustaining technology, not a disruptive one: the business model is by and large the same.3

  • Multiple players are working on self-driving cars, including Mobileye; more on them (and Intel) in a bit. Other interested parties are Apple and Google, as well as the traditional car manufacturers — who also have rather mixed motivations. For now, limited self-driving functionality is a high-margin add-on; in the future, it could be their demise (more on this in a moment too).

  • Uber is the biggest player in ride-sharing, at least in most Western countries, although Lyft is lurking should Uber implode; Didi is dominant in China, while Southeast Asia has a number of smaller competitors. The ride-sharing business is a better one than many critics think; in developed markets rides are profitable on a unit basis, and there is negative churn: customers use the services more over time, not less. Competition is fierce, although the lower customer acquisition costs that come with being the dominant player are under-appreciated, as is the impact those costs have on attracting drivers.

What is interesting is that these three factors can be fit into the smiling curve framework:

[Figure: The transportation smiling curve]

This underscores the reality that all three are still very interconnected. More usefully, the smiling curve framework, particularly the lessons learned from the PC, smartphone, and server, also gives insight into how the transportation market may evolve, and explain why Intel made this purchase.

Car Manufacturers: The Bottom of the Smiling Curve

First, while the individual ownership model made it possible to bundle manufacturing and selling to end users (a la Apple in smartphones), said model doesn’t make sense going forward. The truth is that individual car ownership is a massive waste of resources, particularly for the individual: thousands of dollars are spent on an asset that is only used a single-digit percentage of the time and that depreciates rapidly (whether driven or not). The only reason we have such a model is that before the smartphone no other was possible (and the convenience factor of owning one’s own car was so great).
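To put rough numbers on that waste, here is a minimal back-of-envelope sketch in Python; the hours-per-day figures are illustrative assumptions of mine, not data from this article:

```python
# Illustrative utilization math for a personally owned car versus a fleet
# vehicle. The hours-per-day inputs are assumptions for illustration only;
# the point is that any plausible figure for a personal car lands in the
# single digits.

personal_hours_driven_per_day = 1.0    # assumption
fleet_hours_in_service_per_day = 20.0  # assumption (the rest is charging/servicing)

print(f"Personal car utilization: {personal_hours_driven_per_day / 24:.1%}")   # ~4.2%
print(f"Fleet vehicle utilization: {fleet_hours_in_service_per_day / 24:.1%}")  # ~83.3%
```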

That, though, is no longer the case: in the future self-driving cars, owned and serviced by fleets, can be intelligently dispatched by ride-sharing services, resulting in utilization rates closer to 100% than to zero. Yes, humans will likely still move en masse in the morning and afternoon, but there will be packages and whatnot in the intervening time periods.

Moreover, self-driving cars will be built expressly for said utilization rates; yes, they will wear out, but a focus on longevity and serviceability over comfort and luxury will reduce manufacturers to commodity providers selling to bulk purchasers, not dissimilar to the companies building servers for today’s cloud giants.

That leaves the value for the ends.

Ride-Sharing Networks: The Right of the Smiling Curve

I’ve already written, both in Cars and the Future and Google, Uber, and the Evolution of Transportation-as-a-Service, that Uber’s position (again, barring implosion) is stronger than it seems. Yes, were, say, Waymo (Google’s self-driving car unit) able to instantly deploy self-driving cars at volume the ride-sharing company would be in big trouble, but in reality, even if Waymo decides to field a competitor, building routing capability (a related but still different problem than mapping) and, more importantly, gaining consumer traction will take time — time that Uber has to catch up in self-driving (certainly Waymo’s suit against Uber-acquisition Otto for stealing trade secrets looms large here; I’ll cover this more tomorrow).

The broader point, though, is that the winner (or winners) will look a lot like Uber looks today: most riders will use the same app, because whichever network has the most riders will be able to acquire the most cars, increasing liquidity and thus attracting more riders; indeed, the effects of Aggregation Theory will be even more pronounced when supply is completely commoditized per the point above.

Autonomous Driving Suppliers: The Left of the Smiling Curve

Remember, though, that while consumer-facing products and services get all of the attention, there is a lot of money to be made in components, particularly in an industry governed by the Smiling Curve. What is fascinating about this space is that it is an open question about which components will actually matter:

Hardware: For one, it’s not clear which sensing solution will prove superior: Mobileye’s camera-based approach (which Tesla, having ended its relationship with Mobileye after last year’s fatal crash, is reproducing), or the Waymo-favored LIDAR approach (also used — allegedly stolen — by Uber). Perhaps it will be both.

Maps: Mapping is particularly critical for Waymo: its self-driving technology relies on super-detailed maps; if your objection is that producing said maps will be difficult, well, imagine telling yourself 15 years ago about Google Street View. Many car manufacturers, meanwhile, are increasingly casting their lot in with HERE, the former Nokia mapping unit (more on this in a moment as well).

Chips: Mobileye makes its own System-on-a-Chip called the EyeQ; selling a camera is meaningless without the capability of determining what is happening in the image. However, the EyeQ specifically and Mobileye generally cannot really compete with Nvidia, the real monster in this space. Nvidia realized a few years ago that its graphics capability, which emphasizes parallel processing, lends itself remarkably well to machine learning and neural network applications. Those, of course, are at the frontier of modern artificial intelligence research — including the sort of artificial intelligence necessary to drive cars. That is why Nvidia’s PX2 chip is in Tesla’s newest vehicles, along with those from a host of other manufacturers.

The real open question is software: Google is writing its own, Apple is apparently writing its own, Tesla is writing its own, Uber is writing its own, and Mobileye is writing its own. The car companies? It’s a mixed bag — and fair to question how good they’ll be at it.

Intel + Mobileye

This is the context for Intel’s acquisition. First off, yes, it is ridiculously expensive. The purchase price of $14.71 billion (once you back out Mobileye’s cash-on-hand) equates to an EBITDA multiple of 118; it helps that Intel is paying with overseas cash, which the company hasn’t paid taxes on. And second, I’ve long argued that society would be better off if companies would simply milk their company-defining products and return the cash to shareholders to invest in new up-and-comers (with the caveat that Intel had one of the greatest pivots of all-time, from memory to microprocessors).
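For a sense of just how expensive, here is a quick back-of-envelope check in Python; the only inputs are the two figures cited above, and the implied EBITDA is derived for illustration rather than taken from Mobileye’s financials:

```python
# Back-of-envelope check on the Intel/Mobileye deal figures cited above.

net_purchase_price = 14.71e9   # dollars, net of Mobileye's cash on hand
ebitda_multiple = 118          # multiple cited above

implied_ebitda = net_purchase_price / ebitda_multiple
print(f"Implied EBITDA: ${implied_ebitda / 1e6:,.0f} million")  # roughly $125 million
```

In other words, the multiple implies a business generating on the order of $125 million in EBITDA, which is what makes the price so striking.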

That said, there is a lot to like about this deal for Intel (from Mobileye’s perspective, accepting a 34% premium is a no-brainer). Obviously Intel has chip expertise (although its graphics division lags far behind Nvidia’s); with Mobileye the company adds camera and computer vision expertise. It goes deeper than that though: Mobileye and Intel have actually been working together already, with HERE.

In fact, that is understating the situation: Intel bought a 15% ownership stake in HERE earlier this year, and Intel and Mobileye made a deal with BMW last year to build a self-driving car. In short, Intel is assembling the pieces to be a real player in autonomous cars: hardware, maps, chips, software, and strong relationships with car manufacturers.

Indeed, with this acquisition Intel’s greatest strength and greatest weakness is its dominant position with established manufacturers: there is the outline of a grand alliance between car manufacturers, HERE maps, and Intel/Mobileye; the only hang-up is that the future of transportation is one in which the car manufacturers are the biggest losers. Companies like Uber or Google, meanwhile, have nothing to lose (well, Uber does, but they seem to grasp the threat).

Regardless, it’s a worthwhile bet for Intel: the company seems determined to not repeat its mistakes in smartphones, and given that the structure of self-driving cars looks more like servers than anything else, it’s a worthwhile space to splurge in.

  1. What a great name. Seriously!
  2. Yes, there continues to be noise about ARM in the data center, most recently Microsoft’s announced commitment to use ARM in its data centers; for now Intel is dominant, and it will take more than a vaporware announcement to change that
  3. Although it should be noted that Teslas are not sold through dealerships

The Uber Conflation

The latest Uber scandal — yes, it’s getting hard to keep track — is Greyballing. From the New York Times:

Uber has for years engaged in a worldwide program to deceive the authorities in markets where its low-cost ride-hailing service was resisted by law enforcement or, in some instances, had been banned. The program, involving a tool called Greyball, uses data collected from the Uber app and other techniques to identify and circumvent officials who were trying to clamp down on the ride-hailing service. Uber used these methods to evade the authorities in cities like Boston, Paris and Las Vegas, and in countries like Australia, China and South Korea…

At a time when Uber is already under scrutiny for its boundary-pushing workplace culture, its use of the Greyball tool underscores the lengths to which the company will go to dominate its market.

Note the easy conflation: avoiding regulators, allegedly tolerating sexual harassment, it’s all the same thing. Well, I disagree.

Uber’s Three Questions

The first thing to understand about not just the current Uber controversy (or controversies) but all Uber controversies is that, while they are not usually articulated as such, multiple distinct questions are in fact being debated.

  • Question 1: Is Uber a viable business that can one day go public, make a profit, and return the unprecedented amount of capital it has raised?
  • Question 2: Is Uber’s approach to regulation wrong?
  • Question 3: Is Uber wrong with regards to the specific issue at the center of this controversy?

I and many others have spent plenty of time on the first question; it’s not the focus of today’s article. Rather, it’s the distinction between questions 2 and 3 — that easy conflation made by the New York Times — that I find illuminating.

Uber and Regulation

There is no disputing that Uber has operated in the gray zone, perhaps adhering to the letter of the law but certainly not the spirit. For example, in The Upstarts, a new book about the founding stories of Uber and Airbnb, Brad Stone explains Uber’s initial service in San Francisco:

In the summer of 2010, [San Francisco Metropolitan Taxi Agency director Christiane] Hayashi’s phone started ringing off the hook, and it wouldn’t stop for four years. Taxi drivers were incensed; a new app called UberCab allowed rival limo drivers to act like taxis. By law only taxis could pick up passengers who hailed them on the street, and cabs were required to use the fare-calculating meter that was tested and certified by the government. Limos and town cars, however, had to be “prearranged” by passengers, typically by a phone call to a driver or a central dispatch. Uber didn’t just blur this distinction, it wiped it out entirely with electronic hails and by using the iPhone as a fare meter. Every time Hayashi picked up the phone, another driver or fleet owner was screaming, This is illegal! Why are you allowing it? What are you doing about this?

Ultimately, Hayashi could do nothing: Uber drivers did not pick up passengers who hailed them on the street, but were dispatched via the Uber app. UberCab — despite the name, which was soon changed — was not a taxi service, even if the service offered was taxi-like.

That right there is enough for many observers to cry foul: getting off on a technicality does not mean a business is okay. Those cries have only grown louder as Uber has entered more and more cities with services like UberX that are even more murky from a regulatory perspective; now the questions are not just about hailing and dispatch, but licensing, insurance, and background checks, along with the ever present questions about the employee/contractor status of Uber’s drivers. Every technicality that Uber takes advantage of,1 or every new law it gets passed by leveraging lobbyists and by bringing its users to bear on local politicians, is taken by many to be more evidence of a company that considers itself above the law.

The reason this question matters is because if one takes this viewpoint, then the latest allegations against Uber are not independent events, but rather manifestations of a problem that is endemic to the company. And, in that light, I can understand the calls for Kalanick’s removal at a minimum: I will do said position the respect of not arguing against it.

On the flipside, I, for one, view Uber’s regulatory maneuvering in a much more positive light. After all, thinking about the “spirit of the law” can lead to a very different conclusion: the purpose of taxi regulation, at least in theory, was not to entrench local monopolies but rather to ensure safety. If those goals can be met through technology — GPS tracking, reputation scoring, and the greater availability of transportation options, particularly late at night — then it is the taxi companies and captured regulators violating said spirit.2 Moreover, the fact remains that both Uber riders and drivers continue to vote with their feet: Uber has gone far beyond displacing taxis to generating entirely new demand, and when necessary, leveraging said riders and drivers to shift regulation in its favor. I think it is naive to think that said changes — changes that benefit not just Uber but drivers, riders, and local businesses — would have come about simply by asking nicely.

The Uber Conflation

But I digress; I know many of you disagree with me on these points, and that’s okay — having this debate is important. The reason to point this question out, though, was perhaps best exemplified by the #DeleteUber campaign that kicked off Uber’s terrible month. As you may recall the campaign sprang up on social media after Uber was accused of strikebreaking for having disabled surge pricing while taxi drivers protested against President Trump’s executive order banning immigration from seven countries. As I pointed out in a Daily Update:

Uber was definitely in a tough position here: the company likely would have been criticized for price-gouging had surge-pricing sky-rocketed, while restricting drivers from visiting JFK would have entailed Uber acting as a direct employer for drivers, as opposed to a neutral platform (this point is in contention in courts all over the U.S.). And, I think it’s safe to say, a lot of the folks pushing the #DeleteUber campaign were probably not very inclined to like Uber in the first place.

That last sentence captures what I’m driving at, and why separating these questions is so clarifying (and, by the way, surge pricing is another reason why a not insignificant number of people feel that Uber is evil).

Kalanick’s Real Mistake

#DeleteUber was more significant than it might seem: it was the first time that an Uber controversy actually affected demand in an externally visible way; given that controlling demand is the key to Uber’s competitive advantage, that is a very big deal indeed.

However, the real bombshell was an explosive blog post from a (former) female engineer named Susan Fowler Rigetti alleging sexual harassment that was not only tolerated by Uber HR but actually used against the accuser. Said allegations, if true (and I have no reason to believe they are not), are ipso facto unacceptable and heads should roll — up to and including Kalanick if he was aware of the case in question. And Rigetti deserves praise: sadly, the novelty of her allegations may very well be her willingness to go public; based on conversations with multiple friends it’s often perceived as being easier to put up with sexual harassment than run the risk of being blacklisted.

The thornier issue is if Kalanick did not know; surely he has ultimate responsibility for creating a culture that allegedly tolerated such behavior? Indeed, he does. That’s why I drew a line from Kalanick’s refusal to fire an executive who allegedly threatened a journalist to the behavior alleged in that blog post: culture is the accumulation of decisions, reinforced by success, and Uber has collectively made a lot of decisions that push the line and been amply rewarded.

That, though, is why I drew the distinctions in this post: Kalanick’s mistake was in not clearly defining, communicating, and enforcing accountability on actions that pushed the line but had nothing to do with the company’s regulatory fight. In fact, it was even more critical for Uber than for just about any other company to have its own house in order; the very nature of the company’s business created the conditions for living above the law to become culturally acceptable — praised even.

To that end, those who already disapprove of Uber’s regulatory approach see the latest events as part and parcel of what makes Uber Uber; that may be an unfair conflation, but Kalanick has only himself to blame: pushing the line on regulations didn’t necessarily need to equate to pushing the line internally, yet to Kalanick it was all one and the same. The conflation started at the top.

Taking Responsibility

Even if you agree with me about Uber and regulation, it’s completely reasonable to still argue that the company needs a change in leadership for the exact reasons I just laid out; I thought long and hard about making that exact argument. Moreover, if Uber’s scandals start impacting demand for the service, or end up impacting the company’s ability to retain and hire employees, there may not be a choice in the matter.

Still, it’s worth keeping in mind that many of Uber’s scandals implicate not just Uber but tech as a whole. The industry’s problem when it comes to hiring and retaining women is very well documented, and sexual harassment is hardly limited to Uber. Moreover, one of Uber’s other “scandals” — the fact that Kalanick asked Amit Singhal to step down as Senior Vice President of Engineering after not disclosing a sexual harassment claim at Google — reflected far worse on Google than Uber: if Singhal committed a fireable offense the search giant should have fired the man who rewrote their search engine; instead someone in the know dribbled out allegations that happened to damage a company they view as a threat. And while Google’s allegations about Uber-acquisition Otto having stolen intellectual property are very serious, it’s worth remembering that the entire industry is basically built on theft — including Google’s keyword advertising.3

Indeed, more than anything, what gives me pause in this entire Uber affair is the general sordidness of all of Silicon Valley when it comes to market opportunities the size of Uber’s. The sad truth is that for too many this is the first case of sexual harassment they’ve cared about, not because of the victim, but because of the potential for taking Uber down.

The fact of the matter is that we as an industry are responsible for Uber too. We’ve created a world that simultaneously celebrates rule-breaking and undervalues women (and minorities), full of investors and companies that are utterly ruthless when money is on the line, while cloaking said ambition in fluff about changing the world.

That’s the sad irony of the situation: changing the world is exactly what Uber is doing; for all his mistakes Kalanick has been one of the most effective CEOs tech has ever seen. Maybe Kalanick has finally seen the light and can change — I think he deserves the chance, even as I understand the skepticism — and if he cannot then by all means show him the door; in the meantime we can all certainly look in the mirror.

  1. Or Lyft: remember, it was Lyft that pioneered “ride-sharing”; Uber held back because the company thought it was illegal!
  2. Uber absolutely needs to accelerate the roll-out of its accessibility services
  3. To be clear, downloading blueprints is on a different scale; again, if Uber is implicated Kalanick should be held accountable

Twitter, Live, and Luck

There really is nothing like live, as the past calendar year has shown. Many of those moments have been, as you might expect, sports related: LeBron James blocking Andre Iguodala at the end of Game 7 of the NBA finals, the Cubs coming back to win the World Series in Game 7 after a rain delay, a historic comeback in the Super Bowl. Last night’s Academy Awards ceremony provided drama that was itself worthy of an award:

In case you don’t yet know what happened, you can read the New York Times story here about how the wrong ‘Best Picture’ winner was announced; you can’t, though, truly re-live the experience.

What, though, is “the experience”? There is the actual viewing — once upon a time you could only see something happen once — although the fact I embedded a video of last night’s Academy Award moment reinforces that this is no longer a differentiator. More important is the sheer shock: that can never be reproduced, and said shock — and the associated potential — is very much what drives live sports viewing. What is perhaps surprising, though, is that the reactions of those you care to follow are just as fleeting.

Twitter’s “Live” Strategy

One year ago Twitter committed to a “live” strategy; management wrote in a letter to shareholders:

We’re focused now on what Twitter does best: live. Twitter is live: live commentary, live connections, live conversations. Whether it’s breaking news, entertainment, sports, or everyday topics, hearing about and watching a live event unfold is the fastest way to understand the power of Twitter. Twitter has always been considered a “second screen” for what’s happening in the world and we believe we can become the first screen for everything that’s happening now. And by doing so, we believe we can build the planet’s largest daily connected audience. A connected audience is one that watches together, and can talk with one another in real-time. It’s what Twitter has provided for close to 10 years, and it’s what we will continue to drive in the future.

I call out this strategy in the context of last night’s Oscar screw-up because it really highlights what I and so many others mean when we bemoan Twitter’s product stagnation, and how said stagnation so severely limited the company’s long-term prospects — and, on the flipside, how to think about innovation and the disruption of what came before.

Internet-Enabled Businesses

I’ve long maintained that Twitter was, paradoxically, handicapped by how good its initial idea was. Back in 2014 I quoted Marc Andreessen’s famous blog post on product-market fit and added:

I think this actually gets to the problem with Twitter: the initial concept was so good, and so perfectly fit such a large market, that they never needed to go through the process of achieving product market fit. It just happened, and they’ve been riding that match for going on eight years.

The problem, though, was that by skipping over the wrenching process of finding a market, Twitter still has no idea what their market actually is, and how they might expand it. Twitter is the company-equivalent of a lottery winner who never actually learns how to make money, and now they are starting to pay the price.

The shareholder letter above is an example of exactly what I mean; Twitter is still selling the exact same value the service offered back in 2006 — “live commentary, live connections, live conversations” — and the only product ideas are to do what old media like television does, but worse: becoming the first screen for what is happening now means a screen that is smaller, more laggy, and, critically, in the way of seeing the actual tweets one might care about.

It’s also an example of the worst sort of product thinking: simply doing what was done before, but digitally. The classic example is banner ads: back when we viewed content on paper, the only place to put advertisements was, well, on the paper, next to the content. And so, when the web came along, folks just mimicked newspapers, putting advertisements next to content; the result was web pages that suck and an industry in crisis.

Facebook, meanwhile, thanks to mobile, discovered that advertisements in a feed are far more effective: they take over the whole screen, engaging the user’s attention in a way ads off to the side never did, and the miracle of never-ending content and ever-present data connections means the feed never grows stale. In-feed advertisements — just like Google’s search advertisements — are uniquely enabled by the Internet; it should come as no surprise that said uniqueness is strongly correlated with actually making money from advertising.

This is a pattern you see repeatedly from the successful technology companies: both the products and the business model are uniquely enabled by the Internet. Netflix, for example, commoditized time: the company’s entire catalog is available to any subscriber at any time, in a way that was never possible on linear TV. Airbnb commoditized trust, elevating beds, apartments, and homes to the same playing field as traditional hotels. Amazon commoditized product distribution, creating a storefront with infinite shelf-space and unbeatable prices.

Moreover, all of these companies are evolving (or have evolved) their original offering in a way that takes ever more advantage of the Internet’s unique capabilities: Amazon used to predominantly hold inventory like a traditional retailer, but today an ever-increasing portion of sales come from 3rd-party merchants using Amazon as a platform. Netflix’s value used to be not unlike Amazon’s: the infinite shelf space of the Internet meant the service had any DVD you wished to rent; today the company is the inverse, differentiated by its own exclusive content. Airbnb is earlier in its transition to a full-on experience provider, of which lodging is just one piece, but critically, it is evolving.

Commoditizing “Live”

“Evolving” is a word that has never really applied to Twitter. Consider the Oscars: according to Twitter’s statement of strategy the ideal outcome for Twitter apparently would be live-streaming the Oscars, much as the service live-streamed a few NFL games and the Presidential debates, making the service the “first-screen” instead of the second. In other words, Twitter wants to make a better banner ad (that, as noted above, will in reality be worse). What makes this so frustrating is that Twitter’s goal of owning “live” could mean so much more: how might the product evolve if Twitter had the sort of product mindset found at companies like Amazon, Netflix, or Airbnb?

Consider the observation I made at the beginning about last night’s Academy Awards gaffe: what made it special in the moment was not just seeing it happen (one can replay it forever), and not just the shock (which truly is unique to “live”), but also the incredulous reaction on Twitter (and the host of jokes that followed). That reaction, though, is completely lost to time.

Imagine a Twitter app that, instead of a generic Moment that is little more than Twitter’s version of a thousand re-blogs, let you replay your Twitter stream from any particular moment in time. Miss the Oscars gaffe? Not only can you watch the video, you can read the reactions as they happen, from the people you actually care enough to follow. Or maybe see the reactions through someone else’s eyes: choose any other user on Twitter, and see what they saw as the gaffe happened.

What is so powerful about this seemingly simple feature is that it would commoditize “live” in a way that is only possible digitally, and that would uniquely benefit the company: now the experience of “live” (except for the shock value) would be available at any time, from any perspective, and only on Twitter. That such a feature does not exist — indeed, that the company’s stated goal is to become more like old media, instead of uniquely leveraging digital — is as good an explanation for why the company has foundered as any.
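To make the thought experiment concrete, here is a minimal, purely hypothetical sketch of such a replay feature; the Tweet structure, the replay_timeline function, and the sample data are all my own illustrative inventions, not a description of any actual Twitter API or data model:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Iterable, List, Set

# Hypothetical, radically simplified model: a tweet is an author, a timestamp,
# and some text.
@dataclass
class Tweet:
    author: str
    posted_at: datetime
    text: str

def replay_timeline(tweets: Iterable[Tweet], following: Set[str],
                    start: datetime, window: timedelta) -> List[Tweet]:
    """Return, in chronological order, the tweets someone following `following`
    would have seen in the `window` beginning at `start`."""
    end = start + window
    visible = [t for t in tweets
               if t.author in following and start <= t.posted_at < end]
    return sorted(visible, key=lambda t: t.posted_at)

# Replay the ten minutes after the gaffe (timestamp chosen for illustration only).
if __name__ == "__main__":
    gaffe = datetime(2017, 2, 27, 4, 9)
    sample = [
        Tweet("filmcritic", gaffe + timedelta(seconds=30), "Wait. WAIT."),
        Tweet("randomuser", gaffe + timedelta(minutes=2), "unrelated musing"),
        Tweet("oscarwatcher", gaffe + timedelta(minutes=3), "Moonlight won?!"),
    ]
    for t in replay_timeline(sample, {"filmcritic", "oscarwatcher"},
                             gaffe, timedelta(minutes=10)):
        print(t.posted_at, t.author, t.text)
```

The interesting part is not the filtering, which is trivial; it is that every tweet is already timestamped, so “live” becomes a query parameter rather than a constraint.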


More broadly, a foundational principle of Stratechery — one I laid out once again last week — is that the Internet is fundamentally changing the rules of business:

Today the fundamental impact of the Internet is to make distribution itself a cheap commodity — or in the case of digital content, completely free. And that, by extension, is why I have long argued that the Internet Revolution is as momentous as the Industrial Revolution: it is transforming how and where economic value is generated, and thus where power resides. In this brave new world, power comes not from production, not from distribution, but from controlling consumption: all markets will be demand-driven; the extent to which they already are is a function of how digitized they have become.

The companies that thrive in this new world are those that build new businesses uniquely enabled by the Internet; those that struggle are those with businesses built on old limitations — like, for example, the idea that “live” can only be experienced, well, “live.” That Twitter would seek to leverage its only-on-the-Internet initial product insight — the fact that anyone anywhere can read the musings of anyone else, and broadcast in turn — into an old-world business (“live” when live) is the best evidence yet that the company was the product of more luck than insight.