On Exponent, the weekly podcast I host with James Allworth, we discuss The Arrival of Artificial Intelligence.
Listen to it here.
Chris Dixon opened a truly wonderful piece in The Atlantic entitled How Aristotle Created the Computer like this:
The history of computers is often told as a history of objects, from the abacus to the Babbage engine up through the code-breaking machines of World War II. In fact, it is better understood as a history of ideas, mainly ideas that emerged from mathematical logic, an obscure and cult-like discipline that first developed in the 19th century. Mathematical logic was pioneered by philosopher-mathematicians, most notably George Boole and Gottlob Frege, who were themselves inspired by Leibniz’s dream of a universal “concept language,” and the ancient logical system of Aristotle.
Dixon goes on to describe the creation of Boolean logic (which has only two values: TRUE and FALSE, represented as 1 and 0 respectively), and the insight by Claude E. Shannon that those two variables could be represented by a circuit, which itself has only two states: open and closed.1 Dixon writes:
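Shannon’s mapping can be sketched in a few lines: treat each circuit as a value that is either 0 (open) or 1 (closed), and Boole’s connectives become operations on those values. (The function names below are just illustrative, not anything from Shannon’s paper.)

```python
def AND(a: int, b: int) -> int:
    """Two switches in series: current flows only if both are closed."""
    return a & b

def OR(a: int, b: int) -> int:
    """Two switches in parallel: current flows if either is closed."""
    return a | b

def NOT(a: int) -> int:
    """An inverting switch: closed becomes open and vice versa."""
    return 1 - a

# Any logical proposition can be composed from these, e.g. exclusive-or:
def XOR(a: int, b: int) -> int:
    return OR(AND(a, NOT(b)), AND(NOT(a), b))

# Print the full truth table for the two-valued system.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, AND(a, b), OR(a, b), XOR(a, b))
```

The entire logical layer of a computer is, in the end, compositions of exactly these two-state operations; the physical layer is whatever medium implements the switch.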
Another way to characterize Shannon’s achievement is that he was first to distinguish between the logical and the physical layer of computers. (This distinction has become so fundamental to computer science that it might seem surprising to modern readers how insightful it was at the time—a reminder of the adage that “the philosophy of one century is the common sense of the next.”)
Dixon is being modest: the distinction may be obvious to computer scientists, but it is precisely the clear articulation of said distinction that undergirds Dixon’s remarkable essay; obviously “computers” as popularly conceptualized were not invented by Aristotle, but he created the means by which they would work (or, more accurately, set humanity down that path).
Moreover, you could characterize Shannon’s insight in the opposite direction: distinguishing the logical and the physical layers depends on the realization that they can be two pieces of a whole. That is, Shannon identified how the logical and the physical could be fused into what we now know as a computer.
To that end, the dramatic improvement in the physical design of circuits (first and foremost the invention of the transistor and the subsequent application of Moore’s Law) by definition meant a dramatic increase in the speed with which logic could be applied. Or, to put it in human terms, how quickly computers could think.
Earlier this week U.S. Treasury Secretary Steve Mnuchin, in the words of Dan Primack, “breezily dismissed the notion that AI and machine learning will soon replace wide swathes of workers, saying that ‘it’s not even on our radar screen’ because it’s an issue that is ’50 or 100 years’ away.”
Naturally most of the tech industry was aghast: doesn’t Mnuchin read the seemingly endless announcements of artificial intelligence initiatives and startups on TechCrunch?
Then again, maybe Mnuchin’s view makes more sense than you might think; just read this piece by Maureen Dowd in Vanity Fair entitled Elon Musk’s Billion-Dollar Crusade to Stop the A.I. Apocalypse:
In a startling public reproach to his friends and fellow techies, Musk warned that they could be creating the means of their own destruction. He told Bloomberg’s Ashlee Vance, the author of the biography Elon Musk, that he was afraid that his friend Larry Page, a co-founder of Google and now the C.E.O. of its parent company, Alphabet, could have perfectly good intentions but still “produce something evil by accident”—including, possibly, “a fleet of artificial intelligence-enhanced robots capable of destroying mankind.”
The rest of the article is preoccupied with the question of what might happen if computers are smarter than humans; Dowd quotes Stuart Russell to explain why she is documenting the debate now:
“In 50 years, this 18-month period we’re in now will be seen as being crucial for the future of the A.I. community,” Russell told me. “It’s when the A.I. community finally woke up and took itself seriously and thought about what to do to make the future better.”
50 years: that’s the same timeline as Mnuchin; perhaps he is worried about the same things as Elon Musk? And, frankly, should the Treasury Secretary concern himself with such things?
The problem is obvious: it’s not clear what “artificial intelligence” means.
Artificial intelligence is very difficult to define for a few reasons. First, there are two types of artificial intelligence: the artificial intelligence described in that Vanity Fair article is Artificial General Intelligence, that is, a computer capable of doing anything a human can. That is in contrast to Artificial Narrow Intelligence, in which a computer does what a human can do, but only within narrow bounds. For example, specialized AI can play chess, while a different specialized AI can play Go.
What is kind of amusing — and telling — is that as John McCarthy, who invented the name “Artificial Intelligence”, noted, the definition of specialized AI is changing all of the time. Specifically, once a task formerly thought to characterize artificial intelligence becomes routine — like the aforementioned chess-playing, or Go, or a myriad of other taken-for-granted computer abilities — we no longer call it artificial intelligence.
That makes it especially hard to tell where computers end and artificial intelligence begins. After all, accounting used to be done by hand.
Within a decade of the computer’s arrival that was obsolete, replaced by an IBM mainframe. A computer was doing what a human could do, albeit within narrow bounds. Was it artificial intelligence?
In fact, we already have a better word for this kind of innovation: technology. Technology, to use Merriam-Webster’s definition, is “the practical application of knowledge especially in a particular area.” The story of technology is the story of humanity: the ability to control fire, the wheel, clubs for fighting — all are technology. All transformed the human race, thanks to our ability to learn and transmit knowledge; once one human could control fire, it was only a matter of time until all humans could.
It is technology that transformed homo sapiens from hunter-gatherers to farmers, and it was technology that transformed farming such that an ever smaller percentage of the population could support the rest. Many millennia later, it was technology that led to the creation of tools like the flying shuttle, which doubled the output of weavers, driving up the demand for spinners, which drove its own innovation like the roller spinning frame, powered by water. For the first time humans were leveraging non-human and non-animal forms of energy to drive their technological inventions, setting off the industrial revolution.
You can see the parallels between the industrial revolution and the invention of the computer: the former brought external energy to bear in a systematic way on physical activities formerly done by humans; the latter brings external energy to bear in a systematic way on mental activities formerly done by humans. Recall the analogy made by Steve Jobs:
I remember reading an article when I was about 12 years old, I think it might have been in Scientific American, where they measured the efficiency of locomotion for all these species on planet Earth, how many kilocalories did they expend to get from point A to point B. And the condor came in at the top of the list, it surpassed everything else, and humans came in about a third of the way down the list, which was not such a great showing for the crown of creation.
But somebody there had the imagination to test the efficiency of a human riding a bicycle. The human riding a bicycle blew away the condor, all the way off the top of the list, and it made a really big impression on me that we humans are tool builders, and we can fashion tools that amplify these inherent abilities that we have to spectacular magnitudes. And so for me, a computer has always been a bicycle of the mind.
In short, while Dixon traced the logic of computers back to Aristotle, the very idea of technology — of which, without question, computers are a part — goes back even further. Creating tools that do what we could do ourselves, but better and more efficiently, is what makes us human.
That definition, you’ll note, is remarkably similar to that of artificial intelligence; indeed, it’s tempting to argue that artificial intelligence, at least the narrow variety, is simply technology by a different name. Just as we designed the cotton gin, so we designed accounting software, and automated manufacturing. And, in fact, those are all related: all involved overt design, in which a human anticipated the functionality and built a machine that could execute that functionality on a repeatable basis.
That, though, is why today is different.
Recall that while logic was developed over thousands of years, it was only part way through the 20th century that said logic was fused with physical circuits. Once that happened the application of that logic progressed unbelievably quickly.
Technology, meanwhile, has been developed even longer than logic has. However, just as the application of logic was long bound by the human mind, the development of technology has had the same limitations, and that includes the first half-century of the computer era. Accounting software is in the same genre as the spinning frame: deliberately designed by humans to solve a specific problem.
Machine learning is different.2 Now, instead of humans designing algorithms to be executed by a computer, the computer is designing the algorithms.3 It is still Artificial Narrow Intelligence — the computer is bound by the data and goal given to it by humans — but machine learning is, in my mind, meaningfully different from what has come before. Just as Shannon fused the physical with the logical to make the computer, machine learning fuses the development of tools with computers themselves to make (narrow) artificial intelligence.
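The distinction can be made concrete with a toy sketch: rather than a human hard-coding the rule, the machine derives its parameters from example data via gradient descent. (The target function, learning rate, and iteration count below are all illustrative choices, not anything from the text.)

```python
# Suppose the "true" rule is output = 3*x + 2. A human could simply
# write that down; a learning system instead infers it from data.
data = [(x, 3 * x + 2) for x in range(10)]  # (input, output) pairs

w, b = 0.0, 0.0   # the model starts knowing nothing
lr = 0.01         # learning rate (an arbitrary choice here)

for _ in range(2000):
    # Gradient of mean squared error with respect to w and b.
    dw = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    db = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= lr * dw
    b -= lr * db

print(round(w, 2), round(b, 2))  # converges toward 3 and 2
```

The human supplies the data and the goal (minimize error); the rule itself emerges from the optimization. That is the sense in which the computer, not the human, is designing the algorithm.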
This is not to overhype machine learning: the applications are still highly bound and often worse than human-designed systems, and we are far, far away from Artificial General Intelligence. It seems clear to me, though, that we are firmly in Artificial Narrow Intelligence territory: the truth is that humans have made machines to replace their own labor from the beginning of time; it is only now that the machines are creating themselves, at least to a degree.4
The reason this matters is that pure technology is hard enough to manage: the price we pay for technological progress is all of the humans that are no longer necessary. The Industrial Revolution benefitted humanity in the long run, but in the short run there was tremendous suffering, interspersed with wars that were far more destructive thanks to technology.
What then are the implications of machine learning, that is, the (relatively speaking) fantastically fast creation of algorithms that can replace a huge number of jobs that generate data (data being the key ingredient to creating said algorithms)? To date automation has displaced blue collar workers; are we prepared for machine learning to displace huge numbers of white collar ones?
This is why Mnuchin’s comment was so disturbing; it also, though, is why the obsession of so many technologists with Artificial General Intelligence is just as frustrating. I get the worry that computers far more intelligent than any human will kill us all; more people, though, should be concerned about the imminent creation of a world that makes huge swathes of people redundant. How many will care if artificial intelligence destroys life if it has already destroyed meaning?
This is only Part 1! Definitely read the whole thing ↩
Not, to be clear, re-named analytics software ↩
Albeit guided by human-devised algorithms ↩
And, by extension, there is at least a plausible path to general intelligence ↩
On Exponent, the weekly podcast I host with James Allworth, we discuss Intel, Mobileye, and Smiling Curves.
Listen to it here.
It’s never a good thing when a news story begins with the phrase “summoned before the government.” That, though, is exactly what happened to Google last week in a case of what most seem to presume is the latest episode of tech companies behaving badly.
From The Times:
Google is to be summoned before the government to explain why taxpayers are unwittingly funding extremists through advertising, The Times can reveal. The Cabinet Office joined some of the world’s largest brands last night in pulling millions of pounds in marketing from YouTube after an investigation showed that rape apologists, anti-Semites and banned hate preachers were receiving payouts from publicly subsidised adverts on the internet company’s video platform.
David Duke, the American white nationalist, Michael Savage, a homophobic “shock-jock”, and Steven Anderson, a pastor who praised the killing of 49 people in a gay nightclub, all have videos variously carrying advertising from the Home Office, the Royal Navy, the Royal Air Force, Transport For London and the BBC.
Mr Anderson, who was banned from entering Britain last year after repeatedly calling homosexuals “sodomites, queers and faggots”, has YouTube videos with adverts for Channel 4, Visit Scotland, the Financial Conduct Authority (FCA), Argos, Honda, Sandals, The Guardian and Sainsbury’s.
Let me start out with what I hope is an obvious caveat: there are reasonable debates to be had about whether this sort of content should be on Google’s platforms at all.
What is more interesting, in my opinion, is this: with whom should you be upset?
At first glance this seems like a natural place to extend my criticism of Google from two weeks ago after The Outline detailed how some of Google’s “featured snippets” contained blatantly wrong and often harmful information:
The reality of Internet services is such that Google will never become an effective answer machine without going through this messy phase. The company, though, should take more responsibility; Google told The Outline:
“The Featured Snippets feature is an automatic and algorithmic match to the search query, and the content comes from third-party sites. We’re always working to improve our algorithms, and we welcome feedback on incorrect information, which users may share through the ‘Feedback’ button at the bottom right of the Featured Snippet.”
Frankly, that’s not good enough. Algorithms have consequences, particularly when giving answers to those actually searching for the truth. I grant that Google needs the space to iterate, but said space does not entail the abandonment of responsibility; indeed, the exact opposite is the case: Google should be investing far more in catching its own shortcomings, not relying on a barely visible link that fails to even cover their own rear end.
Algorithms are certainly responsible for what is reported in The Times: ads are purchased on one side, and algorithmically placed against content on the other. So, bad Google, right?
To a degree, yes, but not completely; consider this paragraph at the end of The Times’ article:
The brands contacted by The Times all said that they had no idea that their adverts were placed next to extremist content. Those that did not immediately pull their advertising implemented an immediate review after expressing serious concern.
Were I one of these brands I would be concerned too; in fact, my concern would extend far beyond a few extremist videos to the entire way in which their ads are placed in the first place.
Few advertisers actually buy ads, at least not directly. Way back in 1841, Volney B. Palmer, the first ad agency, was opened in Philadelphia. Instead of having to take out ads with multiple newspapers, an advertiser could deal directly with the ad agency, vastly simplifying the process of taking out ads. The ad agency, meanwhile, could leverage its relationships with all of those newspapers by serving multiple clients.
It’s a classic example of how being in the middle can be a really great business opportunity, and the utility of ad agencies only increased as more advertising formats like radio and TV became available. Particularly in the case of TV, advertisers not only needed to place ads, but also needed a lot more help in making ads; ad agencies invested in ad-making expertise because they could scale said expertise across multiple clients.
At the same time, the advertisers were rapidly expanding their geographic footprints, particularly after the Second World War; naturally, ad agencies increased their footprint at the same time, often through M&A. The overarching business opportunity, though, was the same: give advertisers a one-stop shop for all of their advertising needs.
When the Internet came along, the ad agencies presumed this would simply be another justification for the commission they kept on their clients’ ad spend: more channels is more complexity that the ad agencies could abstract away for their clients, and the Internet has an effectively infinite number of channels!
That abundance of channels, though, meant that discovery was far more important than distribution. Increasingly users congregated on two discovery platforms: Google for things for which they were actively looking, and Facebook for something to fill the time. I described the impact this had on publishers in Popping the Publishing Bubble.
This is why I wrote in The Reality of Missing Out that Google and Facebook would take all of the digital advertising dollars:
Both companies, particularly Facebook, have dominant strategic positions; they are superior to other digital platforms on every single vector: effectiveness, reach, and ROI. Small wonder that the smaller players I listed above — LinkedIn, Yelp, Yahoo, Twitter — are all struggling…
Digital is subject to the effects of Aggregation Theory, a key component of which is winner-take-all dynamics, and Facebook and Google are indeed taking it all. I expect this trend to accelerate: first, in digital advertising, it is exceptionally difficult to see anyone outside of Facebook and Google achieving meaningful growth…Everyone else will have an uphill battle to show why they are worth advertisers’ time.
This is exactly what has happened. Just last week the Wall Street Journal reported on eMarketer’s forecast on digital advertising:
Total digital ad spending in the U.S. will increase 16% this year to $83 billion, led by Google’s continued dominance of the search ad market and Facebook’s growing share of display and mobile ads, according to eMarketer’s latest forecast. Google’s U.S. revenue from digital ads is expected to increase about 15% this year, while Facebook’s will jump 32%, more than previously expected, according to the market research company’s latest forecast report.
Snapchat is expected to grow from its small base, but everyone else will shrink. In other words, for the sort of digital advertising that reaches every person an advertiser might want to reach, there are really only two options: Google and Facebook.
That’s a problem for the ad agencies: when there are only two places an advertiser might want to buy ads, the fees paid to agencies to abstract complexity become a lot harder to justify.
Again, as I noted above, there are reasonable debates that can be had about hate speech being on Google’s and Facebook’s platforms at all; what is indisputable, though, is that the logistics of policing this content are mind-boggling.
Take YouTube as the most obvious example: there are 400 hours of video uploaded to YouTube every minute; that’s 24,000 hours an hour, 576,000 hours a day, over 4 million hours a week, and over 210 million hours a year — and the rate is accelerating. To watch every minute of every video uploaded in a week would require over 100,000 people working full-time (40 hours a week). The exact same logistical problem applies to ads served by DoubleClick as well as the massive amount of content uploaded to Facebook’s various properties; when both companies state they are working on using machine learning to police content it’s not an excuse: it’s the only viable approach.
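The arithmetic is easy to verify; the only input is the 400-hours-per-minute figure:

```python
# Back-of-the-envelope check of the YouTube upload math.
per_minute = 400                 # hours of video uploaded per minute
per_hour = per_minute * 60       # 24,000 hours of video per hour
per_day = per_hour * 24          # 576,000 hours per day
per_week = per_day * 7           # 4,032,000 hours per week
per_year = per_day * 365         # 210,240,000 hours per year

# Full-time (40 hours/week) reviewers needed to watch a week's uploads:
reviewers = per_week / 40        # 100,800 people

print(per_hour, per_day, per_week, per_year, reviewers)
```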
Don’t tell that to the ad agencies though. WPP Group CEO Martin Sorrell told CNBC:
“They can’t just say look we’re a technology company, we have nothing to do with the content that is appearing on our digital pages,” Sorrell said. He added that, as far as placing advertisements was concerned, they have to be held to the same standards as traditional media organizations…
“The big issue for Google and Facebook is whether they are going to have human editing at this point … of course they have the profitability. They have the margins to enable them to do it. And this is going to be the big issue — how far are they prepared to go?” Sorrell said, adding they needed to go “significantly far” to arrest these concerns.
It really is a quite convenient framing for Sorrell (then again, he is the advertising expert): if only Google and Facebook wouldn’t be greedy and just spend a tiny bit of their cash windfall to make sure ads are in the right spot, why, everything would be just the way it used to be! What is convenient is that this excuses WPP from any responsibility: it’s all Google’s and Facebook’s fault.
Here’s the question, though: if Google and Facebook have all of the responsibility, then shouldn’t they also be getting all of the money? What exactly are WPP’s fees being used for? There are only two places to buy ads, so it’s not as if agencies are helping advertisers purchase across multiple outlets as they did in the past. And while there is certainly an art to digital ads, the cost and complexity is also less than TV, with the added benefit that it is far easier to use a scalable scientific approach to figuring out what works (as opposed to relying on Don Draper-like creative geniuses). Policing the placement of a specific advertising buy is also a much more human-scale problem than analyzing the entire corpus of content monetized by Google and Facebook.
It’s clear that Google knows that it is the agencies who are actually implicated by The Times’ report. In a blog post entitled Improving Our Brand Safety Controls the managing director of Google U.K. writes (emphasis mine):
We’ve heard from our advertisers and agencies loud and clear that we can provide simpler, more robust ways to stop their ads from showing against controversial content. While we have a wide variety of tools to give advertisers and agencies control over where their ads appear, such as topic exclusions and site category exclusions, we can do a better job of addressing the small number of inappropriately monetized videos and content. We’ve begun a thorough review of our ads policies and brand controls, and we will be making changes in the coming weeks to give brands more control over where their ads appear across YouTube and the Google Display Network.
The message is loud and clear: brands, if you don’t want your ads to appear against objectionable content, then get your agencies to actually do their job.
Make no mistake, the agencies know it too: there has been a lot of talk about a boycott of Google, but read between the lines about what is actually going on. For example, from Bloomberg:
France’s Havas SA, the world’s sixth-largest advertising and marketing company, pulled its U.K. clients’ ads from Google and YouTube on Friday after failing to get assurances from Google that the ads wouldn’t appear next to offensive material. Those clients include wireless carrier O2, Royal Mail Plc, government-owned British Broadcasting Corp., Domino’s Pizza and Hyundai Kia, Havas said in a statement.
“Our position will remain until we are confident in the YouTube platform and Google Display Network’s ability to deliver the standards we and our clients expect,” said Paul Frampton, chief executive officer and country manager for Havas Media Group UK.
Later, the parent company Havas said it would not take any action outside the U.K., and called its U.K. unit’s decision “a temporary move.”
“The Havas Group will not be undertaking such measures on a global basis,” a Havas spokeswoman wrote in an email. “We are working with Google to resolve the issues so that we can return to using this valuable platform in the U.K.”
This boycott is not about hurting Google, because the reality is that the ad agencies can do no such thing: Google and Facebook control the users, and that means the advertisers have no choice but to be on their platforms. If Havas actually had power they would pull their ads globally, and make it clear that the boycott was permanent absent significant changes; the reality is that the ad agencies are running a PR campaign for the benefit of their clients who are rightly upset — and, as noted above, were until now completely oblivious.
To be clear, I’m not seeking to absolve Google and Facebook of responsibility, even as I recognize the complexities of the challenges they face. Moreover, one could very easily use this article to make an argument about monopoly power, which is another reason for Google and Facebook to address this problem before governments do more than summon them.
Advertisers and ad agencies, though, should be accountable as well. If ad agencies want to be relevant in digital advertising, then they need to generate value independent of managing creative and ad placement: policing their clients’ ads would be an excellent place to start. If The Times can do it, so can WPP and the Havas Group.
Big brands, meanwhile, should expect more from their agencies: paying fees so an agency can take out an ad on Google or Facebook without taking the time to do it right is a waste of money — and, when agencies are asleep at the wheel as The Times demonstrated, said spend is actually harmful.
Above all, what Sorrell and so many others get so wrong is this: the Internet is nothing like traditional media. The scale is different, the opportunities are different, and the threats are different. Demanding that Google (and Facebook) act like traditional media companies is at root nostalgia for a world drifting away like the smoke from a Don Draper cigarette, and it is just as deadly.
Editor’s Note: In the original version of this article I conflated media buying and creative agencies and their associated fees; the article has been amended. AdAge has a useful overview of commissions and fees here
There is a fascinating paragraph in this Wall Street Journal article about Intel’s purchase of Mobileye NV,1 an Advanced Driver Assistance System primarily focused on camera-based collision avoidance and, going forward, autonomous driving:
The considerable premium Intel is willing to pay for Mobileye reflects the enormous value tech companies see in the automation of cars and trucks, said Mike Ramsay, research director at Gartner Inc. It would have been inconceivable a few years ago — it is more than double what the private-equity firm Cerberus Capital Management LLC paid for Chrysler LLC in 2007.
Chrysler LLC is, of course, a car manufacturer, since merged with Fiat; the price it was sold for is about as relevant to Mobileye as the value of a computer OEM to, well, Intel.
I wrote about The Smiling Curve back in 2014; it is a concept that was coined by one of those computer OEMs, Stan Shih of Acer, in the early 1990s, as a means of explaining why Acer needed to change its business model.
Shih explained the curve in his book Me-Too Is Not My Style:
The basic structure of the value-added curve, from left to right on the horizontal axis, are the upper, middle and down stream of an industry, that is, the component production, product assembly and distribution. The vertical axis represents the level of value-added. In terms of market competition type, the left side of the curve is worldwide competition whose success depends on technology, manufacturing and economy of scales. On the right side of the curve is regional competition. Its success depends on brand name, marketing channel and logistic capability.
Every industry has its own value-added curve. Different curves are derived according to different levels of value-added. The major factors in determining the level of value-added are entry barrier and accumulation of capability. In other words, with a higher entry barrier and greater accumulation of capabilities, the value-added will be higher.
Take the computer industry as an example. Microprocessor manufacturing or the establishment of brand name business comes with a higher entry barrier, and requires many years of strength accumulation to achieve progress. However, computer assembly is very easy. That is why no-brand computers are found everywhere in electronic shopping malls.
When Shih coined the concept in 1992, “microprocessor manufacturing” meant Intel: outside of the occasional challenge from AMD, Intel provided one of two parts (along with Windows) of the personal computer that couldn’t be bought elsewhere; the result was one of the most profitable companies of all time.
Note, though, that while a core piece of Intel’s competitive advantage (particularly relative to AMD) was, as Shih noted, the entry barrier of fabrication, Intel’s close connection to Windows — to software — was just as critical. It is operating systems that provide network effects and the tremendous profitability that follows, and operating systems are based on software. In other words, software added a third peak to the middle of the PC smiling curve.
Windows and x86 processors were effectively a bundle, and Microsoft and Intel split the profits. Remember, bundling is how you make money, and in this case Intel-based hardware provided Microsoft a vehicle to profit from licensing Windows, while Windows built an unassailable moat for both — at least in PCs.
Obviously things went much differently for both Microsoft and Intel — and Acer — when it came to smartphones. The overall structure of the industry still fit the smiling curve, but the software was layered on completely differently.
Apple used software to bundle together manufacturing (done under contract) and the final product marketed to consumers; over time the company also added components, specifically microprocessors, to the bundle (also built under contract). The result was the most successful product of all time.
Google, meanwhile, made Android free; the bundle, such as there was, was between the operating system and Google’s cloud. The rest of the ecosystem was left to fight it out: distribution and marketing helped Samsung profit on the right, while R&D and manufacturing prowess meant profits for ARM, Samsung, and Qualcomm, along with a host of specialized component suppliers, on the left. Still, no one was as profitable as Intel was in the PC era, because no one had the software bundle.
That said, the role of software was critical: Intel, for example, started out the smartphone race at a performance disadvantage; while the company caught up the ecosystems had already moved on, because too much software was incompatible with x86.
Intel has done better in the cloud:
The cloud took the commodification wave that hit PCs to a new extreme: major cloud providers, armed with massive scale and their own reference designs, hired Asian manufacturers directly. The one exception was Intel and its Xeon chips (which themselves undercut purpose-built server processors from companies like Sun and IBM), which continue to be the most important contributors to Intel’s bottom line.2 Still, the real value in the cloud is on the right, where the software is: Facebook, Google, AWS.
A little over a year ago I explained in Cars and the Future that there were three changes happening in the personal transportation industry simultaneously:
As I noted in that piece each of these developments is in some respects independent of the other:
Multiple players are working on self-driving cars, including Mobileye; more on them (and Intel) in a bit. Other interested parties are Apple and Google, as well as the traditional car manufacturers — who also have rather mixed motivations. For now, limited self-driving functionality is a high-margin add-on; in the future, it could be their demise (more on this in a moment too).
Uber is the biggest player in ride-sharing, at least in most Western countries, although Lyft is lurking should Uber implode; Didi is dominant in China, while Southeast Asia has a number of smaller competitors. The ride-sharing business is a better one than many critics think; in developed markets rides are profitable on a unit basis, and there is negative churn: customers use the services more over time, not less. Competition is fierce, although the lowered customer acquisition costs of being the dominant player are under-appreciated, as well as the impact that has on drawing drivers.
What is interesting is that these three factors can be fit into the smiling curve framework:
This underscores the reality that all three are still very interconnected. More usefully, the smiling curve framework, particularly the lessons learned from the PC, smartphone, and server, also gives insight into how the transportation market may evolve, and explain why Intel made this purchase.
First, while the individual ownership model made it possible to bundle manufacturing and selling to end users (a la Apple in smartphones), said model doesn’t make sense going forward. The truth is that individual car ownership is a massive waste of resources, particularly for the individual: thousands of dollars are spent on an asset that is only used a single-digit percentage of the time and that depreciates rapidly (whether driven or not). The only reason we have such a model is that before the smartphone no other was possible (and the convenience factor of owning one’s own car was so great).
That, though, is no longer the case: in the future self-driving cars, owned and serviced by fleets, can be intelligently dispatched by ride-sharing services, resulting in utilization rates closer to 100% than to zero. Yes, humans will likely still move en masse in the morning and afternoon, but there will be packages and whatnot in the intervening time periods.
Moreover, self-driving cars will be built expressly for said utilization rates; yes, they will wear out, but a focus on longevity and serviceability over comfort and luxury will reduce manufacturers to commodity providers selling to bulk purchasers, not dissimilar to the companies building servers for today’s cloud giants.
That leaves the value for the ends.
I’ve already written, both in Cars and the Future and Google, Uber, and the Evolution of Transportation-as-a-Service, that Uber’s position (again, barring implosion) is stronger than it seems. Yes, were, say, Waymo (Google’s self-driving car unit) able to instantly deploy self-driving cars at volume the ride-sharing company would be in big trouble, but in reality, even if Waymo decides to field a competitor, building routing capability (a related but still different problem than mapping) and, more importantly, gaining consumer traction will take time — time that Uber has to catch up in self-driving (certainly Waymo’s suit against Uber-acquisition Otto for stealing trade secrets looms large here; I’ll cover this more tomorrow).
The broader point, though, is that the winner (or winners) will look a lot like Uber looks today: most riders will use the same app, because whichever network has the most riders will be able to acquire the most cars, increasing liquidity and thus attracting more riders; indeed, the effects of Aggregation Theory will be even more pronounced when supply is completely commoditized per the point above.
Remember, though, that while consumer-facing products and services get all of the attention, there is a lot of money to be made in components, particularly in an industry governed by the Smiling Curve. What is fascinating about this space is that it is an open question about which components will actually matter:
Hardware: For one, it’s not clear which sensing solution will prove superior: Mobileye’s camera-based approach (which Tesla, after ending its relationship with Mobileye after last year’s fatal car crash, is reproducing), or the Waymo-favored LIDAR approach (also used — allegedly stolen — by Uber). Perhaps it will be both.
Maps: Mapping is particularly critical for Waymo: its self-driving technology relies on super-detailed maps; if your objection is that producing said maps will be difficult, well, imagine telling yourself 15 years ago about Google Street View. Many car manufacturers, meanwhile, are increasingly casting their lot in with HERE, the former Nokia mapping unit (more on this in a moment as well).
Chips: Mobileye makes its own System-on-a-Chip called the EyeQ; selling a camera is meaningless without the capability of determining what is happening in the image. However, the EyeQ specifically and Mobileye generally cannot really compete with Nvidia, the real monster in this space. Nvidia realized a few years ago that its graphics capability, which emphasizes parallel processing, lends itself remarkably well to machine learning and neural network applications. Those, of course, are at the frontier of modern artificial intelligence research — including the sort of artificial intelligence necessary to drive cars. That is why Nvidia’s PX2 chip is in Tesla’s newest vehicles, along with those from a host of other manufacturers.
The real open question is software: Google is writing its own, Apple is apparently writing its own, Tesla is writing its own, Uber is writing its own, and Mobileye is writing its own. The car companies? It’s a mixed bag — and fair to question how good they’ll be at it.
This is the context for Intel’s acquisition. First off, yes, it is ridiculously expensive. The purchase price of $14.71 billion (once you back out Mobileye’s cash-on-hand) equates to an EBITDA multiple of 118; it helps that Intel is paying with overseas cash, which the company hasn’t paid taxes on. And second, I’ve long argued that society would be better off if companies would simply milk their company-defining products and return the cash to shareholders to invest in new up-and-comers (with the caveat that Intel had one of the greatest pivots of all-time, from memory to microprocessors).
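For a sense of scale, the implied EBITDA can be backed out from the two figures quoted above (a rough back-of-the-envelope sketch; the EBITDA figure itself is derived from the text's numbers, not independently reported):

```python
# Back-of-the-envelope check on the deal figures cited above.
net_price = 14.71e9     # purchase price after backing out Mobileye's cash-on-hand
ebitda_multiple = 118   # EBITDA multiple cited in the text

# Dividing the net price by the multiple yields Mobileye's implied trailing EBITDA.
implied_ebitda = net_price / ebitda_multiple
print(f"Implied EBITDA: ${implied_ebitda / 1e6:.0f}M")  # prints "Implied EBITDA: $125M"
```

Roughly $125 million of EBITDA against a near-$15 billion price tag underlines just how much of the purchase price is a bet on the future rather than on current earnings.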
That said, there is a lot to like about this deal for Intel (from Mobileye’s perspective, accepting a 34% premium is a no-brainer). Obviously Intel has chip expertise (although its graphics division lags far behind Nvidia’s); with Mobileye the company adds hardware expertise. It goes deeper than that though: Mobileye and Intel have actually been working together already, with HERE.
In fact, that is understating the situation: Intel bought a 15% ownership stake in HERE earlier this year, and Intel and Mobileye made a deal with BMW last year to build a self-driving car. In short, Intel is assembling the pieces to be a real player in autonomous cars: hardware, maps, chips, software, and strong relationships with car manufacturers.
Indeed, with this acquisition Intel’s greatest strength and greatest weakness is its dominant position with established manufacturers: there is the outline of a grand alliance between car manufacturers, HERE maps, and Intel/Mobileye; the only hang-up is that the future of transportation is one in which the car manufacturers are the biggest losers. Companies like Uber or Google, meanwhile, have nothing to lose (well, Uber does, but they seem to grasp the threat).
Regardless, it’s a worthwhile bet for Intel: the company seems determined to not repeat its mistakes in smartphones, and given that the structure of self-driving cars looks more like servers than anything else, it’s a worthwhile space to splurge in.
What a great name. Seriously! ↩
Yes, there continues to be noise about ARM in the data center, most recently Microsoft’s announced commitment to use ARM in its data centers; for now Intel is dominant, and it will take more than a vaporware announcement to change that ↩
Although it should be noted that Teslas are not sold through dealerships ↩
On Exponent, the weekly podcast I host with James Allworth, we discuss The Uber Conflation.
Listen to it here.
The latest Uber scandal — yes, it’s getting hard to keep track — is Greyballing. From the New York Times:
Uber has for years engaged in a worldwide program to deceive the authorities in markets where its low-cost ride-hailing service was resisted by law enforcement or, in some instances, had been banned. The program, involving a tool called Greyball, uses data collected from the Uber app and other techniques to identify and circumvent officials who were trying to clamp down on the ride-hailing service. Uber used these methods to evade the authorities in cities like Boston, Paris and Las Vegas, and in countries like Australia, China and South Korea…
At a time when Uber is already under scrutiny for its boundary-pushing workplace culture, its use of the Greyball tool underscores the lengths to which the company will go to dominate its market.
Note the easy conflation: avoiding regulators, allegedly tolerating sexual harassment, it’s all the same thing. Well, I disagree.
The first thing to understand about not just the current Uber controversy (or controversies) but all Uber controversies is that, while they are not usually articulated as such, multiple distinct questions are in fact being debated.
I and many others have spent plenty of time on the first question; it’s not the focus of today’s article. Rather, it’s the distinction between questions 2 and 3 — that easy conflation made by the New York Times — that I find illuminating.
There is no disputing that Uber has operated in the gray zone, perhaps adhering to the letter of the law but certainly not the spirit. For example, in The Upstarts, a new book about the founding stories of Uber and Airbnb, Brad Stone explains Uber’s initial service in San Francisco:
In the summer of 2010, [San Francisco Metropolitan Taxi Agency director Christiane] Hayashi’s phone started ringing off the hook, and it wouldn’t stop for four years. Taxi drivers were incensed; a new app called UberCab allowed rival limo drivers to act like taxis. By law only taxis could pick up passengers who hailed them on the street, and cabs were required to use the fare-calculating meter that was tested and certified by the government. Limos and town cars, however, had to be “prearranged” by passengers, typically by a phone call to a driver or a central dispatch. Uber didn’t just blur this distinction, it wiped it out entirely with electronic hails and by using the iPhone as a fare meter. Every time Hayashi picked up the phone, another driver or fleet owner was screaming, This is illegal! Why are you allowing it? What are you doing about this?
Ultimately, Hayashi could do nothing: Uber drivers did not pick up passengers who hailed them on the street, but were dispatched via the Uber app. UberCab — despite the name, which was soon changed — was not a taxi service, even if the service offered was taxi-like.
That right there is enough for many observers to cry foul: getting off on a technicality does not mean a business is okay. Those cries have only grown louder as Uber has entered more and more cities with services like UberX that are even more murky from a regulatory perspective; now the questions are not just about hailing and dispatch, but licensing, insurance, and background checks, along with the ever present questions about the employee/contractor status of Uber’s drivers. Every technicality that Uber takes advantage of,1 or every new law it gets passed by leveraging lobbyists and by bringing its users to bear on local politicians, is taken by many to be more evidence of a company that considers itself above the law.
The reason this question matters is that if one takes this viewpoint, then the latest allegations against Uber are not independent events, but rather manifestations of a problem that is endemic to the company. And, in that light, I can understand the calls for Kalanick’s removal at a minimum: I will do said position the respect of not arguing against it.
On the flipside, I, for one, view Uber’s regulatory maneuvering in a much more positive light. After all, thinking about the “spirit of the law” can lead to a very different conclusion: the purpose of taxi regulation, at least in theory, was not to entrench local monopolies but rather to ensure safety. If those goals can be met through technology — GPS tracking, reputation scoring, and the greater availability of transportation options, particularly late at night — then it is the taxi companies and captured regulators violating said spirit.2 Moreover, the fact remains that both Uber riders and drivers continue to vote with their feet: Uber has gone far beyond displacing taxis to generating entirely new demand, and when necessary, leveraging said riders and drivers to shift regulation in its favor. I think it is naive to think that said changes — changes that benefit not just Uber but drivers, riders, and local businesses — would have come about simply by asking nicely.
But I digress; I know many of you disagree with me on these points, and that’s okay — having this debate is important. The reason to point this question out, though, was perhaps best exemplified by the #DeleteUber campaign that kicked off Uber’s terrible month. As you may recall the campaign sprang up on social media after Uber was accused of strikebreaking for having disabled surge pricing while taxi drivers protested against President Trump’s executive order banning immigration from seven countries. As I pointed out in a Daily Update:
Uber was definitely in a tough position here: the company likely would have been criticized for price-gouging had surge-pricing sky-rocketed, while restricting drivers from visiting JFK would have entailed Uber acting as a direct employer for drivers, as opposed to a neutral platform (this point is in contention in courts all over the U.S.). And, I think it’s safe to say, a lot of the folks pushing the #DeleteUber campaign were probably not very inclined to like Uber in the first place.
That last sentence captures what I’m driving at, and why separating these questions is so clarifying (and, by the way, surge pricing is another reason why a not insignificant number of people feel that Uber is evil).
#DeleteUber was more significant than it might seem: it was the first time that an Uber controversy actually affected demand in an externally visible way; given that controlling demand is the key to Uber’s competitive advantage, that is a very big deal indeed.
However, the real bombshell was an explosive blog post from a (former) female engineer named Susan Fowler Rigetti alleging sexual harassment that was not only tolerated by Uber HR but actually used against the accuser. Said allegations, if true (and I have no reason to believe they are not), are ipso facto unacceptable and heads should roll — up to and including Kalanick if he was aware of the case in question. And Rigetti deserves praise: sadly, the novelty of her allegations may very well be her willingness to go public; based on conversations with multiple friends it’s often perceived as being easier to put up with sexual harassment than run the risk of being blacklisted.
The thornier issue is if Kalanick did not know; surely he has ultimate responsibility for creating a culture that allegedly tolerated such behavior? Indeed, he does. That’s why I drew a line from Kalanick’s refusal to fire an executive that allegedly threatened a journalist to the behavior alleged in that blog post: culture is the accumulation of decisions, reinforced by success, and Uber has collectively made a lot of decisions that push the line and been amply rewarded.
That, though, is why I drew the distinctions in this post: Kalanick’s mistake was in not clearly defining, communicating, and enforcing accountability on actions that pushed the line but had nothing to do with the company’s regulatory fight. In fact, it was even more critical for Uber than for just about any other company to have its own house in order; the very nature of the company’s business created the conditions for living above the law to become culturally acceptable — praised even.
To that end, for those who already disapprove of Uber’s regulatory approach and see the latest events as part and parcel of what makes Uber Uber, that may be an unfair conflation, but Kalanick has only himself to blame: pushing the line on regulations didn’t necessarily need to equate to pushing the line internally, but to Kalanick it was all one and the same. The conflation started at the top.
Even if you agree with me about Uber and regulation, it’s completely reasonable to still argue that the company needs a change in leadership for the exact reasons I just laid out; I thought long and hard about making that exact argument. Moreover, if Uber’s scandals start impacting demand for the service, or end up impacting the company’s ability to retain and hire employees, there may not be a choice in the matter.
Still, it’s worth keeping in mind that many of Uber’s scandals implicate not just Uber but tech as a whole. The industry’s problem when it comes to hiring and retaining women is very well documented, and sexual harassment is hardly limited to Uber. Moreover, one of Uber’s other “scandals” — the fact that Kalanick asked Amit Singhal to step down as Senior Vice President of Engineering after not disclosing a sexual harassment claim at Google — reflected far worse on Google than Uber: if Singhal committed a fireable offense the search giant should have fired the man who rewrote their search engine; instead someone in the know dribbled out allegations that happened to damage a company they view as a threat. And while Google’s allegations about Uber-acquisition Otto having stolen intellectual property are very serious, it’s worth remembering that the entire industry is basically built on theft — including Google’s keyword advertising.3
Indeed, more than anything, what gives me pause in this entire Uber affair is the general sordidness of all of Silicon Valley when it comes to market opportunities the size of Uber’s. The sad truth is that for too many this is the first case of sexual harassment they’ve cared about, not because of the victim, but because of the potential for taking Uber down.
The fact of the matter is that we as an industry are responsible for Uber too. We’ve created a world that simultaneously celebrates rule-breaking and undervalues women (and minorities), full of investors and companies that are utterly ruthless when money is on the line, while cloaking said ambition in fluff about changing the world.
That’s the sad irony of the situation: changing the world is exactly what Uber is doing; for all his mistakes Kalanick has been one of the most effective CEOs tech has ever seen. Maybe Kalanick has finally seen the light and can change — I think it is at least worth waiting for the conclusion of the ongoing investigations — and if he cannot then by all means show him the door; in the meantime we can all certainly look in the mirror.
Or Lyft: remember, it was Lyft that pioneered “ride-sharing”; Uber laid back because the company thought it was illegal! ↩
Uber absolutely needs to accelerate the roll-out of its accessibility services ↩
To be clear, downloading blueprints is on a different scale; again, if Uber is implicated Kalanick should be held accountable ↩
There really is nothing like live, as the past calendar year has shown. Many of those moments have been, as you might expect, sports related: LeBron James blocking Andre Iguodala at the end of Game 7 of the NBA finals, the Cubs coming back to win the World Series in Game 7 after a rain delay, a historic comeback in the Super Bowl. Last night’s Academy Awards ceremony provided drama that was itself worthy of an award:
WATCH: 'La La Land' announced as #Oscars Best Picture winner, but only until a mistake is realized with 'Moonlight' being the real winner. pic.twitter.com/wYsUngcdwe
— ABC News (@ABC) February 27, 2017
In case you don’t yet know what happened, you can read the New York Times story here about how the wrong ‘Best Picture’ winner was announced; you can’t, though, truly re-live the experience.
What, though, is “the experience”? There is the actual viewing — once upon a time you could only see something happen once — although the fact I embedded a video of last night’s Academy Award moment reinforces that this is no longer a differentiator. More important is the sheer shock: that can never be reproduced, and said shock — and the associated potential — is very much what drives live sports viewing. What is perhaps surprising, though, is that the reactions of those you care to follow are just as fleeting.
One year ago Twitter committed to a “live” strategy; management wrote in a letter to shareholders:
We’re focused now on what Twitter does best: live. Twitter is live: live commentary, live connections, live conversations. Whether it’s breaking news, entertainment, sports, or everyday topics, hearing about and watching a live event unfold is the fastest way to understand the power of Twitter. Twitter has always been considered a “second screen” for what’s happening in the world and we believe we can become the first screen for everything that’s happening now. And by doing so, we believe we can build the planet’s largest daily connected audience. A connected audience is one that watches together, and can talk with one another in real-time. It’s what Twitter has provided for close to 10 years, and it’s what we will continue to drive in the future.
I call out this strategy in the context of last night’s Oscar screw-up because it really highlights what I and so many others mean when we bemoan Twitter’s product stagnation, and how said stagnation so severely limited the company’s long-term prospects — and, on the flipside, how to think about innovation and the disruption of what came before.
I’ve long maintained that Twitter was, paradoxically, handicapped by how good its initial idea was. Back in 2014 I quoted Marc Andreessen’s famous blog post on product-market fit and added:
I think this actually gets to the problem with Twitter: the initial concept was so good, and so perfectly fit such a large market, that they never needed to go through the process of achieving product market fit. It just happened, and they’ve been riding that match for going on eight years.
The problem, though, was that by skipping over the wrenching process of finding a market, Twitter still has no idea what their market actually is, and how they might expand it. Twitter is the company-equivalent of a lottery winner who never actually learns how to make money, and now they are starting to pay the price.
The shareholder letter above is an example of exactly what I mean; Twitter is still selling the exact same value the service offered back in 2006 — “live commentary, live connections, live conversations” — and the only product ideas are to do what old media like television does, but worse: becoming the first screen for what is happening now means a screen that is smaller, more laggy, and, critically, in the way of seeing the actual tweets one might care about.
It’s also an example of the worst sort of product thinking: simply doing what was done before, but digitally. The classic example is banner ads: back when we viewed content on paper, the only place to put advertisements was, well, on the paper, next to the content. And so, when the web came along, folks just mimicked newspapers, putting advertisements next to content; the result was web pages that suck and an industry in crisis.
Facebook, meanwhile, thanks to mobile, discovered that advertisements in a feed are far more effective: they take over the whole screen, engaging the user’s attention in a way ads off to the side never did, and the miracle of never-ending content and ever-present data connections means the feed never grows stale. In-feed advertisements — just like Google’s search advertisements — are uniquely enabled by the Internet; it should come as no surprise that said uniqueness is strongly correlated with actually making money from advertising.
This is a pattern you see repeatedly from the successful technology companies: both the products and the business model are uniquely enabled by the Internet. Netflix, for example, commoditized time: the company’s entire catalog is available to any subscriber at any time, in a way that was never possible on linear TV. Airbnb commoditized trust, elevating beds, apartments, and homes to the same playing field as traditional hotels. Amazon commoditized product distribution, creating a storefront with infinite shelf-space and unbeatable prices.
Moreover, all of these companies are evolving (or have evolved) their original offering in a way that takes ever more advantage of the Internet’s unique capabilities: Amazon used to predominantly hold inventory like a traditional retailer, but today an ever-increasing portion of sales come from 3rd-party merchants using Amazon as a platform. Netflix’s value used to be not unlike Amazon’s: the infinite shelf space of the Internet meant the service had any DVD you wished to rent; today the company is the inverse, differentiated by its own exclusive content. Airbnb is earlier in its transition to a full-on experience provider, of which lodging is just one piece, but critically, it is evolving.
“Evolving” is a word that has never really applied to Twitter. Consider the Oscars: according to Twitter’s statement of strategy the ideal outcome for Twitter apparently would be live-streaming the Oscars, much as the service live-streamed a few NFL games and the Presidential debates, making the service the “first-screen” instead of the second. In other words, Twitter wants to make a better banner ad (that, as noted above, will in reality be worse). What makes this so frustrating is that Twitter’s goal of owning “live” could mean so much more: how might the product evolve if Twitter had the sort of product mindset found at companies like Amazon, Netflix, or Airbnb?
Consider the observation I made at the beginning about last night’s Academy Awards gaffe: what made it special in the moment was not just seeing it happen (one can replay it forever), and not just the shock (which truly is unique to “live”), but also the incredulous reaction on Twitter (and the host of jokes that followed). That reaction, though, is completely lost to time.
Imagine a Twitter app that, instead of a generic Moment that is little more than Twitter’s version of a thousand re-blogs, let you replay your Twitter stream from any particular moment in time. Miss the Oscars gaffe? Not only can you watch the video, you can read the reactions as they happen, from the people you actually care enough to follow. Or maybe see the reactions through someone else’s eyes: choose any other user on Twitter, and see what they saw as the gaffe happened.
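At its core such a replay is just a time-windowed query over an archived, timestamped stream. A minimal sketch, assuming a simple in-memory archive (all names and structures here are illustrative, not Twitter's actual API or data model):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import List, Set

# A toy model of the hypothetical "replay" feature described above: given an
# archive of timestamped tweets, reconstruct what a user's timeline looked
# like in the window leading up to any past moment.

@dataclass
class Tweet:
    posted_at: datetime
    author: str
    text: str

def replay(archive: List[Tweet], following: Set[str], moment: datetime,
           window: timedelta = timedelta(minutes=10)) -> List[Tweet]:
    """Return tweets from followed accounts posted within `window` before
    `moment`, in the reverse-chronological order a timeline would show."""
    visible = [t for t in archive
               if t.author in following
               and moment - window <= t.posted_at <= moment]
    return sorted(visible, key=lambda t: t.posted_at, reverse=True)
```

Replaying "through someone else's eyes" would then just mean swapping in that user's `following` set; the archive itself stays the same.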
What is so powerful about this seemingly simple feature is that it would commoditize “live” in a way that is only possible digitally, and that would uniquely benefit the company: now the experience of “live” (except for the shock value) would be available at any time, from any perspective, and only on Twitter. That such a feature does not exist — indeed, that the company’s stated goal is to become more like old media, instead of uniquely leveraging digital — is as good an explanation for why the company has foundered as any.
More broadly, a foundational principle of Stratechery — one I laid out once again last week — is that the Internet is fundamentally changing the rules of business:
Today the fundamental impact of the Internet is to make distribution itself a cheap commodity — or in the case of digital content, completely free. And that, by extension, is why I have long argued that the Internet Revolution is as momentous as the Industrial Revolution: it is transforming how and where economic value is generated, and thus where power resides. In this brave new world, power comes not from production, not from distribution, but from controlling consumption: all markets will be demand-driven; the extent to which they already are is a function of how digitized they have become.
The companies that thrive in this new world are those that build new businesses uniquely enabled by the Internet; those that struggle are those with businesses built on old limitations — like, for example, the idea that “live” can only be experienced, well, “live.” That Twitter would seek to leverage its only-on-the-Internet initial product insight — the fact that anyone anywhere can read the musings of anyone else, and broadcast in turn — into an old-world business (“live” when live) is the best evidence yet that the company was the product of more luck than insight.
On Exponent, the weekly podcast I host with James Allworth, we discuss Manifestos and Monopolies.
Listen to it here.
It is certainly possible that, as per recent speculation, Facebook CEO Mark Zuckerberg is preparing to run for President. It is also possible that Facebook is on the verge of failing “just like MySpace”. And while I’m here, it’s possible that UFOs exist. I doubt it, though.
The reality is that Facebook is one of the most powerful companies the tech industry — and arguably, the world — has ever seen. True, everything posted on Facebook is put there for free, either by individuals or professional content creators;1 and true, Facebook isn’t really irreplaceable when it comes to the generation of economic value;2 and it is also true that there are all kinds of alternatives when it comes to communication. However, to take these truths as evidence that Facebook is fragile requires a view of the world that is increasingly archaic.
Start with production: there certainly was a point in human history when economic power was derived through the control of resources and the production of scarce goods:
However, for most products this has not been the case for well over a century; first the industrial revolution and then the advent of the assembly-line method of manufacturing resulted in an abundance of products. The new source of economic power became distribution: the ability to get those mass-produced products in front of customers who were inclined to buy them:
Today the fundamental impact of the Internet is to make distribution itself a cheap commodity — or in the case of digital content, completely free. And that, by extension, is why I have long argued that the Internet Revolution is as momentous as the Industrial Revolution: it is transforming how and where economic value is generated, and thus where power resides:
In this brave new world, power comes not from production, not from distribution, but from controlling consumption: all markets will be demand-driven; the extent to which they already are is a function of how digitized they have become.
This is why most Facebook-fail-fundamentalists so badly miss the point: that the company pays nothing for its content is not a weakness, it is a reflection of the fundamental reality that the supply of content (and increasingly goods) is infinite, and thus worthless; that the company is not essential to the distribution of products is not a measure of its economic importance, or lack thereof, but a reflection that distribution is no longer a differentiator. And last of all, to point to the fact that communication is possible on other platforms is to ignore the fact that communication will always be easiest on Facebook, because the company owns the social graph. Combine that with the fact that controlling consumption is about controlling billions of individual consumers, all of whom will, all things being equal, choose the easy option, and you start to appreciate just how dominant Facebook is.
Given this reality, why would Zuckerberg want to be President? He is not only the CEO of Facebook, he is the dominant shareholder as well, answerable to no one. His power and ability to influence are greater than those of any President, who is subject to political reality and checks and balances; besides, as Zuckerberg made clear last week, his concern is not a mere country but rather the entire world.
The argument that Facebook is more powerful than most realize is not a new one on Stratechery; in 2015 I wrote The Facebook Epoch that made similar points about just how underrated Facebook was, particularly in Silicon Valley. In my role as an analyst I can’t help but be impressed: I have probably written more positive pieces about Facebook than just about any other company, and frankly, still will.
And yet, if you were to take a military-type approach to analysis — evaluating Facebook based on capabilities, not intent — the company is, for the exact same reasons, rather terrifying. Last year in The Voters Decide I wrote:
Given their power over what users see, Facebook could, if it chose, be the most potent political force in the world. Until, of course, said meddling was uncovered, at which point the service, having so significantly betrayed trust, would lose a substantial number of users and thus its lucrative and privileged place in advertising, leading to a plunge in market value. In short, there are no incentives for Facebook to explicitly favor any type of content beyond that which drives deeper engagement; all evidence suggests that is exactly what the service does.
The furor last May over Facebook’s alleged tampering with the Trending Topics box — and Facebook’s overwrought reaction to even the suggestion of explicit bias — seemed to confirm that Facebook’s incentives were such that the company would never become overtly political. To be sure, algorithms are written by humans, which means they will always have implicit bias, and the focus on engagement has its own harms, particularly the creation of filter bubbles and fake news, but I have long viewed Facebook’s use for explicit political ends to be the greatest danger of all.
This is why I read Zuckerberg’s manifesto, Building a Global Community, with such alarm. Zuckerberg not only gives his perspective on how the world is changing — and, at least in passing, some small admission that Facebook’s focus on engagement may have driven things like filter bubbles and fake news — but for the first time explicitly commits Facebook to playing a central role in effecting that change in a manner that aligns with Zuckerberg’s personal views on the world. Zuckerberg writes:
This is a time when many of us around the world are reflecting on how we can have the most positive impact. I am reminded of my favorite saying about technology: “We always overestimate what we can do in two years, and we underestimate what we can do in ten years.” We may not have the power to create the world we want immediately, but we can all start working on the long term today. In times like these, the most important thing we at Facebook can do is develop the social infrastructure to give people the power to build a global community that works for all of us.
For the past decade, Facebook has focused on connecting friends and families. With that foundation, our next focus will be developing the social infrastructure for community — for supporting us, for keeping us safe, for informing us, for civic engagement, and for inclusion of all.
It all sounds so benign, and given Zuckerberg’s framing of the disintegration of institutions that held society together, helpful, even. And one can even argue that just as the industrial revolution shifted political power from localized fiefdoms and cities to centralized nation-states, the Internet revolution will, perhaps, require a shift in political power to global entities. That seems to be Zuckerberg’s position:
Our greatest opportunities are now global — like spreading prosperity and freedom, promoting peace and understanding, lifting people out of poverty, and accelerating science. Our greatest challenges also need global responses — like ending terrorism, fighting climate change, and preventing pandemics. Progress now requires humanity coming together not just as cities or nations, but also as a global community.
There are two problems: first, Zuckerberg may be wrong; it’s just as plausible to argue that the ultimate end-state of the Internet Revolution is a devolution of power to smaller, more responsive, self-selected entities. And second, even if Zuckerberg is right, is there anyone who believes that a private company run by an unaccountable all-powerful person that tracks your every move for the purpose of selling advertising is the best possible form said global governance should take?
My deep-rooted suspicion of Zuckerberg’s manifesto has nothing to do with Facebook or Zuckerberg; I suspect that we agree on more political goals than not. Rather, my discomfort arises from my strong belief that centralized power is both inefficient and dangerous: no one person, or company, can figure out optimal solutions for everyone on their own, and history is riddled with examples of central planners ostensibly acting with the best of intentions — at least in their own minds — resulting in the most horrific of consequences; those consequences sometimes take the form of overt costs, both economic and humanitarian, and sometimes those costs are foregone opportunities and innovations. Usually it’s both.
Facebook is already problematic for society when it comes to opportunity costs. While the Internet — specifically, the removal of distribution as a bottleneck — is the cause of journalism’s woes, it is Facebook that has gobbled up all of the profits in publishing. Twitter, a service I believe is both unique and essential, was squashed by Facebook; I suspect the company’s struggles for viability are at the root of the service’s inability to evolve or deal with abuse. Even Snapchat, led by the most visionary product person tech has seen in years, has serious questions about its long-term viability. Facebook is too dominant: its network effects are too strong, and its data on every user on the Internet too compelling to the advertisers that other consumer-serving businesses need in order to be viable entities.3
I don’t necessarily begrudge Facebook this dominance; as I alluded to above I myself have benefited from chronicling it. Zuckerberg identified a market opportunity, ruthlessly exploited it with superior execution, had the humility to buy when necessary and the audacity to copy well, and has deservedly profited in the face of continual skepticism. And further, as I noted, as long as Facebook was governed by the profit-maximization incentive, I was willing to tolerate the company’s unintended consequences: whatever steps would be necessary to undo the company’s dominance, particularly if initiated by governments, would have their own unintended consequences. And besides, as we saw with IBM and Windows, markets are far more effective than governments at tearing down the ecosystem-based monopolies they enable — in part because the pursuit of profit-maximizing strategies is a key ingredient of disruption.
That, though, is why for me this manifesto crosses the line: contra Spider-Man, Facebook’s great power does not entail great responsibility; said power ought to entail the refusal to apply it, no matter how altruistic the aims, and barring that, it is on the rest of us to act in opposition.
Of course it is one thing to point out the problems with Facebook’s dominance, but it’s quite another to come up with a strategy for dealing with it; too many of the solutions — including demands that Zuckerberg use Facebook for political ends — are less concerned with the abuse of power and more with securing said power for the “right” causes. And, from the opposite side, it’s not clear that a traditional antitrust approach is even possible for companies governed by Aggregation Theory, as I explained last year in Antitrust and Aggregation:
To briefly recap, Aggregation Theory is about how business works in a world with zero distribution costs and zero transaction costs; consumers are attracted to an aggregator through the delivery of a superior experience, which attracts modular suppliers, which improves the experience and thus attracts more consumers, and thus more suppliers in the aforementioned virtuous cycle…
The first key antitrust implication of Aggregation Theory is that, thanks to these virtuous cycles, the big get bigger; indeed, all things being equal the equilibrium state in a market covered by Aggregation Theory is monopoly: one aggregator that has captured all of the consumers and all of the suppliers.
This monopoly, though, is a lot different than the monopolies of yesteryear: aggregators aren’t limiting consumer choice by controlling supply (like oil) or distribution (like railroads) or infrastructure (like telephone wires); rather, consumers are self-selecting onto the Aggregator’s platform because it’s a better experience.
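The winner-take-all dynamic described above can be made concrete with a toy model. This is purely illustrative — the share-squared update rule and the starting split are my assumptions, not anything from Aggregation Theory itself — but it shows how a virtuous cycle turns even a slight initial edge into near-total dominance:

```python
def aggregate_share(a0: float, rounds: int = 100) -> float:
    """Toy model of an aggregation feedback loop between two rivals.

    Each round, consumers split between two aggregators in proportion
    to the *square* of current share: a stand-in for the virtuous cycle
    in which more consumers attract more suppliers, which improves the
    experience, which attracts more consumers. (The exponent is an
    assumption chosen only to make the feedback superlinear.)
    """
    a = a0
    for _ in range(rounds):
        b = 1.0 - a
        a = a * a / (a * a + b * b)
    return a

# A 51/49 starting split compounds into near-total dominance:
print(aggregate_share(0.51) > 0.999)  # True
```

Any exponent greater than one produces the same equilibrium; the point is not the specific math but that superlinear feedback has only two stable outcomes, zero and monopoly.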
Facebook is a particularly thorny case, because the company has multiple lock-ins: on one hand, as per Aggregation Theory, Facebook has completely modularized and commoditized content suppliers desperate to reach Facebook’s massive user base; it’s a two-sided market in which suppliers are completely powerless. But so are users, thanks to Facebook’s network effects: the number one feature of any social network is whether or not your friends or family are using it, and everyone uses Facebook (even if they use another social network as well).
To that end, Facebook should not be allowed to buy another network-based app; I would go further and make it prima facie anticompetitive for one social network to buy another. Network effects are just too powerful to allow them to be combined. For example, the current environment would look a lot different if Facebook didn’t own Instagram or WhatsApp (and, should Facebook ever lose an antitrust lawsuit, the remedy would almost certainly be spinning off Instagram and WhatsApp).
Secondly, all social networks should be required to enable social graph portability — the ability to export your lists of friends from one network to another. Again Instagram is the perfect example: the one-time photo-filtering app launched its network off the back of Twitter by enabling the wholesale import of your Twitter social graph. And, after it was acquired by Facebook, Instagram only accelerated its growth by continually importing your Facebook network. All social networks have long since made this impossible, making it that much more difficult for competitors to arise.
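What would portability look like in practice? A sketch of one possible export format follows; the field names, handle scheme, and overall shape are invented for illustration — no network exposes exactly this today — but the core idea is just a machine-readable list of friends that a competing network could ingest:

```python
# Hypothetical portable social graph export. Every field name here is
# an assumption made for illustration, not a real platform's format.
export = {
    "owner": "user@example.com",
    "friends": [
        {"name": "Alice", "handles": {"facebook": "alice.fb", "twitter": "@alice"}},
        {"name": "Bob",   "handles": {"facebook": "bob.fb"}},
    ],
}

def importable_handles(graph: dict, network: str) -> list:
    """Return the handles a new network could use to rebuild your graph."""
    return [
        friend["handles"][network]
        for friend in graph["friends"]
        if network in friend["handles"]
    ]

print(importable_handles(export, "facebook"))  # ['alice.fb', 'bob.fb']
```

The technical lift is trivial — this is a few dozen lines and a JSON file; the barrier to portability is entirely a matter of incentives, which is exactly why it would take regulation to mandate it.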
Third, serious attention should be given to Facebook’s data collection on individuals. As a rule I don’t have any problem with advertising, or even data collection, but Facebook is so pervasive that it is all but impossible for individuals to opt-out in any meaningful way, which further solidifies Facebook’s growing dominance of digital advertising.4
Anyone who has read Stratechery for any length of time knows I have great reservations about regulation; the benefits are easy to measure, but the opportunity costs are both invisible and often far greater. That, though, is why I am also concerned about Facebook’s dominance: the opportunity costs it imposes on society are significant. Even then, my trepidation about any sort of intervention is vast, and that leads me back to Zuckerberg’s manifesto: it’s bad enough for Facebook to have so much power, but the very suggestion that Zuckerberg might utilize it for political ends raises the costs of inaction from mere opportunity costs to overt ones.
Moreover, my proposals are in line with Zuckerberg’s proclaimed goals: if the Facebook CEO truly wants to foster new kinds of communities, then he ought to unleash the force that can best build the tools those disparate communities might need. That, of course, is the market, and Facebook’s social graph is the key. That Zuckerberg believes Facebook can do it alone is evidence enough that for Zuckerberg, saving the world is at best a close second to saving Facebook; the last thing we need are unaccountable leaders who put their personal interests above those they purport to govern.
Plus, of course, the content Facebook pays for to seed initiatives like live video and dedicated content for the new video tab ↩
To be clear, economic value is generated on Facebook, but the role Facebook plays, whether that be advertising, small business sites, buy-and-sell groups, etc., could be done by alternatives ↩
Social networks must be free ↩
Google is a separate topic ↩