Stratechery Plus Update

  • Dollar Shave Club and the Disruption of Everything

    Probably the most important fact when it comes to analyzing Unilever’s purchase of Dollar Shave Club is the $1 billion price: in the world of consumer packaged goods (CPG) it is shockingly low. After all, only eleven years ago Procter & Gamble (P&G) bought Gillette, the market leader in shaving,1 for a staggering $57 billion.

To be sure Gillette is still dominant — the brand controls 70 percent of the global blades and razors market — but there is little question that Dollar Shave Club is a much better deal, in every sense of the word. Understanding why Dollar Shave Club was cheap means understanding why its blades are cheap, and understanding that means understanding just how precarious the positions of P&G specifically and incumbents generally are in the emerging Internet economy.

    The P&G Formula

No great company — and P&G is one of the greatest of all time — is built on only one competitive advantage. Rather, the seemingly unassailable profits and ceaseless growth enjoyed by P&G throughout its history — amazingly, the company basically doubled its revenue every decade from 1950 to 2010 — were driven by multiple interlocking advantages that created a whole even greater than the sum of its impressive parts.

    • Research and Development: P&G has long lived by the maxim articulated by former CEO Bob McDonald: “Promotions may win quarters, innovation wins decades.” To that end P&G has always outspent the competition when it comes to R&D: $2 billion in 2014, double that of Unilever, their next-closest competitor, and the company employs over 1,000 Ph.D.s and a host of ethnographic researchers. This has allowed P&G to consistently come up with new products and brand extensions and charge a premium for them.
    • Branding and Advertising: As inspiring as that McDonald quote may be, P&G also dominates advertising: in 2014 the company spent $10.1 billion on global advertising, 37% more than second-place Unilever. This is hardly a new trend: the company invented soap operas in 1933 to help hawk the cleaning products it was built on, and invented the idea of a brand manager who had a holistic view of products from research to creation to advertising to distribution.
    • Distribution and Retail: P&G’s huge collection of brands and products not only gave the company massive scale efficiencies in manufacturing, but more importantly led to a dominant position in retail. P&G built strong relationships with retailers that let them dominate finite shelf space, the scarcest resource for an industry producing relatively bulky inexpensive products.

    P&G leveraged these resources in a simple formula that led to repeated success:

    • Spend significant resources on developing new products (more blades!) that can command a price premium
    • Spend even more resources on advertising the new product (mostly on TV) to create consumer awareness and demand
    • Spend yet more resources to ensure the new product is front-and-center in retail locations everywhere

    In a world of scarcity this approach paid off time and again: P&G grew not only because its markets grew, but also because it continually justified price increases due to its innovations.

    The Gillette Distillation

    Small wonder the company was willing to pay a fortune for Gillette; “More blades for more money” was perhaps the purest distillation of P&G’s growth strategy, and Gillette opened the door to the men’s market that P&G had to that point largely ignored.

To be sure, that distillation was easy to mock; in 2004 The Onion famously wrote an article entitled Fuck Everything, We’re Doing Five Blades:

    The market? Listen, we make the market. All we have to do is put her out there with a little jingle. It’s as easy as, “Hey, shaving with anything less than five blades is like scraping your beard off with a dull hatchet.” Or “You’ll be so smooth, I could snort lines off of your chin.” Try “Your neck is going to be so friggin’ soft, someone’s gonna walk up and tie a goddamn Cub Scout kerchief under it.”

    I know what you’re thinking now: What’ll people say? Mew mew mew. Oh, no, what will people say?! Grow the fuck up. When you’re on top, people talk. That’s the price you pay for being on top. Which Gillette is, always has been, and forever shall be, Amen, five blades, sweet Jesus in heaven.

    That’s exactly what had happened with the Mach 3, Gillette’s previous top-of-the-line model: Gillette increased blade and razor revenue by nearly 50% with basically no change in underlying demand, easily making back the $750 million it cost to research and develop the razor, simply through its ability to charge a premium for new technology, create awareness and demand through advertising, and capture consumers through retail shelf dominance.
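To put rough numbers on that payback, here is a back-of-the-envelope sketch; only the $750 million R&D figure and the roughly 50% revenue lift come from the text, while the $2 billion baseline blade revenue is purely an illustrative assumption:

```python
# Illustrative payback math for the Mach 3 launch.
# From the text: ~$750M R&D cost, ~50% increase in blade and razor revenue.
# ASSUMPTION (illustration only): ~$2B in annual blade revenue pre-launch.
rd_cost = 750e6
baseline_revenue = 2e9   # hypothetical pre-launch annual revenue
revenue_lift = 0.50      # ~50% increase with no change in underlying demand

incremental_revenue = baseline_revenue * revenue_lift
payback_years = rd_cost / incremental_revenue

print(f"Incremental revenue: ${incremental_revenue / 1e9:.1f}B per year")
print(f"Years to recoup R&D (ignoring costs and margins): {payback_years:.2f}")
```

Even if only a fraction of that incremental revenue flowed through as gross profit, the R&D spend would be recovered within a few years, which is why “more blades for more money” was such a reliable formula.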

    Surprisingly, though, when the Onion’s satire became reality — Gillette launched the five blade Fusion with a 40% price premium in 2006, after being acquired — sales were slower than expected: many customers decided that three blades were good enough. Still, things weren’t that bad for Gillette and P&G: customers just kept buying the Mach 3. No business model worth $57 billion falls apart just because one component hits a soft spot!

    The Dollar Shave Club Disruption

    There was another product launch in 2006 that I’m sure no one at P&G even noticed: Amazon Web Services. Even if they did notice, I doubt the executives focused on the Fusion launch appreciated that P&G’s seemingly unassailable advantages were on the verge of declining precipitously.

    AWS made it easy and cheap to start an online company; YouTube, launched a year earlier, made it cheap and easy to share video; Facebook, launched in 2004, made it possible to spread said video to millions of people. All three came together with the 2011 founding of Dollar Shave Club and its 2012 launch with one of the best introductory videos of all time:

    Do watch if you haven’t — it’s really that good — but also look carefully at exactly what founder Michael Dubin is saying:

    I’m Mike, founder of DollarShaveClub.com. What is DollarShaveClub.com? Well, for a dollar a month we send high quality razors right to your door. Yeah! A dollar! Are the blades any good? No, our blades are fucking great.

Gillette’s model and P&G’s formula generally cost a lot of money: R&D cost money, TV advertising cost money, and wholesalers and retailers had to earn a margin as well, all before P&G realized any return on their investment. The result was that cartridges that cost less than a quarter to manufacture and package were sold for $4 or more. That worked as long as P&G’s other advantages in technical superiority, advertising, and distribution held, but were they ever to falter, it was eminently viable to sell cartridges for less and still make a healthy margin.

    Each razor has stainless steel blades and [an] aloe vera lubricating strip and a pivot head so gentle a toddler could use it. And do you like spending $20/month on brand name razors? $19 go to Roger Federer! I’m good at tennis. And do you think your razor needs a vibrating handle, a flashlight, a back-scratcher, and ten blades? Your handsome-ass grandfather had one blade AND polio. Looking good Pop-pop!

    This is a direct attack on Gillette having over-served the shaving market: P&G’s first advantage, their willingness to spend money on research and development, was neutralized because razors were already good enough.

    Stop paying for shave tech you don’t need. And stop forgetting to buy your blades every month. Alejandra and I are going to ship them right to you…

    AWS and Amazon itself, having both normalized e-commerce amongst consumers and incentivized the creation of fulfillment networks, made the creation of standalone e-commerce companies more viable than ever before. This meant that Dollar Shave Club, hosted on AWS servers, could neutralize P&G’s distribution advantage: on the Internet, shelf space is unlimited. More than that, an e-commerce model meant that Dollar Shave Club could not only be cheaper but also better: having your blades shipped to you automatically was a big advantage over going to the store.

    That left advertising, and this is why this video is so seminal: for basically no money Dollar Shave Club reached 20 million people. Some number of those people became customers, and through responsive customer service and an ongoing focus on social media marketing, Dollar Shave Club created an army of brand ambassadors who did for free what P&G had to pay billions for on TV: tell people that their razors were worth buying for a whole lot less money than Gillette was charging.

    The net result is that thanks to the Internet every P&G advantage, save inertia, was neutralized, leading to Dollar Shave Club capturing 15% of U.S. cartridge share last year.

    Value Destruction

Note that metric: cartridge share. According to the traditional way of measuring market share (dollar sales), Dollar Shave Club only has 5% of the U.S. market; the discrepancy is due to the massive price difference between Dollar Shave Club and Gillette. And yet, the price difference is the entire point: in a world with good-enough products (Dollar Shave Club imports their blades from the Korean manufacturer Dorco) that can be bought on zero-marginal-cost websites and shipped directly to your home, there is no reason to charge more.
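The gap between those two numbers implies the size of the price difference; a simple sketch (simplifying to a single average price on each side) recovers the implied ratio from the 15% unit share and 5% dollar share figures:

```python
# If Dollar Shave Club has 15% of cartridges sold but only 5% of dollars spent,
# what price gap between the incumbents and Dollar Shave Club does that imply?
# Model: dollar_share = u / (u + (1 - u) * p), with Dollar Shave Club's price
# normalized to 1 and p = incumbent price relative to Dollar Shave Club's.
unit_share = 0.15
dollar_share = 0.05

# Rearranging for p: p = (unit_share / dollar_share - unit_share) / (1 - unit_share)
price_ratio = (unit_share / dollar_share - unit_share) / (1 - unit_share)

print(f"Implied incumbent price: {price_ratio:.2f}x Dollar Shave Club's")
```

In other words, the incumbents charge more than three times as much per cartridge, which is exactly the premium the P&G formula was built to sustain.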

    The implications of this go far beyond P&G: fewer Gillette razors also mean less TV advertising and no margin to be made for retailers, who themselves are big advertisers; this is why I argued last month that the entire TV edifice is not only threatened by services like Netflix, but also the disruption of its advertisers, of which P&G is chief.

    More broadly, while razors with their huge gross margins and high replacement rate were a particularly good match for the Dollar Shave Club subscription model,2 I suspect this sort of disruption will not be a one-off: the Internet (and e-commerce) has so profoundly changed the economics of business that it is only a matter of time before other product categories are impacted, with all the second order effects that entails.

Perhaps the biggest of these second order effects is on value, and that’s where I come back to this purchase price: the tech community is celebrating the massive return for Dollar Shave Club’s investors, but $1 billion for a 15% unit share of a market dominated by a brand that cost $57 billion is startlingly small. Indeed, that’s why buying Dollar Shave Club was never an option for P&G: even if their model is superior, P&G’s shareholders would never permit the abandonment of what made the company so successful for so long; a company so intently focused on growing revenue is incapable of slicing one of their most profitable lines by half or more.

    For their part, Unilever is fortunate they don’t have a shaving business to protect, because being an incumbent is going to increasingly be the worst place to be. Dollar Shave Club’s motto may be “Shave Money Shave Time,” but just how many shareholders and policy makers are prepared for the shaving of value that this acquisition suggests is coming sooner rather than later?


    1. Schick is the other major CPG brand; I won’t mention them again although they face the same issues as Gillette 

    2. I’ll write more about why Dollar Shave Club wasn’t eaten by Amazon like so many other e-commerce companies in tomorrow’s Daily Update


  • A Technical Glitch

    One week ago, moments after her boyfriend Philando Castile was shot by a police officer during a routine traffic stop, Diamond Reynolds flipped on Facebook’s live streaming feature. The resultant video, with Reynolds documenting what had happened, as well as her interaction with the police officer, immediately started to spread like wildfire.

    And then it was gone.

    Approximately an hour later, the video was back, this time with a “Warning — Graphic Video” label attached:

[Screenshot: the restored video with a “Warning — Graphic Video” label]

    When asked why the video had temporarily disappeared, Facebook simply said “It was down to a technical glitch.” The company had no further comment on the matter.

    Facebook Versus Journalism

    One needn’t travel far on the Internet to find a think piece bemoaning how Facebook has destroyed journalism, with a whiff of nostalgia for a time when The New York Times decided what news was fit to print and Walter Cronkite declared nightly “That’s the way it is.” It’s a viewpoint that is problematic in two regards.

    First, the destruction of journalism is about the destruction of journalism’s business model, which was predicated on scarcity. In the case of newspapers, printing presses, delivery trucks, and a healthy subscriber base made them the lowest common denominator when it came to advertising, right down to four line classified ads that represented some of the most expensive copy on a per-letter basis in the world.

TV news, meanwhile, in large part existed to fulfill broadcaster obligations under the Fairness Doctrine, which required licensees of publicly-owned radio frequencies to devote airtime to matters of public interest, and to air opposing views of those matters. The Fairness Doctrine was revoked in 1987, for reasons that were a canary in the coal mine for the news business model. The New York Times reported at the time:

    In explaining the conclusion that its fairness rules were “no longer necessary to achieve diversity of viewpoint,” Ms. Killory, the commission’s counsel, noted the major growth of broadcast outlets in recent years.

    There are now more than 1,300 television stations and more than 10,000 radio stations in the United States — in contrast to 1,700 daily newspapers — and 95 percent of viewers receive five or more television signals. Radio listeners in the biggest 25 markets receive an average of 59 radio stations.

    Two decades later the average American home received 189 TV channels, and thanks to the Internet, an effectively infinite number of news websites. Scarcity was gone, and the publishing bubble is popping as a result. That Facebook has been the most effective service in collecting and funneling attention to the abundance of news on the Internet is a separate story.

    More importantly, the nostalgia for a world of journalistic gatekeepers is nostalgia for a world where the death of Philando Castile would be little more than a one paragraph snippet in the Minneapolis Star Tribune that would have sounded a lot like the initial police report that dryly noted “shots were fired”, and that would have been that.

    Crucially, though, it’s not that, thanks to Facebook. On the conservative site Daily Caller Matt Lewis wrote:

    In the era of Facebook Live and smart phones, it’s hard to come to any conclusion other than the fact that police brutality toward African-Americans is a pervasive problem that has been going on for generations. Seriously, absent video proof, how many innocent African-Americans have been beaten or killed over the last hundred years by the police—with little or no media coverage or scrutiny?

    Those old business models were great for journalists; they weren’t so great for those not deemed worth covering. Those nostalgic for the “good old days” are likely wishing for far more problems than they realize.

    Launching Facebook Live

    On April 6, the day that Facebook Live launched for everyone, BuzzFeed ran a feature that included an interview with Facebook CEO Mark Zuckerberg:

    “Because it’s live, there is no way it can be curated,” [Zuckerberg] said. “And because of that it frees people up to be themselves. It’s live; it can’t possibly be perfectly planned out ahead of time. Somewhat counterintuitively, it’s a great medium for sharing raw and visceral content.”

    A week later, during the opening keynote of Facebook’s F8 developer conference, Zuckerberg enthused:

    Just the other week I saw a live video of a woman and her kids skiing down a hill. It was just mesmerizing! I watched it for a few minutes because I was like ‘I just want to make sure these kids get down this hill.’ There’s usually people who are playing music or dancing in there, but every once in a while there’s something that is really important and special happening. Like a couple of days ago a woman named Lena commented on one of my posts to tell me that when her mother was sick in the hospital she streamed her wedding on live so her mother and her friends across the country could not only see it but could be there with them. Now that’s pretty meaningful.

    Raw, visceral, meaningful. That’s a pretty good way of describing Reynolds’ video. Newsworthy is another, and that’s where things get a whole lot more complicated for Facebook.

    Facebook the Journalism Company

    I noted above that Facebook is not necessarily to blame for the destruction of journalism’s business model, but with live video the social network has moved from feasting on what remains of publishing to becoming a journalistic company in their own right: Facebook’s 1.6 billion users have been deputized to not only chronicle their ski trips and weddings but also killings by police and, a day later, the killings of police.

    In retrospect, given this reality, what is so striking about the aforementioned BuzzFeed feature and all of Facebook’s public comments about live video is how little thought seems to have been given to this use case. There is talk about recruiting engineers (150 in a week), all of the features that had to be built, the huge technical problems involved, and of course the potential payoff for Facebook:

    Live solves a lot of problems for Facebook. It gives people an easy way to create video content that doesn’t require scripting or much production. Which in turn creates more content for Facebook. Live also helps the company tap into real-time events, an area where it’s struggled compared to Twitter…

    One recent trend in social media has been a move away from highly produced content, particularly video…This is precisely what Snapchat is so good at, and why it has become such a threat to Facebook. And it’s clearly something that’s been on Zuckerberg’s mind as well.

    “People look at live video and they think this is a lot of pressure because it’s live; it takes a lot of courage to go live and put yourself out there. But what we’re finding is the opposite,” Zuckerberg said in a phone interview the day before the Live relaunch. “A lot of the biggest innovations have been things that take some of the pressure out of posting a photo or video.”

    I wrote after this year’s F8 about how Facebook from the very beginning had always been about projecting your best self online; given that, I wondered if the focus on Live Video might ultimately prove to be a distraction from what Facebook was good at (owning identity online). This last week is validating that concern in a far more profound way than I appreciated.

The risk is this: Facebook’s control over what the vast majority of people see online — news included — is overwhelming. Before the advent of Live Video, though, Facebook could more easily claim to be a neutral provider, simply serving up 3rd-party stories via an allegedly objective algorithm that was ultimately directed by users themselves, and using that direction to build the best identity repository in the world to sell ads against. And while the reality of Facebook’s News Feed is in fact not objective at all — algorithms are designed by people — actually creating the news will, I suspect, change the conversation about Facebook’s journalistic role in a way that the company may not like.

    Facebook and the Fairness Doctrine

    Back in 1949, when the Fairness Doctrine was established, the FCC wrote in a report entitled In the Matter of Editorializing by Broadcast Licensees:

    We do not believe, however, that the licensee’s obligations to serve the public interest can be met merely through the adoption of a general policy of not refusing to broadcast opposing views where a demand is made of the station for broadcast time. If, as we believe to be the case, the public interest is best served in a democracy through the ability of the people to hear expositions of the various positions taken by responsible groups and individuals on particular topics and to choose between them, it is evident that broadcast licensees have an affirmative duty generally to encourage and implement the broadcast of all sides of controversial public issues over their facilities, over and beyond their obligation to make available on demand opportunities for the expression of opposing views. It is clear that any approximation of fairness in the presentation of any controversy will be difficult if not impossible of achievement unless the licensee plays a conscious and positive role in bringing about balanced presentation of the opposing viewpoints.

    Facebook is not a broadcaster: they don’t depend on a government-granted monopoly over radio frequencies that comes with strings attached. And frankly, even were I inclined to agree that the end of the Fairness Doctrine contributed in some way to the United States’ increased polarization, the clear free speech issues inherent in its application, combined with the explosion in media outlets, lead me to believe the FCC was right to revoke it.

That said, Facebook’s influence over what most people see quite clearly rivals that of television broadcasters circa 1949, and the vast majority of jurisdictions in which Facebook operates have far less absolute free speech protections than the United States. The more that Facebook is perceived as a media entity, not simply a neutral platform, the more likely it is that the company will face calls for regulation of the News Feed in particular, in language that will likely sound a lot like the Fairness Doctrine.

    Facebook and Transparency

    Two weeks ago Facebook took an important step in dealing with the increased scrutiny it will inevitably face, posting a document detailing “News Feed Values”. For the first time Facebook offered a hint of transparency about how its algorithm works, making clear that “friends and family come first”, but also that “your feed should inform” and “your feed should entertain.”

    To be sure the document does nothing to address the question of providing both sides of an issue; quite the opposite, in fact. The document states:

    We are not in the business of picking which issues the world should read about. We are in the business of connecting people and ideas — and matching people with the stories they find most meaningful. Our integrity depends on being inclusive of all perspectives and view points, and using ranking to connect people with the stories and sources they find the most meaningful and engaging.

    We don’t favor specific kinds of sources — or ideas. Our aim is to deliver the types of stories we’ve gotten feedback that an individual person most wants to see. We do this not only because we believe it’s the right thing but also because it’s good for our business. When people see content they are interested in, they are more likely to spend time on News Feed and enjoy their experience.

    You may think this is problematic for society (as I do), but at least Facebook is being honest about it; transparency is the company’s best tool to remain free of regulation.

    It’s also why the “technical glitch” was so disappointing. The reasons why Reynolds’ video was taken down are probably innocuous — I suspect the video was flagged for graphic content by a Facebook user and removed by a contracted content reviewer (like these in the Philippines), and then restored by someone at Facebook headquarters — and the company is probably both embarrassed that it happened and shy about revealing the degree to which it farms out content review. The most powerful journalistic entity in the world, though, doesn’t get the luxury of sweeping such significant editorial decisions under the rug: that rug will be pulled back at some point, and it would be far better for society and for Facebook were they to do so themselves.

    One thing is for sure: this won’t be the last time something truly raw, visceral, and meaningful happens on Facebook Live. Zuckerberg has gotten his wish, even if the implications will ultimately be more than he bargained for: all of the eyes on those live videos will only increase the number of eyes on Facebook itself. It’s a classic case of unintended consequences: Facebook’s attempt to capture Snapchat’s private gestalt has only solidified its position as a public platform with the added component of a newsmaker in its own right, and while that carries clear benefits for society, society will expect more transparency from Facebook, willingly delivered or not.


  • The Brexit Possibility

    The TV upfronts that I wrote about last week may seem like an odd entry point to discuss Brexit and its relationship to technology, but the core insight in that piece is critical. From my follow-up in The Daily Update:

    While it is fine and useful to look at industries like TV or transportation or consumer packaged goods or retail in isolation, if you step back far enough all of these industries are interconnected and symbiotic. TV and our modern transportation system and big consumer packaged goods conglomerates and brick-and-mortar retail all came of age in the post World War II era, and all were built with the same assumptions like the importance of scale, controlling distribution, and crucially, that each other existed. There were positive feedback loops driving the growth of all of them together (and many other industries as well).

    The implication of this symbiosis is that just as these different industries rose together, they will assuredly fall together as well, and indeed that is slowly but surely happening for all the reasons I detailed last week. For now, though, leave these particulars to the side; I’ll return to them later.

    The key takeaway, and my starting point, is the realization that no single issue or company or industry or country stands alone: everything operates in systems, and both influences and is influenced by the system within which it operates. By extension, any change to one part of the system must impact and change other parts of the system: the greater the change, the greater the upheaval until the system can return to equilibrium. Sometimes, though, the change destroys the system completely.

    The Old System

    During the 20th century, particularly the post World War II era, the United States led the formation of a multinational system that balanced the government, large corporations, and labor.


    The U.S. focused its foreign policy on the interrelated goals of containing communism, preventing inter-European wars, and creating markets for the massive industries that had sprung up during World War II and now needed to accommodate millions of returning soldiers. In Europe the headlining effort was the Marshall Plan that combined aid used primarily to buy American-produced goods with an insistence on reducing trade barriers; the General Agreement on Tariffs and Trade came a year later. The Marshall Plan was administered by the Organisation for European Economic Co-operation, one of the first pan-European bodies that started the continent on the long road to the European Union (a road that was paved with U.S. government money1). This dual mission of peace through bureaucracy paid for with trade has endured.

    For their part, increasingly massive corporations built out the U.S.’s military power, manufactured most of the industrial and agricultural equipment on which Marshall Plan money was spent, and produced all of the accoutrements of a booming middle class: said middle class worked at those massive corporations, building everything from tanks to cars to washing machines, and spending their money on the same.

The implicit deal was this: the government created markets for the corporations, who in turn provided not just employment but also security for their employees, funding health insurance and pensions, while employees (and corporations) paid for the government: in 1960 the lowest income bracket paid 20%, while the highest paid 90%, and the corporate tax rate was 52%. Europe followed a similar model, but, spared the burden of a huge military, nationalized most social security programs, especially health care but also pensions. And, for two decades, the systems were in equilibrium.

    How Globalization Upended the System

Globalization is by no means a recent phenomenon: the idea of trading goods with other groups, so as to realize the benefits of comparative advantage,2 dates back to the earliest recorded human civilizations in the third millennium B.C.E. More pertinent to this discussion, the combination of the industrial revolution (which supercharged the idea of specialization) and steamships massively increased trade in the 19th century, when the freedom of movement of goods was primarily guaranteed by colonialism: colonies supplied the raw materials and bought the finished goods, giving colonial powers massive trade surpluses that could be used to fight intermittent wars with each other.

    This system was utterly destroyed by two World Wars, resulting in the U.S.-dominated system above; still, though, the flow of goods was similar: the U.S., the world’s new superpower, was a net exporter, even as Europe and Japan in particular built up their own industrial base first with U.S. funds, and then by selling goods both to the U.S. and to each other. The deal was intact.

    Then, in the years leading up to the 1970s, three technological advances completely transformed the meaning of globalization:

    • In 1963 Boeing produced the 707-320B, the first jet airliner capable of non-stop service from the continental United States to Asia; in 1970 the 747 made this routine
    • In 1964 the first transpacific telephone cable between the United States and Japan was completed; over the next several years it would be extended throughout Asia
    • In 1968 ISO 668 standardized shipping containers, dramatically increasing the efficiency with which goods could be shipped over the ocean in particular

    These three factors in combination, for the first time, enabled a new kind of trade. Instead of manufacturing products in the United States (or Europe or Japan or anywhere else) and trading them to other countries, multinational corporations could invert themselves: design products in their home markets, then communicate those designs to factories in other countries, and ship finished products back to their domestic market. And, thanks to the dramatically lower wages in Asia (supercharged by China’s opening in 1978), it was immensely profitable to do just that.

    It is difficult to overstate the positive impact of this particular period of globalization. Billions of people were lifted out of abject poverty, especially in China but also throughout Asia, and the United States and other western countries became significantly richer as well; trade is absolutely a win-win. Critically, though, while everyone benefited from cheaper goods, the profits were not shared equally: the managers of multinational corporations and their owners reaped the vast majority of the benefits, even as their employee base effectively shifted from their domestic markets to Asia.

This undid the post-World War II deal: middle class jobs began to disappear, and along with them the economic and social security that had been provided by corporations. It took time, to be sure, but the ascension of China to the WTO in 2001 dramatically accelerated this shift, and while its full effects were hidden by a massive expansion in credit fueled by a housing bubble, once that came crashing down in 2008 the former middle classes of developed countries came to realize just how deep a hole they had fallen into.

    The Inevitable Fallout

    Remember, everything is a system. And, given the changes wrought by the post-1970s wave of globalization, it is foolish to think that a core component of society — labor — can be fundamentally changed without knock-on effects on the other components of that system. The first murmurs were the 2009 rise of the Tea Party on the right, and the 2011 Occupy Wall Street movement on the left. While the participants of the two groups couldn’t be more different — indeed, they loathe each other — both were outraged at “the System”.

    Both movements have flowered this election cycle, both in the United States and the United Kingdom: an old-school leftist was elected the leader of the U.K.’s Labour Party, and another nearly nominated by the Democratic Party. On the right the Republican Party has nominated Donald Trump, aided in no small part by the dramatic weakening of media institutions, while the U.K., in a campaign led by Conservative Party insurgents and the far-right U.K. Independence party, has just voted to leave the European Union with the support of many traditional Labour voters. In both cases there is a new cleavage: less right versus left, and more elites who have benefited from globalization and a middle class that has been left behind.

    Again, there are clear differences between the left and right: the former sees Wall Street or The City as the villain, while the latter blames immigration. Both, though, in their own way, want a return to the old deal: honest work for an honest wage; until that happens, both share an increasing sense of having nothing to lose.3

    Tech and A New System

    A return to the old deal won’t happen, of course, nor should we want it to: the last thirty years have made both the world generally and developed countries in particular richer than ever. What is needed, though, is a new system, and here the tech industry has a critical role to play.

    While the first twenty years of the modern tech industry (starting with the personal computer) primarily benefited corporations, the last fifteen years have dramatically improved the quality of life for consumers. The defining quality of technology, particularly of Internet-based companies, has been the generation of massive amounts of consumer surplus. How much is it worth to have access to all of the world’s information in the palm of your hand, or to be connected with friends and family wherever they are, or to make new connections with people you have never met? Far, far more than however much one pays for a smartphone and a data plan.

    That this largesse is financially viable for tech companies is a testament to their tremendous scale. While the old order was about multinationals, Google and Facebook and the rest are supranational: their addressable market is the world.4 Moreover, consumers’ benefit is incumbents’ pain: as I detailed above, the new world order is slowly but surely drowning the old one. The question is just how transformative that new world order will be.

    If the old system was defined by the government, big corporations, and labor, the new system should be about government, technology, and individuals. It looks something like this:

    [Image: stratechery Year One - 284]

    Government

    The first implication of the supranational nature of technology is that, unlike the old multinationals, there is no need for government support to open markets and guarantee trade; for the most part, the less government involvement, the more fully the consumer surplus already being generated will be maximized. Rather, the government ought to take a much more active role in supporting individuals.

    At the most basic level this should include security: while universal health care would be ideal, for lots of reasons both practical and political it may not be viable in the U.S. Given that, Obamacare is a huge step in the right direction; other developed countries like the U.K. are obviously well ahead here.

    Second, instead of trying to recreate a 1950s fantasy of employment for life on an assembly line, the goal should be to create a far more dynamic labor market with a defined floor and significantly greater upside than the old system:

    • First, a universal basic income, facilitated by the government, should be set at the lower bounds of what is necessary to escape poverty. Globalization may have been the first shoe to fall on the middle class, but automation is the other, and it will affect just as many jobs as manufacturing, including — especially — white collar ones
    • Second, the government should be loosening regulations on the “gig” economy: technology has dramatically increased the degree to which work can be segmented, and that’s a good thing. Moreover, these sorts of jobs provide the upside to a universal basic income’s floor: our goal should be to make it vastly easier for individuals to better themselves if they choose to do so (while the basic income provides protection against the gig economy’s inherent uncertainty)
    • Third, there should be a significant loosening of the regulations and taxation around business creation. One of the many benefits of technology and the Internet has been to make all kinds of new businesses far more viable than ever before, but it is far too hard to get started, and the bookkeeping requirements are far too onerous. This sort of loosening, combined with the reduction in risk resulting from a better safety net and basic income, plus the possibility of building working capital through gigs, could lead to an explosion in creativity and entrepreneurial activity

    Each of these factors is critical: a universal basic income alone offers some degree of financial security, but it does not offer dignity to the recipient, or any return for society beyond a reduction in guilt. What is most important, and what offers the highest return, is enabling more and better ways to work and ultimately create: that requires fewer regulations and simpler taxation.

    Individuals

    I purposely changed the name of this part of the system from “labor” to “individuals”. While collective action was absolutely appropriate in a world where employment was dominated by massive corporations, that collectivism, and the kind of work it was appropriate for, has its costs: a ceiling on the individual, in terms of both income and creativity.

    What makes today’s world so different from the 1950s are the means with which ambition and creativity can be realized. I can write a newsletter without owning a printing press; someone else can create jewelry without a physical storefront; another can make music without a recording studio, and distribute it without a record label. Those are the easy examples — who knows what sorts of products and services might result from an emboldened and secure middle class?

    Young people in particular should relish this new world of opportunities: yes, the world of your parents is gone, but it does not automatically follow that the alternative is worse. Even with today’s mess there are far more entrepreneurial opportunities than ever before, and the younger one is the more one can accept the unnecessary risks that unfortunately still exist. And, on the flipside, opportunities predicated on the old system are themselves riskier than they have ever been.

    Tech

    It’s understandable why so many in tech are dismissive of the old order: beyond the consumer surplus being generated, and the systematic destruction of incumbents, the industry is increasingly the primary economic driver of the United States in particular, which offers a certain sense of invincibility. It would be against the self-interest of both consumers and politicians to hold tech back.

    And yet, there is an aspect of that calculation not far removed from measuring computers based on speeds and feeds. Yes, any rational calculation about the impact of the tech industry shows how indispensable it is, but people are not always rational, especially when they are desperate. It is absolutely in the industry’s best interests to not only participate in but lead the creation of a new system that works to the benefit of all.

    To that end, technology executives and venture capitalists should lead the campaign for the type of reforms I have listed above. More importantly, they should match their rhetoric with actions: companies like Apple and Google should strive to be technology leaders, not tax avoidance ones. Successful entrepreneurs and their investors should champion increased capital gains taxes with a bias towards much longer-term investment: this both encourages the long view even as it accounts for the massive return that comes to investors and shareholders in a winner-take-all world. VCs in particular should be willing to close the carried interest loophole, and everyone in the industry should be willing to shoulder higher tax rates.

    The payoff is equilibrium: the chance to build fabulously successful businesses that go with the current, not against it. The alternative is far worse: once automation arrives, guess who is going to be the scapegoat?

    Brexit: Wrong Reasons, Right Results?

    To be clear, this is a package deal: higher tax rates to fuel a misguided attempt to recreate the 1950s would be just as much of a disaster as undoing the old deal has been for the middle class. The world has changed.

    Indeed, this is why I’m not quite prepared to join in the panic over Brexit, although I understand and acknowledge the very real downsides. I keep coming back to the fact that the European Union is a product of the old order — a world where government entities existed to enable trade for multinationals and rules for everyone else. Small wonder the EU has been the most hostile to the changes wrought by tech! There is no question that undoing 40+ years of integration will be extremely painful — if indeed the U.K. leaves the EU at all — but given that the old order has already been disrupted, how much is to be gained by continuing to pretend that nothing has changed? Alternatively, might there be potential in building something new?

    To be sure, there is no evidence that Brexit was driven by a vision of a new world order; quite the opposite, in fact. And, unlike many Brexit voters, I am mindful of the elite consensus about the problems with a withdrawal: trade still matters, and the loss of access to the European market, plus the internal side effects with regards to Scotland and Northern Ireland, are huge problems (and I can read a stock ticker!). But then again, the very definition of who is elite, and why, is as much a part of the system as anything else, and the fact that so few voices even acknowledge the increased restrictiveness of the EU, or its complete lack of economic growth, much less grapple with why it is the EU came to be and how deeply entwined that is with the old system,5 is to my mind a missed opportunity to at least think about how things could be different.

    Everything is connected, everything is a system — and a crisis is a terrible thing to waste.


    1. The CIA financed nearly all the various organizations and individuals pushing for political integration 

    2. Comparative advantage is the idea that collective productivity can be maximized if every person/group/country specializes in what they are best at, and then trade for what else they need, as opposed to every person/group/country being entirely self-sufficient. This is one of the most important factors underlying economic progress; to take a very fundamental example, few of us grow our own food, as it is more efficient for farmers to do that at scale. We, in turn, sell our specialization to others giving us the means to buy food. 

    3. Is there a racial component to the opposition to immigration? Almost certainly. But I suspect the ugly manifestations of whatever darkness lies in people’s hearts would be much less common in a thriving economy 

    4. Except China, thanks to the Great Firewall 

    5. Above and beyond a desire to keep the peace, which is deeply meaningful 


  • TV Advertising’s Surprising Strength — And Inevitable Fall

    It’s been a good few months for TV executives, who are in the middle of upfront negotiations with advertisers for the 2016-2017 television season. Variety reports:

    After several years of moving money out of TV ad budgets to experiment with new digital outlets and social media, several big advertisers are spending more on the boob tube – and the result, according to ad buyers and other executives familiar with the pace of this year’s “upfront” negotiations, are a series of rate increases that TV has not won since the end of the last U.S. recession.

    “It’s all about money coming back into the marketplace,” said one media buying executive, noting that consumer packaged goods companies, quick-service restaurant chains and pharmaceutical companies are moving money into TV’s annual upfront market, when the nation’s big media companies try to sell the bulk of their ad inventory for the coming programming cycle. Some of the money is coming back from digital spending, and some of it is being moved from TV’s so-called “scatter” market, when advertisers pay for commercials much closer to their air date.

    Reports indicate those rate increases are running between 7% and 12%, and follow on a 2015-2016 season that saw scatter advertising — advertising purchased closer to the airing date — up 16% year-over-year. Apparently all that digital advertising hype was just a fad, right?

    I wouldn’t be so sure.

    Advertising and Attention

    Despite all of the upheaval caused by the Internet, there are two truths about advertising that have remained constant:

    1. Advertising’s share of GDP has remained consistent for 100 years1
    2. TV’s share of advertising, after growing for 40 years, has also remained consistent at just over 40% for the last 20 years

    Those twenty years have seen the emergence of digital advertising generally, and, over the last five years, mobile advertising: while this emergence is likely responsible for the halt in growth for TV, the real victims have been radio, magazines, and especially newspapers, which have shrunk from a nearly 40% advertising share to about 10%.

    Still, digital and mobile’s 33% share of advertising falls well short of the amount of attention attracted. Digital accounted for 47% of time spent with media in 2015, up from 32% in 2011, while TV has fallen from 41% in 2011 to 35% in 2015.2 This decline, though, is not evenly distributed: this jarring chart from Redef about the change in hours spent watching TV by age group shows that the situation for TV is much worse than the top-line numbers suggest:

    [Chart (Redef): change in hours spent watching TV by age group]

    The three age groups with the biggest declines — millennials, basically — are the most attractive to the brand advertisers that dominate TV advertising: for one, the younger you are the less likely you are to have developed affinity for a particular brand, and for another, the younger you are the more years a brand has to earn back the money spent building said affinity. Small wonder brands have been so eager to jump on new digital platforms where said millennials are actually spending their time.

    So why is money suddenly flowing back to TV?

    The Relationship Between TV and Advertisers

    The most obvious reason for TV’s enduring appeal to advertisers is that it is a pretty fantastic advertising medium: relaxed viewers, immersive experience, etc. The appeal, though, goes deeper: the very institution of television advertising is intertwined with the kinds of advertisers that use it the most, the products they sell, and the way they are bought and sold. And what should be terrifying to television executives is that all of the pieces that make television advertising the gold mine it has been are under the exact same threat that TV watching itself is: the threat of the Internet.

    Start with the top 25 advertisers in the U.S. The list is made up of:

    • 4 telecom companies (AT&T, Comcast, Verizon, Softbank/Sprint)
    • 4 automobile companies (General Motors, Ford, Fiat Chrysler, Toyota)
    • 4 credit card companies (American Express, JPMorgan Chase, Bank of America, Capital One)
    • 3 consumer packaged goods (CPG) companies (Procter & Gamble, L’Oréal, Johnson & Johnson)
    • 3 entertainment companies (Disney, Time Warner, 21st Century Fox)
    • 3 retailers (Walmart, Target, Macy’s)
    • 1 each from electronics (Samsung), pharmaceuticals (Pfizer), and beer (Anheuser-Busch InBev)

    Notice that the vast majority of the industries on this list are dominated by massive companies that compete on scale and distribution. CPG is the perfect example: building a “house of brands” allows a company like Procter & Gamble to target demographic groups even as it leverages scale to invest in R&D, bring down the cost of products, and most importantly, dominate the distribution channel (i.e. retail shelf space). Said retailers, meanwhile, are huge in their own right, not only so they can match their massive suppliers at the bargaining table but also so they can scale logistics, inventory management, store development, etc. Automobile companies are not unlike CPG companies: they operate a “house of brands” to serve different demographics while benefitting from scale in production and distribution; the primary difference is that they make money through one large purchase instead of many smaller purchases over time.

    Similar principles apply to the other companies on this list: all are looking to reach as many consumers as possible with blunt targeting at best, all benefit from scale, and all are looking to earn significant lifetime value from consumers. And, along those lines, all can afford the expense of TV. In fact, the top 200 advertisers in the U.S. love TV so much that they make up 80% of television advertising, despite accounting for only 51% of total advertising (and 41% of digital).

    Note, though, that many of the companies on this list are threatened by the Internet:

    • CPG companies are threatened on two fronts: on the high end, the combination of e-commerce plus highly-targeted and highly-measurable Facebook advertising has given rise to an increasing number of boutique CPG brands that deliver superior products to very targeted groups. On the low end, meanwhile, e-commerce not only reduces the shelf-space advantage but Amazon in particular is moving into private label in a big way.
    • Relatedly, big box retailers that offer few advantages beyond availability and low prices are being outdone by Amazon on both counts. In the very long run it is hard to see why they will continue to exist.
    • The automobile companies, meanwhile, are facing three separate challenges: electrification, transportation-as-a-service (i.e. Uber), and self-driving cars. The latter two in particular (and also the first to an extent) point to a world where cars are pure commodities bought by fleets, rendering advertising unnecessary.

    The other companies face less of a long-term threat, some because they are already commoditized — telecoms, credit cards, electronics — and others because they will probably grow: big movies are only getting bigger (entertainment), and the population is getting older (pharmaceuticals). Still, the inescapable reality is that TV advertisers are 20th century companies: built for mass markets, not niches, for brick-and-mortar retailers, not e-commerce. These companies were built on TV, and TV was built on their advertisements, and while they are propping each other up for now, the decline of one will hasten the decline of the other.

    TV Advertising’s Dead Cat Bounce

    I also suspect the nature of the biggest TV advertisers explains TV’s dead cat bounce: brands uniquely suited to TV are probably by definition less suited to digital advertising, which at least to date has worked much better for direct response marketing. No one is going to click a link in their feed to buy a car or laundry detergent, and a brick-and-mortar retailer has no interest in encouraging customers who are already online to shop online. So after a bit of experimentation, they’re back with TV.

    Still, I think Facebook and Snapchat in particular will figure brand advertising out: both have incredibly immersive advertising formats, and both are investing in ways to bring direct response-style tracking to brand advertising, including tracking visits to brick-and-mortar retailers. It wouldn’t surprise me if brand advertising on digital is following the hype cycle:

    [Image: the hype cycle]

    This is the story of most things Internet-related, not just narrowly but broadly: it’s no accident many of today’s startups are repeating ideas from the dot-com era; it’s not that those ideas were wrong but that they were too early. And, when it comes to old world companies, if you turn that graph upside down, the “trough of disillusionment” looks a lot like a bounce-back!

    Ultimately, given the shift in attention, the threats faced by their best advertisers, and the oncoming train that is Facebook and Snapchat, were I a TV executive I wouldn’t get too excited about one nice week of ad sales. Indeed, the industry has been shifting to subscriptions for years, and while advertising will hold up for a while, the big drama is who will be left without a chair when the music stops.3

    Coda: Aggregation Theory

    One more thing: I wrote a piece earlier this year called The Fang Playbook that posited that Facebook, Amazon, Netflix, and Google (plus Uber) were structurally very similar companies: all leveraged zero distribution costs and zero transaction costs to own users at scale via a superior experience that commoditized suppliers and let them skim off the middle, either through fees, subscriptions, or ads.4

    What I described above is the opposite side of the coin: linear television and its advertisers were all predicated on owning distribution and thus owning customers. The Internet has destroyed, or is in the process of destroying, their business models for broadly similar reasons; for now the intertwinement of these models is keeping everyone afloat, but that only means that when the end comes it will come more swiftly and broadly than anyone is expecting.


    1. Although this may be slowing; more on this tomorrow 

    2. Note that the advertising-free Netflix is categorized as digital; although the streaming service still serves a minority of U.S. households, its subscribers watch an average of 1 hour and 33 minutes a day, and are responsible for a good deal of TV’s fall-off. 

    3. Viacom, for example 

    4. Aka Aggregation Theory  


  • Microsoft and Apple Double Down

    It has been years since Microsoft upstaged an Apple keynote (such things usually run in the opposite direction), but that is exactly what happened yesterday, with the former’s $26.2 billion purchase of LinkedIn overshadowing Apple’s impressive yet iterative announcements at its annual Worldwide Developers Conference (WWDC). And yet, despite the contrast in expectations (zero versus high) and reactions (“What?!?” versus “Makes Sense”) there is a certain symmetry to the news: both companies are doubling down on their strategies, and it’s debatable which is taking the greater risk by doing so.

    Microsoft Buys LinkedIn

    In its own small way perhaps the most surprising — and in many respects, encouraging — aspect of Microsoft’s purchase of LinkedIn was just how unexpected it was: a company with a notorious history of cutthroat boardroom politics not only pulled off the largest acquisition in its history without anyone knowing, it actually sat on the news regarding a signed letter of intent for a month! Granted, that meant limiting the information to an exceptionally small number of people, but there has been a lot of evidence over the past two years that Microsoft under CEO Satya Nadella has, at least at the highest levels of management, been more focused and aligned on where Microsoft needs to go than at any point under former CEO Steve Ballmer.

    It was Steve Ballmer who led Microsoft’s previous largest acquisition, that of Nokia in 2013, a deal that from day one made no sense (and that was opposed by Nadella). It is that deal, though, that is perhaps the best place to start from when it comes to understanding this one.

    I marveled last month at how adroitly Nadella had killed the Windows Phone business (or, to put it more accurately, allowed the Windows team to figure out on its own that the platform had been dead from the beginning). The wind down of that business, though, and the ongoing shift of Windows to effectively “maintenance mode”, opens up room in Microsoft’s R&D budget — about 13% of revenue — for something new, so why not LinkedIn? Leaving aside the purchase price, LinkedIn slides right into the Windows Phone void when it comes to Microsoft’s investment in the future, with the benefit of being a business that has actual upside.

    I do believe that upside is magnified significantly by Microsoft: should LinkedIn’s Sales Navigator, for example, sell in to 100% of the Microsoft Dynamics CRM user base, a good portion of this deal would be paid for; on the flipside, Dynamics now becomes a much more compelling offering in its ongoing competition with Salesforce (the most likely candidate for the rumored competitive bidder), Oracle, and SAP. And, of course, Microsoft’s Office products become that much more compelling with far deeper LinkedIn integration than was possible when they were two different companies (privacy concerns are much more easily solved when it’s the same company!).

    I think, though, there is a deeper benefit that alters the trajectory of Microsoft’s productivity business in particular. I’ve written previously how Microsoft’s enterprise approach has been fundamentally upended by the cloud: when compatibility and ease-of-integration are no longer the controlling factors for IT buyers, Microsoft’s focus on lock-in and a good-enough user experience is simply not enough, and to the company’s credit both its Azure and Office 365 divisions have embraced a future that will be won on the user experience on all devices, not just Microsoft-controlled ones. Still, the question of who would own identity — previously the linchpin of Microsoft’s enterprise lock-in (thanks to Active Directory) — has been an open one.

    What is potentially transformative about this deal is a future where Microsoft retains its focus on enterprise while shifting the locus of its business from companies to employees. I have written at length about the importance of owning the end user, but we the end users have multiple identities, one of which is our professional life, and that is a graph in which LinkedIn is nearly unchallenged. To put it another way, the “consumerization of IT” — a tagline favored by Ballmer — is not only about creating a compelling user experience in IT products but about treating enterprise users as, well, consumers: with LinkedIn Microsoft can form a direct relationship with its end users that goes far beyond the CIO and opens up a huge array of opportunities that were unavailable previously, and that are critical in a world where CIOs matter less than ever and employees change jobs constantly. Instead of starting from scratch with every new hire, it is Microsoft that is positioned to provide the glue that connects enterprise workers no matter where they are.

    And, for what it’s worth, Microsoft got a deal: I think LinkedIn’s February stock slide was justified, but the fact remains that Microsoft is buying the social network for a price that would have been unthinkable six months ago, and the upside that justified the former stock price remains: LinkedIn knows more about its users than anyone outside of Facebook — and when it comes to our professional lives, they know more. This is the most valuable data in the world.

    The most obvious criticism of this deal is the opportunity cost: what might that $26 billion have been spent on otherwise? Even with this deal Microsoft’s existential threat remains: why would a new company buy any of its products? This acquisition doesn’t solve that in the way a pairing of, say, Dropbox and Slack would (two tastes that would be better together, neither of which is for sale), but then again, Microsoft has far more money than it does time: LinkedIn builds a bridge for its productivity business to a new world centered around end users, not corporations, and it may even give Microsoft’s inevitable compete products a head start. Imagine this: instead of simply moving Active Directory to the cloud, Microsoft is potentially making LinkedIn the central repository of identity for all business-based interactions: chat, email, and more, and it’s an identity that endures for an end user’s professional life, because it’s managed by the user, not by their transient employer. It’s genuinely exciting, and shocking though it may have been, I think that’s worth $26 billion.

    More broadly, this has to be seen as the final blow to the notion that Windows is Microsoft’s focus; as late as 2014 I argued the company should split itself up, simply because the “Windows first” culture was so entrenched, but Nadella has done a remarkable job reorganizing and reorienting the company towards a future where Windows is one of many clients for Microsoft’s service offerings, and if that wasn’t clear to the rank-and-file yet, it surely must be now. Microsoft is doubling down on the cloud and on productivity, and now there are 26 billion reasons to believe them.

    Apple at WWDC

    Apple’s WWDC announcements, meanwhile, were perhaps most notable for what they didn’t include: there was no mention of iMessage on Android, and no significant discussion of Siri beyond the rumored and sorely needed incremental improvements (and even those were somewhat limited: Siri’s API is only open to a subset of applications).

    Instead the keynote was about enhancing and deepening the value that comes from living the full Apple lifestyle: now your Watch unlocks your Mac, and your desktop is available on your iPhone. You can pay for things in your Mac’s browser using the iPhone, and control your house via the Apple TV from your lock screen. And yes, Messages got a massive update that I am very excited about (more tomorrow as I review the keynote in-depth), but its features are only available if you’re messaging other iPhone users: pay up or get left out.

    Just as notable were the new things announced outside the keynote: Apple is allowing users to delete pre-installed apps, and while there is no news about defining new default apps, should Apple do so the company would be on the path towards making its hardware mastery even more compelling by enabling services companies like Google and Microsoft to enhance the experience of using an iPhone.

    This enhancement, of course, theoretically weakens Apple in the long run, as it squanders the ability of the company to leverage its hardware advantage into a services lock-in, but given my conviction on the power of culture I’m not sure this is a bad thing; here Steve Jobs’ advice rings true:

    I think if you do something and it turns out pretty good, then you should go do something else wonderful, not dwell on it for too long. Just figure out what’s next.

    For Apple what is next should almost certainly be guided by what the company is the best at: integrating hardware and software to deliver a user experience so compelling that consumers continue to self-select into the company’s own orbit, not building infrastructure on top of platforms it doesn’t control.

    That, though, is also what made me nervous about the company’s announcements. CEO Tim Cook emphasized at the beginning of the keynote, in his words, “why we do what we do at Apple.”


    Our North Star has always been about improving people’s lives by creating great products that change the world.

    Several of the product announcements, though, like the enhancements to Photos and Siri, seemed to care more about an absolutist view of privacy than about the best possible end user experience; make no mistake, I value privacy, but everything is a trade-off. At what point might Apple’s strenuous defense of privacy shift from a principled stand to a convenient reason not to match alternatives like Google Photos or Alexa? Or, to put it another way, when does principle become an excuse for falling behind on the user experience that has been Apple’s biggest differentiator for its entire existence?

    Apple Versus Microsoft

    Apple’s keynote was notable for doubling down on what the company excels at: providing a better experience, provided you pay Apple’s hardware margins. In this the announcements echo Microsoft’s decision to double down on owning professional productivity and services. The difference, though, is in the timing: Microsoft is (finally) pivoting to the approach they should have adopted a decade ago, and while that has cost the company a lot of time and increased their risk, it is exciting to see them closer to the beginning than the end of their strategy. The question for Apple is where on that spectrum they lie: for how long will a hardware-centric strategy drive growth, and just how much is Apple willing to change its culture to ensure it takes advantage of new opportunities rather than simply preserving what it has?

    Or maybe it doesn’t matter: odds are the biggest news from Monday will be Snapchat’s launch of an advertising API; the world goes on, value moves up the stack, and attention to the “story of the day” is all too often a trailing indicator.


  • The Future of Podcasting

    I like driving, even if I end up sitting in traffic. I enjoy doing the laundry, and take my time folding shirts just so. I volunteer to wash the dishes. After all, each of these activities is an excuse to listen to more podcasts.1

    I’ve been listening to podcasts for over a decade now; I don’t remember exactly when I got started but it was around the time that Apple Took Podcasting Mainstream: that’s from the title of the press release announcing iTunes support for podcasts in 2005. Given that most podcasts were listened to on iPods (thus the name) that already synced with iTunes, Apple’s move dramatically simplified the distribution of podcasts: simply click a button in the music management app you already used, hook up the iPod as you already did, and voilà! New podcasts ready to be listened to in the car (via your cassette tape adaptor), while doing laundry, washing the dishes, etc. It was great!

    It also was not in the slightest bit mainstream: according to Edison Research, in 2006 only 22% of Americans were even familiar with the term “podcasting”, and only 11% had ever listened to one. Both numbers have slowly but steadily grown over the years (55% have heard of podcasting as of this year, and 36% have listened to one, and there actually isn’t a readily apparent ‘Serial’ bump), aided in large part by the smartphone: by removing the need to sync with iTunes it was much easier to have fresh podcasts at the ready. Still, there remained the challenge of creating compelling content, discovering content worth listening to, retaining listeners and, of course, paying for it all.

    Podcasting Versus Blogging

    Late last year Joshua Benton wrote that Podcasting in 2015 Feels a Lot Like Blogging Circa 2004:

    Podcasting is giving me a case of déjà vu…The variety and quality of work being done is thrilling; outside attention is growing; new formats are evolving. We’re seeing the same unlocking of creative potential we saw with blogging, and there’s far more good work being produced than anyone has time to take in. The question now is whether podcasting’s future will play out as the last decade of blogging has.

    It’s a good observation, but there are important differences if you look into the various factors I alluded to above:

    • Creation: Blogger was released back in 1999, and WordPress in 2003; both required some level of acumen, but significantly less than recording and mixing a podcast takes today (it is also surprisingly complicated to get a show actually listed in iTunes). This means, by extension, that for all the great podcasts there are today, there were that many more blogs back then.
    • Distribution: Blogs could be read via URLs typed in a browser that everyone already used. Podcasts are much more complicated: you either have to search a 3rd-party podcast player’s directory (iTunes or self-contained) to add a show, or copy-and-paste a feed address. Alternately, you can simply listen on a website, but that is a suboptimal experience to say the least.
    • Discovery: In 2004 most blogs were found from links on more popular blogs; today new blogs are usually discovered on social networks. Podcasts, meanwhile, really struggle here: yes, iTunes has a front page and a blackbox ranking system, but the requirement to download a file and spend time listening makes it hard to spread virally. Many podcasts are instead built off of established brands like NPR or the personal brands of the podcast hosts.
    • Retention: Back in 2004 most blog readers returned via bookmarks; more advanced users leveraged RSS readers that polled sites for new content and downloaded it in a feed. Today, most readers rely on social network posts that may or may not be seen. Interestingly, it is here that podcasts have an advantage: because they are built on RSS, anyone who “subscribes” through a podcast player downloads podcasts automatically and even gets notifications, making for a very sticky and loyal audience.
    • Monetization: Blogging had a brief honeymoon period where you could actually make money with Google AdSense, but revenue soon plummeted as inventory drastically increased; more devastating, not just for bloggers but all publishers, was Facebook’s absorption of not just what used to be blogging content but also publishing dollars. Increasingly, the best option for publishers is to simply publish directly to Facebook and let them sell ads.
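
    The retention advantage above comes down to a simple mechanism: a podcast “subscription” is nothing more than an RSS feed that the player periodically polls, downloading any episode enclosures it hasn’t fetched before. A minimal sketch in Python (the feed, URLs, and helper function are invented for illustration):

```python
import xml.etree.ElementTree as ET

# A minimal (invented) podcast feed: each episode is an <item> whose
# <enclosure> points at an audio file on the podcaster's own server.
FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Podcast</title>
    <item>
      <title>Episode 2</title>
      <enclosure url="https://example.com/ep2.mp3" type="audio/mpeg" length="123456"/>
    </item>
    <item>
      <title>Episode 1</title>
      <enclosure url="https://example.com/ep1.mp3" type="audio/mpeg" length="123456"/>
    </item>
  </channel>
</rss>"""

def new_episodes(feed_xml, already_downloaded):
    """Return enclosure URLs the player has not fetched yet."""
    root = ET.fromstring(feed_xml)
    urls = [item.find("enclosure").get("url") for item in root.iter("item")]
    return [u for u in urls if u not in already_downloaded]

print(new_episodes(FEED, {"https://example.com/ep1.mp3"}))
# → ['https://example.com/ep2.mp3']
```

    Note that this same mechanism is the root of the measurement problem discussed below: the podcaster’s server sees the MP3 download, and nothing else.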

    The monetization of podcasts, meanwhile, deserves a deeper dive.

    The Problem with Monetizing Podcasts

    Podcasting is still a tiny business: according to the Wall Street Journal podcasts attracted only $34 million in advertising revenue last year, about 1/100th the amount spent on billboards (this number is disputed by many of the bigger podcasters, but the highest estimate I’ve seen is $200 million, i.e. 1/15th of billboards). The biggest player in the podcast advertising market is a company called Midroll Media, which was acquired by E.W. Scripps last year for $50 million (with a $10 million earn-out).

    Midroll sells ads for over 200 podcasts, including some of the most popular ones like WTF with Marc Maron and the Bill Simmons Podcast. The not-so-secret reality about podcast ads, though, is that advertisers are quite concentrated: a FiveThirtyEight intern heroically listened to the top 100 shows on the iTunes chart and counted 186 ads; 35 percent of them were from just five companies.2 More tellingly, nearly all of the ads were of the direct marketing variety.

    A major challenge in podcast monetization is the complete lack of data: listeners still download MP3s and that’s the end of it; podcasters can measure downloads, but have no idea if the episode is actually listened to, for how long, or whether or not the ads are skipped. In a complete reversal from the online world of text, the measurement system is a big step backwards from what came before: both radio and TV have an established measurement system for what shows are watched, and the scale of advertising is such that surveys can measure advertising effectiveness. Thus the direct marketing advertisers: they can simply do the measurement themselves through coupon codes or special URLs that measure how many people responded to a podcast ad. It’s not totally efficient — some number of conversions forget the code or URL — but it’s something.

    It also won’t scale. For the advertisers that do exist, measuring by code or URL means that every single podcast needs customized support, limiting advertising opportunities to bigger podcasts only. More importantly, there simply aren’t that many advertisers with the sort of business model that can justify the hassle. The real money in TV and especially radio is brand advertising; brand advertising is focused on building affinity for a purchase that will happen at some indefinite point in the future, so the focus is less on conversion and more on targeting: knowing in broad strokes who is listening to an ad, and exactly how many people. For podcasting to ever be a true moneymaker it has to tap into that — and that means changing the fundamental nature of the product.

    Midroll Makes Their Move

    Yesterday Scripps/Midroll made another acquisition, this time of the podcast player Stitcher. From the Wall Street Journal:

    Stitcher is a free app that streams more than 65,000 podcasts from publishers ranging from NPR to MSNBC to The Wall Street Journal. It will operate under Midroll Media, the podcast advertising company that Scripps acquired last year…

    “We certainly have the ad sales force and the connections that make us a leader in the space, but today we depend almost exclusively on distribution into other channels,” said Adam Symson, chief digital officer at Scripps. “This puts in place, with a very strong brand, another piece of the puzzle in the ecosystem play.”

    As new listeners and shows enter the podcast world, companies in the space have been contending with a handful of industry challenges, like measuring audience size and wooing big brand marketers. “For the first time we’ll have significantly more ability to help with podcast discovery, to help with distribution, to help shows grow, and to help find out what audiences want in a way that we could not do before,” Mr. Diehn said.

    Stitcher is thought to be the 2nd most popular podcast player, although it has long been controversial in some circles for its default practice of hosting podcasts itself (instead of directing users to download them directly from a podcaster’s server) and inserting ads. That model, though, was likely attractive to Scripps/Midroll: controlling the files and the player means the possibility of making meaningful measurements of play data plus dynamic ad insertion at scale.

    Moreover, Midroll’s leading role in advertising combined with Scripps’ bank account mean the company could offer big bucks to leading podcasters to make themselves exclusive to Stitcher, driving users to the measurable app to the long-term benefit of the company’s efforts to attract brand advertisers. To be very clear, there are a lot of obstacles to this actually happening, but the idea of there being a central aggregator for podcasts that locks in podcasters through superior monetization and listeners through exclusive content is both plausible and attractive from a business perspective.

    Publishers Beware

    Popular podcaster and blogger Marco Arment has been particularly vocal about the problems with this approach; more pertinently, Arment says he built the Overcast podcasting app to resist this outcome:

    Podcasts are hot right now. Big Money is coming. Big Money isn’t going to sell nicely designed, hand-crafted, RSS-backed podcast players for $2.99 or ask you to pay what you want to support them, because that doesn’t make Big Money. They’re coming with shitty apps and fantastic business deals to dominate the market, lock down this open medium into proprietary “technology”, and build empires of middlemen to control distribution and take a cut of everyone’s revenue…I don’t know if Overcast stands a chance of preventing the Facebookization of podcasting, but I know I’m increasing the odds if my app is free without restrictions. As long as I can make money some other way, I’m fine.

    That phrase — “the Facebookization of podcasting” — should send chills down the spine of all the publishing companies jumping headfirst into podcasting. Publishers are already “serfs in a kingdom that Facebook owns” by virtue of the fact that Facebook owns user attention and has superior advertising capabilities. And yet many publishers are so focused on finding new income streams that they are practically begging for exactly what Arment fears.

    Early last month the New York Times suggested many publishers hoped the Facebook of podcasting would be Apple; thanks to that 2005 release the company and its directory remain the center of the podcasting industry, and iTunes and its iOS podcast app are responsible for a reported 65 percent share of podcast listeners. Indeed, this is the biggest reason to doubt Midroll’s plans: as difficult as it can be to corral advertisers, switching the habits of millions of listeners in the face of a default experience is far more difficult.

    A Third Way

    All that said, I’m not sure the status quo of podcasters hosting their own MP3 files listed in a (relatively) open directory mostly ignored by Apple is sustainable, or even desirable: relatively large independent podcasters like Arment may prefer the current setup, but there is an increasing amount of money and agitation for building something that looks a lot like Midroll’s presumed plans for Stitcher. Apple itself, with its dominant position in podcasting and its newfound focus on services revenue in the face of declining iPhone sales, is not only well-placed but also increasingly motivated to fill that role itself.

    More importantly, though, for publishers podcasting really is a great opportunity to build something sustainable. In Grantland and the (Surprising) Future of Publishing I explained how media companies need to expand their thinking about monetization:

    Too much of the debate about monetization and the future of publishing in particular has artificially restricted itself to monetizing text. That constraint made sense in a physical world: a business that invested heavily in printing presses and delivery trucks didn’t really have a choice but to stick the product and the business model together…

    Focused, quality-obsessed publications [should]…collect “stars” and monetize them through…alternate media forms. Said media forms, like podcasts, are tough to grow on their own, but again, that is what makes them such a great match for writing, which is perfect for growth but terrible for monetization.

    Go back to the five factors that go into effective media: both text and podcasts are relatively easy to create, but text is much easier to distribute and discover; the most effective podcasts, meanwhile, are those driven by brands or personalities; podcasts in general are great at retaining loyal listeners; and their monetization potential is much higher if the measurement can be figured out.

    A Stitcher/Apple-type solution does help on that last point, but it still makes distribution and discovery harder than they should be: a publisher has to tell its readers to go to a different app, search for their name, subscribe, and then depend on that 3rd party app for monetization and measurement. Wouldn’t it be better if the publisher simply did that themselves?

    I think there is a third way here, that preserves independence but starts to solve the monetization and measurement problem: publishers should offer podcasts through their own app that measures listens, and either sell ads themselves if they have the scale or outsource it to a company like Midroll.3 Midroll, for their part, should leverage their new player technology to offer skinnable apps for publishers who can’t build their own. The end result would be a much smoother path for publishers to convert their readers to listeners — and to effectively cross-promote — along with the measurement and scale needed to grow advertising meaningfully (or even offer subscriptions).4

    I know this breaks the modern concept of podcasting, and power users with tens of subscriptions in their podcasting player of choice will be annoyed if they have to download multiple apps. Often, though, a solution that works for power users is actually prohibitive for normal users, and the other solution — a Facebook of podcasts — would be worse for everyone. Just look at what happened to RSS readers: yes, Google killed them, but they were only ever used by a fraction of readers before then because they were too difficult; Facebook, on the other hand, was easy.5 Fortunately for publishers, the challenges of podcast discovery and distribution actually make apps the easiest choice of all.6

    There are plenty of good reasons why the publishing world ended up subservient to Facebook, but to answer Benton’s question as to “whether podcasting’s future will play out as the last decade of blogging has”, I don’t think it has to. The limitations of audio relative to text actually work to the publishers’ advantage in a way that the portability of text did not, and this may be their last chance to build destinations that people will wash the dishes in order to visit.


    1. I mostly listen to sports podcasts, primarily NBA, so no need to email for my list of recommendations 🙂  

    2. Squarespace had 30, Stamps.com 12, Audible 11, MailChimp 8, and Dollar Shave Club 5 

    3. I’m using Midroll as a standin, but this role could be filled by any company 

    4. Also, there’s no need for publishers to fear the App Store: Apple doesn’t take a skim off of advertising, and the App Store infrastructure would actually make subscriptions viable 

    5. To be clear, I love RSS! And it underpins a lot of the web, including Facebook Instant Articles. But I’m talking about the mass market and monetization 

    6. Yes Newsstand was a failure; however, the entire premise of this article is that text is different than audio 


  • The Curse of Culture

    One of the seminal books on culture is Edgar Schein’s Organizational Culture and Leadership. Schein writes in the introduction:

    Perhaps the most intriguing aspect of culture as a concept is that it points us to phenomena that are below the surface, that are powerful in their impact but invisible and to a considerable degree unconscious. In that sense, culture is to a group what personality or character is to an individual. We can see the behavior that results, but often we cannot see the forces underneath that cause certain kinds of behavior. Yet, just as our personality and character guide and constrain our behavior, so does culture guide and constrain the behavior of members of a group through the shared norms that are held in that group.

    In Schein’s telling, things like ping pong tables and kegerators are two (small) examples of artifacts — the visible qualities of an organization. They are easy to observe but their meaning is usually indecipherable and unique to a particular group (to put it another way, copying Google’s perks is missing the point).

    The next level down are espoused beliefs and values, what everyone in an organization understands consciously: “openness,” for example, or “the customer is always right”; as you might expect espoused beliefs and values devolve rather easily into cliché.

    It’s the third level that truly matters: underlying assumptions. Schein writes:

    Basic assumptions, in the sense in which I want to define that concept, have become so taken for granted that one finds little variation within a social unit. This degree of consensus results from repeated success in implementing certain beliefs and values, as previously described. In fact, if a basic assumption comes to be strongly held in a group, members will find behavior based on any other premise inconceivable.

    The implications of this definition are profound: culture is not something that begets success, rather, it is a product of it. All companies start with the espoused beliefs and values of their founder(s), but until those beliefs and values are proven correct and successful they are open to debate and change. If, though, they lead to real sustained success, then those values and beliefs slip from the conscious to the unconscious, and it is this transformation that allows companies to maintain the “secret sauce” that drove their initial success even as they scale. The founder no longer needs to espouse his or her beliefs and values to the 10,000th employee; every single person already in the company will do just that, in every decision they make, big or small.

    Microsoft’s Blindness

    As with most such things, culture is one of a company’s most powerful assets right until it isn’t: the same underlying assumptions that permit an organization to scale massively constrain the ability of that same organization to change direction. More distressingly, culture prevents organizations from even knowing they need to do so. Schein continues:

    Basic assumptions, like theories-in-use, tend to be nonconfrontable and nondebatable, and hence are extremely difficult to change. To learn something new in this realm requires us to resurrect, reexamine, and possibly change some of the more stable portions of our cognitive structure…Such learning is intrinsically difficult because the reexamination of basic assumptions temporarily destabilizes our cognitive and interpersonal world, releasing large quantities of basic anxiety. Rather than tolerating such anxiety levels, we tend to want to perceive the events around us as congruent with our assumptions, even if that means distorting, denying, projecting, or in other ways falsifying to ourselves what may be going on around us. It is in this psychological process that culture has its ultimate power.

    Probably the canonical example of this mindset was Microsoft after the launch of the iPhone. It’s hard to remember now, but no company today comes close to matching the stranglehold Microsoft had on the computing industry from 1985 to 2005 or so.1 The company had audacious goals — “A computer on every desk and in every home, running Microsoft software” — which it accomplished and then surpassed: the company owned enterprise back offices as well. This unprecedented success changed that goal — originally an espoused belief — into an unquestioned assumption that of course all computers should be Microsoft-powered. Given this, the real shock would have been then-CEO Steve Ballmer not laughing at the iPhone.

    A year-and-a-half later, Microsoft realized that Windows Mobile, their current phone OS, was not competitive with the iPhone, and work began on what became Windows Phone. Still, unacknowledged cultural assumptions remained: one, that Microsoft had the time to bring its unmatched resources to bear on making something that might be worse at the beginning but inevitably superior over time, and two, that the company could leverage Windows’ dominance and their Office business. Both assumptions had been cemented by Microsoft’s victory in the browser wars and their slow-motion takeover of corporate data centers; in truth, though, Microsoft’s mobile efforts were already doomed, and nearly everyone realized it before Windows Phone even launched, complete with a funeral for the iPhone.

    Steve Ballmer never figured it out; his last acts were to reorganize the company around a “One Microsoft” strategy centered on Windows, and to buy Nokia to prop up Windows Phone. It fell to Satya Nadella, his successor, to change the culture, and it’s why the fact his first public event was to announce Office for iPad was so critical. I wrote at the time:

    This is the power CEOs have. They cannot do all the work, and they cannot impact industry trends beyond their control. But they can choose whether or not to accept reality, and in so doing, impact the worldview of all those they lead.

    Microsoft under Nadella’s leadership has, over the last two years, undergone a tremendous transformation, embracing its destiny as a device-agnostic service provider; still, it is fighting the headwinds of Amazon’s cloud, open source tooling, and the fact that mobile users have had six years to get used to a world without Microsoft software. How much stronger might the company have been had it faced reality in 2007? The culture, though, made that impossible.

    Steve Jobs’ Leadership

    Schein defines leadership in the context of culture:

    When we examine culture and leadership closely, we see that they are two sides of the same coin; neither can really be understood by itself. On the one hand, cultural norms define how a given nation or organization will define leadership—who will get promoted, who will get the attention of followers. On the other hand, it can be argued that the only thing of real importance that leaders do is to create and manage culture; that the unique talent of leaders is their ability to understand and work with culture; and that it is an ultimate act of leadership to destroy culture when it is viewed as dysfunctional.

    A great example of this sort of destruction was Steve Jobs’ first keynote as interim CEO at the 1997 Boston Macworld, specifically the announcement of Apple’s shocking partnership with Microsoft:

    When Jobs said the word Microsoft, the audience audibly groaned. A few minutes later, when Jobs clicked to a slide that said Internet Explorer would be the default browser on Macintosh, the audience booed so loudly that Jobs had to stop speaking. When Jobs finally said the actual words “default browser” the audience booed even louder, with several individuals shouting “No!” It is, given the context of today’s Apple keynotes, shocking to watch.

    Then, after Bill Gates spoke to the crowd via satellite (in what Jobs would call his “worst and stupidest staging event ever”), Jobs launched into what his biographer Walter Isaacson called an “impromptu sermon”:

    If we want to move forward and see Apple healthy and prospering again, we have to let go of a few things here. We have to let go of this notion that for Apple to win Microsoft has to lose. OK? We have to embrace a notion that for Apple to win Apple has to do a really good job, and if others are going to help us, that’s great, cause we need all the help we can get. And if we screw up and we don’t do a good job, it’s not somebody else’s fault. It’s our fault. So, I think that’s a very important perspective.

    I think, if we want Microsoft Office on the Mac, we better treat the company that puts it out with a little bit of gratitude. We like their software. So, the era of setting this up as a competition between Apple and Microsoft is over as far as I’m concerned. This is about getting healthy, and this is about Apple being able to make incredibly great contributions to the industry, to get healthy and prosper again.

    Here’s Schein:

    But as the group runs into adaptive difficulties, as its environment changes to the point where some of its assumptions are no longer valid, leadership comes into play once more. Leadership is now the ability to step outside the culture that created the leader and to start evolutionary change processes that are more adaptive. This ability to perceive the limitations of one’s own culture and to evolve the culture adaptively is the essence and ultimate challenge of leadership.

    Make no mistake: even though he had been gone for over a decade, Steve Jobs was responsible for that booing.


    Jobs had set up Apple generally and the Macintosh specifically as completely unique and superior to the alternatives, particularly the hated IBM PC and its Windows (originally DOS) operating system. By 1997, though, Microsoft had won, and Apple was fighting for its life. And yet the audience booed its lifeline! That is how powerful culture can be — and that is why Jobs’ “impromptu sermon” was so necessary and so powerful. It was Apple’s version of Office on the iPad, and a brilliant display of leadership.

    Warning Signs for Apple and Google

    Over the weekend Marco Arment wrote a widely-read piece (now) called If Google’s Right About AI, That’s a Problem for Apple:

    The BlackBerry’s success came to an end not because RIM started releasing worse smartphones, but because the new job of the smartphone shifted almost entirely outside of their capabilities, and it was too late to catch up. RIM hadn’t spent years building a world-class operating system, or a staff full of great designers, or expertise in mass production of luxury-quality consumer electronics, or amazing APIs and developer tools, or an app store with millions of users with credit cards already on file, or all of the other major assets that Apple had developed over a decade (or longer) that enabled the iPhone. No new initiative, management change, or acquisition in 2007 could’ve saved the BlackBerry. It was too late, and the gulf was too wide.

    Today, Amazon, Facebook, and Google are placing large bets on advanced AI, ubiquitous assistants, and voice interfaces, hoping that these will become the next thing that our devices are for. If they’re right — and that’s a big “if” — I’m worried for Apple…If the landscape shifts to prioritize those big-data AI services, Apple will find itself in a similar position as BlackBerry did almost a decade ago: what they’re able to do, despite being very good at it, won’t be enough anymore, and they won’t be able to catch up.

    Arment is exactly right. What is fascinating, though, is that, as I wrote last week, Google has their own set of problems: users actually spend their time in social apps, mostly owned by Facebook, and while Google has a critical asset in Android, its most valuable users (from a monetization standpoint) are on iOS. How will users actually access Google’s AI capabilities (if they turn out to matter), and how will Google monetize them?

    To be sure, neither company is struggling today. Apple may have failed to achieve record results for the first time in 13 years, but their 2Q 2016 revenue of $50.6 billion was more than the revenue of Microsoft, Google, and Facebook combined; Google, meanwhile, is still setting year-over-year records, with $17.3 billion in revenue.

    That, though, is the challenge: BlackBerry wasn’t struggling in 2006, nor was Microsoft in 2007, or even Apple as late as 1993. There was no obvious reason to think that anything was amiss, and it was culture that ensured that whatever hints there were would be ignored. Schein again:

    Culture as a set of basic assumptions defines for us what to pay attention to, what things mean, how to react emotionally to what is going on, and what actions to take in various kinds of situations. Once we have developed an integrated set of such assumptions—a “thought world” or “mental map”—we will be maximally comfortable with others who share the same set of assumptions and very uncomfortable and vulnerable in situations where different assumptions operate, because either we will not understand what is going on, or, worse, we will misperceive and misinterpret the actions of others.

    And so BlackBerry thought Apple was lying about the iPhone; Steve Ballmer declared that he liked Microsoft’s chances; and Apple, well, Apple had already decided to, in Jobs’ view, sacrifice product for profits. The time to act was at the moment of denial, not the moment of crisis.

    Paths Forward

    That said, both Apple and Google are still operating from positions of considerable strength going forward: iPhone growth may or may not have peaked, but it’s not going anywhere for a good long while, and the company is almost certainly working on a car. Google, meanwhile, is arguably in even better shape: the company has a massive lead in machine learning, which could manifest itself in all kinds of interesting applications, and here Android looms large.

    Still, there are very obvious steps both companies could take to entrench their advantages:

    • Apple could partner with a company like Microsoft (again) to build out its services layer, both on the backend (Azure) and, if they want to get really radical, the front-end (combining Siri and Cortana). The most radical solution, though, would be fully opening up iOS in such a way that users could set Google (or any other company’s) services as defaults. This would foreclose any medium-term threat to the iPhone from an Android experience that is fully infused with Google’s AI capabilities (more on the long-term problems in a moment).
    • Google could — should! — build a bot for Facebook Messenger. More than that, they should build an entire backend for Facebook Messenger developers. Do people want to live in Facebook? Very well, meet them there, just as Google found its user base on Windows through the browser.

    Both ideas (and there are certainly others) have their issues: Apple would be foreclosing their future as a services provider, but frankly, I am extremely skeptical about this regardless. Not only does the company have the wrong organizational structure but, similar to Microsoft, the company’s overwhelming success has had far-reaching effects on the culture; in this case, the company is so focused on making physical products that it’s doubtful an effective services mentality could ever emerge, not to mention the company’s (at times disingenuous) absolutism about privacy.2

    Google, meanwhile, would be supporting its most dangerous competitor. At the end of the day Google and Facebook share the exact same customers — advertisers — and even though it’s not clear how Google can steal attention back it’s also not obvious that they should aid their rival.3

    The Curse of Culture

    The biggest problem for both, though, is culture. Apple, beyond everything else — and in part because of the humiliation of that 1997 keynote — desires complete control; Google, for its part, desires information, and can’t tolerate the idea of Facebook having more.

    The rigidity of both is the manifestation of the disease that affects every great company: the assurance that what worked before will work eternally into the future, even if circumstances have changed. What makes companies great is inevitably what makes companies fail, whenever that day comes.4


    1. Yes, Apple ultimately came to earn much more revenue than Microsoft ever did, and Google has come close, but both did so in the context of a much larger industry 

    2. This too is why I don’t buy the “Wait for WWDC” response to Marco’s article; the reasons to be skeptical about Apple’s prospects here are structural 

    3. That, in some respects, gets to the tragedy of this piece: Apple and Google are the most natural of partners. Neither has to lose for the other to win, and both have wasted far too much valuable time fighting a war that was never necessary. 

    4. One final quote from Schein:

      If one wishes to distinguish leadership from management or administration, one can argue that leadership creates and changes cultures, while management and administration act within a culture. By defining leadership in this manner, I am not implying that culture is easy to create or change, or that formal leaders are the only determiners of culture. On the contrary, as we will see, culture refers to those elements of a group or organization that are most stable and least malleable. Culture is the result of a complex group learning process that is only partially influenced by leader behavior. But if the group’s survival is threatened because elements of its culture have become maladapted, it is ultimately the function of leadership at all levels of the organization to recognize and do something about this situation. It is in this sense that leadership and culture are conceptually intertwined.

      Are Tim Cook and Sundar Pichai managers, or leaders? And which do they need to be? 


  • Google’s Go-to-Market Gap

    Perhaps the most surprising aspect of Google’s rise is that it is almost entirely attributable to having the best technology. That sounds like it should be the normal state of affairs, but in truth there are an untold number of research projects and startups that had superior technology but never became viable businesses; perhaps there was no business model, or an inability to build a requisite ecosystem, or most commonly, an inability to find a viable market and/or reach consumers who might be interested.

    Great Companies Versus Great Technology

    For example, look at the other technology giants, all of whom got their start on the basis of more than pure technology:

    • While Bill Gates and Paul Allen built Microsoft’s first product (Altair BASIC), the company’s dominance was established via a business development deal with IBM to provide an operating system for the nascent IBM personal computer; the actual OS (MS-DOS) was acquired from a company called Seattle Computer Products. And while Microsoft would go on to develop all kinds of technology, everything that followed rested on the leverage from that IBM deal.
    • Amazon started out as a primitive website that was differentiated by its selection and ability to deliver anywhere in the U.S. And while the company has certainly invented a lot of technology when it comes to web services and logistics, its advantage remains rooted in its scale.
    • Facebook’s technology was so basic that Mark Zuckerberg’s first employee — his roommate Dustin Moskovitz — didn’t even know how to program; he would go on to be Facebook’s first Chief Technical Officer. What got the site off the ground was the way it digitized pre-existing offline networks — it started from its market and worked backwards.
    • Apple’s strategy has certainly been predicated on having the best products, but that does not necessarily mean the company has always had the best technology. The Mac GUI was famously “inspired” by Xerox PARC, the iPod was hardly the first MP3 player, and while the original iPhone was certainly a technological marvel, it not only was built on everything that came before it but also required huge investments in distribution to become the juggernaut it is.1

    To be clear, all of these companies had great technology, but it wasn’t enough — it rarely is.

    Google = Best

    Google stands in stark contrast: relying on links and a lot of math to rank sites was a technological breakthrough of the first order — and no company wanted to buy it, despite the fact it was very much on sale. And yet, usage grew exponentially thanks to word-of-mouth: Google’s search was so startlingly better — and the cost of trying it was simply typing in a URL — that the product grew like wildfire without business development, distribution, or marketing. By the time Google did their first distribution deal, with Yahoo in 2000, Google was already handling millions of queries a day simply because they were superior; Yahoo only hastened Google’s inevitable domination.
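    The “links and a lot of math” at the heart of that breakthrough was PageRank. A minimal power-iteration sketch (illustrative only, using a made-up three-page graph, and nothing like Google’s production system) might look like this:

```python
# Toy PageRank by power iteration: a page is important if important pages
# link to it. Rank flows along links each round until it stabilizes.

def pagerank(links, damping=0.85, iters=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1.0 - damping) / n for p in pages}
        for p, outs in links.items():
            if outs:
                share = damping * rank[p] / len(outs)
                for q in outs:
                    new[q] += share
            else:  # dangling page: spread its rank evenly
                for q in pages:
                    new[q] += damping * rank[p] / n
        rank = new
    return rank

graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
ranks = pagerank(graph)
# "c" ends up highest: it is linked to by both "a" and "b"
```

    Real web graphs have billions of nodes, but the iteration is the same idea: rank flows along links until it converges, with no human editor in the loop.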

    The focus on being the best became a core piece of Google’s identity, and the biggest factor in how they hired. Steven Levy wrote in In the Plex:

    The founders also knew that Google had to be a lot smarter to keep satisfying users—and to fulfill the world-changing ambitions of its founders. “We don’t always produce what people want,” Page explained in Google’s early days. “It’s really difficult. To do that you have to be smart—you have to understand everything in the world. In computer science, we call that artificial intelligence.”

    Brin chimed in. “We want Google to be as smart as you—you should be getting an answer the minute you think of it.”

    “The ultimate search engine,” said Page. “We’re a long way from that.”

    Page and Brin both held a core belief that the success of their company would hinge on having world-class engineers and scientists committed to their ambitious vision. Page believed that technology companies can thrive only by “an understanding of engineering at the highest level”…

    “We just hired people like us,” says Page.

    So many of Google’s successes — and failures — are wrapped up in this sentiment. So, too, is their future.

    The Google Assistant

    Yesterday at the Google I/O keynote the dominant theme was the very real progress Google is making on genuine Artificial Intelligence that goes far beyond search. Sundar Pichai said in his opening remarks:

    It’s amazing to see how people engage differently with Google. It’s not just enough to give them links. We really need to help them get things done in the real world. This is why we are evolving search to be much more assistive. We’ve been laying the foundation for this for many, many years through investments in deep areas of computer science. We’ve built the knowledge graph — we today have an understanding of 1 billion entities, people, places, and things, and the relationships between them and the real world. We have dramatically improved the quality of our voice recognition…Image recognition and computer vision, we can do things we never thought we could do before…We even do real-time translation.

    Progress in all of these areas is accelerating, thanks to profound advances in machine learning and [artificial intelligence] (AI), and I believe we are at a seminal moment. We as Google have evolved significantly over the past ten years and we believe we are poised to take a big leap forward in the next ten years leveraging our state-of-the-art capabilities in machine learning and AI. We truly want to take the next step in being more assistive for our users. So today, we are announcing the Google assistant.

    There is little question that Google is far ahead in artificial intelligence. In late January, in a humorous juxtaposition that was almost certainly coincidental but telling all the same, Facebook CEO Mark Zuckerberg posted about the social network’s progress in building a computer that could play the board game Go, a game long thought too complex for computers to master. Mere hours later Demis Hassabis, the head of Google’s DeepMind division, revealed in a blog post that Google had done exactly that: their machine learning-based program, called AlphaGo, had defeated a three-time European champion, and would soon take on the best Go player in the world (AlphaGo would go on to win that match 4–1).

    To be sure, this is a single example, but any time spent using the increasing number of Google products that rely on machine learning-based artificial intelligence — translation, voice and image recognition, and yes, search — quickly makes it obvious just how much better Google is, and, thanks to the copious amount of data at the company’s disposal, how much better they are likely to become. The problem is that in today’s world being the best may not be enough.

    Open Versus Closed

    While describing how Google search grew by word-of-mouth, I snuck in one line that looms very large when it comes to thinking about both Google’s past and its future: “the cost of trying it was simply typing in a URL.” Google’s initial success was not just because they were superior at search: thanks to the fact that the interface with Google was a web page, the company had instant access to every person on earth with a PC and a functioning Internet connection — and they didn’t have to pay a dime. On the flip-side, if you heard about this amazing new search engine, you didn’t need to go buy a CD or even download a program: you simply typed “Google.com” and the results spoke for themselves. Make no mistake: the brilliance of Larry Page and Sergey Brin was perhaps surpassed only by the brilliance of the people they hired, particularly in the early days, but the company’s success was very much intertwined with the openness afforded by a browser and the world wide web.

    Today, though, the PC is fading in relevance, and the browser along with it: what matters is mobile, and the means to connect with users is to either be embedded into the phone or have an app where people live. And while Google has a massive foothold thanks to Android, a huge number of its best customers are on iOS, and nearly all its customers live in Facebook.

    The implications of this are obvious — just look at maps. Google Maps is widely regarded as being the superior product to Apple Maps, yet the latter is used three times as often on iPhones; such is the power of defaults and being “good enough.”2 Similarly, while Google’s voice recognition far outpaces Apple’s Siri, the fact that Apple sets the rules means that Google’s Gboard keyboard for iOS cannot include dictation.3 More broadly, on iOS the only way to use the Google assistant that Pichai announced yesterday will be to open a Google app (or go to a search field in, you guessed it, a browser): using Siri will always be easier and more frictionless.

    The situation is even more challenging when it comes to social networks broadly and messaging specifically, which is to mobile as the browser was to the PC: a meta-OS where people spend the vast majority of their time. The problem for Google is that while the browser was an open platform that not even Microsoft could control — sure, they killed Netscape, but Google built its audience from within Internet Explorer — social networks and messaging services are not only closed but nearly impossible to compete with. No matter how great a messaging service Google may build — another I/O announcement was a messaging service called Allo, which heavily features the Google assistant — the most important feature of any messaging service is whether or not your friends use it, and nearly every geography in the world is locked up by a competitor.

    There is a new arena — the home, the one place where talking is usually better than pecking away at a phone no longer in your pocket — but here Google is behind Amazon. The latter, thanks to the failure of its own smartphone efforts, was freed from the smartphone obsession that resulted in Google wrongly identifying the smartphone-dependent Nest as its connected home offering, instead of a voice-focused standalone device like the Echo. There is almost certainly time to catch up, but it’s telling that Google’s announced competitor — Google Home — is still months away.

    Google’s Go-to-Market Challenge

    The net result is that Google has no choice but to put its founding proposition to the ultimate test: is it enough to be the best? Can the best artificial intelligence overcome the friction that will be involved in using Google assistant on an iPhone? Can the best artificial intelligence actually shift human networks? Can the best artificial intelligence win the home in the face of a big head start?

    That the answer may very well be “no” (or mixed, at best) is at the root of my 2014 piece Peak Google. That piece was about business relevance, something that goes beyond the collection of cash or the creation of superior technologies. The question I was asking was which companies are the best equipped to build new businesses going forward, and here Google’s outlook is far cloudier than it was back when the company was founded.

    The problem is that as much as Google may be ahead, the company is also on the clock: every interaction with Siri, every signal sent to Facebook, every command answered by Alexa, is one that is not only not captured by Google but also one that is captured by its competitors. Yes, it is likely Apple, Facebook, and Amazon are all behind Google when it comes to machine learning and artificial intelligence — hugely so, in many cases — but it is not a fair fight. Google’s competitors, by virtue of owning the customer, need only be good enough, and they will get better. Google has a far higher bar to clear — it is asking users and in some cases their networks to not only change their behavior but willingly introduce more friction into their lives — and its technology will have to be special indeed to replicate the company’s original success as a business.4


    1. To put it another way, the technology at the heart of Apple’s products — OS X and iOS — has its roots in NeXT, a business failure 

    2. By most accounts Apple Maps is indeed “good enough” in the U.S.; from personal experience, though, it very much falls short in many other countries 

    3. Google does deserve a lot of credit for finally remembering that Android exists to serve Google, which should be focused on all users, not its own platforms 

    4. Which itself is under threat: to fully leverage Google assistant in Google search will almost certainly deepen Google’s antitrust troubles with the European Union 


  • The Real Problem With Facebook and the News

    I got my start writing for the student newspaper at the University of Wisconsin.1

    What is interesting about that statement is that the appropriate follow-up question is “Which student newspaper?” For many years Wisconsin was unique in being the only university with two daily newspapers, both with five-digit print circulations.2 The older paper, The Daily Cardinal, got its start in 1892, but in 1969, as Wisconsin became ground zero for some of the most intense protests against the Vietnam War, a group of conservative students, with support from right-wing luminary William F. Buckley, resolved to counter what they saw as a pervasive liberal bias from The Daily Cardinal specifically and media generally.

    Against all odds the fledgling paper survived — and it’s those odds that interest me most. To start a paper in 1969 required a not insignificant amount of money to pay for everything from desks to typewriters to, most pertinently, (renting time on) a printing press. The reality is that Wisconsin was a huge aberration, not only amongst universities but amongst cities generally: most had one paper, maybe two, and there were only three broadcast TV networks.

    This was an arrangement that was certainly profitable for those who owned these geographic monopolies, but it also had a curious effect on how news was experienced in the United States: first, there was a strict wall built between the editorial and business sides of a newspaper (a wall that hinders publishers today), and second, befitting their dominant market position (and, perhaps, in a careful attempt to ensure they kept it), news organizations adopted a “balanced” he-said/she-said approach to reporting that Jay Rosen has characterized as The View From Nowhere.

    The problem with this approach is that no matter how scrupulous a reporter or editor may be, they are still human, constrained to a world view informed by their own limited experiences, and, as was so often the case in nearly every professional workplace in America, those experiences were shared: white, middle to upper class, often from the coasts, educated at elite universities. And so began a longstanding conservative critique of the media: that while it claims to be balanced, what was actually printed or broadcast, both in terms of selection and tone, had a liberal bias.3

    Facebook Trending News

    Yesterday Gizmodo published a bombshell where the headline basically says it all: Former Facebook Workers: We Routinely Suppressed Conservative News.

    Facebook workers routinely suppressed news stories of interest to conservative readers from the social network’s influential “trending” news section, according to a former journalist who worked on the project. This individual says that workers prevented stories about the right-wing CPAC gathering, Mitt Romney, Rand Paul, and other conservative topics from appearing in the highly-influential section, even though they were organically trending among the site’s users.

    Several former Facebook “news curators,” as they were known internally, also told Gizmodo that they were instructed to artificially “inject” selected stories into the trending news module, even if they weren’t popular enough to warrant inclusion—or in some cases weren’t trending at all. The former curators, all of whom worked as contractors, also said they were directed not to include news about Facebook itself in the trending module.

    In other words, Facebook’s news section operates like a traditional newsroom, reflecting the biases of its workers and the institutional imperatives of the corporation. Imposing human editorial values onto the lists of topics an algorithm spits out is by no means a bad thing—but it is in stark contrast to the company’s claims that the trending module simply lists “topics that have recently become popular on Facebook.”

    There is a lot to unpack here, complicated by a good deal of confusion about what exactly is being alleged:

    • This story is not about the News Feed, that algorithmically-driven stream of content that is at the core of Facebook’s success. Rather, it is about the “Trending News” box of content placed in the upper right of a desktop Facebook page, or more pertinently for most Facebook users, what appears below an activated search box on mobile. It is valuable real estate in the way that all Facebook real estate is valuable, but it is of considerably less importance than what appears in the aforementioned feed. Indeed, I suspect I’m not alone in that before this controversy happened I didn’t even know it existed on mobile at all.
    • Thanks to Gizmodo’s reporting a week ago, we already knew that Facebook has a content team that chooses which trends deserve to be promoted, writes headlines for them, and also blacklists topics (most commonly because “it didn’t have at least three traditional news sources covering it”). Gizmodo added that “Those we interviewed said they didn’t see any signs that blacklisting was being abused or used inappropriately”, and suggested that the content team was being phased out as Facebook’s algorithms improved.4
    • Apparently in response to last week’s story, a former “curator” from the content team and self-identified conservative alleged that conservative topics were sometimes blacklisted; other curators disputed that claim, but all those interviewed by Gizmodo agreed that curators also had the power to “inject” stories into the trending list even if they were not, in fact, trending. Most examples were about Facebook trying to keep up with Twitter in current news, although longer-running topics like Black Lives Matter were allegedly injected as well.

    I parse these details for a few reasons: first, it seems self-evident that a team of curators would, in fact, curate; Facebook’s mistake was in its willingness to let people believe “Trending News” was purely algorithmic. Second, there is very strong evidence that “Trending News” has a human component that, like the “balanced” news organizations of old, is by definition subject to bias. Third, the allegation that said bias is actively trying to suppress conservative news is the opinion of one person only (contra Gizmodo’s headline). And when you consider the make-up of the content team — “young journalists, primarily educated at Ivy League or private East Coast universities”, according to Gizmodo — it seems very possible that the second and third points are, per my observation about the conservative critique of media,5 the exact same thing.

    The Rise of Alternative Media

    As you might expect, the conservative media was all over these allegations; what is most striking, though, at least in the context of the founding of my old paper The Badger Herald, is that these outlets exist at all. The Internet removed the need for things like desks, typewriters, and especially printing presses, making it viable for an entire new universe of publications. And, unlike the news organizations of old who started with a geographic monopoly and worked backwards, Internet-era publications have no distribution advantage (or more pertinently, disadvantage) versus anyone else; the only way to win is to attract more users on the basis of your content.

    To that end Internet publications, particularly political ones, have tended to have a very distinct point of view, whether it be Talking Points Memo on the left or Red State on the right — and those are just two examples of many, covering every part of the ideological spectrum. And why not? The truth is that all of us like to read what we already agree with, particularly when it comes to fraught issues like politics, and we’re more likely to return to a site that makes us feel good about our beliefs.

    Facebook has magnified all of these trends: not only is content content, regardless of source, but it also tries to give us more of what we (literally) like, or click on, or comment on (in this case I am talking about the News Feed, not the Trending News section). If you like publications and stories that are more liberal in nature, you’ll get more liberal stories and publications in your feed; it’s the same thing with conservative stories and publications, or sports, or music, or whatever topics “drive engagement”, to use the parlance.
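    That feedback loop can be sketched in a few lines (a hypothetical scoring scheme purely for illustration; Facebook’s actual News Feed ranking is far more complex and not public):

```python
# Minimal sketch of an engagement-driven feed: stories on topics the user
# engaged with get a higher weight, so the feed drifts toward more of the same.

from collections import defaultdict

def rank_feed(stories, affinity):
    """stories: list of (story_id, topic); affinity: topic -> learned weight."""
    return sorted(stories, key=lambda s: affinity[s[1]], reverse=True)

def record_engagement(affinity, topic, signal=1.0):
    """A click/like/comment on a topic raises its weight for next time."""
    affinity[topic] += signal

affinity = defaultdict(float)
record_engagement(affinity, "conservative")  # user clicks a conservative story
record_engagement(affinity, "conservative")
record_engagement(affinity, "sports")

feed = rank_feed(
    [("s1", "liberal"), ("s2", "conservative"), ("s3", "sports")], affinity
)
# the conservative story now ranks first; the liberal one sinks to the bottom
```

    The point of the sketch is the loop itself: every click raises the weight of similar stories, which produces more clicks on that topic, which raises the weight further.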

    The result is that if you are a conservative, say, you are living in a cornucopia of conservative thought unimaginable to those students launching a new college newspaper against the odds in 1969. There are no obstacles to publishing, and Facebook actually tries its darnedest to bring you more of what you like in the name of engagement.

    Polarization and Virtual Villages

    Late last month Ezra Klein, who has covered the topic of polarization in American politics extensively, wrote in an overview of a Pew survey of 10,000 adults about politics:

    It’s tempting to imagine that rising political polarization is just a temporary blip and America will soon return to a calmer, friendlier political system. Don’t bet on it. Political polarization maps onto more than just politics. It’s changing where people live, what they watch, and who they see — and, in all cases, it’s changing those things in ways that lead to more political polarization, particularly among the people who are already most politically polarized…

    It’s easy to see how this could work to strengthen polarization over time. As Cass Sunstein and others have shown, people become more extreme when they’re around others who share their beliefs. If liberals and conservatives end up moving to different places and surrounding themselves with others like them they’re likely to pull yet further apart. And even for those who can’t move, the internet makes it easy to settle in a virtual neighborhood with people who agree with you. Polarization is going to get a lot worse before it starts getting better.

    When Klein refers to “a virtual neighborhood” he means Facebook: that is where people live, where they go in the empty spaces of their lives. It is by far the biggest traffic driver to nearly every site on the Internet, and the most-used app of every age group. And it is a company whose executives talked about engagement a double-digit number of times on the last earnings call. It is the metric that matters, the one everything at the company is built around.

    This, then, is the deep irony of this controversy: Facebook is receiving a huge amount of criticism for allegedly biasing the news via the empowerment of a team of human curators to make editorial decisions, as opposed to relying on what was previously thought to be an algorithm; it is an algorithm, though — the algorithm that powers the News Feed, with the goal of driving engagement — that is arguably doing more damage to our politics than the most biased human editor ever could.6 The fact of the matter is that, on the part of Facebook people actually see — the News Feed, not Trending News — conservatives see conservative stories, and liberals see liberal ones; the middle of the road is as hard to find as a viable business model for journalism (these things are not disconnected).

    Indeed, one could make the argument that an authoritative news module from Facebook would actually be a civil benefit: at least we would all be starting from a common set of facts. What is far more damaging — and far more engaging, and thus lucrative for Facebook — is all of us in our own virtual neighborhoods of our own making, liking opinions that tell us we’re right instead of engaging with viewpoints that make us question our assumptions.


    1. I don’t usually talk about this much, in part because I’ve almost completely changed my politics since then 

    2. It’s almost unfathomable now, but print advertising was so lucrative that The Badger Herald, where I worked, actually paid a staff of 100 or so people across editorial and ad sales who put out a free 16–20 page broadsheet five days a week. As I recall, at that time The Badger Herald’s daily circulation was 16,000, and The Daily Cardinal was 10,000. Needless to say both have dramatically cut back. 

    3. Per the previous footnote, having been raised in this environment, I know from experience that the idea of a “liberal bias” to the news, whether true or not, has been unquestioned by conservatives for decades 

    4. Indeed, as the Huffington Post reported, today most people don’t even see the same topics. 

    5. And without weighing in as to whether or not it is justified 

    6. And, of course, algorithms, having been created by humans, have their own biases 


  • Everything as a Service

    Last month Benedict Evans observed that The Best is the Last:

    A technology often produces its best results just when it’s ready to be replaced — it’s the best it’s ever been, but it’s also the best it could ever be. There’s no room for more optimisation — the technology has run its course and it’s time for something new, and any further attempts at optimisation produce something that doesn’t make much sense.

    The development of technologies tends to follow an S-Curve: they improve slowly, then quickly, and then slowly again. And at that last stage, they’re really, really good. Everything has been optimised and worked out and understood, and they’re fast, cheap and reliable. That’s also often the point that a new architecture comes to replace them. You can see this very clearly today in devices such as Apple’s new Macbook or Windows ‘ultrabooks’ — they’ve taken Intel’s x86 and the mouse and window-based GUI model as far as they can go, and reached the point that everything possible has been optimised. Smartphones are probably at the point that the curve is starting to flatten…
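    The shape Evans describes can be sketched with a logistic function (a standard stand-in for an S-curve; the parameters here are arbitrary, not a model of any particular technology):

```python
# Toy logistic S-curve: improvement is slow at first, then rapid, then slow
# again as the technology approaches its ceiling.

import math

def s_curve(t, ceiling=100.0, midpoint=5.0, rate=1.0):
    """Performance at time t, capped at `ceiling`."""
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

# Year-over-year gains grow toward the midpoint, then shrink: the flattening
# tail is the stage where "the technology has run its course".
gains = [s_curve(t + 1) - s_curve(t) for t in range(10)]
```

    Note the two different claims packed into the curve: the level keeps rising toward the ceiling (the technology is the best it has ever been), while the gains collapse (it is also the best it could ever be).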

    Evans’ post was particularly timely as only days later Apple released quarterly results and an earnings forecast that were well under expectations,1 and the primary reason cited by Apple CEO Tim Cook was a significantly slower iPhone upgrade rate.2

    It is certainly reasonable to argue that this slowdown is temporary — an artifact of the iPhone 6 pulling forward upgrades from iPhone users clamoring for larger screens — and that the iPhone 7 will return the franchise to growth; personally, I tend to agree with Neil Cybart that iPhone growth has indeed peaked — structural growth factors like new countries and carriers are largely tapped out,3 and while Apple will still draw switchers, they won’t draw enough to make up for existing customers not upgrading — but even if you disagree, your disagreement by definition must be one of timing.4 As we’ve seen with first PCs and then tablets, as hardware matures upgrade cycles inevitably lengthen and choke off growth. That the iPhone grew far beyond either of these product categories — far beyond any product ever, at least in revenue and profit terms — is a testament to the incredible market that was smartphones, and the incredible product that was the iPhone.

    Indeed, it was the best market — and best product — we’ve ever seen; the question is if it is the last.

    The Manufacturing Model

    From the industrial revolution on, the dominant business model has been manufacturing goods and selling them at (hopefully) a profit. This had a huge number of knock-on effects, including the shift in population from rural areas to cities created around transportation hubs and markets. Manufactured goods (or food produced on increasingly mechanized farms) were transported to a central location, made available for purchase, and carried home by individual buyers, themselves primarily occupied in the creation of said goods. Over time, as economies matured, new types of businesses sprang up, like professional services (lawyers, doctors, etc.), transportation, or luxuries like grooming or dining, but it was manufacturing that led to the creation of the critical mass of people necessary to make these sorts of businesses viable.

    Over the past thirty years, this way of organizing people (in developed countries) has been increasingly hollowed out; thanks to improved communication and transportation links a wave of globalization has shifted manufacturing to the developing world and made services an increasingly central part of the economy (78% of U.S. GDP in 2015). This, though, has made companies capable of working and selling across borders more valuable than ever before, and chief amongst these is Apple.

    Apple has arguably perfected the manufacturing model: most of the company’s corporate employees5 are employed in California in the design and marketing of iconic devices that are created in Chinese factories built and run to Apple’s exacting standards (including a substantial number of employees on site), and then transported all over the world to consumers eager for best-in-class smartphones, tablets, computers, and smartwatches.

    What makes this model so effective — and so profitable — is that Apple has differentiated its otherwise commoditizable hardware with software. Software is a completely new type of good in that it is both infinitely differentiable yet infinitely copyable; this means that any piece of software is both completely unique yet has unlimited supply, leading to a theoretical price of $0. However, by combining the differentiable qualities of software with hardware that requires real assets and commodities to manufacture, Apple is able to charge an incredible premium for its products.

    The results speak for themselves: this past “down” quarter saw Apple rake in $50.6 billion in revenue and $10.5 billion in profit. Over the last nine years the iPhone alone has generated $600 billion in revenue and nearly $250 billion in gross profit. It is probably the most valuable — the “best”, at least from a business perspective — manufactured product of all time.

    Apple and Services

    Yesterday Tim Cook appeared on CNBC’s Mad Money with Jim Cramer to defend the iPhone’s prospects. Cook said:

    Let’s look at how did we do in this quarter, and what you would find is $50 billion and $10 billion in profit. No one else is earning anywhere near this.

    They’re the best!

    But, the real answer to your question, is that the thing that is different is that customers love Apple products. And the relationship with Apple doesn’t stop when you buy an iPhone. It continues. You might buy apps across the App Store. You might subscribe to Apple Music. You might use iCloud to buy additional storage. You might buy songs. You might rent movies. And so there’s a significant number of things. You might use Apple Pay every day now. Or at least several times a week. And so that relationship continues.

    This, though, is a subtle shift: Cook is not talking about Apple’s ability to sell new iPhones — to make money with the old model — he is referring to the fact that Apple can (and does) make a significant amount of revenue from people using the iPhone. This is the “services” business model and the fact it shares a name with the economic activity that rose up around manufacturing over the last century is not an accident.6

    The fundamental difference between manufacturing and services is that one entails the creation and transfer of ownership of a product, while the other is much more intangible: you visit a doctor or hire a lawyer, and you don’t get a widget to take home. Moreover, if you want more of a service, you have to pay more — when your hair grows back you don’t get credit from the hairdresser for having visited just a few weeks or months prior.

    Manufacturing can and does undergird services: your lawyer owns computers and has office space in a building that was constructed, and your doctor buys medical devices and prescribes drugs. Even your hairdresser buys scissors and clippers and hair rollers. Similarly, Apple’s services by and large depend on you having bought an iPhone on which you can then subscribe to music or leverage the App Store or make a payment with Apple Pay. In most services businesses, though, what is manufactured is a modular component of the overall offering, subject to an ongoing cost-benefit comparison with competitors that drives down profits over time.7

    To be sure, these transactions are much smaller on an individual basis, at least compared to an iPhone: you would need to buy more than $1,000 worth of apps for Apple to earn the same gross profit as the entry-level iPhone 6S, or subscribe to Apple Music for nearly 10 years, or make over $215,000 in purchases with Apple Pay. What makes services so attractive, though, is that that is possible! Because services revenue is recurring and not tied to the delivery of a physical item, it can scale indefinitely; hardware revenue, on the other hand, faces a limit based on the number of people who can both afford Apple’s devices and are willing to upgrade.
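    The break-even comparison above can be sanity-checked with some back-of-the-envelope arithmetic. A minimal sketch, with the caveat that the figures are assumptions rather than Apple disclosures: roughly $300 of gross profit on an entry-level iPhone 6S, Apple’s published 30% App Store commission, and a widely reported ~0.15% Apple Pay transaction fee.

    ```python
    # Rough break-even math for services revenue vs. one iPhone sale.
    # All inputs are assumptions for illustration, not Apple disclosures.

    IPHONE_GROSS_PROFIT = 300.0   # assumed gross profit on an entry-level iPhone 6S, in dollars
    APP_STORE_CUT = 0.30          # Apple's published App Store commission
    APPLE_PAY_FEE = 0.0015        # widely reported ~0.15% fee on purchase volume

    # Customer spending required for Apple to earn the same gross profit
    app_spend_needed = IPHONE_GROSS_PROFIT / APP_STORE_CUT
    pay_volume_needed = IPHONE_GROSS_PROFIT / APPLE_PAY_FEE

    print(f"App Store spend to match one iPhone:  ${app_spend_needed:,.0f}")
    print(f"Apple Pay volume to match one iPhone: ${pay_volume_needed:,.0f}")
    ```

    Under these assumed inputs the sketch yields figures in the same ballpark as the article’s ($1,000 of app purchases, $200,000 of Apple Pay volume); the gap to $215,000 reflects the roughness of the assumed margin and fee.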

    Software and the Services Model

    In this, services sound a lot like software: both are intangible, both scale infinitely, and both are infinitely customizable. It follows that a services business model — payment in exchange for service rendered, without the transfer of ownership — is a much more natural fit for software than the transaction model characteristic of manufacturing. It better matches value generated and value received — customers only pay if they use it, and producers are rewarded for making their product indispensable — and more efficiently allocates fixed costs: occasional users may be charged nothing at all, while regular users who find your software differentiated pay more than the marginal cost of providing it.

    These advantages have always been obvious (along with other consumer-centric ones, like not needing to install updates, or moving costs from capital to operational expenses), but when the software industry first emerged the model simply wasn’t practical: there was no way to measure how often software was used, or to seamlessly add and remove users. There were, in short, significant distribution and transactional costs that were characteristic of the old manufacturing world, so a manufacturing business model was used.

    The Internet has changed that: it is possible to run software on a central server for multiple clients (spreading the fixed costs amongst them), and there are zero transactional costs involved in calculating usage or in supporting new users (even free ones);8 the result is that nearly all software is now sold on a service model (or based on advertising, which is the same concept of pricing based on usage), including software that used to be sold like physical goods (like Adobe and Microsoft’s offerings).
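    The fixed-cost-spreading dynamic is simple arithmetic worth making explicit: per-user cost falls hyperbolically with user count, which is why a central server serving many clients beats every client owning their own. A toy sketch, with a made-up monthly server cost:

    ```python
    # Toy illustration of spreading fixed costs across users:
    # the per-user share of a fixed server cost falls as users grow.

    SERVER_COST_PER_MONTH = 5000.0  # assumed fixed cost, for illustration only

    def cost_per_user(users: int) -> float:
        """Fixed server cost amortized evenly across active users."""
        if users <= 0:
            raise ValueError("need at least one user")
        return SERVER_COST_PER_MONTH / users

    for n in (10, 100, 10_000):
        print(f"{n:>6} users -> ${cost_per_user(n):,.2f} per user per month")
    ```

    At ten users the (assumed) cost is $500 per head; at ten thousand it is fifty cents, which is what makes free tiers and usage-based pricing viable at all.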

    Hardware as a Service

    What happens, though, if we apply the services business model to hardware? Consider an airplane: I fly thousands of miles a year, but while Stratechery is doing well, I certainly don’t own my own plane! Rather, I fly on an airplane that is owned by an airline9 that is paid for in part through some percentage of my ticket cost. I am, effectively, “renting” a seat on that airplane, and once that flight is gone I own nothing other than new GPS coordinates on my phone.

    Now the process of buying an airplane ticket, identifying who I am, etc. is far more cumbersome than simply hopping in my car — there are significant transaction costs — but given that I can’t afford an airplane it’s worth putting up with when I have to travel long distances.

    What happens, though, when those transaction costs are removed? Well, then you get Uber or its competitors: simply touch a button and a car that would have otherwise been unused will pick you up and take you where you want to go, for a price that is a tiny fraction of what the car cost to buy in the first place. The same model applies to hotels — instead of buying a house in every city you visit, simply rent a room — and Airbnb has taken the concept to a new level by leveraging unused space.

    The enabling factor for both Uber and Airbnb applying a services business model to physical goods is your smartphone and the Internet: together they drive distribution and transaction costs to zero, making it infinitely more convenient to simply rent the physical goods you need instead of acquiring them outright.

    Services and the Future

    This idea of a new service-based economy that deprioritizes ownership in favor of renting what you need when you need it isn’t a new one: people have been speculating about this for a few years, and in many cases experimenting with building such businesses out. Still, outside of Uber, success has been limited. I’m reminded, though, of one of my favorite Bill Gates quotes:

    We always overestimate the change that will occur in the next two years and underestimate the change that will occur in the next ten.

    It was less than ten years ago that the iPhone was launched; that’s how quickly the world can change. And while changing the status quo is hard, in the grand scheme of things, the fact that Uber and Airbnb launched only seven and eight years ago, respectively, is pretty amazing. Moreover, it may be the case that some models require generational changes, or may first spring up in other geographies where people simply have less stuff.10

    With regards to the iPhone, it’s hard to see its record revenues and profits ever being surpassed by another product, by Apple or anyone else: it is in many respects the perfect device from a business perspective, and given that whatever replaces it will likely be significantly less dependent on a physical interface and even more dependent on the cloud (which will help commoditize the hardware), it will likely be sold for much less and with much smaller profit margins.11

    More broadly, I suspect it is going to be increasingly difficult to analyze the future with any lens based on the past. The two companies that dominated earnings in a largely gloomy quarter — Facebook and Amazon — are both uniquely enabled by the Internet; Amazon lets you rent compute power without buying a server, and Facebook serves 1.6 billion people customized content from an effectively infinite number of sources.

    Just as importantly, both companies are enabling new business models in their own right: I wrote last fall about how Amazon Web Services has dramatically lowered the barrier to entry for startups, and as I wrote last week Facebook may very well do the same when it comes to advertising: it is easier, cheaper, yet more measurable (and thus justifiable) for a small business to advertise on Facebook than any other medium ever. Indeed, for all the billions that Apple has extracted from the App Store by virtue of owning distribution onto iPhones, it is Facebook that is actually “earning” the billions it is paid by app developers thanks to the disruptive nature of its advertising product. No, neither company has Apple’s profits, and will not for a long time if ever, but then again, they are at the beginning of something new, not the best of the last.


    The line it is drawn, the curse it is cast
    The slow one now, will later be fast
    As the present now, will later be past
    The order is rapidly fadin’
    And the first one now, will later be last
    For the times they are a-changin’.

    — Bob Dylan, The Times They Are A-Changin’

    Ironically, and tellingly as to the difficulty of this transition, the song is only available on a transactional basis in iTunes


    1. Not just Wall Street’s but also Apple’s; while Apple does not release forecast numbers more than a quarter out, as I noted in the Daily Update the Q1 2016 earnings call included several allusions to Apple’s full-year expectations that clearly did not countenance what is now forecast for the next quarter 

    2. This is another thing that Apple got wrong; last year Cook suggested on every earnings call that there was nothing particularly remarkable about the iPhone 6 upgrade rate, in direct contrast to this call 

    3. And China is a real concern 

    4. And please, note the distinction between noting that iPhone growth may have peaked and saying that the iPhone is dead or that Apple is doomed 

    5. I.e. not retail 

    6. To be very clear, as I laid out two weeks ago, services are much more than just online services like email or search; they are any sort of recurring activity that does not entail a transfer of ownership 

    7. A challenge — and opportunity! — for Apple will be in maintaining its selling prices and margins even as it ramps up its services businesses 

    8. Yes, I am talking about Aggregation Theory  

    9. Or leased 

    10. This, for example, is why car-sharing services are so huge in China: many people don’t have a car at all, and the car in your garage has always been Uber et al’s biggest competitor 

    11. Implicit in that statement is that Apple will continue to sell a lot of iPhones for the foreseeable future