The Experience Economy

The phrase “The Experience Economy”, like the worst sort of corporate speak, sounds less like a viable business plan than it does a discarded slogan for Las Vegas. Still, that was the explanation for SAP’s $8 billion acquisition of Qualtrics just days before the latter was set to IPO.

Personally, I quite prefer the phrase “Experience Management”; it places this move by SAP very much in line with the enterprise software provider’s history.

SAP and ERP

SAP was founded in 1972 by five former IBM employees, who a year later launched an accounting system called RF; the ‘R’, which would christen all of SAP’s products for decades, stood for “real-time.” The idea was that, by leveraging databases, companies could get a “real-time” view of the state of their company. Three years later the company launched a purchasing, inventory management, and invoice verification system called RM. Over the ensuing years more and more modules would be developed to cover more and more back office functions; in 1990 Gartner coined the term ERP — Enterprise Resource Planning — to describe integrated software systems that managed nearly all of a company’s assets, allowing for, yes, “real-time” reports on how the company was operating.

It makes sense that this is where enterprise computing really took hold: the logistics of managing large multinational companies were daunting, the exact sort of challenge at which computers were particularly adept. And, conveniently enough, those large enterprises had the capability to pay for what were expensive, time-consuming, and error-prone software installations and ongoing maintenance.

That these systems were almost completely internally focused makes sense as well: there was no Internet, which meant data had to be entered manually in a centralized location. Computers certainly made tedious bookkeeping much easier, but there were physical limits to just how much of the business could be managed.

The Rise of CRM

By the 1990s the world was rapidly changing: the PC revolution and corporate intranets led to computers on every desk, dramatically increasing the utility and efficiency of ERP systems. Just as importantly, the emergence of the Internet made it increasingly possible to connect to centralized systems from locations other than the main office.

The category of software that symbolized this shift, and which defined that decade of enterprise computing, was CRM: Customer Relationship Management software. CRMs allowed companies to manage outside relationships: not just who contacts were, but also the entire history of interactions with those contacts. This was in many respects a more complex job than ERP systems simply because there were so many more inputs, specifically global sales forces which actually interacted with customers.

Thanks to the combination of PCs and the Internet, though, far-flung sales representatives could now input data from all over the world into a centralized piece of software that, like ERP, gave management a much more fine-grained view into how the business was actually operating. The big difference, though, is that while ERP gave a view into “what” was happening, CRM showed “who” it was happening with.

The Consumer Era

Fast forward another 20 years and the world has dramatically shifted yet again: not only are computing devices and Internet access ubiquitous, but critically, that ubiquity is not confined to businesses: customers, the ultimate endpoint of any business, are today just as connected as the employees of any large enterprise.

This can be a rather frightening proposition for large businesses: look no further than social media, where seemingly every week some terrible story about a company with poor customer service goes viral; there are an untold number of similar sob stories shared instantly with friends and family.

At the same time, competition is dramatically higher as well; customer choices used to be constrained by geography and limited channels for advertising: you could choose one mass-market product from conglomerate X, or a strikingly similar product from conglomerate Y. Today, though, you can find multiple products from any number of vendors, some large and many more small, the latter of which are particularly adept at using channels like Facebook to reach specific niches that were never well-served by large enterprises designed to serve everyone.

Bill McDermott, the CEO of SAP, explained this challenge on an investor call about the Qualtrics acquisition:

There are millions of complaints every day about disappointing customer experiences. This is called the experience gap. Businesses used to have time to sort this out, but in today’s unforgiving world, the damage is immediate, disruption is imminent. This has shifted the challenge from running a business to guaranteeing great experiences for every single person.

These shifts, though, afford an opportunity, which is exactly why SAP bought Qualtrics.

Qualtrics and the Consumer Experience

I actually have personal experience with Qualtrics: when I was an MBA student we used Qualtrics to create surveys for our marketing research course.1 My experience is not a surprise: Qualtrics, which was founded in 2002, was originally focused on the academic market. The company wrote in its S-1:

Founded in 2002 with the goal of solving the most complex problems encountered by the most advanced academic researchers, we were forged in an environment that required rigorous analytical methods, ease of use, the versatility to address the broadest range of inquiries, and the scalability to reach millions of touch points globally. Our leading presence with academic institutions has introduced millions of students to Qualtrics and allowed them to become proficient in the use of our software. As these students have migrated into the workplace, they have often brought us with them, spawning a whole new class of commercial customers and developing new use cases for our XM Platform.

Still, at first glance it seems kind of amazing that some survey software would be worth $8 billion! In fact, it’s not: after all, SurveyMonkey IPO’d a couple of months ago, and is worth $1.3 billion. What makes Qualtrics different is what comes after the survey: a much more extensive toolbox of data analysis and reports that, at least in theory, give actionable insights into what exactly consumers think about a product or their interaction with a company.

What makes this possible is the paradigm shift I just described: consumers are always connected, which means reaching them is dramatically cheaper than it used to be. Even seemingly basic channels like email are very effective at driving surveys that show exactly how consumers are feeling immediately after interacting with a company or buying their product.

This gives an entirely new level of insight to management: while ERP showed what was happening in the main office, and CRM what was happening in offices all over the world, experience management promises the ability to understand what is happening with customers directly. It is a perfect example of businesses using new technology and paradigms to their advantage.

SAP and Experience Management

Still, experience management — which, the last few paragraphs notwithstanding, is still glorified surveys — has limited utility. When an ERP system shows a problem, it is very clear who is responsible, and what needs to be done to fix it; the situation is the same with CRM. What makes experience management into an actual tool of management is tying customer feedback to specific moments in time, whether those be customer service interactions or specific transactions.

This is where SAP comes in: according to the company, SAP is at the center of 77% of transactions worldwide, thanks in large part to their dominance at point-of-sale (because of their strength in ERP). That means the company has massive amounts of what it calls operational data. CEO Bill McDermott explained on the call:

To win in the experience economy there are two pieces to the puzzle. SAP has the first one: operational data, or what we call O-data, from the systems that run companies. Our applications portfolio is end-to-end, from demand chain to supply chain. The second piece of the puzzle is owned by Qualtrics. Experience data, or, X-data. This is actual feedback in real-time from actual people. How they’re engaging with a company’s brand. Are they satisfied with the customer experience that was offered. Is the product doing what they expected? What do they feel about the direction of their employer?

Think of it this way: the O-data tells you what happened, the X-data tells you why it happened. At present, there is no technology company that brings these two worlds together. In particular, this exposes the structural weaknesses of CRM offerings, which are still back-office focused. Experience management is about helping every person outside of companies influence every person inside a company. So SAP and Qualtrics will do just that: the strategic value of this announcement is rivaled only by the business value.

That business value is very much predicated on SAP’s nearly fifty-year history: the real potential of this deal is tying data from consumers about their experience to actual transaction data, whether that be a purchase or a customer service interaction.

The management opportunity afforded by Qualtrics and SAP

In SAP’s vision, managers can react not simply to events after they show up on the balance sheet, but ideally before they, well, don’t: a constant refrain on the investor call was the importance of limiting churn, which makes perfect sense. It is far more expensive to acquire a new customer than it is to retain an old one, and the combination of Qualtrics and SAP, uniquely enabled by the state of technology today, gives businesses an opportunity to do just that.
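To make the churn logic concrete, here is a minimal sketch of retention economics; the model and every number in it are my own illustrative assumptions, not figures from the call:

```python
# Illustrative sketch of why limiting churn matters; all numbers are hypothetical.

def customer_lifetime_value(annual_revenue: float, annual_churn_rate: float) -> float:
    """Simple LTV model: expected customer lifetime is 1 / churn rate,
    so lifetime value is annual revenue divided by the churn rate."""
    return annual_revenue / annual_churn_rate

# Hypothetical customer paying $1,200 per year.
ltv_high_churn = customer_lifetime_value(1200, 0.30)  # 30% annual churn -> $4,000
ltv_low_churn = customer_lifetime_value(1200, 0.15)   # 15% annual churn -> $8,000

# Halving churn doubles lifetime value -- before spending a single
# dollar on acquiring a new customer.
print(ltv_high_churn, ltv_low_churn)
```

Under this (deliberately simple) model, even a modest improvement in retention compounds into far more value than an equivalent acquisition spend.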


The potential of the SAP + Qualtrics tie-up holds a lesson for businesses of all types: while it is always easy to see how the Internet screws up existing business models, it also presents completely new opportunities. Businesses that succeed will see the Internet as an opportunity; those that fail will frame it as the bogeyman in their demise. It is to SAP’s credit that they have embraced the former, and now it is on their customers to do the same.

  1. You have never known email pain until you have had several hundred classmates asking you to take their survey [↩︎]

Apple’s Social Network

While the saying goes that “No news is good news,” in the case of Apple it turns out that “News about no news is bad news.” From Bloomberg:

Apple Inc. shares had their worst day since 2014 amid concerns that growth in its powerhouse product, the iPhone, is slowing. In the fiscal fourth quarter, Apple said iPhone unit sales barely grew from a year earlier, even though new flagship devices came out in the period. At the same time, Apple said it would stop providing unit sales for iPhones, iPads, and Macs in fiscal 2019, a step toward becoming more of a services business. While some pundits praised the move as a way to highlight a potent new business model, many analysts complained it was an attempt to hide the pain of a stagnant smartphone market.

Apple has long been an exception in the smartphone space when it comes to reporting unit sales, so deciding not to report them is not that out of the ordinary; Apple, though, has always positioned itself as the extraordinary alternative — the best — and that approach paid off for years with sales numbers that were worth bragging about.

A History of iPhone Unit and Revenue Growth

The reality, though, is that unit sales in isolation have indeed misrepresented Apple’s business for the last several years; specifically, they have underestimated it. Consider the last six years of iPhone revenue growth and unit growth:

iPhone unit growth and revenue growth over time

iPhone unit growth and revenue were obviously highly correlated in the early years of the iPhone, when the only price difference in the line concerned the amount of storage on the flagship device. As Apple started keeping older models in the lineup, though, revenue growth was a bit slower than unit growth due to a slowly declining average selling price.

Then the iPhone 6 happened: not only was the “big-screen iPhone” stupendously popular — and, it should be noted, was the first phone sold at launch on all of China’s mobile carriers — it also, for the first time, included a configuration — the $749 iPhone 6 Plus — that had a higher base price than the iPhone’s traditional $649. The result was revenue growth that, for the first time, significantly outpaced unit growth.

The iPhone 6S was the opposite story: while Apple thought that iPhone 6 sales figures represented the new normal, in reality Apple had pulled forward a huge number of flagship phone buyers. Ultimately the company had to take a $2 billion inventory write-off on the iPhone 6S after over-forecasting sales; meanwhile, older model phones (including the iPhone 6) were still selling well, so again unit growth outpaced revenue growth.

It turned out, though, that the 6S was the new normal: iPhone unit sales have been basically flat ever since:

iPhone unit sales over time

What has changed is Apple’s pricing: the iPhone 7 Plus cost $20 more than the iPhone 6S Plus. Then, last year, came the big jump: both the iPhones 8 and 8 Plus cost more than their predecessors ($50 and $30 respectively); more importantly, they were no longer the flagship. That appellation belonged to the $999 iPhone X, and given how many Apple fans will only buy the best, average selling price skyrocketed:

iPhone average selling price over time

Still, even though unit growth had been stagnant for a full three years (not just the last year, as many reports, including the one above, incorrectly stated), reporting those numbers helped Apple tell its story: after all, you needed unit numbers to calculate the average selling price.
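The arithmetic here is simple, and worth spelling out with a sketch; the figures below are hypothetical round numbers for illustration, not Apple’s reported results:

```python
# Illustrative sketch: average selling price (ASP) and year-over-year growth.
# All figures are hypothetical, not Apple's actual reported numbers.

def asp(revenue: float, units: float) -> float:
    """Average selling price = revenue / units."""
    return revenue / units

def yoy_growth(current: float, prior: float) -> float:
    """Year-over-year growth as a fraction."""
    return current / prior - 1

# Hypothetical two years: flat units, higher prices.
units_prior, revenue_prior = 216.0, 141_000.0   # millions of units, $ millions
units_now, revenue_now = 216.0, 164_000.0

print(f"ASP prior: ${asp(revenue_prior, units_prior):.2f}")            # ~$652.78
print(f"ASP now:   ${asp(revenue_now, units_now):.2f}")                # ~$759.26
print(f"Unit growth:    {yoy_growth(units_now, units_prior):.1%}")     # 0.0%
print(f"Revenue growth: {yoy_growth(revenue_now, revenue_prior):.1%}") # ~16.3%
```

Which is the point: without the unit number, neither ASP nor the gap between unit growth and revenue growth can be computed by outsiders at all.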

What the reports are right about, though, is that unit sales going forward are absolutely a story Apple would prefer to avoid: it is very unlikely that units will grow, and while Apple pushed pricing even higher with the iPhone XS Max, it probably can’t go much further, which means it is likely that the average selling price-based revenue growth story is drawing to an end as well.

Today at Apple

To this point I have focused on the iPhone, and for good reason: last quarter it made up 59% of Apple’s revenue; for the company’s holiday quarter it will likely approach 70%. However, the company will also stop reporting unit sales for Macs and iPads. This came on the heels of a product announcement last week where Apple introduced a new MacBook Air, Mac Mini, and iPads Pro; all were priced significantly higher than their predecessors.

This isn’t a surprise: the Mac line has been increasing in price for years, while the iPads Pro are balanced by a strong entry-level product that starts at $329. The reality of both the Mac and iPads Pro is that they are niche products, and niche customers are willing to pay higher prices for products that better meet their needs.1

What was more interesting about last week’s event though, and which casts more light on Apple’s new growth story than the products announced, was the ten minutes in the middle devoted to Apple Retail.

This is how Apple CEO Tim Cook introduced the segment:

Now there are ways that Apple aims to inspire creativity in our users, including in our stores. The mission of our stores has always been to enrich the lives of our customers by educating and inspiring them to go even further. One of the new ways that we’re taking their creativity even further is through our Today at Apple sessions.

Today at Apple was announced with a press release in April, 2017, and received its first on-stage mention during the iPhone X keynote. Last week’s presentation, though, really highlighted how Today at Apple is perhaps the best way to understand the way Apple thinks about its growth opportunities going forward. Senior Vice President of Retail Angela Ahrendts explained:

We started with the things that are core to Apple’s DNA, things people most use their devices for and trust us to teach them, like photography, music, gaming, and app development. And as Apple continues to develop curriculum like Everyone Can Code and Everyone Can Create, we embed these lessons and techniques into our Today at Apple programming for all customers, including educators and entrepreneurs. And we hold all of our sessions in all 505 retail locations, like at Apple Cotai Central in Macau, which opened a few months ago. Here, customers are attending a session called Photo Walks, where they learn new features, like portrait lighting and depth control, while exploring the city together in a real social way…

And as we continue to push the design of our flagships to be even greater gathering places where everyone is welcome, we’re also creating global platforms for local talent. Photographers, musicians, developers, and artists share their creative gifts…

Since the launch of Today at Apple, only 18 months ago, we have held over 18,000 sessions a week, attended by millions of curious creatives around the world. And with the newest release of the Apple Store app, we’ve made it even easier for you to find out what’s happening near you. Just tap on the Sessions tab and you’ll see a spotlight of the newest Today at Apple sessions in your city. It will also recommend sessions based on the products that you own, and signature programs like Music Labs, Kids Hour, and Photo Walks.

What is striking about Today at Apple is the scale of its ambition combined with its price: free. Of course that is not true in practice, because one needs an Apple device to realistically participate (and an Apple ID to even sign up), but that raises the question: what exactly are Apple customers paying for when they buy an Apple product? Apple’s point in highlighting Today at Apple is that customers are not simply buying an iPhone or an iPad or a Mac, but rather buying into an ongoing relationship with Apple.

Apple’s Social Network

More broadly, this explains CFO Luca Maestri’s reasoning on the earnings call for no longer reporting unit sales:

Third, starting with the December quarter, we will no longer be providing unit sales data for iPhone, iPad and Mac. As we have stated many times, our objective is to make great products and services that enrich people’s lives, and to provide an unparalleled customer experience so that our users are highly satisfied, loyal and engaged.

“Engaged” is an interesting choice of words, as engagement is an objective normally associated with social networks like Facebook. The reasoning is obvious: the more engaged users are, the more they use a social network, which means the more ads they can be shown. Social networks accomplish this by aggregating content from suppliers as well as users themselves, and continually tweaking algorithms in an attempt to keep you swiping and tapping, and coming back to swipe and tap some more.

This is a world that has always been foreign to Apple: its past attempts at facilitating social interaction on its platforms are memorable only as the butt of jokes (iTunes Ping anyone?). This isn’t a surprise: Apple’s culture and approach to products are antithetical to the culture and approach necessary to create and grow a traditional social network. Apple wants total control and to release as perfect a product as it can; a social network requires an iterative approach that is designed to deal with constant variability and edge cases.

This, though, is why Today at Apple is compelling, particularly Ahrendts’ reference to bringing people together in a “real social way” — and she could not have emphasized the word “real” more strongly. Apple is in effect trying to build a social network in the real world, facilitated and controlled by Apple, and betting that customers will continue to pay to gain access.

Apple’s Average Revenue per User

To be perfectly clear, I am not arguing that Today at Apple is the answer to a saturated smartphone market or Apple reaching the limits of price increases. The company is clearly relying on “Services” revenue, which mostly means App Store revenue, a huge portion of which comes from in-app purchases for games, as well as a growing number of subscriptions, some provided by Apple (like Apple Music), but most by 3rd parties.

What this framing of a “real world social network” does provide, though, is insight into where Apple’s new reporting falls short. It is all well and good that Apple will now separate Services revenue and its associated cost-of-sales starting next quarter; more insight into Apple’s growth driver is clearly appropriate.

What is missing, though, is the equivalent of unit sales for Services, specifically, the number of active customers Apple has, and the associated revenue per user. This is the exact metric that matters to social media companies, and to the extent that Apple’s growth is derived from continually monetizing its existing user base over time, it makes sense here as well.

To be sure, an accurate number would very much include device revenue: I laid out in Apple’s Middle Age that the company’s growth was based on getting more money from its existing user base through higher average selling prices, selling more devices (i.e. Apple Watch, AirPods, HomePod, etc.), and increased services revenue; to the extent Apple is correct that focusing on only devices misses the story, it is also correct that focusing on only Services is misleading as well.

Apple’s Priorities

Unfortunately Cook already declared on another earnings call last February that this number isn’t coming:

We’re not releasing a user number, because we think that the proper way to look at it is to look at active devices. It’s also the one that is the most accurate for us to measure. And so that’s our thinking behind there.

There are two problems with this: first, while an active devices number is helpful, the 1.3 billion number that Apple announced on that February earnings call was the first in two years; it has not been updated since. Second, the number of active devices may be easier for Apple to measure, but it simply isn’t as valuable to investors as the number of active users for reasons Cook stated himself last week:

Our installed base is growing at double digits, and that’s probably a much more significant metric for us from an ecosystem point of view and customer loyalty, et cetera. The second thing is this is a little bit like if you go to the market and you push your cart up to the cashier and she says, or he says how many units you have in there? It doesn’t matter a lot how many units there are in there in terms of the overall value of what’s in the cart.

It’s not just “overall value”, though: it’s how many customers there are total, and the ways in which their cart is changing — i.e. what is the installed base, and what is the rate of growth that Cook is referring to?

Unfortunately Apple appears to be most concerned with the top and bottom line. Maestri said just before Cook’s comment:

At the end of the day, we make our decisions from a financial standpoint to try and optimize our revenue and our gross margin dollars, and that we think is the focus that is in the best interest of our investors.

It is certainly difficult for anyone, particularly Apple’s investors, to complain about Apple’s revenue and gross margin dollars, going on many years now. For all those years, though, said revenue and profit were based on unit sales.

Now Apple is arguing that unit sales is the wrong way to understand its business, but refuses to provide the numbers that underlie the story it wants to tell. It is very fair for investors to be skeptical: both as to whether Apple can ever really be valued independently from device sales, and also whether the company, for all its fine rhetoric and stage presentations, is truly prioritizing what drives the revenue and profit instead of revenue and profit themselves. I do think the answer is the former; I just wish Apple would show it with its reporting.

  1. Well, theoretically anyways, in the case of the MacBook Pro [↩︎]

IBM’s Old Playbook

The best way to understand how Red Hat built a multi-billion dollar business off of open source software is to start with IBM. Red Hat co-founder Bob Young explained at the All Things Open conference in 2014:

There is no magic bullet to it. It is a lot of hard work staying up with your customers and understanding and thinking through where are the opportunities. What are other suppliers in the market not doing for those customers that you can do better for them? One of the great examples to give you an idea of what inspired us very early on, and by very early on we’re talking Mark Ewing and I doing not enough business to pay the rent on our apartments, but yet we were paying attention to [Lou Gerstner and] IBM…

Gerstner came into IBM and got it turned around in three years. It was miraculous…Gerstner’s insight was he went around and talked to a whole bunch of IBM customers and found out that the customers didn’t actually like any of his products. They were ok, but whenever he would sit down with any given customer there was always someone who did that product better than IBM did…He said, “So why are you buying from IBM?” The customers were saying “IBM is the only technology company with an office everywhere that we do business,” and as a result Gerstner understood that he wasn’t selling products he was selling a service.

He talked about that publicly, and so at Red Hat we go, “OK, we don’t have a product to sell because ours is open source and everyone can use our innovations as quickly as we can, so we’re not really selling a product, but Gerstner at IBM is telling us the customers don’t buy products, they buy services, things that make themselves more successful.” And so that was one of our early insights into what we were doing was this idea that we were actually in the services business, even back when we were selling shrink-wrapped boxes of Linux, we saw that as an interim step to getting us big enough that we could sign service contracts with real customers.

Yesterday Young’s story came full circle when IBM bought Red Hat for $34 billion, a 60% premium over Red Hat’s Friday closing price. IBM is hoping that it, too, can come full circle: recapture Gerstner’s magic, which depended not only on his insight about services, but also a secular shift in enterprise computing.

How Gerstner Transformed IBM

I’ve written previously about Gerstner’s IBM turnaround in the context of Satya Nadella’s attempt to do the same at Microsoft, and Gerstner’s insight that while culture is extremely difficult to change, it is impossible to change nature. From Microsoft’s Monopoly Hangover:

The great thing about a monopoly is that a company can do anything, because there is no competition; the bad thing is that when the monopoly is finished the company is still capable of doing anything at a mediocre level, but nothing at a high one because it has become fat and lazy. To put it another way, for a former monopoly “big” is the only truly differentiated asset. This was Gerstner’s key insight when it came to mapping out IBM’s future…In Gerstner’s vision, only IBM had the breadth to deliver solutions instead of products.

A strategy predicated on providing solutions, though, needs a problem, and the other thing that made Gerstner’s turnaround possible was the Internet. By the mid-1990s businesses were faced with a completely new set of technologies that were nominally similar to their IT projects of the last fifteen years, but in fact completely different. Gerstner described the problem/opportunity in Who Says Elephants Can’t Dance:

If the strategists were right, and the cloud really did become the locus of all this interaction, it would cause two revolutions — one in computing and one in business. It would change computing because it would shift the workloads from PCs and other so-called client devices to larger enterprise systems inside companies and to the cloud — the network — itself. This would reverse the trend that had made the PC the center of innovation and investment — with all the obvious implications for IT companies that had made their fortunes on PC technologies.

Far more important, the massive, global connectivity that the cloud depicted would create a revolution in the interactions among millions of businesses, schools, governments, and consumers. It would change commerce, education, health care, government services, and on and on. It would cause the biggest wave of business transformation since the introduction of digital data processing in the 1960s…Terms like “information superhighway” and “e-commerce” were insufficient to describe what we were talking about. We needed a vocabulary to help the industry, our customers, and even IBM employees understand that what we saw transcended access to digital information and online commerce. It would reshape every important kind of relationship and interaction among businesses and people. Eventually our marketing and Internet teams came forward with the term “e-business.”

Those of you my age or older surely remember what soon became IBM’s ubiquitous ‘e’:

IBM's e-business marketing campaign

IBM went on to spend over $5 billion marketing “e-business”, an investment Gerstner called “one of the finest jobs of brand positioning I’ve seen in my career.” It worked because it was true: large enterprises, most of which had only ever interacted with customers indirectly through a long chain of wholesalers and distributors and retailers, suddenly had the capability — the responsibility, even — of interacting with end users directly. This could be as simple as a website, or e-commerce, or customer support, not to mention the ability to tap into all of the other parts of the value chain in real-time. The technology challenges and the business possibilities — the problem set, if you will — were immense, and Gerstner positioned IBM as the company that could solve these new problems.

It was an attractive proposition for nearly all non-tech companies: the challenge with the Internet in the 1990s was that the underlying technologies were so varied and quite immature; different problem spaces had different companies hawking products, many of them startups with no experience working with large enterprises, and even if they had better products no IT department wanted to manage and integrate a multitude of vendors. IBM, on the other hand, offered the proverbial “one throat to choke”; they promised to solve all of the problems associated with this new-fangled Internet stuff, and besides, IT departments were familiar and comfortable with IBM.

It was also a strategy that made sense in its potential to squeeze profit out of the value chain:

The actual technologies underlying the Internet were open and commoditized, which meant IBM could form a point of integration and extract profits, which is exactly what happened: IBM’s revenue and growth increased steadily — often rapidly! — over the next decade, as the company managed everything from datacenters to internal networks to external websites to e-commerce operations to all the middleware that tied it together (made by IBM, naturally, which was where the company made most of its profits). IBM took care of everything, slowly locking its customers in, and once again grew fat and lazy.

When IBM Lost the Cloud

In the final paragraph of Who Says Elephants Can’t Dance? Gerstner wrote of his successor Sam Palmisano:

I was always an outsider. But that was my job. I know Sam Palmisano has an opportunity to make the connections to the past as I could never do. His challenge will be to make them without going backward; to know that the centrifugal forces that drove IBM to be inward-looking and self-absorbed still lie powerful in the company.

Palmisano failed miserably, and there is no greater example than his 2010 announcement of the company’s 2015 Roadmap, which was centered around a promise of delivering $20/share in profit by 2015. Palmisano said at the time:

[The consensus view is that] product cycles will drive industry growth. The industry is consolidating and at the end of the day consumer technology will obliterate all computer science over the last 20 years. I’m an East Coast guy. We’re going to have a slightly different view. Product cycles aren’t going to drive sustainable growth. Clients in the future will demand quantifiable returns on their investment. They are not going to buy fashion and trends. Enterprise will have its own unique model. You can’t do what we’re doing in a cloud.

Amazon Web Services, meanwhile, had launched a full four years and two months before Palmisano’s declaration; it was the height of folly not simply to mock the idea of the cloud, but to commit to a profit number in the face of an existential threat that was predicated on spending absolutely massive amounts of money on infrastructure.1

Gerstner identified exactly what it was that Palmisano got wrong: he was “inward-looking and self-absorbed” such that he couldn’t imagine an enterprise solution better than IBM’s customized solutions. That, though, was to miss the point. As I wrote in a Daily Update back in 2014 when the company formally abandoned the 2015 profit goal:

The reality…is that the businesses IBM served — and the entire reason IBM had a market — didn’t buy customized technological solutions to make themselves feel good about themselves; they bought them because they helped them accomplish their business objectives. Gerstner’s key insight was that many companies had a problem that only IBM could solve, not that customized solutions were the end-all be-all. And so, as universally provided cloud services slowly but surely became good-enough, IBM no longer had a monopoly on problem solving.

The company has spent the years since then claiming it is committed to catching up in the public cloud, but the truth is that Palmisano sealed the company’s cloud fate when he failed to invest a decade ago; indeed, one of the most important takeaways from the Red Hat acquisition is the admission that IBM’s public cloud efforts are effectively dead.

IBM’s Struggles

So what precisely is the point of IBM acquiring Red Hat, and what if anything does it have to do with Lou Gerstner?

Well, first off, IBM hasn’t been doing very well for quite some time now: last year’s annual revenue was the lowest since 1997, part-way through Gerstner’s transformation; of course, as the ZDNet article from which this graph comes points out, $79 billion in 1997 is $120 billion today.

IBM's declining revenue
From ZDNet

The company did finally return to growth earlier this year after 22 straight quarters of decline, only to decline again last quarter: IBM’s ancient mainframe business was up 2% and its traditional services business up 3%, but Technology Services and Cloud Platforms were flat, and Cognitive Solutions (i.e. Watson) was down 5%.

Meanwhile, the aforementioned commitment to the cloud has mostly been an accounting fiction derived from re-classifying existing businesses; the more pertinent number is the company’s capital expenditures, which in 2017 were $3.2 billion, down from 2016’s $3.6 billion. Charles Fitzgerald writes on Platformonomics:

Capex spending by cloud players

We see IBM’s CAPEX slowly trailing off, like the company itself. IBM has always spent a lot on CAPEX (as high as $7 billion a year in their more glorious past), from well before the cloud era, so we can’t assume the absolute magnitude of spend is going towards the cloud. The big three all surpassed IBM’s CAPEX spend in 2012/13. In resisting the upward pull on CAPEX we see from all the other cloud vendors, IBM simply isn’t playing the hyper-scale cloud game.

The Red Hat Acquisition

This is where the Red Hat acquisition comes in: while IBM will certainly be happy to have the company’s cash-generating RHEL subscription business, the real prize is OpenShift, a software suite for building and managing containerized applications with Kubernetes. I wrote about Kubernetes in 2016’s How Google is Challenging AWS:

In 2014 Google announced Kubernetes, an open-source container cluster manager based on Google’s internal Borg service that abstracts Google’s massive infrastructure such that any Google service can instantly access all of the computing power they need without worrying about the details. The central precept is containers, which I wrote about in 2014: engineers build on a standard interface that retains (nearly) full flexibility without needing to know anything about the underlying hardware or operating system (in this it’s an evolutionary step beyond virtual machines).

Where Kubernetes differs from Borg is that it is fully portable: it runs on AWS, it runs on Azure, it runs on the Google Cloud Platform, it runs on on-premise infrastructure, you can even run it in your house. More relevantly to this article, it is the perfect antidote to AWS’ ten year head-start in infrastructure-as-a-service: while Google has made great strides in its own infrastructure offerings, the potential impact of Kubernetes specifically and container-based development broadly is to make irrelevant which infrastructure provider you use. No wonder it is one of the fastest growing open-source projects of all time: there is no lock-in.
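The portability described above follows from Kubernetes being driven by declarative manifests that never name an infrastructure provider. As a minimal sketch (the `example/web` image and all the names here are hypothetical), the same file could be applied unchanged, via `kubectl apply -f`, to a cluster running on AWS, Azure, Google Cloud Platform, or on-premise hardware:

```yaml
# A minimal Kubernetes Deployment manifest: three replicas of a
# containerized web service. Nothing here refers to the underlying
# cloud or hardware, which is the point: any conformant cluster,
# on any provider, will accept it as-is.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: example/web:1.0   # hypothetical image name
        ports:
        - containerPort: 8080
```

Provider lock-in instead shows up a layer down, in things like load balancers and storage classes; that is the sort of gap hybrid-cloud tooling like OpenShift aims to paper over.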

This is exactly what IBM is counting on; the company wrote in its press release announcing the deal:

This acquisition brings together the best-in-class hybrid cloud providers and will enable companies to securely move all business applications to the cloud. Companies today are already using multiple clouds. However, research shows that 80 percent of business workloads have yet to move to the cloud, held back by the proprietary nature of today’s cloud market. This prevents portability of data and applications across multiple clouds, data security in a multi-cloud environment and consistent cloud management.

IBM and Red Hat will be strongly positioned to address this issue and accelerate hybrid multi-cloud adoption. Together, they will help clients create cloud-native business applications faster, drive greater portability and security of data and applications across multiple public and private clouds, all with consistent cloud management. In doing so, they will draw on their shared leadership in key technologies, such as Linux, containers, Kubernetes, multi-cloud management, and cloud management and automation.

This is the bet: while in the 1990s the complexity of the Internet made it difficult for businesses to go online, providing an opening for IBM to sell solutions, today IBM argues the reduction of cloud computing to three centralized providers makes businesses reluctant to commit to any one of them. IBM is betting it can again provide the solution, combining with Red Hat to build products that will seamlessly bridge private data centers and all of the public clouds.

IBM’s Unprepared Mind

The best thing going for this strategy is its pragmatism: IBM gave up its potential to compete in the public cloud a decade ago, faked it for the last five years, and now is finally admitting its best option is to build on top of everyone else’s clouds. That, though, gets at the strategy’s weakness: it seems more attuned to IBM’s needs than to those of potential customers. After all, if an enterprise is concerned about lock-in, is IBM really a better option? And if the answer is that “Red Hat is open”, at what point do increasingly sophisticated businesses build it themselves?

The problem for IBM is that they are not building solutions for clueless IT departments bewildered by a dizzying array of open technologies: instead they are building on top of three cloud providers, one of which (Microsoft) is specializing in precisely the sort of hybrid solutions that IBM is targeting. The difference is that because Microsoft has actually spent the money on infrastructure, its ability to extract money from the value chain is correspondingly higher; IBM has to pay rent.

Perhaps the bigger issue, though, goes back to Gerstner: before IBM could take advantage of the Internet, the company needed an overhaul of its culture; the extent to which the company will manage to leverage its acquisition of Red Hat will depend on a similar transformation. Unfortunately, that seems unlikely; current CEO Ginni Rometty, who took over the company at the beginning of 2012, not only supported Palmisano’s disastrous Roadmap 2015, she actually undertook most of the cuts and financial engineering necessary to make it happen, before finally giving up in 2014. Meanwhile the company’s most prominent marketing has been around Watson, the capabilities of which have been significantly oversold; it’s not a surprise sales are shrinking after disappointing rollouts.

Gerstner knew turnarounds were hard: he called the arrival of the Internet “lucky” in terms of his tenure at IBM. But, as the Louis Pasteur quote goes, “Chance favors the prepared mind.” Gerstner had identified a strategy and begun to change the culture of IBM, so that when the problem arrived, the company was ready. Today IBM claims it has found a problem; it is an open question if the problem actually exists, but unfortunately there is even less evidence that IBM is truly ready to take advantage of it if it does.

  1. This footnote is a repeat from Microsoft’s Monopoly Hangover; Gerstner predicted the public cloud in the first appendix of his book, which was published in 2003, four years before AWS was launched:

    Put all of this together—the emergence of large-scale computing grids, the development of autonomic technologies that will allow these systems to be more self-managing, and the proliferation of computing devices into the very fabric of life and business—and it suggests one more major development in the history of the IT industry. This one will change the way IT companies take their products to market. It will change who they sell to and who the customer considers its “supplier.” This development is what some have called “utility” computing.

    The essential idea is that very soon enterprises will get their information technology in much the same way they get water or electric power. They don’t now own a waterworks or power plant, and soon they’ll no longer have to buy, house, and maintain any aspect of a traditional computing environment: The processing, the storage, the applications, the systems management, and the security will all be provided over the Net as a service—on demand.

    The value proposition to customers is compelling: fewer assets; converting fixed costs to variable costs; access to unlimited computing resources on an as-needed basis; and the chance to shed the headaches of technology cycles, upgrades, maintenance, integration, and management.

    Also, in a post-September 11, 2001, world in which there’s much greater urgency about the security of information and systems, on-demand computing would provide access to an ultra-secure infrastructure and the ability to draw on systems that are dispersed — creating a new level of immunity from a natural disaster or an event that could wipe out a traditional, centralized data center. [↩︎]

The Problem with Facebook and Virtual Reality

Facebook, believe it or not, has actually made virtual reality better, at least from one perspective.

My first VR device was PlayStation VR, and the calculus was straightforward: I owned a PS4 and did not own a Windows PC, which meant I had a device that was compatible with the PlayStation VR and did not have one that was compatible with the Oculus Rift or the HTC Vive.

I used it exactly once.

The PlayStation VR and all of its necessary accessories and cords

The problem is that actually hooking up the VR headset was way too complicated with way too many wires, and given that I lived at the time in a relatively small apartment, it wasn’t viable to leave the entire thing hooked up when I wasn’t using it. I did finally move to a new place, but frankly, I can’t remember if I unpacked it or not.

Then, earlier this year, Facebook came out with the Oculus Go.

The Oculus Go is a standalone device

The Go sported hardware that was about the level of a mid-tier smartphone, and priced to match: $199. Critically, it was a completely standalone device: no console or PC necessary. Sure, the quality wasn’t nearly as good, but convenience matters a lot, particularly for someone like me who only occasionally plays video games or watches TV or movies. Putting on a wingsuit or watching some NBA highlights is surprisingly fun and, critically, easy. At least as long as I have the Go out and charged, of course; it’s hard to imagine giving it a second thought otherwise.

The Virtual Reality Niche

That is the first challenge of virtual reality: it is a destination, both a place you go virtually and, critically, the end result of deliberate actions in the real world. One doesn’t experience virtual reality by accident: it is a choice, and often — like in the case of my PlayStation VR — a rather complicated one.

That is not necessarily a problem: going to see a movie is a choice, as is playing a video game on a console or PC. Both are very legitimate ways to make money: global box office revenue in 2017 was $40.6 billion, and billions more were made on all the other distribution channels in a movie’s typical release window; video games have long been an even bigger deal, generating $109 billion globally last year.

Still, that is an order of magnitude less than the amount of revenue generated by something like smartphones. Apple, for example, sold $158 billion worth of iPhones over the last year; the entire industry was worth around $478.7 billion in 2017. The disparity should not come as a surprise: unlike movies or video games, smartphones are an accompaniment on your way to a destination, not a destination in and of themselves.

That may seem counterintuitive at first: isn’t it a good thing to be the center of one’s attention? That center, though, can only ever be occupied by one thing, and the addressable market is constrained by time. Assume eight hours for sleep, eight for work, a couple of hours for, you know, actually navigating life, and that leaves at best six hours to fight for. That is why devices intended to augment life, not replace it, have always been more compelling: every moment one is awake is worth addressing.

In other words, the virtual reality market is fundamentally constrained by its very nature: because it is about the temporary exit from real life, not the addition to it, there simply isn’t nearly as much room for virtual reality as there is for any number of other tech products.

Facebook’s Head-scratching Acquisition

This, incidentally, includes Facebook: the strength of the social network is counterintuitive like virtual reality is counterintuitive, but in the exact opposite way. No one plans to visit Facebook: who among us has “Facebook Time” set on our calendar? And yet the vast majority of people who are able — over 2 billion worldwide — visit Facebook every single day, for minutes at a time.

The truth is that everyone has vast stretches of time between moments of intentionality: standing in line, riding the bus, using the bathroom. That is Facebook’s domain, and it is far more valuable than it might seem at first: not only is the sheer amount of time available more than you might think, it is also a time when the human mind is, by definition, less engaged; we visit Facebook seeking stimulation, and don’t much care if that stimulation comes from friends and family, desperate media companies, or advertisers that have paid for the right. And pay they have, to the tune of $48 billion over the last year — more than the global box office, and nearly half of total video game revenue.

What may surprise you is that Facebook landed on this gold mine somewhat by accident: at the beginning of this decade the company was desperately trying to build a platform, that is, a place where 3rd-party developers could build their own direct connections with customers. This has long been the stated goal of Silicon Valley visionaries, but generally speaking the pursuit of platforms has been a bit like declarations of disruption: widespread in rhetoric, but few and far between in reality.

So it was with Facebook: the company’s profitability and dramatic rise in valuation — the last three months notwithstanding — have been predicated on the company not being a platform, at least not one for 3rd-party developers. After all, to give space to 3rd-party developers is to not give space to advertisers, at least on mobile, and it is mobile that has provided, well, the platform for Facebook to fill those empty spaces. And, as I noted back in 2013, the mobile ad unit couldn’t be better.

This is why Facebook’s acquisition of Oculus back in 2014 was such a head-scratcher; I was immediately skeptical, writing in Face Is Not the Future:

Setting aside implementation details for a moment, it’s difficult to think of a bigger contrast than a watch and an Oculus headset that you, in the words of [Facebook CEO Mark] Zuckerberg, “put on in your home.” What makes mobile such a big deal relative to the PC is the fact it is with you everywhere. A virtual reality headset is actually a regression in which your computing experience is neatly segregated into something you do deliberately.

Zuckerberg, though, having first failed to build a platform on the PC, and then failing miserably with a phone, would not be satisfied with being merely an app; he would have his platform, and virtual reality would give him the occasion.

Facebook’s Oculus Drama

When the Oculus acquisition was announced Zuckerberg wrote:

Our mission is to make the world more open and connected. For the past few years, this has mostly meant building mobile apps that help you share with the people you care about. We have a lot more to do on mobile, but at this point we feel we’re in a position where we can start focusing on what platforms will come next to enable even more useful, entertaining and personal experiences…

This is a fascinating statement in retrospect. Of course there is the blithe dismissal of mobile, which would increase Facebook’s valuation tenfold, because Facebook was only an app, not a platform. More striking, though, is Zuckerberg’s evaluation that Facebook was now in a position to focus elsewhere: after the revelations of state-sponsored interference and legitimate questions about Facebook’s impact on society broadly it seems rather misguided.

Oculus’s mission is to enable you to experience the impossible. Their technology opens up the possibility of completely new kinds of experiences. Immersive gaming will be the first, and Oculus already has big plans here that won’t be changing and we hope to accelerate. The Rift is highly anticipated by the gaming community, and there’s a lot of interest from developers in building for this platform. We’re going to focus on helping Oculus build out their product and develop partnerships to support more games. Oculus will continue operating independently within Facebook to achieve this.

This is related to the reasons why Oculus and Facebook are in the news this week; TechCrunch reported that Oculus co-founder Brendan Iribe left the company because of a dispute about the next generation of computer-based VR headsets; Facebook said that computer-based VR was still a part of future plans.

But this is just the start. After games, we’re going to make Oculus a platform for many other experiences…This is really a new communication platform. By feeling truly present, you can share unbounded spaces and experiences with the people in your life. Imagine sharing not just moments with your friends online, but entire experiences and adventures. These are just some of the potential uses. By working with developers and partners across the industry, together we can build many more. One day, we believe this kind of immersive, augmented reality will become a part of daily life for billions of people.

This, though, makes one think that TechCrunch was on to something. Microsoft, to its dismay, found out with the Xbox One that serving gamers and serving consumers generally are two very different propositions, and any move perceived by the former to be in favor of the latter will hurt sales specifically and the development of a thriving ecosystem generally. The problem for Facebook, though, is that the fundamental nature of the company — not to mention Zuckerberg’s platform ambitions — rely on serving as many customers as possible.

I suspect that wasn’t the top priority of Oculus’s founders: virtual reality is a hard problem, one where even the best technology — which, unquestionably, means connecting to a PC — is not good enough. To that end, given that their priority was virtual reality first and reach second, I suspect Oculus’s founders would rather be spending more time making PC virtual reality better and less time selling warmed-over smartphone innards.

The Problems with Facebook and Oculus

Still, I can’t deny that the Oculus Go, underpowered though it may be, is nicer to use in important ways — particularly convenience — that are serially undervalued by technologists. As I noted at the beginning, Facebook’s influence, particularly its desire to reach as many users as possible and control the entire experience — two desires that are satisfied with a standalone device — may indeed make virtual reality more widespread than it might have been had Oculus remained an independent company.

What is inevitable though — what was always inevitable, from the day Facebook bought Oculus — is that this acquisition will prove to have been a mistake. If Facebook wanted a presence in virtual reality the best possible route was the same one it took in mobile: to be an app-exposed service, available on all devices, funded by advertising. I have long found it distressing that Zuckerberg, not just in 2014, but even today, judging by his comments in keynotes and on earnings calls, seems unable or unwilling to accept this fundamental truth about Facebook’s place in tech’s value chain.

In fact, Zuckerberg’s rhetoric around virtual reality has betrayed more than a lack of strategic sense: his keynote at the Oculus developer conference in 2016, a month before the last election, was, in retrospect, an advertisement of the company’s naïveté regarding its impact on the world:

We’re here to make virtual reality the next major computing platform. At Facebook, this is something we’re really committed to. You know, I’m an engineer, and I think a key part of the engineering mindset is this hope and this belief that you can take any system that’s out there and make it much much better than it is today. Anything, whether it’s hardware, or software, a company, a developer ecosystem, you can take anything and make it much, much better. And as I look out today, I see a lot of people who share this engineering mindset. And we all know where we want to improve and where we want virtual reality to eventually get…

I wrote at the time:

Perhaps I underestimated Zuckerberg: he doesn’t want a platform for the sake of having a platform, and his focus is not necessarily on Facebook the business. Rather, he seems driven to create utopia: a world that is better in every possible way than the one we currently inhabit. And, granted, owning a virtual reality company is perhaps the most obvious route to getting there…

Needless to say, 2016 suggests that the results of this approach are not very promising: when our individual realities collide in the real world the results are incredibly destructive to the norms that hold societies together. Make no mistake, Zuckerberg gave an impressive demo of what can happen when Facebook controls your eyes in virtual reality; what concerns me is the real world results of Facebook controlling everyone’s attention with the sole goal of telling each of us what we want to hear.

The following years have only borne out the validity of this analysis: of all the myriad problems faced by Facebook — some warranted, and some unfair — the most concerning is the seeming inability of the company to even countenance the possibility that it is not an obvious force for good.

Facebook’s Mismatch

Again, though, Facebook aside, virtual reality is more compelling than you might think. There are some experiences that really are better in the fully immersive environment provided by virtual reality, and a future closer to game consoles (at best) than to smartphones is nothing to apologize for. What remains more compelling, though, is augmented reality: the promise is that, like smartphones, it is an accompaniment to your day, not the center, which means its potential usefulness is far greater. To that end, you can be sure that any Facebook executive would be happy to explain why virtual reality and Oculus are a step in that direction.

That may be true technologically, but again, the fundamental nature of the service and the business model are all wrong. Anything made by Facebook is necessarily biased towards being accessible by everyone, which is a problem when creating a new market. Before technology is mature, integrated products advance more rapidly and can be sold at a premium; it follows that market makers are more likely to have hardware-based business models that segment the market, not service-based ones that try to reach everyone.

To that end, it is hard not to feel optimistic about Apple’s chances at eventually surpassing Oculus and everyone else. The best way to think about Apple has always been as a personal computer company; the only difference over time is that computers have grown ever more personal, moving from the desk to the lap to the pocket and today to the wrist (and ears). The face is a logical next step, and no company has proven itself better at the sort of hardware engineering necessary to make it happen.

Critically, Apple also has the right business model: it can sell barely good-enough devices at a premium to a userbase that will buy simply because they are from Apple, and from there figure out a use case without the need to reach everyone. I was very critical of this approach with the Apple Watch — it was clear from the launch keynote that Apple had no idea what this cool piece of hardware engineering would be used for — but, as the Apple Watch has settled into its niche as a health and fitness device and slowly expanded from there, I am more appreciative of the value of simply shipping a great piece of hardware and letting the real world figure it out.

That gets at Facebook’s fundamental problem: the company is starting with a use case — social networking, or “connecting people”, to use their favored phrase — and backing out to hardware and business models. It is an overly prescriptive approach that is exactly what you would expect from an app-enabled service, and the opposite of what you would expect from an actual platform. In other words, to be a platform is not a choice; it is destiny, and Facebook’s has always run in a different direction.

The Battle for the Home

If the first stage of competition in consumer technology was the race to be the computer users went to (won by Microsoft and the PC), and the second was to be the computer users carried with them (won by Apple in terms of profits, and Google in terms of marketshare), the outlines of the current battle came sharply into focus over the last month: what company will win the race to be the computer within which users live?

The Announcements

The first announcement came from Amazon three weeks ago: a new high-end Echo Plus, Echo Dots, several Echo devices for use with 3rd-party stereos and speakers (or other Echoes), and an updated Echo Show (i.e. an Echo with a screen). All standard fare, and then things got wacky: the company also announced a microwave, a wall clock, smart plugs, a device for the car, and a TV Tuner/DVR, all with Alexa built-in.

Next up was Facebook: earlier this week the company launched the Portal, a video chat device that can track faces, has Alexa integration, and a smattering of 3rd-party apps like Spotify. The device was reportedly delayed last spring as the company grappled with the fallout of the Cambridge Analytica scandal, and was instead launched in the midst of a data exposure scandal.

Third was Google: yesterday the company announced the Google Home Hub — a Google Home with a screen attached, a la the Echo Show — as well as the Pixel 3 phone and the Pixel Slate tablet, along with far deeper integration between Nest home automation products and the Google Home ecosystem.

And, of course, there is Apple, which launched the HomePod earlier this year, and added a few new capabilities with a software update last month.

Each of these companies brings different strengths, weaknesses, go-to-market strategies, and business models to the fight for the home; a question just as important as who will win, though, is to what degree it matters.

Strengths

Each of these companies’ strengths in the home is closely connected to their success elsewhere.

Amazon: Amazon deserves to go first, in large part because they were first: while Google acquired Nest in 2014, Nest itself was predicated on the smartphone being the center of the connected home. Amazon, though, thanks to its phone failure, had the freedom to imagine what a connected home might look like as its own independent entity, leading the company to launch the Echo speaker and Alexa assistant in late 2014.

I was immediately optimistic, in part because the Echo was everything the failed Fire phone was not: its success depended not on the integration of hardware and software, the refinement of which a service company like Amazon is fundamentally unsuited for, but rather the integration of hardware and service. It also helped that Amazon had a business model that made sense: on one hand, the investments in Alexa would pay off with services for AWS, and on the other, Amazon’s goal of taking a slice of all economic activity was by definition centered around capturing an ever-increasing share of purchases made for and consumed in the home, and Alexa could make that easier.

That led to an early lead in the development of the Alexa ecosystem, both in terms of “Skills” and also in devices that incorporated Alexa. As I noted in 2016, this made Alexa Amazon’s operating system for the home, and today Alexa has over 30,000 skills and is built into 20,000 devices.

That, though, makes Amazon’s recent announcements that much more interesting: Amazon isn’t simply content with being the voice assistant for 3rd-party devices, it also is making those devices directly. This, by extension, perhaps points to Amazon’s biggest strength: because Amazon.com is so dominant, the company can have its cake and eat it too. That is, just as Amazon.com is both a marketplace and a channel for Amazon to sell its own products, Alexa is both a necessary component of 3rd-party devices and also a driver of Amazon’s own devices; the company faces no strategy taxes in its drive to win.

Google: Google was very late to respond to Alexa; the original Google Home wasn’t announced until May 2016, and didn’t ship until November 2016, a full two years after the Echo. The company was, as I noted above — and as you would expect for a market leader — locked into the smartphone paradigm; an app plus Nest was its answer, until Alexa made it clear this was wrong.

Google, though, has started to catch up, and the reason is obvious: if a home device is about the integration of hardware and services, it follows that the company that is best at services — consumer services, anyways — would be very well-placed to succeed. The company still trails Alexa by a lot in actions/skills (around 2,000) and 3rd-party devices (over 5,000), but Google’s core functionality is plenty strong enough to sell devices on its own. There are still more Echoes being sold, but Google Home is catching up.

To that end, one of the more interesting takeaways from yesterday’s Google event was the extent to which Google is leaning on its own services to sell its devices: not only did the company tout the helpfulness of Google Assistant, it also prominently featured YouTube, particularly in the context of the Google Home Hub. This is particularly noteworthy because Google handicapped the YouTube functionality of the Echo Show, clearly with this product in mind. Google is also including six months of YouTube Premium with a Google Home Hub; indeed, every Google product included some sort of YouTube subscription product.

Apple: The HomePod is exactly what you would expect from Apple: the best hardware at the highest price. The sound is excellent and, naturally, even better if you buy two. The HomePod is also — again, as you would expect from Apple — locked into the Apple ecosystem;1 this is from one perspective a weakness, but this is the Strength section, and the reality is that people are more committed to their iPhones — and thus Apple’s ecosystem — than they are to home speakers, meaning that for many customers this limitation is a strength.

Along those lines, Apple is clearly the most attractive option from a privacy perspective: the company doesn’t sell highly-targeted ads, has made privacy a public priority, and is thus the only choice for those nervous about having an Internet-connected microphone in their house.

Facebook: Perhaps the most compelling case for Portal is historical. In the introduction I framed the battle for the home as following the battle for the desk and the battle for the pocket. There were, though, intervening battles that were enabled by those fights for physical spaces. Specifically, the PC created the conditions for the Internet, which in turn made smartphones that could access the Internet so compelling. Smartphones, then, created the conditions for social networking (including messaging) to infiltrate all aspects of life.

Might it be the case, then, that just as the Internet was the key to unlocking the potential of mobile, so might social networking be the key to unlocking the potential of the home? That appears to be Facebook’s bet: sure, the device has some neat hardware features, particularly the ability to follow you around the room or zoom out during a call, but neat hardware features can and will be copied. If Portal is to be a successful venture for Facebook, it will be because the tie-in to Facebook’s social network makes this device compelling.

Weaknesses

As is so often the case, each company's weakness is the inverse of its strength:

Amazon: Amazon simply isn’t that good at making consumer products. In my experience its devices are worse than the competition2 both aesthetically and in terms of hardware capabilities like sound quality. In addition, Amazon’s brute force skills approach — it is on the user to speak correctly, not on the service to figure it out — lends itself to more skills initially but a potentially more frustrating user experience.

Amazon also has less of a view into an individual user’s life; sure, it knows what kind of toothpaste you prefer, but it doesn’t know when your first meeting is, or what appointments you have. That is the province of Google in particular, and also Apple. What is more valuable: being able to buy things by voice, or being told that you best be leaving for that early meeting STAT?

Google: As a product Google’s offering is remarkably strong (there are other weaknesses, which I will get into below). The company is the best at the core functionality of a home device, and it knows enough about you to genuinely add usefulness. Its products are also more attractive and better-performing than Amazon’s (in my estimation).

Google does face questions about privacy: the company collects data obsessively — right up to the creepy line, as former CEO Eric Schmidt has said — and that could be a hindrance to the company’s ability to penetrate the home. That said, Google has so far escaped Facebook-level scrutiny, and wisely excluded a camera from the Google Home Hub. Google knows its advantage is in providing information; it has sufficient other avenues to collect it, without putting a camera in your bedroom.

Apple: Apple, even more than Google, seemed blinded by its smartphone success. This isn’t a surprise: the ultimate point of Android was to be a conduit to Google’s services; it follows, then, that if home devices are about services, that Google would be more attuned to the opportunity (and the threat). Apple, on the other hand, is and always will be a product company; the company offers services to help sell its hardware, not the other way around, and it follows that the company is heavily incentivized to insist that the iPhone and Apple Watch, which both offer attractive hardware margins and are differentiated by the integration of hardware and software, are better home devices.

That, furthermore, explains Apple’s biggest weakness: the relative performance of Siri as compared to Alexa or Google Assistant. The problem isn’t a matter of trivia, but rather speed and reliability. Siri is consistently slower and more likely to make mistakes in transcription than either Alexa or Google Assistant (and, for the record, more likely to fail trivia questions as well). As always, Apple is the most potent example of how strengths equal weaknesses: just as it was inevitable that a services company like Amazon would be poor at product, a truly extraordinary product company like Apple will face fundamental challenges in services.

Facebook: If the strengths of Facebook Portal were largely theoretical, the weaknesses are extremely real: it is, frankly, mind-boggling that the company would launch Portal given the current public mood around the company. And, to be clear, that mood is largely deserved; I wrote last week about the company as a Data Factory, and one of the telling examples was how Facebook lets advertisers use numbers provided for two-factor authentication for targeting. This strongly suggests that, from Facebook’s perspective, data is data: everything is an input, and while the company may promise that Portal is private, one wonders why anyone would believe them.

That noted, I actually suspect Portal data is private; this seems like more of an attempt to enhance the value of the Facebook graph, and thus the app’s stickiness, than to collect more data. The problem, though, is that Facebook is not in the position to expect nuance, and that this product was launched anyways supports the argument that the company’s executives are indeed out of touch.

Go-to-Market

The various go-to-market possibilities for these four companies could very well have been folded into strengths-and-weaknesses, but they’re worth highlighting on their own, given how important an effective go-to-market strategy is in consumer products.

Amazon: This is arguably Amazon's biggest strength: not only does the company have direct access to the top e-commerce site in the world and one of the largest retailers period — and, because it is selling its own products, can skip a retailer mark-up — it also gets access to prime real estate:

Amazon's front page featuring an Echo Dot

There is not only no question in a consumer's mind about where to buy an Echo, it is also nearly impossible for them not to know about it. Moreover, Amazon has a second trick up its sleeve: it doesn't stock Google's or Apple's competing home products, making acquiring them that much more of a hassle.

Google: I highlighted this as a major Google weakness when it launched its #MadeByGoogle line two years ago, but to the company’s credit, it has worked hard to build out its channel. Today Google products are available on most non-Amazon e-commerce sites and in retailers like Best Buy, Target, and Walmart. The company has also invested in advertising to build awareness; there is still a long ways to go, to be sure, and go-to-market remains a Google weakness, but the company has impressed me with its work in this area.

Apple: This is a huge area of strength for Apple as well. The company obviously has a very strong channel, both online and through its retail stores. Both reflect Apple’s biggest strength, which is its brand: there is no company that has more loyal customers, and those customers are tremendously biased to buy an Apple product over a competitor’s; they are also more likely to be receptive to Apple’s privacy message, perhaps because they care, or perhaps because that is the message that plays to Apple’s strengths.

Facebook: It appears the company learned nothing from the HTC First flop. The HTC First, if you don't recall, was Facebook's ill-fated phone, built around the Facebook Home launcher; it was manufactured by HTC and discontinued within weeks of launch. There simply was no evidence that customers wanted to pay for a product that was predicated on Facebook integration, and there was certainly no effective go-to-market strategy.

It is hard to see how the Portal will be different: again, the defining feature is that the camera follows you around, a feature that is cool in theory but bizarrely out-of-touch with Facebook’s current perception in the market. Is the company really going to spend the millions necessary to market this thing? And if so, where is it going to be available to purchase? I can see why this product was designed; I see little understanding of how it might be sold.

Business Models

This too ties into strengths-and-weaknesses, but like the go-to-market strategies, is worth calling out in its own right:

Amazon: I explained the company’s business model above: Amazon wants to own the home, because it sells a huge number of items that are used in the home. This is why the company is willing to press its advantage as both a platform and retailer when it comes to Alexa devices: winning has a very direct connection to the company’s ultimate upside.

Google: The business model is a bit fuzzier here: Google makes money through ads sold in an auction where the winner is chosen by the user. That is a model that doesn’t work for voice in particular; affiliate fees are less profitable given that they foreclose the possibility of an advertiser forming a direct relationship with the end user. That noted, the introduction of a visual interface does also offer the possibility of ads.

More noteworthy is the incorporation of YouTube: YouTube has seen the addition of more and more subscription services, including YouTube Premium, YouTube TV, and YouTube Music. All of these work in conjunction with Google's designs on the home.

The most compelling business case for Google, though, is the same as it ever was: maintaining a dominant presence in all aspects of a user’s life, not just on the go (in the case of Android) but also in the home provides the data for more effective advertising in the places where it makes sense. No, Google may not sell that many voice ads, but voice interaction will affect what ads are shown in Search, and that is worth an awful lot.

Apple: Apple’s business model is the most straightforward: HomePod is clearly sold at a profit, part of Apple’s strategy of increasing its monetization of its current userbase. This is also a limitation: as noted above, the HomePod is significantly more expensive than any of its competitors.

Facebook: The social network company has the weakest business model story of all: there are no add-on services to sell, and the company has promised not to use the Portal for advertising, for now anyway. The best argument is similar to Google: more data and more engagement mean more opportunities to show better-targeted ads on the company’s other products.

Winners and Losers

There are compelling cases to be made for at least three of the four companies:

Amazon: Amazon's head start is meaningful, and its widespread integration with other products means it is likely that more people have a device with Alexa integration than not. The company is also highly motivated to win and has the business model to justify it.

Google: I find Google’s case the most compelling. Product is not the only thing that matters, but it is awfully important, and Google is the best placed to deliver the best product. Its services are superior, its knowledge of users the most comprehensive, and its overall product chops have improved considerably. Yes, its go-to-market is worse than Amazon’s and it has a late start, but it is still early.

Apple: The loyalty of Apple's userbase cannot be overstated, particularly when you remember that the company's userbase is the most affluent of all. This makes it difficult to ever count Apple out, even if its product is late and tied to the worst services.

Facebook: It is hard to envision how Portal won’t be a loser: the company has no natural userbase, has a terrible reputation for privacy, and has no obvious business model or go-to-market strategy.

Does It Matter?

There is one final question that overshadows all of this: while the home may be the current battleground in consumer technology, is it actually a distinct product area — a new epoch, if you will? When it came to mobile, it didn't matter who had won in PCs; Microsoft ended up being an also-ran.

The fortunes of Apple, in particular, depend on whether or not this is the case. If it is a truly new paradigm, then it is hard to see Apple succeeding. It has a very nice speaker, but everything else about its product is worse. On the other hand, the HomePod’s close connection to the iPhone and Apple’s overall ecosystem may be its saving grace: perhaps the smartphone is still what matters.

More broadly, it may be the case that we are entering an era of new battles whose scale is closer to skirmishes than to the all-out wars of the smartphone era. What made the smartphone more important than the PC was the fact that it is with you all the time. Sure, we spend a lot of time at home, but we also spend time outside (AR?), entertaining ourselves (TV and VR), or on the go (self-driving cars); the one constant is the smartphone, and we may never see anything on the scale of the smartphone wars again.

  1. You can use the HomePod as an AirPlay speaker for services like Spotify, but then you are just overpaying for a dumb speaker [↩︎]
  2. I haven’t tried Facebook’s Portal [↩︎]

Data Factories

I’m generally annoyed by the cliché “If you’re not paying you’re the product”; Derek Powazek has explained why the implications of this statement are usually misleading and often wrong, something that is particularly problematic in the context of Aggregators. After all, if a company’s market power flows from controlling demand — that is, users — that means said company is incentivized to keep those users satisfied; it is suppliers that have to “take it or leave it”.

This explains why the idea of an Aggregator being a monopoly is hard to get one’s head around; in the physical world where market power comes from controlling distribution — think AT&T, or your local cable company, or a utility company — there is no incentive to treat end users well, because users have no choice in the matter. On the Internet, though, where distribution is effectively free, alternatives are only a click away; Aggregators are extremely motivated to make sure that click doesn’t happen, which means giving the users what they want (the technical term is “increasing engagement”). Users are a priority, not a product.

And yet, as is so often the case, clichés persist because there is some truth to them. Facebook and Google — the two Super Aggregators — make money through ads, and advertisers come to Facebook and Google because they want to reach consumers. From an advertiser perspective users — or to be more precise, access to users’ attention — is a product they are absolutely paying for.

Views on Facebook

This seeming dichotomy — prioritizing users on one hand, and selling access to their attention on the other — makes more sense if you first think of Super Aggregators as two distinct businesses: Aggregator and advertising-seller. To use Facebook as an example (as I will for the rest of the article, although nearly everything applies to Google as well), it is both an Aggregator that content providers clamor to reach, as well as the gatekeeper for consumers advertisers wish to sell to:

Two views on Facebook's business

Still, this isn’t quite right, because Facebook the company is not simply the so-called “Blue App” but also several other businesses, most notably Instagram and WhatsApp (there is also Messenger, but given its user-facing network is the same as the Blue App I don’t really consider it to be distinct). Once you add those to the mix Facebook the company looks like this:

Facebook's conglomerate

You’ll note that I’ve taken to using the term “Blue App” to distinguish Facebook the network from Facebook the company; the question, though, is what exactly is the company anyways?

The Data Factory

At a superficial level, Facebook is a sort of holding company for social networks; back in 2014 I called it The Social Conglomerate. That, though, is very much a user-centric perspective; to that end, if you consider the advertising perspective, you could argue that Facebook the company is an advertising dashboard and sales force.

I think, though, that sells short the functionality of Facebook the company. Specifically, Facebook is a data factory. Wikipedia defines a factory thusly:

A factory or manufacturing plant is an industrial site, usually consisting of buildings and machinery, or more commonly a complex having several buildings, where workers manufacture goods or operate machines processing one product into another.

Facebook quite clearly isn’t an industrial site (although it operates multiple data centers with lots of buildings and machinery), but it most certainly processes data from its raw form to something uniquely valuable both to Facebook’s products (and by extension its users and content suppliers) and also advertisers (and again, all of this analysis applies to Google as well):

  • Users are better able to connect with others, find content they are interested in, form groups and manage events, etc., thanks to Facebook’s data.
  • Content providers are able to reach far more readers than they would on their own, most of whom would not even be aware those content providers exist, much less visit of their own volition.
  • Advertisers are able to maximize the return on their advertising dollar by only showing ads to individuals they believe are predisposed to like their product, making it more viable than ever before to target niches (to the benefit of their customers as well).

And then, in exchange for these benefits that derive from data, Facebook sucks in data from all three entities:

  • Users provide Facebook with data directly, both through information and media they upload, and also through their actions on Facebook properties.
  • Content is not simply data in its own right, but also a catalyst for generating user action data.
  • Advertisers, like content providers, not only provide data in its own right, which acts as a catalyst for generating user action data, but also upload huge amounts of data directly in order to better target prospective customers.

Those aren’t the only avenues through which Facebook collects data: the company has deals with multiple third-party data collection companies, gathering everything from web traffic to offline store receipts, and also has incentivized an untold number of websites — particularly content providers — to include Facebook links on their sites that collect data from those sites.

That results in a much fuller picture of Facebook’s business:

The Facebook Data Factory

Data comes in from anywhere, and value — also in the form of data — flows out, transformed by the data factory.

Regulating the Internet

Two weeks ago, in The European Union Versus the Internet, I argued that effective regulation of tech companies, particularly Super Aggregators like Facebook and Google, had to work with the fundamental principles of the Internet, not against them; otherwise, the likely outcome would be to entrench these Internet giants with little gain to consumers.

First and foremost regulators need to understand that the power of Aggregators comes from controlling demand, not supply. Specifically, consumers voluntarily use Google and Facebook, and “suppliers” like content providers, advertisers, and users themselves, have no choice but to go where consumers are. To that end:

Facebook’s ultimate threat can never come from publishers or advertisers, but rather demand — that is, users. The real danger, though, is not from users also using competing social networks (although Facebook has always been paranoid about exactly that); that is not enough to break the virtuous cycle. Rather, the only thing that could undo Facebook’s power is users actively rejecting the app. And, I suspect, the only way users would do that en masse would be if it became accepted fact that Facebook is actively bad for you — the online equivalent of smoking.

For Facebook, the Cambridge Analytica scandal was akin to the Surgeon General’s report on smoking: the threat was not that regulators would act, but that users would, and nothing could be more fatal. That is because the regulatory corollary of Aggregation Theory is that the ultimate form of regulation is user generated.

If regulators, EU or otherwise, truly want to constrain Facebook and Google — or, for that matter, all of the other ad networks and companies that in reality are far more of a threat to user privacy — then the ultimate force is user demand, and the lever is demanding transparency on exactly what these companies are doing.

What, though, does transparency mean in the context of enabling “user generated regulation”, and what might meaningful regulation look like that achieves the goal of forcing said transparency in a way that fosters competition instead of inhibiting it? The answer goes back to data factories.

Raw Data Versus Processed Data

The first challenge with a data factory is that it is impossible to peer inside. Both Facebook and Google offer customers ways to view their data, but not only is the presentation overwhelming, the data is precisely what you gave them. It is the raw inputs.

Advertisers, interestingly enough, cannot download custom audiences once uploaded, but given that data is (also) their business, it is extremely likely that they retain the list of email addresses they uploaded in the first place; the same thing applies to 3rd-party data providers. Websites, meanwhile, are completely in the dark: that Facebook badge or like button may provide a page view or two, but it doesn't give any data back in return.

What no one gets is the final product: the melding of all that data from all those sources to build a far more detailed profile of every Facebook user than they provided on their own. There is no question, though, that it is happening. Last week Gizmodo had an excellent write-up of a paper in the journal Proceedings on Privacy Enhancing Technologies detailing how Facebook users could be targeted for ads with a whole host of information that was never provided by the user, including landline numbers, unpublished email addresses, and phone numbers provided for two-factor authentication:

They found that when a user gives Facebook a phone number for two-factor authentication or in order to receive alerts about new log-ins to a user’s account, that phone number became targetable by an advertiser within a couple of weeks. So users who want their accounts to be more secure are forced to make a privacy trade-off and allow advertisers to more easily find them on the social network. When asked about this, a Facebook spokesperson said that “we use the information people provide to offer a more personalized experience, including showing more relevant ads.” She said users bothered by this can set up two-factor authentication without using their phone numbers; Facebook stopped making a phone number mandatory for two-factor authentication four months ago.

That quote from the spokesperson is an acknowledgement of the data factory: Facebook doesn’t care where it gets data, it is all just an input in service of the output — a targetable profile.

This lack of care about what precisely goes into a finished product is hardly unique to Facebook. One of the most famous examples is Nike:

A boy sewing Nike soccer balls
According to the Internet, this is the photo from Life Magazine. I could not find a copy to be sure.

That image is from the June, 1996, issue of Life Magazine, which detailed how children in Pakistan were manufacturing soccer balls for pennies a day. Nike executives, in a refrain that is vaguely familiar, were initially aggrieved; after all, soccer balls were not inflated until after they were shipped, which meant the photo was staged.

That was surely correct, and yet such a complaint utterly missed the point: Nike didn’t really care where it got its soccer balls, or shoes or clothes or anything else. It simply paid the factory owners and washed its hands of the problem. That photo, and the decades of protests and boycotts that followed, forced the company to do better.

The Privacy Obstacle

Unfortunately, while Nike could not stop a photographer from traveling to Pakistan (and, truth be told, staging a photo), the general public has no way to see inside the Facebook or Google factories — and this is where regulators come in.

The most important thing regulators could do is force Facebook and Google — and all data collectors — to disclose their factory output. Give users the ability to see not simply what they put in — which, again, Google and Facebook already show (and which GDPR requires) — but also what comes out after all of the inputs are mixed and matched.

Make no mistake, no company will do this on its own, and not simply for business reasons. Note the Facebook spokesperson's response to Gizmodo when asked about the use of uploaded contact information:

“People own their address books,” a Facebook spokesperson said by email. “We understand that in some cases this may mean that another person may not be able to control the contact information someone else uploads about them.”

This gets at how privacy regulations in particular go wrong: in the attempt to make rules that protect people regardless of their agency, those wishing to exercise said agency cannot even know what exactly Facebook knows about them because, well, privacy. Meanwhile, websites throw up pop-ups and overlays that no one reads, or ban entire continents, not because their users care but because a regulator said so.

Privacy Realities

Here is the other reality regulators need to grapple with: most users don’t care about privacy, particularly if it saves them money. I came across this tweet in response to an interview clip of Tim Cook talking about privacy and it rather succinctly made the point:

A tweet from someone that would sacrifice privacy for cheaper iPhones

Frankly, I don’t blame the apathy of most users: what Facebook and Google and all of the other ad-supported services and sites on the Internet provide is immensely valuable. Moreover, I’m the first (and often only!) to defend personalized ads: I think they are a critical component of building a future where anyone can build a niche business thanks to the Internet making the entire world an addressable market — if only they can find their customers.

At the same time, most users truly have no idea what data these companies hold. Might they change their minds if they actually saw the processed data, not simply the raw inputs? I don’t know, but I do think it is their decision to make.

Moreover, establishing clear requirements that users be able to view not only the data they uploaded but their entire processed profile — the output of the data factory — would be far less burdensome to new and smaller companies that seek to challenge these behemoths. Data export controls could be built in from the start, even as they are free to build factories as complex as the big companies they are challenging — or, as a potential selling point, show off that they don’t have a factory at all. This is much easier than trying to abide by rules that apply to every user — whether they want the protection or not — and which were designed with Facebook and Google in mind, not an understaffed startup.

Indeed, that is the crux of the matter: regulators need to trust users to take care of their own privacy, and enable them to do so — and, by extension, create the conditions for users to actually know what is going on with their data. And, if they decide they don’t care, so be it. The market will have spoken, an outcome that should be the regulator’s goal in the first place.

Instagram’s CEO

In the hours after The New York Times broke the story that Instagram co-founders Kevin Systrom and Mike Krieger had resigned from Instagram, the question quickly turned to why; the immediate culprit was everyone’s favorite punching bag, Facebook CEO Mark Zuckerberg:

  • Bloomberg: The founders of Instagram are leaving Facebook Inc. after growing tensions with Chief Executive Officer Mark Zuckerberg over the direction of the photo-sharing app, people familiar with the matter said.
  • TechCrunch: According to TechCrunch’s sources, tension had mounted this year between Instagram and Facebook’s leadership regarding Instagram’s autonomy. Facebook had agreed to let it run independently as part of the acquisition deal. But in May, Instagram’s beloved VP of Product Kevin Weil moved to Facebook’s new blockchain team and was replaced by former VP of Facebook News Feed Adam Mosseri — a member of Zuckerberg’s inner circle.
  • Recode: Instagram co-founders Kevin Systrom and Mike Krieger are resigning from the company they built amid frustration and agitation with Facebook CEO Mark Zuckerberg’s increased meddling and control over Instagram, according to sources.

All of these stories are interesting, and undoubtedly more details will come out over the next few days. At the same time, by virtue of looking at events that occurred over the past few weeks or months or even years, they all fundamentally miss the point. The date that matters when it comes to understanding these resignations is April 9, 2012, and the people most responsible are Kevin Systrom and Mike Krieger.

Extraordinary Product Leaders

Zuckerberg’s statement about Systrom and Krieger’s resignation was quite terse, and perhaps for that reason, rather revealing:

“Kevin and Mike are extraordinary product leaders and Instagram reflects their combined creative talents. I’ve learned a lot working with them for the past six years and have really enjoyed it. I wish them all the best and I’m looking forward to seeing what they build next.”

Calling Systrom and Krieger “extraordinary product leaders” is high praise, and remarkably enough, an understatement.

Instagram started out as a Foursquare-esque check-in app called Burbn, but when Systrom and Krieger realized that Burbn's users were not checking in but were sharing photos like crazy, the pair quickly built a new app called Instagram. MG Siegler wrote at the time, in a remarkably prescient synopsis:

Unlike Burbn, Instagram is neither a location-based app (though that is one component), nor is it HTML5-based. But it did spring out of the way co-founders Kevin Systrom and Mike Krieger saw people using Burbn. That is: quick, social sharing — and a desire to share photos from places. That’s the foundation of Instagram.

More specifically, Instagram is an iPhone photo-sharing application that allows you to apply interesting filters to your photos to make them really pop…Once you take a picture and apply a filter (there’s also an option not to), the photo is shared into your Instagram Feed. From here, your friends on the site can “like” or comment on it. But another key to Instagram is that it’s just as easy to share these photos to other social networks — like Twitter, Facebook, and Flickr.

Nearly all of the key pieces were there from the beginning:

  • Instagram had a reason to download: cool filters that, unlike the competition, were free.
  • Instagram had a great user experience: instantaneous sharing to social networks, without jumping through “Save to Camera Roll” hoops.
  • Instagram had the seeds of something much greater than a photo editing app: it was, from the beginning, a social network in its own right; as Chris Dixon describes it, “Come for the tool, stay for the network.”

Instagram took off like a rocket, and had 10 million users in a year; that number would triple within the next six months, and was set to grow even faster when the startup finally launched an Android version, which racked up 1 million downloads in 24 hours. That is when Facebook came in with an offer Systrom and company couldn’t refuse: $1 billion in cash and stock for, well, for what?

Technically speaking, Instagram was a company. In practice, though, Instagram was a product, and its business model was venture capital funding. To be sure, this wouldn’t be the case forever, but on April 9, 2012, the road from popular product to viable company was a long and arduous one. Instagram would not only need to continue growing its user base, it would also have to scale its infrastructure, figure out a business model (ok fine, advertising), build up tools to support that business model (first a sales team, then a self-serve model, plus tracking and targeting capabilities), all while fighting off larger and more established companies — particularly Facebook — that were waking up to the threat Instagram posed to their hold on user attention.

Or Systrom and Instagram could offload all of those responsibilities to Facebook and continue being “extraordinary product leaders”, and pocket $1 billion to boot (and, to be fair to Systrom and team, that understates their gains; that $1 billion included $700 million in Facebook stock, which today is worth nearly $4 billion). It is a defensible choice (for Instagram anyways; not for the regulators that approved the deal), but the implication is that, title notwithstanding, Systrom was never the CEO of Instagram; to be a CEO is to have a company that can stand on its own.

The difference from Zuckerberg — Instagram’s real CEO — is stark. Facebook launched in February, 2004, and sold its first ad two months later. True, “Facebook Flyers” bear little resemblance to the News Feed ads that power the company today, but Zuckerberg’s immediate instinct to build not just a product but a company is notable. Indeed, it casts a new light on the brash CEO’s (in)famous business cards:

A fictionalized version of Mark Zuckerberg's business card
A recreation of Mark Zuckerberg’s infamous business card from the movie The Social Network. The actual card looked like this, with the caption in the lower left corner. They were also not Zuckerberg’s primary business cards.

He was indeed, in title and in practice. To be CEO — to have control — is, at least in the long run, not simply about building a great product. It is about finding and developing a business model that lets you determine your own destiny.

How the Snapchat Threat Was Vanquished

Probably the pinnacle of Systrom and Zuckerberg’s collaboration — extraordinary product leader and ruthless CEO — was Instagram Stories. Systrom freely admitted that the concept was copied from Snapchat; as I noted at the time, that would certainly be good enough given Instagram’s larger network:

Instagram and Facebook are smart enough to know that Instagram Stories are not going to displace Snapchat’s place in its users’ lives. What Instagram Stories can do, though, is remove the motivation for the hundreds of millions of users on Instagram to even give Snapchat a shot.

Getting consumer adoption of new products is hard; when that adoption requires a network, it’s harder still, at least if most of your network is not using said product; on the flipside, those same difficulties become massive accelerants once the product passes a certain threshold of your friends. Snapchat has passed that threshold amongst teenagers and increasingly young adults in the United States, and every day gets closer with other demographics and geographies.

Instagram, though, is already there, but with a product that does Facebook’s job of presenting your best self. What makes this move so audacious is Zuckerberg and Systrom’s bet that they can refashion Instagram into a product for being yourself, at least to a sufficient degree to hold off Snapchat’s ongoing suction of attention.

That article was mostly spot-on; my primary error was underestimating just how good Instagram’s product would be. Instagram Stories from day one were a better experience than Snapchat Stories, particularly in terms of speed; the product differences only grew from there. Ultimately, Instagram Stories didn’t simply stem Snapchat’s growth; it actually accelerated Instagram’s:

Instagram's Monthly Active Users

Meanwhile, Zuckerberg and Facebook’s ad team were cutting off Snapchat’s monetization oxygen, as I explained in Facebook’s Lenses:

Facebook spent years building out News Feed advertising — not simply the display and targeting technology but also the entire back-end apparatus for advertisers, connections with non-Facebook data sources and points-of-sale, relationships with ad buyers, etc. — and then simply plugged Instagram into that infrastructure.

The payoff of this integrated approach cannot be overstated. Instagram got to scale in terms of monetization years faster than they would have on their own, even as the initial product team had the freedom to stay focused on the user experience. Facebook the app benefited as well, because Instagram both increased the surface area for Facebook ad campaigns even as it increased Facebook’s targeting capabilities.

The biggest impact, though, is on potential competition. It is tempting to focus on the “R” in “ROI” — the return on investment — and as I just noted Instagram + Facebook makes that even more attractive. Just as important, though, is the “I”; there is tremendous benefit to being a one-stop shop for advertisers, who can save time and money by focusing their spend on Facebook. The tools are familiar, the buys are made across platforms, and as Zuckerberg and Sandberg alluded to with regard to Stories, the ads themselves only need to be made once to be used across multiple platforms. Why even go to the trouble to advertise anywhere else?

This dynamic, by the way, was very much apparent when Snap IPO’d a year-and-a-half ago; indeed, Snap CEO Evan Spiegel, often cast as the anti-Systrom — the CEO that said “No” to Facebook — arguably had the same flaw. Systrom offloaded the building of a business to Zuckerberg; Spiegel didn’t bother until it was much too late.

Instagram’s Challenge

Still, as good as the Instagram Stories product was, it is difficult to overstate the built-in advantage that came from Instagram’s larger network, and impossible to overstate the importance of having a shared advertising backend with Facebook. To put it another way, Instagram’s two biggest advantages relative to Snapchat, or any other competitors that may arise, didn’t have much to do with product — Systrom’s speciality — at all.

There is no better example than IGTV. Three months ago Systrom announced Instagram’s new long-form video offering with a presentation that exuded product sensibility. I marveled at the time:

In just a few minutes Systrom brilliantly and, to my mind, accurately, explained how video consumption was changing for teenagers in particular, highlighted how current solutions (i.e. YouTube) fell short, and set out the principles that should guide the creation of a better service (mobile first, simple, and high quality videos). And, naturally, that better service was IGTV.

It doesn’t seem to have mattered in the slightest. Josh Constine wrote a smart article about IGTV’s struggles last month on TechCrunch:

It’s indeed too early for a scientific analysis, and Instagram’s feed has been around since 2010, so it’s obviously not a fair comparison, but we took a look at the IGTV view counts of some of the feature’s launch partner creators. Across six of those creators, their recent feed videos are getting roughly 6.8X as many views as their IGTV posts. If IGTV’s launch partners that benefited from early access and guidance aren’t doing so hot, it means there’s likely no free view count bonanza in store from other creators or regular users. They, and IGTV, will have to work for their audience. That’s already proving difficult for the standalone IGTV app. Though it peaked at the #25 overall US iPhone app and has seen 2.5 million downloads across iOS and Android according to Sensor Tower, it’s since dropped to #1497 and seen a 94 percent decrease in weekly installs to just 70,000 last week.

The one big surprise of the launch event was where IGTV would exist. Instagram announced it’d live in a standalone IGTV app, but also as a feature in the main app accessible from an orange button atop the home screen that would occasionally call out that new content was inside. It could have had its own carousel like Stories or been integrated into Explore until it was ready for primetime. Instead, it was ignorable. IGTV didn’t get the benefit of the home screen spotlight like Instagram Stories. Blow past that one orange button and avoid downloading the separate app, and users could go right on tapping and scrolling through Instagram without coming across IGTV’s longer videos. View counts of the launch partners reflect that.

I’m not arguing that product doesn’t matter. It does, incredibly so. But it is not the only thing that matters, and its relative importance decreases over time as things like network effects and business models come to bear. To that end, the single most important issue facing Instagram — the company that is, or, more accurately, Facebook — is Stories monetization. I wrote last month in Facebook’s Story Problem — and Opportunity:

That’s the thing about Stories, though: while more people may use Instagram because of Stories, some significant number of people view Stories instead of the Instagram News Feed, or both in place of the Facebook News Feed. In the long run that is fine by Facebook — better to have users on your properties than not — but the very same user not viewing the News Feed, particularly the Facebook News Feed, may simply not be as valuable, at least for now.

This is the context for whatever dispute drove Systrom and Krieger’s resignation: not only do they not actually control their own company (because they don’t control monetization), they also aren’t essential to solving the biggest issue facing their product. Instagram Stories monetization is ultimately Facebook’s problem, and in case it wasn’t clear before, it is now obvious that Facebook will provide the solution.


I don’t write any of this to denigrate Systrom and Krieger in the slightest. If anything my appreciation for their incredible product sense has only grown over the last few years. Both are truly extraordinary, as is their creation.

Controlling one’s own destiny, though, takes more than product or popularity. It takes money, which is to say it takes building a company, working business model and all. That is why I mark April 9, 2012, as the day yesterday became inevitable. Letting Facebook build the business may have made Systrom and Krieger rich and freed them to focus on product, but it made Zuckerberg the true CEO, and always, inevitably, CEOs call the shots.

I wrote a follow-up to this article in this Daily Update.

The European Union Versus the Internet

Earlier this summer the Internet breathed a sigh of relief: the European Parliament voted down a new Copyright Directive that would have required Internet sites to proactively filter uploaded content for copyright violations (the so-called “meme ban”), as well as obtain a license to include any text from linked sites (the “link tax”).

Alas, the victory was short-lived. From EUbusiness:

Internet tech giants including Google and Facebook could be made to monitor, filter and block internet uploads under amendments to the draft Copyright Directive approved by the EU Parliament Wednesday. At their plenary session, MEPs adopted amendments to the Commission’s draft EU Copyright Directive following from their previous rejection, adding safeguards to protect small firms and freedom of expression…

Parliament’s position toughens the Commission’s proposed plans to make online platforms and aggregators liable for copyright infringements. This would also apply to snippets, where only a small part of a news publisher’s text is displayed. In practice, this liability requires these parties to pay right holders for copyrighted material that they make available.

At the same time, in an attempt to encourage start-ups and innovation, the text now exempts small and micro platforms from the directive.

I chose this rather obscure source to quote from for a reason: should Stratechery ever have more than either 50 employees or €10 million in revenue, under this legislation I would likely need to compensate EUbusiness for that excerpt. Fortunately (well, unfortunately!), this won’t be the case anytime soon; I appreciate the European Parliament giving me a chance to start-up and innovate.

This exception, along with the removal of an explicit call for filtering (that will still be necessary in practice), was enough to get the Copyright Directive passed. This doesn’t mean it is law: the final form of the Directive needs to be negotiated by the EU Parliament, European Commission, and the Council of the European Union (which represents national governments), and then implemented via national laws in each EU country (that’s why it is a Directive).

Still, this is hardly the only piece of evidence that EU policy makers have yet to come to grips with the nature of the Internet: there is also the General Data Protection Regulation (GDPR), which came into effect early this year. Much like the Copyright Directive, the GDPR is targeted at Google and Facebook, but as is always the case when you fundamentally misunderstand what you are fighting, the net effect is to in fact strengthen their moats. After all, who is better equipped to navigate complex regulation than the biggest companies of all, and who needs less outside data than those that collect the most?

In fact, examining where it is that the EU’s new Copyright Directive goes wrong — not just in terms of policy, but also for the industries it seeks to protect — hints at a new way to regulate, one that works with the fundamental forces unleashed by the Internet, instead of against them.

Article 13 and Copyright

Forgive the (literal) legalese, but here is the relevant part of the Copyright Directive (the original directive is here and the amendments passed last week are here) pertaining to copyright liability for Internet platforms:

Online content sharing service providers perform an act of communication to the public and therefore are responsible for their content and should therefore conclude fair and appropriate licensing agreements with rightholders. Where licensing agreements are concluded, they should also cover, to the same extent and scope, the liability of users when they are acting in a non-commercial capacity…

Member States should provide that where right holders do not wish to conclude licensing agreements, online content sharing service providers and right holders should cooperate in good faith in order to ensure that unauthorised protected works or other subject matter, are not available on their services. Cooperation between online content service providers and right holders should not lead to preventing the availability of non-infringing works or other protected subject matter, including those covered by an exception or limitation to copyright…

This is legislative fantasizing at its finest: Internet platforms should get a license from all copyright holders, but if they don’t want to (or, more realistically, are unable to), then they should keep all copyrighted material off of their platforms, even as they allow all non-infringing work and exceptions. This last bit is a direct response to the “meme ban” framing: memes are OK, but the exception “should only be applied in certain special cases which do not conflict with normal exploitation of the work or other subject-matter concerned and do not unreasonably prejudice the legitimate interests of the rightholder.”1 That’s nearly impossible for a human to parse; expecting a scalable solution — which yes, inevitably means content filtering — is absurd. There simply is no way, especially at scale, to preemptively eliminate copyright violations without a huge number of mistakes.

The question, then, is in what direction those mistakes should run. Through what, in retrospect, are fortunate accidents of history,2 Internet companies are mostly shielded from liability, and need only respond to takedown notices in a reasonable amount of time. In other words, the system is biased towards false negatives: if mistakes are made, it is that content that should not be uploaded slips through anyway. The Copyright Directive, though, would shift the bias towards false positives: if mistakes are made, it is that allowable content will be blocked for fear of liability.

This is a mistake. For one, the very concept of copyright is a government-granted monopoly on a particular arrangement of words. I certainly am not opposed to that in principle — I am obviously a beneficiary — but in a free society the benefit of the doubt should run in the opposite direction of those with the legal right to deny freedom. The Copyright Directive, on the other hand, requires Internet Platforms to act as de facto enforcement mechanisms of that government monopoly, and the only logical response is to go too far.

Moreover, the cost of copyright infringement to copyright holders has in fact decreased dramatically. Here I am referring to cost in a literal sense: to “steal” a copyrighted work in the analog age required the production of a physical product with its associated marginal costs; anyone that paid that cost was spending real money that was not going to the copyright holder. Digital goods, on the other hand, cost nothing to copy; pirated songs or movies or yes, Stratechery Daily Updates, are very weak indicators at best of foregone revenue for the copyright holder. To put it another way, the harm is real but the extent of the harm is unknowable, somewhere in between the astronomical amounts claimed by copyright holders and the zero marginal cost of the work itself.

The larger challenge is that the entire copyright system was predicated on those physical mediums: physical goods are easier to track, easier to ban, and critically, easier to price. By extension, any regulation — or business model, for that matter — that starts with the same assumptions that guided copyright in the pre-Internet era is simply not going to make sense today. It makes far more sense to build new business models predicated on the Internet.

The music industry is a perfect example: the RIAA is still complaining about billions of dollars in losses due to piracy, but many don’t realize the industry has returned to growth, including a 16.5% revenue jump last year. The driver is streaming, which — just look at the name! — depends on the Internet: subscribers get access to basically all of the songs they could ever want, while the recording industry earns somewhere around $65 per individual subscriber per year with no marginal costs.3 It’s a fantastic value for customers and an equally fantastic revenue model for recording companies; that alignment stems from swimming with the Internet, not against it.

This, you’ll note, is not a statement that copyright is inherently bad, but rather an argument that copyright regulation and business models predicated on scarcity are unworkable and ultimately unprofitable; what makes far more sense for everyone from customers to creators is an approach that presumes abundance. Regulation should adopt a similar perspective: placing the burden on copyright holders not only to police their works, but also to innovate towards business models that actually align with the world as it is, not as it was.

Article 11 and Aggregators

This shift from scarcity to abundance has also had far-reaching effects on the value chains of publications, something I have described in Aggregation Theory (“Value has shifted away from companies that control the distribution of scarce resources to those that control demand for abundant ones“). Unfortunately the authors of the Copyright Directive are quite explicit in their lack of understanding of this dynamic; from Article 11 of the Directive:

The increasing imbalance between powerful platforms and press publishers, which can also be news agencies, has already led to a remarkable regression of the media landscape on a regional level. In the transition from print to digital, publishers and news agencies of press publications are facing problems in licensing the online use of their publications and recouping their investments. In the absence of recognition of publishers of press publications as rightholders, licensing and enforcement in the digital environment is often complex and inefficient.

In this reading the problem facing publishers is a bureaucratic one: capturing what is rightfully theirs is “complex and inefficient”, so the Directive provides for “the exclusive right to authorise or prohibit direct or indirect, temporary or permanent reproduction by any means and in any form, in whole or in part” of their publications “so that they may obtain fair and proportionate remuneration for the digital use of their press publications by information society service providers.”4

The problem, though, is that the issue facing publishers is not a problem of bureaucracy but of their relative position in a world characterized by abundance. I wrote in Economic Power in the Age of Abundance:

For your typical newspaper the competitive environment is diametrically opposed to what they are used to: instead of there being a scarce amount of published material, there is an overwhelming abundance. More importantly, this shift in the competitive environment has fundamentally changed just who has economic power.

In a world defined by scarcity, those who control the scarce resources have the power to set the price for access to those resources. In the case of newspapers, the scarce resource was reader’s attention, and the purchasers were advertisers…The Internet, though, is a world of abundance, and there is a new power that matters: the ability to make sense of that abundance, to index it, to find needles in the proverbial haystack. And that power is held by Google. Thus, while the audiences advertisers crave are now hopelessly fractured amongst an effectively infinite number of publishers, the readers they seek to reach by necessity start at the same place — Google — and thus, that is where the advertising money has gone.

This is the illustration I use to show the shift in publishing specifically (this time using Facebook):

A drawing of Aggregation Theory - Facebook and Newspapers

This is why the so-called “link tax” is doomed to failure — indeed, it has already failed every time it has been attempted. Google, which makes no direct revenue from Google News,5 will simply stop serving Google News to the EU, or dramatically curtail what it displays, and the only entities that will be harmed — other than EU consumers — are the publications that get traffic from Google News. Again, that is exactly what happened previously.

There is another way to understand the extent to which this proposal is a naked attempt to work against natural market forces: Google’s search engine respects a site’s robots.txt file, wherein a publisher can exclude their site from the company’s index. Were it truly the case that Google was profiting unfairly from the hard work of publishers, then publishers have a readily-accessible tool to make them stop. And yet they don’t, because the reality is that while publishers need Google (and Facebook), that need is not reciprocated. To that end, the only way to characterize money that might flow from Google and Facebook (or a €10-million-in-revenue-generating Stratechery) to publishers is as a redistribution tax, enforced by those that hold the guns.
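To make the point concrete, here is a minimal sketch of how the robots.txt opt-out works; the file contents and URLs are hypothetical, and the snippet uses Python’s standard-library parser for the Robots Exclusion Protocol, the same convention Google’s crawlers honor:

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt that excludes Google's news crawler
# while leaving the site open to all other crawlers.
robots_txt = """\
User-agent: Googlebot-News
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# Google's news crawler is told to stay out entirely...
print(parser.can_fetch("Googlebot-News", "https://example.com/article"))

# ...while any other crawler may still index the site.
print(parser.can_fetch("*", "https://example.com/article"))
```

The first check returns False and the second True: two lines in a text file are all it takes for a publisher to withdraw from Google News, which is precisely why so few do.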

Here again the solution ought to flow in the opposite direction, in a way that leverages the Internet, instead of fighting it. An increasing number of publishers, from large newspapers to sites like Stratechery, are taking advantage of the massive addressable market unlocked by the Internet, leveraging the marketing possibilities of free social media and search engine results, and connecting directly with readers that care — and charging them for it.

I do recognize this is a process that takes time: it is particularly difficult for publishers built with monopoly-assumptions to change not just their business model but their entire editorial strategy for a world where quality matters more than quantity. To that end, if the EU wants to, as they say in the Copyright Directive, “guarantee the availability of reliable information”, then make the tax and subsidy plan they effectively propose explicit. At least then it would be clear to everyone what is going on.

The GDPR and the Regulatory Corollary of Aggregation

This brings me to a piece of legislation I have been very critical of for quite some time: GDPR. The intent of the legislation is certainly admirable — protect consumer privacy — although (and this may be the American in me speaking) I am perhaps a bit skeptical about just how much most consumers care relative to elites in the media. Regardless, the intent matters less than the effect, the latter of which is to entrench Google and Facebook. I wrote in Open, Closed, and Privacy:

While GDPR advocates have pointed to the lobbying Google and Facebook have done against the law as evidence that it will be effective, that is to completely miss the point: of course neither company wants to incur the costs entailed in such significant regulation, which will absolutely restrict the amount of information they can collect. What is missed is that the increase in digital advertising is a secular trend driven first-and-foremost by eyeballs: more-and-more time is spent on phones, and the ad dollars will inevitably follow. The calculation that matters, then, is not how much Google or Facebook are hurt in isolation, but how much they are hurt relative to their competitors, and the obvious answer is “a lot less”, which, in the context of that secular increase, means growth.

This is the conundrum that faces all major Internet regulation, including the Copyright Directive; after all, Google and Facebook can afford — or have already built — content filtering systems, and they already have users’ attention such that they can afford to cut off content suppliers. To that end, the question is less about what regulation is necessary and more about what regulation is even possible (presuming, of course, that entrenching Google and Facebook is not the goal).

This is where thinking about the problems with the Copyright Directive is useful:

  • First, just as business models ought to be constructed that leverage the Internet instead of fight it, so should regulation.
  • Second, regulation should start with the understanding that power on the Internet flows from controlling demand, not supply.

To understand what this sort of regulation might look like, it may be helpful to work backwards. Specifically, over the last six months Facebook has made massive strides when it comes to protecting user privacy. The company has shut down third-party access to sensitive data, conducted multiple audits of app developers that accessed that data, added new privacy controls, and more. Moreover, the company has done this for all of its users, not just those in the EU, suggesting its actions were not driven by GDPR.

Indeed, the cause is obvious: the Cambridge Analytica scandal, and all of the negative attention associated with it. To put it another way, bad PR drove more Facebook action in terms of user privacy than GDPR or an FTC consent decree. This shouldn’t be a surprise; I wrote in Facebook’s Motivations:

Perhaps there is a third motivation though: call it “enlightened self-interest.” Keep in mind from whence Facebook’s power flows: controlling demand. Facebook is a super-aggregator, which means it leverages its direct relationship with users, zero marginal costs to serve those users, and network effects, to steadily decrease acquisition costs and scale infinitely in a virtuous cycle that gives the company power over both supply (publishers) and advertisers.

It follows that Facebook’s ultimate threat can never come from publishers or advertisers, but rather demand — that is, users. The real danger, though, is not from users also using competing social networks (although Facebook has always been paranoid about exactly that); that is not enough to break the virtuous cycle. Rather, the only thing that could undo Facebook’s power is users actively rejecting the app. And, I suspect, the only way users would do that en masse would be if it became accepted fact that Facebook is actively bad for you — the online equivalent of smoking.

For Facebook, the Cambridge Analytica scandal was akin to the Surgeon General’s report on smoking: the threat was not that regulators would act, but that users would, and nothing could be more fatal. That is because:

The regulatory corollary of Aggregation Theory is that the ultimate form of regulation is user generated.

If regulators, EU or otherwise, truly want to constrain Facebook and Google — or, for that matter, all of the other ad networks and companies that in reality are far more of a threat to user privacy — then the ultimate force is user demand, and the lever is demanding transparency on exactly what these companies are doing.

To that end, were I a regulator concerned about user privacy, my starting point would not be an enforcement mechanism but a transparency mechanism. I would establish clear metrics to measure user privacy — types of data retained, types of data inferred, mechanisms to delete user-generated data, mechanisms to delete inferred data, what data is shared, and with whom — and then measure the companies under my purview — with subpoena power if necessary — and publish the results for the users to see.

This is the way to truly bring the market to bear on these giants: not regulatory fiat, but user sentiment. That is because it is an approach that understands the world as it is, not as it was, and which appreciates that bad PR — because it affects demand — is a far more effective instigator of change than a fine paid from monopoly profits.

I wrote a follow-up to this article in this Daily Update.

  1. Full text of the “meme exception”:

    Despite some overlap with existing exceptions or limitations, such as the ones for quotation and parody, not all content that is uploaded or made available by a user that reasonably includes extracts of protected works or other subject-matter is covered by Article 5 of Directive 2001/29/EC. A situation of this type creates legal uncertainty for both users and rightholders. It is therefore necessary to provide a new specific exception to permit the legitimate uses of extracts of pre-existing protected works or other subject-matter in content that is uploaded or made available by users. Where content generated or made available by a user involves the short and proportionate use of a quotation or of an extract of a protected work or other subject-matter for a legitimate purpose, such use should be protected by the exception provided for in this Directive. This exception should only be applied in certain special cases which do not conflict with normal exploitation of the work or other subject-matter concerned and do not unreasonably prejudice the legitimate interests of the rightholder. For the purpose of assessing such prejudice, it is essential that the degree of originality of the content concerned, the length/extent of the quotation or extract used, the professional nature of the content concerned or the degree of economic harm be examined, where relevant, while not precluding the legitimate enjoyment of the exception. This exception should be without prejudice to the moral rights of the authors of the work or other subject-matter. [↩︎]

  2. That link is about Section 230, a U.S. law shielding Internet platforms from liability for what their users upload; broadly speaking, the same principle presently applies in the E.U. as well [↩︎]
  3. That $65 figure is an estimate of the amount paid out by streaming services like Spotify; the total number per listener is lower, thanks to family plans and shared accounts [↩︎]
  4. The first quotation is from EU Directive 2001/29/EC, which is explicitly invoked in the new Copyright Directive, whence comes the second quotation. [↩︎]
  5. The company does, of course, collect data to be used in advertising elsewhere [↩︎]

The iPhone Franchise

Apple released a new flagship iPhone yesterday, the iPhone XS. This isn’t exactly ground-breaking news: it is exactly what the company has done for eleven years now (matching the 11-year run of non-iOS iPods, by the way1). To that end, what has always interested me more are new-to-the-world non-flagship models: the iPhone 5C in 2013, the iPhone 8 last year (or was it the iPhone X?), and the iPhone XR yesterday. Each, I think, highlights a critical juncture not only in how Apple thinks about the iPhone strategically, but also in how Apple thinks about itself.

The iPhone 5C

It’s hard to remember now, but the dominant Apple narrative in 2013, after a five-year iPhone run that saw the company’s stock price increase around 700%, was that the company was at risk of low-end disruption from Android and high-end saturation now that smartphone technology was “good enough”.

Apple's stock price during the iPhone era

This was, for me, rather fortuitous: Stratechery launched in the middle of the Apple-needs-a-cheap-iPhone era, providing plenty of fodder not only for articles defending Apple’s competitive position,2 but also multiple articles speculating on what the iPhone 5C would cost and how it would be positioned.

For the record, I guessed wrong, and I knew I was wrong two minutes, fifty-six seconds into the keynote.

It was at two minutes, fifty-six seconds that Tim Cook said there would be a video – a video! – about the iTunes Festival.

And it was awesome.

In case you didn’t watch the whole video (and you really should – it’s only a couple of minutes; due to a copyright claim I had to embed Apple’s full-length keynote), this clip of the ending captures why it matters:

Message: Apple is cool.

This was Apple, standing up and saying to all the pundits, to all the analysts, to everyone demanding a low price iPhone:

NO!

No, we will NOT compete on price, we will offer something our competitors can’t match.

No, we are NOT selling a phone, we are selling an experience.

No, we will NOT be cheap, but we will be cool.

No, you in the tech press and on Wall Street do NOT understand Apple, but we believe that normal people love us, love our products, and will continue to buy, start to buy, or aspire to buy.

Oh, and Samsung? Damn straight people line up for us. 20 million for a concert. “It’s like a product launch.”

Apple’s iTunes Festival video on the left, Samsung’s Galaxy SIII commercial mocking those standing in line on the right

This attitude and emphasis on higher-order differentiation — the experience of using an iPhone — dominated the entire keynote and the presentation of features, with particular emphasis throughout on the interplay between software and hardware.

In fact, that understated Apple’s position in the market: as I discussed last year, the iPhone 5C — which, in retrospect, was really just an iPhone 5 replacement in Apple’s trickle-down approach to serving more price-sensitive customers — was a bit of a failure: Apple customers only wanted the best iPhone, and those who couldn’t afford the current flagship preferred a former flagship, not one that was “unapologetically plastic”.

Thus the first lesson: Apple wouldn’t go down-market, nor did its customers want it to.

The iPhone X

Last year, meanwhile, was in many respects the opposite of the iPhone 5S and 5C launch, at least from a framing perspective. The iPhone 8 was the next in line after the iPhone 7 and all of the iPhones before it; it was the iPhone X that was presented as being out-of-band — “one more thing”, to use the company’s famous phrase. The iPhone X was the “future of the smartphone”, with a $999 price tag to match.

A year on, it is quite clear that the future is very much here. CEO Tim Cook bragged during yesterday’s keynote that the iPhone X was the best-selling phone in the world, something that was readily apparent in Apple’s financial results. iPhone revenue was again up-and-to-the-right, not because Apple was selling more iPhones — unit growth was flat — but because the iPhone X grew ASP so dramatically:

iPhone Revenue, Units, and ASP on a TTM basis

This was the second lesson: for Apple’s best customers, price was no object.

The iPhone XR

To be clear, the overall strategy and pricing of the iPhones XS and XR were planned out two to three years ago; that’s how long product cycles take when it comes to high-end smartphones. Perhaps that is why the lessons of the iPhone 5C seem so readily apparent in the iPhone XR in particular.

First off, while the XR does not have stainless steel edges like the iPhones X or XS, it is a far cry from plastic: the back is glass, like the high end phones, and the aluminum sides not only look premium but will be hidden when the phone is in a case, as most will be. What really matters is that the front looks the same, with that notch: this looks like a high-end iPhone, with all of the status that implies.

Second, the iPhone XR is big — bigger than the XS (and smaller than the XS Max, and yes, that is its real name). This matters less for 2018 and more for 2020 and beyond: presuming Apple follows its trickle-down strategy for serving more price-sensitive markets, in two years its lowest-end offering will not be a small phone that the vast majority of the market rejected years ago, but one that is far more attractive and useful for far more people, particularly customers for whom their phone is their only computing device.

Third, that 2020 iPhone XR is going to be remarkably well-specced. Indeed, probably the biggest surprise from these announcements (well, other than the name “XS Max”) is just how good of a smartphone the XR is.

  • The XR has Apple’s industry-leading A12 chip, which is so far ahead of the industry that it will still be competitive with the best Android smartphones in two years, and massively more powerful than lower-end phones.
  • The XR has the same wide-angle camera as the XS, and the same iteration of Face ID. Both, again, are industry-leading and will be more than competitive two years from now.
  • The biggest differences from the XS are the aforementioned case materials, an LCD screen, and the lack of 3D Touch. Again, though, aluminum is still a premium material, Apple’s LCD screens are — and yes there is a theme here — the best in the industry, and 3D Touch is a feature that is so fiddly and undiscoverable that one could make the case XR owners are actually better off.

There really is no other way to put it: the XR is a fantastic phone, one that would be more than sufficient to maintain Apple’s position atop the industry were it the flagship. And yet, in the context of Apple’s strategy, it is best thought of as being quite literally ahead of its time.

The iPhone XS

There is, of course, the question of cannibalism: if the XR is so great, why spend $250 more on an XS, or $350 more for the giant XS Max?

This is where the iPhone X lesson matters. Last year’s iPhone 8 was a great phone too, with the same A11 processor as the iPhone X, a high quality LCD screen like the iPhone XR, and a premium aluminum-and-glass case (and 3D Touch!). It also had Touch ID and a more familiar interface, both arguably advantages in their own right, and the Plus size that so many people preferred.

It didn’t matter: Apple’s best customers, not just those who buy an iPhone every year but also those whose only two alternatives are “my current once-flagship iPhone” or “the new flagship iPhone”, are motivated first-and-foremost by having the best; price is a secondary concern. That is why the iPhone X was the best-selling smartphone, and the iPhone 8 — which launched two months before the iPhone X — a footnote.

To be sure, the iPhone X had the advantage of being something truly new, not just the hardware but also the accompanying software. It was the sort of phone an Apple fan might buy a year sooner than they had planned, or that someone more price sensitive might choose over a cheaper option. The XS will face headwinds in both regards: it is faster than the iPhone X, has a better camera, comes in gold — it’s an S-model, in other words — but it’s hard to see it pulling forward upgrades; it’s more likely natural XS buyers were pulled forward by the X. And, as noted above, the XR is a much more attractive alternative to the X than the 8 was to the X; most Apple fans may want the best, but some just want a deal, and the XR is a great one.

Apple should be fine though: overall unit sales may fall slightly, but the $1,099 XS Max will push the average selling price even higher. Note, too, that the XR starts at $749; the longstanding $650 iPhone price point was bumped up to $699 last year, and is now a distant memory.3

Apple's Fall 2018 iPhone Lineup

To put it another way, to the extent the XR cannibalizes the XS, it does so at an average selling price equal to Apple’s top-of-the-line iPhone from two years ago; the iPhone 8, meanwhile, is $50 higher than the former $550 price point as well.
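
The mix-shift arithmetic is simple enough to sketch. The list prices below are real; the unit-share mixes are made up purely for illustration. The point is only that a lineup with a $749 floor and a $1,099 ceiling produces a higher blended average selling price than last year's $699-to-$999 range, even if the mix skews toward the cheapest model.

```python
# Real fall-lineup list prices; the unit-share mixes are hypothetical,
# chosen only to illustrate the direction of the ASP math.
prices_2017 = {"iPhone 8": 699, "iPhone 8 Plus": 799, "iPhone X": 999}
mix_2017 = {"iPhone 8": 0.35, "iPhone 8 Plus": 0.25, "iPhone X": 0.40}

prices_2018 = {"iPhone XR": 749, "iPhone XS": 999, "iPhone XS Max": 1099}
mix_2018 = {"iPhone XR": 0.40, "iPhone XS": 0.30, "iPhone XS Max": 0.30}

def asp(prices: dict, mix: dict) -> float:
    """Blended average selling price for a given price list and unit mix."""
    return sum(prices[model] * mix[model] for model in prices)

print(f"2017 lineup ASP: ${asp(prices_2017, mix_2017):.0f}")
print(f"2018 lineup ASP: ${asp(prices_2018, mix_2018):.0f}")
```

Even with the hypothetical 2018 mix weighted 40% toward the cheapest model, the blended ASP comes out higher than under the 2017 lineup, which is the whole cannibalization point.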

Mission Impossible iPhone

This is what I meant when I said Apple’s second iPhone models capture how the company has changed not only its strategy but also how it seems to view itself:

  • 2013 was a time of uncertainty, with a sliding stock price and a steadily building clamor heralding Apple doom via low-end disruption; the company, though, found its voice with the 5C and declared its intention to be unapologetically high-end; the 5C’s failure, such as it was, only cemented the rightness of that decision.
  • In 2017 the company, for the first time in ten years, started to truly test the price elasticity of demand for the iPhone: given its commitment to being the best, just how much could Apple charge for an iPhone X?
  • This year, then, comes the fully-formed iPhone juggernaut: an even more expensive phone, with arguably one of the weaker feature-driven reasons-to-buy to date, but for the fact it is Apple’s newest, and best, iPhone. And below that, a cheaper iPhone XR that is nearly as good, but neatly segmented primarily by virtue of not being the best, yet close enough to be a force in the market for years to come.

The strategy is, dare I say, bordering on over-confidence. Apple is raising prices on its best product even as that product’s relative differentiation from the company’s next best model is the smallest it has ever been.

Here, though, I thought the keynote’s “Mission: Impossible”-themed opening really hit the mark: the reason why franchises rule Hollywood is their dependability. Sure, they cost a fortune to make and to market, but they are known quantities that sell all over the world — $735 million-to-date for the latest Tom Cruise thriller, to take a pertinent example.

That is the iPhone: it is a franchise, the closest thing to a hardware annuity stream tech has ever seen. Some people buy an iPhone every year; some are on a two-year cycle; others wait for screens to crack, batteries to die, or apps to slow. Nearly all, though, buy another iPhone, making the purpose of yesterday’s keynote less an exercise in selling a device and more a matter of informing self-selected segments which device they will ultimately buy, and for what price.

I wrote a follow-up to this article in this Daily Update.

  1. Specifically, the original iPod was released in October, 2001, and the 7th generation iPod Nano in September, 2012; the last iPod Touch was released in July, 2015 [↩︎]
  2. I.e. Two Bears and What Clayton Christensen Got Wrong [↩︎]
  3. Hilariously, Senior Vice President of Worldwide Marketing Phil Schiller said in reference to the $749 price point, “That’s less than the iPhone 8 Plus. I’m really proud of the work the team has done on that”; Apple shareholders are surely proud that the price is $50 higher than the iPhone 8! [↩︎]

Uber’s Bundles

With Uber, nothing is easy.

Start with profitability, or the lack thereof: two weeks ago the company reported its quarterly “earnings”,1 and once again the losses were massive: $891 million on $2.8 billion in revenue. Clearly the business is failing, no?

Well, like I said, it’s not that easy: unlike a company like MoviePass, Uber has positive unit economics — that is, the company makes money on each ride. This is clear intuitively: Uber keeps somewhere between 20% and 30% of each fare,2 from which it pays insurance costs, credit card fees, etc., and keeps the rest.3 According to last quarter’s numbers “the rest” totaled $1.5 billion, for a gross profit margin of 55% (13% of Uber’s total bookings). Moreover, that margin is improving — it was 47% a year ago — mostly because Uber is taking a higher percentage of fares even as it reduces its spending on promotions and driver incentives (Cost of Revenue, meanwhile, appears to correspond very closely to gross bookings).
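
The arithmetic behind those figures can be sketched in a few lines. The revenue and gross profit numbers are the quarter's reported figures; total bookings are inferred from the 13% figure, so treat this as a rough consistency check rather than audited math.

```python
# Reported figures (in dollars) for the quarter, per Uber's shared numbers.
revenue = 2.8e9       # Uber's cut of fares (net revenue)
gross_profit = 1.5e9  # revenue minus insurance, credit card fees, etc.

# Inferred: gross profit was stated to be 13% of total bookings.
bookings = gross_profit / 0.13

take_rate = revenue / bookings         # should land in the 20%-30% range
gross_margin = gross_profit / revenue  # share of revenue Uber keeps

print(f"implied bookings: ${bookings / 1e9:.1f}B")
print(f"implied take rate: {take_rate:.0%}")
print(f"gross margin: {gross_margin:.0%}")
```

The implied take rate of roughly 24% sits comfortably inside the 20% to 30% range, and the computed margin of roughly 54% matches the cited 55% to within the rounding of the reported figures.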

The problem for Uber is threefold: first, the company continues to spend massive amounts of money on “below-the-line” costs: $2.2 billion for Operations and Support, Sales and Marketing,4 Research and Development, General and Administrative, and Depreciation and Amortization. Second, it seems likely that a good portion of the company’s improving margin stems from exiting more difficult markets like Russia and Southeast Asia, as opposed to improvements in its core markets in the United States, Europe, and Oceania. And most concerning of all, Lyft seems to be outgrowing Uber.

Uber’s Lyft Problem

Lyft is a problem for Uber with riders, investors, and drivers.

From a rider perspective, Lyft has, unsurprisingly, benefited from the self-inflicted disaster that was 2017 (although to be fair, 2017 was the year of revelations of problems that had been in place for years). The consumer benefit of services like Uber and Lyft has always been clear, and Uber’s aggressive expansion paid off when the service became the default choice for a large portion of the market, something that is critical for a commodity offering with two-sided network effects. The problem is that Uber gave riders plenty of reasons to question their default choice with not just a sexual harassment scandal, and not just a lawsuit alleging the theft of intellectual property from Google, and not just allegations of brazenly circumventing local regulators, but all three (and honestly, this understates things).

This was particularly problematic because that two-sided network effect wasn’t that strong: sure, Uber was more likely to monopolize driver time given its larger user base, but as long as drivers are independent contractors Uber can’t do anything to prevent them from multihoming, that is, being available on both Uber’s and Lyft’s networks at the same time. Lyft was ready-and-able to absorb unhappy Uber riders, because it was effectively using Uber’s drivers to accommodate them.

The timing could not have been worse: it was only a few months prior that Lyft appeared to be for sale and unable to find a buyer; it seemed that former-CEO Travis Kalanick was going to win one of his biggest gambles, turning down an offer to acquire Lyft in 2014 in exchange for 18% of Uber.

It proved to be Kalanick’s biggest mistake, at least from a business perspective: within weeks of the Uber scandal explosion Lyft raised $600 million, and a month later formed a partnership with Waymo, Google’s self-driving car company. Suddenly the best way to invest in the most promising self-driving technology was Lyft; unsurprisingly Lyft has since raised an additional $2.3 billion, including an investment from Google Capital.

Uber’s Competitive Context

The reason this context matters is that a proper analysis of Uber’s business is fundamentally different today than it was two years ago — or four years ago, when I wrote Why Uber Fights. That is when I made the argument that even though Uber’s two-sided network effects were relatively weak thanks to the lack of driver lock-in, the fact that ride-sharing was a commodity market meant its head start and brand would lead to slow-but-steady growth in marketshare, eventually starving Lyft due to an inability to raise funds based on increasingly inferior financial results.

I stand by that analysis: it is exactly what happened, and Uber came very close to knocking Lyft out. At the same time, it is also no longer applicable, because Lyft no longer has any problem raising money, while Uber appears to be having a hard time holding onto its market share (as an indirect indicator of Uber’s waning power with consumers, note Uber’s recent inability to defeat a cap on ride-sharing in New York, after doing just that three years ago). To that end, the prospect of Lyft being present in the market for the foreseeable future means Uber needs a new strategy beyond simply squeezing Lyft dry.

Welcome to bundles, Uber-style.

Uber’s Consumer Bundle: Transportation-as-a-Service

Uber CEO Dara Khosrowshahi laid out a new vision for the Uber app in an interview with Kara Swisher earlier this year at the Code Conference:

DK: We are thinking about alternative forms of transport. If you look at Jump, the dockless bicycle startup Uber acquired earlier this year, the average length of a trip at Jump is 2.6 miles. That is, 30 to 40 percent of our trips in San Francisco are 2.6 miles or less. Jump is much, much cheaper than taking an UberX. To some extent it’s like, “Hey, let’s cannibalize ourselves.” Let’s create a cheaper form of transportation from A to B, and for you to come to Uber, and Uber not just being about cars, and Uber not being about what the best solution for us is, but really being about the best solution for here.

KS: So bikes, scooters?

DK: Bikes, perhaps scooters. I wanna get the bus network on. I wanna get the BART, or the Metro, etc., onto Uber. So, any way for you to get from point A to B.

KS: Wait, you wanna start your own BART? No.

DK: No, no, no. We’re not gonna go vertical. Just like Amazon sells third-party goods, we are going to also offer third-party transportation services. So, we wanna kinda be the Amazon for transportation, and we want to offer the BART as an alternative. There’s a company called Masabi that is connecting Metro, etc., into a payment system. So we want you to be able to say, “Should I take the BART? Should I take a bike? Should I take an Uber?” All of it to be real-time information, all of it to be optimized for you, and all of it to be done with the push of a button.

KS: So, any transportation?

DK: Any transportation, totally frictionless, real time.

In case you had any question about how serious Khosrowshahi is about the concept, he told the Financial Times in an interview yesterday:

During rush hour, it is very inefficient for a one-tonne hulk of metal to take one person 10 blocks…We’re able to shape behaviour in a way that’s a win for the user. It’s a win for the city. Short-term financially, maybe it’s not a win for us, but strategically long term we think that is exactly where we want to head…

We are willing to trade off short-term per-unit economics for long-term higher engagement…I’ve found in my career that engagement over the long term wins wars and sometimes it’s worth it to lose battles in order to win wars.

This is very much a bundle, and like any bundle, what makes the economics work in the long run is earning a larger total spend from consumers, even if they spend less on any particular item. To that end, as Khosrowshahi notes, the real enemy is the car in the garage; the more Uber can replace that, the greater its opportunity.

Uber's consumer bundle
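
The bundle logic is easy to illustrate with a toy model. All of the numbers below are made up: the point is that even if cheaper modes cannibalize some UberX rides, total spend through Uber can rise, because the bundle captures trips that previously went to the personal car or elsewhere.

```python
# Hypothetical rider: (trips per month, average fare) for each mode.
unbundled = {"uberx": (8, 15.00)}  # Uber as a ride-hailing-only service

bundled = {
    "uberx": (6, 15.00),    # some UberX trips cannibalized by cheaper modes...
    "bike": (10, 3.00),     # ...but short trips shift from the personal car
    "transit": (12, 2.50),  # ...and transit tickets now flow through Uber
}

def monthly_spend(modes: dict) -> float:
    """Total monthly spend through Uber across all transportation modes."""
    return sum(trips * fare for trips, fare in modes.values())

print(monthly_spend(unbundled))  # 120.0
print(monthly_spend(bundled))    # 150.0
```

Per-trip revenue falls, but total engagement and spend rise, which is exactly the trade Khosrowshahi says he is willing to make.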

Moreover, the more of an end user’s transportation needs Uber can handle, the stickier Uber becomes for consumers, a stickiness reinforced by the complexity inherent in building such a service. Granted, Lyft is promising to build the same thing, but Uber is a bit ahead and still has the bigger war chest, which may prove more helpful in a land grab than in the current war of attrition. Meanwhile, Uber still has a significant geographic advantage over Lyft, which only just started expanding internationally, making it a better option for travelers.

Uber’s Driver Bundle: Uber Eats

Uber Eats, meanwhile, has the potential to be a very attractive business in its own right: Khosrowshahi said at Code Conference that the business has “a $6 billion bookings run rate, growing over 200 percent.” Uber takes 30% of that ($1.8 billion), as well as a $5 delivery fee from customers, out of which it pays drivers a pickup fee, drop-off fee, and per-mile rate (of which it keeps 25%); according to The Information, the service isn’t making money yet, but it is much more profitable than Uber’s ride-sharing business was at a similar scale.

Leaving aside the drivers for a moment, this is a classic aggregation play, where owning consumer demand gives Uber the ability to attract suppliers, increasing consumer demand in a virtuous cycle. Jason Droege, the Uber Vice President and Head of UberEverything, told Eater in an interview this past summer:

I think that we’re all here to service the consumer, right? And the eater. And I think eaters today want convenience, they want value, they want flexibility, and they want choice. And delivery offers all of those things. And restaurants choose to participate in delivery. And so if they don’t believe it’s valuable as a channel to connect to their consumers, or maybe new consumers, or reach new people with their brand, then that’s okay. We’re here to provide a conduit between the two. Not to tell them how to run their business.

It certainly is an open question as to whether services like Uber Eats help or hurt established restaurants; this New Yorker article recounts a number of anecdotes about restaurateurs who are a bit fuzzy on exactly how much Uber Eats is costing them, much like Uber drivers who forget to account for the wear-and-tear on their cars. At the same time, Uber is creating entirely new opportunities for restaurants focused on delivery, just as it did for drivers who only wanted to work sometimes, or couldn’t find any other job at all, as well as companies to service them like HyreCar.

Moreover, Uber Eats has a leg-up in the space because of Uber itself: the former can acquire customers from the latter (both because of owned-and-operated advertising and because Uber already has payment details, reducing drop-off), and all of those huge marketing and G&A expenses from building out teams in every city Uber operates are easily leveraged for Uber Eats. This of course applies to driver acquisition costs as well.

The biggest payoff, though, comes from effectively bundling opportunities for drivers. The problem for any standalone restaurant delivery app is that the vast majority of orders come at lunch and dinner, but the driver may wish to work at other times of the day as well. With Uber that is easy: just pick up riders (Uber drivers can drive for just Uber, just Uber Eats, or both). In other words, Uber has more and more ways to monopolize a driver’s time, to the driver’s benefit personally and Uber’s benefit competitively.

Uber's driver bundle

To be sure, a GrubHub driver, to take one Uber Eats competitor at random, could also drive for Lyft (or Uber, for that matter), but that is where rewarding drivers for a certain number of rides in a given time period is particularly effective: because drivers can complete their “Quests” with Uber ride-sharing trips or Uber Eats trips, it often makes more sense to simply stick with Uber.
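
A toy model shows why Quests tilt a driver toward staying on Uber. The threshold and bonus amounts below are invented, but the mechanic is the one described above: only Uber trips, of either kind, count.

```python
# Hypothetical Quest: a weekly bonus for hitting a trip threshold, where
# both ride-sharing and Uber Eats trips count, but other apps' trips do not.
QUEST_THRESHOLD = 60  # trips per week needed for the bonus (made up)
QUEST_BONUS = 100.00  # bonus amount (made up)

def weekly_bonus(uber_rides: int, uber_eats: int, other_apps: int) -> float:
    """Bonus earned; trips on competing apps never count toward the Quest."""
    del other_apps  # explicitly ignored: only Uber trips qualify
    return QUEST_BONUS if uber_rides + uber_eats >= QUEST_THRESHOLD else 0.0

# A driver who fills slow ride-sharing hours with Uber Eats hits the Quest;
# one who fills them with a standalone delivery app does not.
print(weekly_bonus(uber_rides=35, uber_eats=30, other_apps=0))   # 100.0
print(weekly_bonus(uber_rides=35, uber_eats=0, other_apps=30))   # 0.0
```

Identical hours worked, different payouts: the bundled driver earns the bonus simply by keeping all trips inside Uber's network.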

More broadly, the challenge Uber faces with drivers derives from the same fungibility that makes the service possible in the first place. To that end, the best way to approach the driver market is not to compete against this reality but to embrace it, and having multiple services that utilize the same driver pool accomplishes exactly that.

Self-Driving Cars: A Bundle as the Way Forward

Self-driving cars, meanwhile, remain Uber’s white whale. The company received a $500 million investment from Toyota yesterday, and will work to incorporate its technology into Toyota Sienna minivans.

This is definitely not the divestiture of the unit that The Information says has been mooted; the unit has apparently cost Uber $2 billion over the last two years. Of course, as I noted above, that cost pales in comparison to the strategic impact of losing Google as a potential partner to Lyft.

Still, it is never too late to consider doing the right thing: I continue to believe that Uber’s investment in self-driving cars was a strategic mistake. Yes, its biggest cost is drivers, and a theoretical Google ride-sharing service could, were it at scale, completely undercut Uber, but that is the shallowest possible way to analyze how this market might have played out.

Keep in mind the point I just made about drivers: sure it sounds attractive to convert your most expensive supply input, which must be paid on a marginal basis and that you don’t control, into a fixed cost that you have exclusive rights to. That, though, means massively more capital expenditures for a business that is currently losing around a billion dollars a quarter. Worse, it means competing with Google in an area — machine learning — where the search giant has a massive advantage.

Moreover, in the long run it seems unlikely that Google would want to build up a vertical Uber competitor: it remains far more logical, both financially and in terms of Google’s historical margin profile, to license out their technology. To be sure, if Waymo’s technology were superior, they would have wholesale transfer pricing power, which Tren Griffin describes as:

The bargaining power of company A that supplies a unique product XYZ to Company B which may enable company A to take the profits of company B by increasing the wholesale price of XYZ

Here’s the thing though: Uber is better equipped than anyone else to deal with Waymo’s potential ability to extract margin for superior self-driving technology. After all, the company is already paying for driving technology — the technology just happens to be a human!

Of course that doesn’t mean Uber should settle for paying Waymo instead of drivers: the ride-sharing service remains the best possible way to go-to-market for everyone working on self-driving technology. To that end Uber should be willing to partner with anyone and everyone — and to share its technology with whoever wants it. In the long run Uber has market power thanks to its network, and it will best exploit that power to the extent it can engender competition amongst suppliers in the self-driving car space.

Uber's self-driving bundle

Moreover, it seems certain that whenever self-driving cars come along (and even Waymo is having trouble), they will not be suitable for all of the environments Uber operates in. That makes Uber uniquely suited to bundle self-driving car service with traditional Uber car service, as well as all of the other transportation services it plans to offer to consumers. This “bundle” will allow self-driving technology to come to market gradually, when and where it makes sense, while still giving riders the confidence they can get from anywhere to anywhere.

To be fair, Khosrowshahi has signaled the desire to partner with multiple self-driving partners, including Google. I suspect, though, that will be hard to accomplish as long as Uber is pursuing its own exclusive technology. To that end Khosrowshahi should cut the cord with Uber’s self-driving program sooner rather than later, or perhaps even open-source it; the money savings are in fact the second most important potential benefit.


There was a certain satisfying simplicity to the brutality of Uber’s original strategy under Kalanick: be as aggressive as possible to establish an early lead, and then leverage Uber’s seemingly limitless ability to raise money to spend its competitors into submission. In the end, though, that same brutality did Kalanick in, and his strategy along with it.

That left Khosrowshahi with a much more complicated situation: not only did he need to fix Uber internally, he needed to create an entirely new strategy to win in a market that was fundamentally altered because of Uber’s crisis. The bundling of services for users, of opportunities for drivers, and ideally of technologies for self-driving makes sense as an alternative.

This strategy, though, befitting the nature of the situation, is considerably more complicated, and the commensurate chances of success — and ultimately, of profitability — a fair bit lower. In other words, Uber’s boardroom drama may be over, but the company remains perhaps the most compelling in tech.

I wrote a follow-up to this article in this Daily Update.

  1. Uber voluntarily shares high level numbers with media outlets (the Wall Street Journal has collected them here), but the numbers are selective, unaudited, and come with no financial documents [↩︎]
  2. The company now deducts the amount spent on driver incentives and promotions, in addition to driver earnings on a percentage basis, from its overall bookings; this is a very welcome improvement to the company’s reporting. Previously it was unclear whether or not this spending was accounted for correctly [↩︎]
  3. You can see an old breakdown from a 2015 leaked document here [↩︎]
  4. As noted in an earlier footnote, Uber does appear to be deducting promotional expenses that apply to specific rides from booking, so these are non-unit marketing costs [↩︎]