The IT Era and the Internet Revolution

I like to say that I write about media generally and journalism specifically because the industry is a canary in the coal mine when it comes to the impact of the Internet: text shifted from newsprint to the web seamlessly, completely upending the industry’s business model along the way.

Of course I have a vested interest in this shift: for better or worse I, by virtue of making my living on Stratechery, am a member of the media, and it would be disingenuous to pretend that my opinions aren’t shaped by the fact I have a personal stake in the matter. Today, though, and somewhat reluctantly, I am not just acknowledging my interests but explicitly putting Stratechery forward as an example of just how misguided the conventional wisdom is about the Internet’s long-term impact on society, in ways that extend far beyond newspapers (but per my point, let’s start there).

What Killed Newspapers

On Monday Jack Shafer, the current dean of media critics, asked What If the Newspaper Industry Made a Colossal Mistake?:

What if, in the mad dash two decades ago to repurpose and extend editorial content onto the Web, editors and publishers made a colossal business blunder that wasted hundreds of millions of dollars? What if the industry should have stuck with its strengths — the print editions where the vast majority of their readers still reside and where the overwhelming majority of advertising and subscription revenue come from — instead of chasing the online chimera?

That’s the contrarian conclusion I drew from a new paper written by H. Iris Chyi and Ori Tenenboim of the University of Texas and published this summer in Journalism Practice. Buttressed by copious mounds of data and a rigorous, sustained argument, the paper cracks open the watchworks of the newspaper industry to make a convincing case that the tech-heavy Web strategy pursued by most papers has been a bust. The key to the newspaper future might reside in its past and not in smartphones, iPads and VR. “Digital first,” the authors claim, has been a losing proposition for most newspapers.

Shafer’s theory is that the online editions of newspapers are inferior to print editions; ergo, people read them less. To buttress his point Shafer cites statistics showing that most local residents don’t read their local newspaper online.

The flaw in this reasoning should be obvious to any long-time Stratechery reader: people in the pre-Internet era didn’t read local newspapers because holding an unwieldy ink-staining piece of flimsy newsprint was particularly enjoyable; people read local newspapers because it was the only option. And, by extension, people don’t avoid local newspapers’ websites because the reading experience sucks — although that is true — they don’t even think to visit them because there are far better ways to occupy their finite attention.

Moreover, while some of those alternatives are distractions like games or social networking, any given newspaper’s real competitors are other newspapers and online-only news sites. When I was growing up in Wisconsin I could get the Wisconsin State Journal in my mailbox or I could go to a bookstore to buy the New York Times; it didn’t matter if the latter was “better”, it was too inconvenient for most. Now, though, the only inconvenience is tapping a different app. Of course most readers don’t even bother to do that: they just click on whatever is in their Facebook feed, interspersed with advertisements that are both more targeted and more measurable than newspaper advertisements ever were.

The truth is there is no one to blame for the demise of newspapers — not Google or Facebook, and not 1990s era publishers. The entire linchpin of the newspaper business model was controlling distribution, and when that linchpin was obliterated by the Internet it was inevitable that the entire apparatus would collapse.

The IT Era

Make no mistake, this sucks for journalists in particular; newsroom employment has plummeted over the last decade:


Still, just for a moment set aside those disappearing jobs and look at what happened from roughly 1985 to 2007: at a time when newspaper revenue continued to grow, jobs didn’t grow at all; naturally, newspaper companies were enjoying record profits.

What had happened was information technology: copy could be written on computers, passed on to editors via local area networks, then laid out digitally. It was a massive efficiency improvement over typewriters, halftone negatives, and literal cutting-and-pasting:


Newspapers obviously weren’t the only industry to benefit from information technology: the rise of ERP systems, databases, and personal computers provided massive gains in productivity for nearly all businesses (although it ended up taking nearly a decade for the improvements to show up). What this first wave of information technology did not do, though, was fundamentally change how those businesses worked, which meant nine of the ten largest companies in 1980 were still amongst the 21 largest companies in 1995.1 The biggest change was that more and more of those productivity gains started accruing to company shareholders, not the workers — and newspapers were no exception.

This is why I believe it is critical to draw a clear line between the IT era and the Internet era: the IT era saw the formation of many still formidable technology companies, but their success was based on entrenching deep-pocketed incumbent enterprises who could use technology to increase the productivity of their workers. What makes the Internet era a much bigger deal is that it challenges the very foundations of those enterprises.

The Internet Revolution

I already explained what happened to newspapers when free distribution erased local newspapers’ moats in the blink of an eye; as I suggested at the beginning, though, this was not an isolated incident but a sign of what was to come. Back in July I laid out how the acquisition of Dollar Shave Club suggested the same process was happening to consumer packaged goods companies: leveraging size to secure shelf space supported by TV advertising was no longer the only way to compete. A few weeks before that I pointed out that television was intertwined with its advertisers; the Internet was eroding the business of linear TV, CPG companies, retailers, and even automotive companies simultaneously, leaving the entire post-World War II economic order dependent on sports to hold everything together.

The ways these changes arrive are strikingly similar; I call it the FANG Playbook after Facebook-Amazon-Netflix-Google:

None of the FANG companies created what most considered the most valuable pieces of their respective ecosystems; they simply made those pieces easier for consumers to access, so consumers increasingly discovered said pieces via the FANG home pages. And, given that the Internet made distribution free, that meant the FANG companies were well on their way to having far more power and monetization potential than anyone realized…

By owning the consumer entry point — the primary choke point — in each of their respective industries the FANG companies have been able to modularize and commoditize their suppliers, whether those be publishers, merchants and suppliers, content producers, or basically anyone who needs to be found on the Internet.

This is the critical difference between the IT era and the Internet revolution: the first made existing companies more efficient; the second, primarily by making distribution free, destroyed those same companies’ business models.

The Internet Upside

What is easy to forget in this tale of woe is that all of these upheavals have massively benefited consumers: that I can read any newspaper in the world is a good thing. That we have access to many more products at much lower price points is amazing. That one can search the entire corpus of human knowledge from just about anywhere in the world, or connect to billions of people, or more prosaically, watch what one wants to watch when one wants to watch it, is pretty great.

Beyond that, what are even more difficult to see are the new possibilities that arise from said upheaval, which is where Stratechery comes in. By no means is this site a replacement for newspapers: I’m pretty explicit about the fact I don’t do original reporting. And yet, I certainly wouldn’t classify the time spent reading this site in the same category as the diversions of gaming and social networking I mentioned earlier. Rather, my goal is to deliver something completely new: deep, ongoing analysis into the business and strategy of technology. It is a viewpoint that wasn’t worth cutting-and-pasting into a broadsheet meant to serve a geographically limited market, but when the addressable market is the entire world the economics suddenly work very well indeed.

Oh, and did you catch that? Saying “the addressable market is the whole world” is the exact same thing as saying that newspapers suddenly had to compete with every other news source in the world; it’s not that the Internet is inherently “good” or “bad”, rather it is a new reality, and the fact that industries predicated on old assumptions must now fail should not obscure the reality that entirely new industries built on new assumptions — including huge new opportunities for small-scale entrepreneurship by individuals or small teams — are now possible. See YouTube or Etsy or, yes, journalism, and this is only the beginning.

The Importance of Secondary Effects

I’d certainly like to think the benefits of this change run deeper than simply ensuring I earn a decent living; it is deeply meaningful to me when I have readers say my writing helped them land a job, or that they are applying my frameworks to a particularly difficult decision they are facing, or even that they feel like they are getting a business school education for practically free. To be sure, that last point overstates things, at least from a business school’s perspective: the breadth of material covered, the degree on your resume, and of course the friends you make are big advantages. But it used to be that the choice was a binary one: spend six figures on business school or don’t; if one can pick up a useful bit of business thinking for $10/month while still being a productive worker then isn’t that a win for society?

These secondary effects will be the key to building a prosperous society amidst the ruin of the Internet’s creative destruction: what is so exciting about Uber is not the fact it is wiping out the taxi industry, but rather that transportation as a service has the potential to radically transform our cities. What happens when parking lots go away, commutes in self-driving cars lend themselves to increased productivity, or going out is as easy as tapping an app? What kind of new jobs and services might arise?

As another example, I wrote last month about the dramatic shift in enterprise software that is being enabled by the cloud. The simple ability to pay as you go has already had a big impact on startups and venture capital, but the initial impact on established companies has been to operationalize costs and increase scalability for established processes; the true transformation — building and selling software in completely new ways — is only getting started. Again, there is a difference between making an existing process more efficient and enabling a completely new approach. The gains from the former are easy to measure; the transformation of the latter is only apparent in retrospect, in part because the old way takes time to die and be repurposed.

This two-stage process is going to be the most traumatic when it comes to the already-started-and-accelerating introduction of automation and artificial intelligence. The downsides are obvious to everyone: if computers can do the job of a human, then the human no longer has a job. In the long run, though, what might that human do instead? To presume that displaced workers will only ever sit around collecting a universal basic income2 is to, in my mind, sell short the human drive and ingenuity that has already carried us far from our caveman ancestors.

I know that my perspective is a privileged one: I am a clear beneficiary of this new world order. Moreover, I know that very good and important things will be lost, at least for a time, in these transitions. I strongly agree with Shafer, for example, that newspapers “still publish a disproportionate amount of the accountability journalism available” and that “we stand to lose one of the vital bulwarks that protect and sustain our culture.”

Fixing that and the many other problems wrought by the Internet, though, requires looking forwards, not backwards. The most fundamental assumptions underlying businesses — critical institutions in any society — have changed irrevocably, and to pretend they haven’t is a colossal mistake.

  1. Gulf Oil, which was the 7th largest company in 1980, was the exception
  2. Which I support

Chat and the Consumerization of IT

It was in 2001, the same year the iPod was introduced, that Douglas Neal and John Taylor coined the phrase “Consumerization of IT”; they set down a specific definition in this 2004 position paper:

The defining aspect of consumerization is the concept of ‘dual use’. Increasingly, hardware devices, network infrastructure and value-added services will be used by both businesses and consumers. This will require IT organizations to rethink their investments and strategies.

Neal and Taylor’s argument was rooted in math: there were more consumers than there were IT users, which meant that over the long run the rate of improvement in consumer technologies would exceed that of enterprise-focused ones; IT departments needed to grapple with increased demand from their users to use the same technology they used at home. The pinnacle of the trend seemed to be the iPhone: designed unabashedly as a consumer device, Apple’s product was so superior to what was on the market that employees clamored to use it for company business; surprisingly, Apple helped out, adding significant enterprise-focused features to the iPhone.

Meanwhile, a year earlier Google had launched Google Apps for Your Domain, bringing popular Google consumer applications like Gmail, Google Talk, and Google Calendar to enterprises. The company basically repeated Neal and Taylor’s thesis in the blog post announcement:

A hosted service like Google Apps for Your Domain eliminates many of the expenses and hassles of maintaining a communications infrastructure, which is welcome relief for many small business owners and IT staffers. Organizations can let Google be the experts in delivering high quality email, messaging, and other web-based services while they focus on the needs of their users and their day-to-day business.

Former Microsoft CEO Steve Ballmer originally dismissed Google Apps, but eventually Microsoft viewed the cloud service as a mortal threat; four years later Ballmer had adopted Consumerization of IT as one of his main talking points, writing in Microsoft’s 2012 shareholder letter:

Fantastic devices and services for end users will drive our enterprise businesses forward given the increasing influence employees have in the technology they use at work — a trend commonly referred to as the Consumerization of IT. It’s one more reason Microsoft is committed to delivering devices and services that people love and businesses need.

In Ballmer’s interpretation, Consumerization of IT was about delivering enterprise products that were IT-friendly with a nice consumer-like interface on top, and while outside observers may have considered the latter a particularly significant challenge for Microsoft, there’s no question the company was well-placed when it came to the former; after all, IT managers were the ultimate decision-makers.

Meanwhile, an enterprise startup called Atlassian, founded in 2002, was offering a new definition of Consumerization of IT. Their main product, the developer-focused JIRA project management software, was not a consumer product ported to the enterprise, like Google Apps, nor was it a traditional IT-product with a consumer-like interface peddled to IT managers to get users off their backs. What was so innovative about JIRA was how it was sold: to teams, not companies. Thanks to the distributive power of the web the company never bothered to build a sales force to wine-and-dine CIOs; rather the product could be downloaded from a website (and later signed up for as a cloud service) and paid for with a credit card; the primary form of marketing was word-of-mouth.

So which definition of the Consumerization of IT is most meaningful? Is it consumer products ported to IT, consumer UI on traditional enterprise products, or a new business model that transforms the relationship between buyers and sellers? Certainly all three factors are important to the rise of software as a service, but the upcoming chat wars will provide an interesting test as to which is the most important.

Approach 1: Workplace by Facebook

Earlier this week Facebook formally introduced Workplace by Facebook. From the company’s blog:

At Facebook, we’ve had an internal version of our app to help run our company for many years. We’ve seen that just as Facebook keeps you connected to friends and family, it can do the same with coworkers…We’ve brought the best of Facebook to the workplace — whether it’s basic infrastructure such as News Feed, or the ability to create and share in Groups or via chat, or useful features such as Live, Reactions, Search and Trending posts. This means you can chat with a colleague across the world in real time, host a virtual brainstorm in a Group, or follow along with your CEO’s presentation on Facebook Live. We’ve also built unique, Workplace-only features that companies can benefit from such as a dashboard with analytics and integrations with single sign-on, in addition to identity providers that allow companies to more easily integrate Workplace with their existing IT systems.

Workplace is a near perfect application of Neal and Taylor’s original thesis: Facebook has already done much of the R&D and all of the backend buildout for the product by virtue of building the Facebook consumer app, meaning it can offer a very robust service at very competitive prices.

Moreover, it’s a service that potential users already know how to use; training and adoption remain significant barriers for all enterprise products, including cloud-based ones, which means Workplace will have a head start over competitors.

Still, there are challenges that arise from Workplace’s consumer roots: for one, while Facebook is promising that Workplace data will be both secure and independent from Facebook itself, the company will still have a reputational problem to overcome. Relatedly, the business model is a complete departure: Facebook has never charged for software before, and as Google learned a decade ago, licensing revenue comes with new obligations around service and responsiveness that require the development of completely new organizational skills.

There’s one more hangup: it’s not as easy to get started as Facebook suggests. You need to apply, at which point you will be contacted by Facebook’s sales team, who “will work with you to understand your needs and help launch Workplace across your organisation.”1 This is understandable given that Workplace’s biggest gains come from connecting the entire organization, but it’s a lot closer to a traditional enterprise selling motion.

Approach 2: Skype Teams by Microsoft

Microsoft already owns Yammer, which will have to give up its mantle as “Facebook for work.” It seems much of the company’s energies, though, have been focused on building Skype Teams, a product first reported by MS Power User in early September:

Skype Teams is going to be Microsoft’s take on messaging apps for teams. Skype Teams will include a lot of similar features which you’ll find on Slack. For example, Skype Teams will allow you to chat in different groups within a team, also known as “channels”. Additionally, users will be able to talk to each other via Direct Messages on Skype Teams.

This isn’t exactly a surprise given reports Microsoft considered (some say tried) buying Slack; at the time it was reported that Bill Gates was a particularly vocal proponent of building a competitor. What is more interesting is the licensing model, which was reported by Petri late last month:

Skype Teams will be part of Office 365 and will be available to anyone who is already subscribed to a business plan, likely starting with E3 SKU. Skype Teams integrates deeply with your Office 365 content as well, with the ability to share your calendar inside the app as well as join meetings too. To no surprise, this application is built on the company’s new cloud platform and very well may be the future of Skype for Business. Make no mistake, Microsoft is going for the jugular on Slack with this product as many corporate customers already use Office 365 and with this product being bundled into that service, there will be no need to pay for Slack.

This is very much in-line with the Ballmer view: build a product explicitly for enterprise that has all the trappings of consumer software. Then, in true Microsoft fashion, leverage its existing position in companies to drive adoption. In the case of Skype Teams, companies that have Office 365 will have the choice of paying extra for Slack or simply using Skype Teams for free; ideally they will never even try to compare the two — and Microsoft will have an entire field organization working to ensure that is exactly what happens.

There are, of course, challenges with this approach. As is their wont Microsoft seems to be over-indexing on brand awareness (Skype 😃) as opposed to brand reputation (Skype 😞). More importantly, Microsoft has to prove they can build a competitive user experience while avoiding the temptation of competing on features; when Instagram copied Snapchat’s Stories feature I wrote in The Audacity of Copying Well:

The problem with focusing on features as a means of differentiation is that nothing happens in a vacuum: category-defining products by definition get a lot of the user experience right from the beginning, and the parts that aren’t perfect — like Facebook’s sharing settings or the iPhone’s icon-based UI — become the standard anyways simply because everyone gets used to them.

Perhaps more importantly, though, Microsoft’s Office 365-based model is a double-edged sword: the “you already paid for it” pitch is one that resonates with CIOs, not necessarily users and team leaders who may already be using a competitor — or be tempted to try.

Approach 3: Slack

The only thing that has grown faster than Slack’s user base and valuation is its hype and mindshare among the tech press, so it’s not a surprise that Workplace by Facebook is being characterized as a Slack competitor; undoubtedly Skype Teams will be described the same way.

Slack represents the evolution of the Atlassian model: getting started takes nothing more than an email address. There is no server software to install, no contracts to sign, and you don’t even need a credit card.2 There is a generous free level, which means the only friction entailed in giving the product a try — or in accepting an invitation — is said sign-up. And as we’ve learned in the consumer space, when everything else is equal the quality of the user experience matters most.

Moreover, because this is messaging, the user experience is about more than the user interface: it is first and foremost about who else is using Slack. And, at least amongst developers, that is a whole host of other people and interest groups one might want to be connected to. No, Slack cannot leverage itself into a particular company through another SKU like Microsoft will leverage Office 365 for Skype Teams, but that means the company can willingly (or unwillingly) let its users connect to teams far beyond their own organization, adding on a consumer-like network effect; granted, this won’t necessarily drive CIO decision-making, but it does enhance the end user experience, which matters more the lower down the hierarchy the decision-maker is.

Where Slack is weaker is, well, being a proper enterprise app. While it is used widely in startups, Atlassian’s Hipchat has won contracts at bigger companies like Apple, Twitter, and Uber. In some cases this is because Atlassian offers an on-premise version, in others it comes down to issues like compliance and data storage. Slack will surely catch up, but it’s worth noting that feature weakness is a natural outcome of the model: there are no CIOs demanding features via sales people desperate to close a deal. The feedback loops are a little looser.

Slack’s network effect notwithstanding, the reality of enterprise software is that there will likely be multiple winners. Facebook’s product looks very compelling, but it’s fair to wonder if the company will be truly committed to the category; Google came out of the gate fast a decade ago, and then let its enterprise efforts stagnate until this year. Microsoft’s product hasn’t even launched yet, but presuming it is good enough the ability to leverage Office 365 is a real advantage. Slack, meanwhile, has all of the advantages of a startup: singular focus, aligned incentives, and, most potent of all, a new kind of business model.

What has to worry Microsoft is that their advantage is waning: being a part of Office 365 is great…as long as a company uses Office 365. For all the good work that CEO Satya Nadella has done to reorient Microsoft away from Windows and towards services, the company is still lacking a new generation of products that sell themselves — and, by extension, sell the rest of Microsoft.

Of course that may be unrealistic: as I wrote in the case of Oracle the attractiveness of suites is greatly diminished in a world of cloud-based software; decision-making is devolving away from CIOs concerned with up-front costs and having one throat to choke, to users and managers concerned with the user experience. And, by extension, in an a la carte world standard interfaces and easy integrations become paramount: that means ecosystem building and developer support, which are a whole lot easier to accomplish if your product is free for anyone to use (advantage Slack).

Maybe that is the ultimate meaning of Consumerization of IT: it’s not just products and user interfaces moving from consumer to enterprise; rather, the ultimate manifestation is an enterprise product that can be used by consumers. That means scalable infrastructure, it means nailing the user experience, and, most critically, it means an entirely new business model.

  1. Note the spelling: organisation with an ‘s’. You can tell Workplace was developed in Facebook’s London office, not in California
  2. This is the case for Atlassian’s cloud-based products as well

Google and the Limits of Strategy

John Gruber is not impressed by the suggestion that Google’s new Pixel phone, which the company introduced at a keynote yesterday, is the first time the company has competed head-to-head with the iPhone:

Google has been going head-to-head against the iPhone ever since the first Android phone debuted. You can’t say the Nexus phones don’t count just because they never succeeded.

Google then-VP of engineering Vic Gundotra devoted his 2010 I/O keynote to ripping into the iPhone and iPad, pedal to the metal on “open beats closed” and how an ecosystem of over 60 different Android devices (a drop in the pond compared to today) was winning, saving the world from a future where “one man, one company, one device” controls mobile. (Gundotra tossed in “one carrier”, which was true at the time, but looks foolish in hindsight.) He even compared the iPhone to Orwell’s 1984. Really.

The only thing Orwellian here is Google’s attempt to flush down the memory hole their previous attempts to go head-to-head against the iPhone. Watch the first 10 minutes of Gundotra’s 2010 keynote — the whole thing is about beating the iPhone.

Gruber is both right and wrong: yes, Gundotra’s rhetoric was stridently anti-Apple, but at the end of that keynote everyone in attendance received an HTC EVO 4G; when it came to the zero-sum game of actually putting phones in people’s pockets, Apple’s competitors (then) were companies like HTC, Motorola, and especially Samsung. Granted, those manufacturers’ phones ran Google’s Android software, but then again Google’s software ran on the iPhone, too; in fact, at the time of Gundotra’s speech, Google Maps, YouTube, and Google Search were all built into iOS.

As we know today that wouldn’t be the case for long: two years later iOS 6 dumped the YouTube app and, more famously, changed the default mapping application from one based on Google to Apple’s own.1 The proximate cause was not Gundotra’s speech, though: in fact, Apple had purchased a mapping company called Placebase in 2009, and a few months later Google had introduced turn-by-turn navigation; it was Android-only.

Google Versus Android

Very few people know for sure who exactly is to blame for the Google-Apple breakup. Yes, Steve Jobs was livid that Android phones looked a lot like iPhones, but remember, Google purchased Android two years before the iPhone came out (and a year before Eric Schmidt joined Apple’s board) as a hedge against Microsoft; once the iPhone came out, why but for pride would you build a phone any differently?

Where Google went wrong was with that maps decision: making turn-by-turn directions an Android-exclusive differentiated Android as a platform, but to what end? So that HTC et al could sell a few more phones, and pay Google nothing for the privilege?

The truth is that when it came to making money Google and Apple were not competitors in the slightest: Apple was a vertical company that expended R&D and capital investment to design and build devices that included significant material costs, and then sold those devices in a zero-sum competition against other manufacturers. Yes, marketshare was important, but so was profitability: Apple traded off reaching the entire market in favor of creating a differentiated experience for which customers would pay a premium that far exceeded the (significant) marginal costs of each iPhone.

Google, meanwhile, has always been a completely different kind of company — a horizontal one. Nearly all of Google’s costs are fixed — R&D and data centers — which means profitability goes hand-in-hand with marketshare, which by extension means advertising is the perfect business model. The more people using Google the more that those fixed costs can be spread out, and the more attractive Google is to advertisers.
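To see why the two models reward such different behavior, here is a deliberately simplified calculation in Python with invented numbers (none of these figures are Google’s or Apple’s): in a mostly-fixed-cost business, per-user profit swings from negative to positive purely as a function of how many users share those costs, while a device business earns roughly the same per-unit margin at any scale.

```python
# Toy unit economics; all numbers are made up for illustration.

def horizontal_profit_per_user(users, revenue_per_user=10.0, fixed_costs=2e9):
    """Ad-style model: nearly all costs are fixed (R&D, data centers)."""
    return revenue_per_user - fixed_costs / users

def vertical_profit_per_unit(price=650.0, marginal_cost=400.0,
                             fixed_costs=1e9, units=10_000_000):
    """Device-style model: every unit sold carries significant material costs."""
    return price - marginal_cost - fixed_costs / units

for users in (100_000_000, 500_000_000, 1_000_000_000):
    print(f"{users:>13,} users -> profit per user ${horizontal_profit_per_user(users):.2f}")

print(f"per-device profit at 10,000,000 units: ${vertical_profit_per_unit():.2f}")
```

With these made-up inputs the horizontal business loses money per user at 100 million users but keeps $8 of every $10 at a billion, which is the sense in which profitability and marketshare go hand in hand.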

This is why favoring Android in any way was such a strategic error by Google: everything about the company was predicated on serving all customers, but Android by definition would only ever be on a percentage of smartphones.2 Again, it’s possible Apple would have built its own Maps product regardless, but Google’s short-sighted favoring of Android ensured that for hundreds of millions of potential Google users the default mapping experience and the treasure trove of data that came with it would belong to someone else.

This is where that infamous Gundotra speech matters: I’m not convinced that anyone at Google fully thought through the implication of favoring Android with their services. Rather, the Android team was fully committed to competing with iOS — as they should have been! — and human nature ensured that the rest of Google came along for the ride. Remember, given Google’s business model, winning marketshare was perfectly correlated with reaping outsized profits; it is easy to see how the thinking and culture that developed around Google’s core business failed to adjust to the zero-sum world of physical devices. And so, as that Gundotra speech exemplified, Android winning became synonymous with Google winning, when in fact Android was as much ouroboros as asset.

Google’s Assistant Problem

In yesterday’s keynote, Google CEO Sundar Pichai, after a recounting of tech history that emphasized the PC-Web-Mobile epochs I described in late 2014,3 declared that we are moving from a mobile-first world to an AI-first one; that was the context for the introduction of the Google Assistant.

It was a year prior to the aforementioned iOS 6 that Apple first introduced the idea of an assistant in the guise of Siri; for the first time you could (theoretically) compute by voice. It didn’t work very well at first (arguably it still doesn’t), but the implications for computing generally and Google specifically were profound: voice interaction both expanded where computing could be done, from situations in which you could devote your eyes and hands to your device to effectively everywhere, even as it constrained what you could do. An assistant has to be far more proactive than, for example, a search results page; it’s not enough to present possible answers: rather, an assistant needs to give the right answer.

This is a welcome shift for Google the technology company; from the beginning the search engine has included an “I’m Feeling Lucky” button, so confident was Google founder Larry Page that the search engine could deliver you the exact result you wanted, and while yesterday’s Google Assistant demos were canned, the results, particularly when it came to contextual awareness, were far more impressive than those of the other assistants on the market. More broadly, few dispute that Google is a clear leader when it comes to the artificial intelligence and machine learning that underlie their assistant.

A business, though, is about more than technology, and Google has two significant shortcomings when it comes to assistants in particular. First, as I explained after this year’s Google I/O, the company has a go-to-market gap: assistants are only useful if they are available, which in the case of hundreds of millions of iOS users means downloading and using a separate app (or building the sort of experience that, like Facebook, users will willingly spend extensive amounts of time in).

Secondly, though, Google has a business-model problem: the “I’m Feeling Lucky” button guaranteed that the search in question would not make Google any money.4 After all, if a user doesn’t have to choose from search results, said user also doesn’t have the opportunity to click an ad, thus choosing the winner of the competition Google created between its advertisers for user attention.5 Google Assistant has the exact same problem: where do the ads go?

Google’s Shift

Yesterday’s announcements from Google seem designed to take on the company’s challenges in an assistant-centric world head-on. For good reason the presentation opened with the Google Assistant itself: pure technology has always been the foundation of Google’s power in the marketplace.

Today’s world, though, is not one of (somewhat) standards-based browsers that treat every web page the same, creating the conditions for Google’s superior technology to become the door to the Internet; it is one of closed ecosystems centered around hardware or social networks, and having failed at the latter, Google is having a go at the former. To put it more generously, Google has adopted Alan Kay’s maxim that “People who are really serious about software should make their own hardware.” To that end the company introduced multiple hardware devices, including a new phone, the previously-announced Google Home device, new Chromecasts, and a new VR headset. Needless to say, all make it far easier to use Google services than any 3rd-party OEM does, much less Apple’s iPhone.

What is even more interesting is that Google has also introduced a new business model: the Pixel phone starts at $649, the same as an iPhone, and while it will take time for Google to achieve the level of scale and expertise to match Apple’s profit margins, the fact there is unquestionably a big margin built-in is a profound new direction for the company.6

The most fascinating point of all, though, is how Google intends to sell the Pixel: the Google Assistant is, at least for now, exclusive to the first true Google phone, delivering a differentiated experience that, at least theoretically, justifies that margin.

It is a strategy that certainly sounds familiar, raising the question of whether this is a replay of the turn-by-turn navigation disaster. Is Google forgetting that they are a horizontal company, one whose business model is designed to maximize reach, not limit it?

I don’t think so. In fact, I think this profound strategy shift springs from a depth of thinking that is the polar opposite of the hot-headedness of the former Android leadership. It is not that Google is artificially constraining its horizontal business model; it is that its business model is being constrained by the reality of a world where, as Pichai noted, artificial intelligence comes first. In that world you must own the interaction point, and there is no room for ads, rendering both Google’s distribution and business model moot. Both must change for the company’s technological advantage to come to the fore.

In this respect Google is like the bizarro-Apple: the iPhone maker has the distribution channel and business model to make Siri the dominant assistant in its users’ lives, but there are open questions about its technology prowess when it comes to artificial intelligence specifically and services generally; moreover, efforts to improve are fundamentally stymied by the company’s device-centric culture and organizational structure.

Google’s culture and organizational structure, meanwhile, are attuned to its old business model, the one that equated marketshare with profitability, and which achieved that market share with a product development approach predicated on iteration and experimentation on top of the positive feedback loop that comes from massive amounts of data.

Phones couldn’t be more different: the Pixel is what it is, which means it has to be great on day one, and it has to be sold. The first means an organizational structure that delivers on the promise of focused integration, not willy-nilly experimentation and iteration; the second means partnerships, and outbound marketing, and a whole bunch of other things that Google has traditionally not valued. Indeed, you could see the disconnect in yesterday’s presentation: while Pichai was extremely clear about the company’s new direction, the actual product demonstrations quickly devolved into droning technical mumbo-jumbo that never bothered to explain why users should care.

This is why, much as with Apple, I can be both impressed by Google’s strategic thinking and, yes, its courage in facing this new epoch, even as I am a bit bearish about their prospects: technology alone is rarely enough, and the only thing more difficult than changing business models is changing cultures.

  1. Google search still ships as the default, and Google pays dearly for the privilege
  2. One could argue in Google’s defense that Android had the potential to wipe out the iPhone just like Windows wiped out the Mac; that, though, is a complete misunderstanding of history (albeit a commonly held one). Many of us were confident in the iPhone’s market resilience even then
  3. And further refined in 2015’s The Facebook Epoch
  4. In fact, thanks to Google’s instant search results, the button doesn’t really exist anymore
  5. And, by extension, said advertisers don’t have the opportunity to build a customer relationship that potentially makes winning that ad auction far more valuable than the single transaction that may have resulted, which always made the price they were willing to pay much higher than it would be in the sort of affiliate model that may work in some Assistant use cases
  6. And no, the Nexus devices don’t count; they had neither the business model nor the infrastructure to suggest they were anything but what Google said they were: public reference devices that offered the idealized Android experience for enthusiasts for a relatively low price. Unlike Nexus phones the Pixel is launching with carrier support, top-of-the-line specs, and a price to match. The only thing missing is a multi-million dollar advertising campaign, which I suspect we’ll hear news of shortly.

Snapchat Spectacles and the Future of Wearables

When it comes to the future, the tricky part is less the “what” than the “how.” For example:


This is the Simon. It was a handheld phone with a touchscreen that could run 3rd party apps. IBM introduced it in 1992 before the world wide web even existed. It sent faxes.

I highlight faxing not to mock (mostly); rather, its inclusion gets to a fundamental reason why it took a full fifteen years for smartphones to truly take off: new categories not only need workable and affordable technology, but also ecosystems to latch onto and established use cases to satisfy.

Think about everything that happened between 1992 and 2007 that, at least at first glance, didn’t seem to have anything to do with smartphones:

  • The personal computer moved out of the office and into the home
  • The world wide web was invented and an entire ecosystem was built from scratch
  • Personal electronics proliferated: while by 1992 most people had or used calculators and Walkmans, the 90s saw the introduction of PDAs and digital cameras; the 00s brought handheld GPS devices and digital music players

The reason we consider 2007 to be the start of the smartphone era is that while there were plenty of smartphones released before then (most notably Nokia/Symbian in 1996, and BlackBerry and Windows Mobile in 2003), it was the iPhone that, thanks to its breakthrough user interface and ahead-of-its-time hardware, was able to take advantage of all these developments.

Think back to this slide:


Note that none of these features existed in a vacuum:

  • Telecom providers had been building out cellular networks for two decades, and mobile phones were well-established
  • iPods were hugely popular, with a well-established use case, as were the other personal electronic devices that first offered much of the iPhone’s other functionality (the aforementioned calculators, PDAs, digital cameras, and GPS devices)
  • The web had already developed into an entire universe of information that was accessible through a browser

A year later, Apple added the App Store, which made it possible for the iPhone to add on all of the various computing capabilities that it was lacking; the result was a single device built on top of everything that came before:


The critical point is this: even had it been technologically possible, the iPhone wouldn’t have been, well, the iPhone, had the use cases it fulfilled and the ecosystem it plugged into not been established first.

Wearables That Fail

Late last week Snap, the company formerly named after its original Snapchat app, somewhat unexpectedly unveiled a wearable:1


The Spectacles, as they are known, are sunglasses with a pair of cameras: tap the side to record a ten-second snippet of video.

Of course Snapchat isn’t the first company to release video-recording glasses: back in 2013 Google released Google Glass:


Glass was a failure for all the obvious reasons: it was extremely expensive and hard to use, and it was ugly not just aesthetically but also in its ignorance of societal conventions. These problems, though, paled in the face of a much more fundamental issue: what was the point?

Oh sure, the theoretical utility of Glass was easy to articulate: see information on the go, easily capture interesting events without pulling out your phone, and ask and answer questions without fumbling around with a touch screen. The issue with the theory was the same one that plagued initial smartphones: none of these use cases were established, and there was no ecosystem to plug into.

A similar critique could be leveled at Apple’s initial take on the Watch. While the hardware was far more attractive than Glass, and no one was offended by the prospect of wearing, well, a watch, what was most striking about the announcement was the absence of a rationale: what was the use case, and where was the ecosystem?

This lack of focus led to a device that probably shouldn’t have been launched when it was: because Apple didn’t know what the Watch should be used for it was larded up with an overly complicated user interface and an SDK that resulted in apps so slow that they were unusable; Apple was so eager for 3rd-party developers to find its missing use case that the company destroyed the user experience that was its hallmark.

Wearables That Work

Contrast the first Apple Watch with the one that was unveiled last month: not only was the hardware improved with a faster processor, waterproofing, and GPS, but more importantly the use case was made very clear. Just look at the introductory video:

This video has 47 separate “shots”; 35 of them are health-and-fitness related (and that doesn’t count walking or breathing, both of which fit in a broader “wellness” categorization). The rest of the introduction followed the same theme, as did the product’s flagship partner: Nike. Finally the message was clear: Apple Watch is for health and fitness.

Now can an Apple Watch do much more than health and fitness? Absolutely. But those new use cases — things like notifications, Apple Pay, and controlling your smart home, all of which are relatively new use cases2 — now have an umbrella to develop under.

This focus also defines the Watch’s most obvious competitor: Fitbit. While the Watch may be far more capable than most Fitbit devices, both are competing for the same spot on the wrist, and both are positioned to do the same job.

What is interesting about Fitbit is that from a product development perspective it is the spiritual heir of Apple’s own iPod: it started out as a purposeful appendage to a computer that did one clearly defined thing — count steps. True, this was a new use case, but the original Fitbit in particular avoided all of the other problems with wearables: it was unobtrusive yet unique, and very easy to understand. It also laid the groundwork for the line expansion that has followed: once the use case was established Fitbit could create trackers that overcame challenges like wearing a strange device on your wrist or costing more than $100.

Apple’s introduction of its second wearable — AirPods — was also well done.3 The use case couldn’t be more obvious: they are wireless headphones for the headphone-jack-less iPhone 7. You can’t get more clear than that! And yet the potential is quite obviously so much greater: as I noted two weeks ago, the AirPods in conjunction with the Apple Watch are forming the outlines of a future Beyond the iPhone.

The Wearables Future

There’s that word I opened with: “future”. As awesome as our smartphones are, it seems unlikely that this is the end of computing. Keep in mind that one of the reasons all those pre-iPhone smartphone initiatives failed, particularly Microsoft’s, is that their creators could not imagine that there might be a device more central to our lives than the PC. Yet here we are in a world where PCs are best understood as optional smartphone accessories.

I suspect we will one day view our phones the same way: incredibly useful devices that can do many tasks better than anything else, but not ones that are central for the simple reason that they will not need to be with us all of the time. After all, we will have our wearables.

To be clear, that future is not here, and it’s probably not that close.4 That doesn’t mean these intervening years — and these intervening products — don’t matter, though. Now is the time to build out the use cases and ecosystem that make wearables into products the market demands, not simply technology made for geeks who don’t give a damn about social conventions — or how they look.


Snapchat is Not Google Glass

To be fair, this wasn’t the official product image for Google Glass, although it quickly became the most famous.5 Certainly that was because of who was in it — Marc Andreessen, Bill Maris, and John Doerr are three of the most famous venture capitalists in the industry — but it also so perfectly captured what Google Glass seemed to represent: Silicon Valley’s insistence that its technology would change your life whether you wanted it to or not, for no other reason than the fact it existed.

Needless to say, the contrast with what already feel like iconic pictures for the Snap Spectacles could not be more profound:6


With the caveat that no one has actually used these things — and that manufacturing physical products at scale is a lot more difficult than it looks — I suspect the outcome for Spectacles will be quite different from Glass as well. For one, they look so much better than Glass, and they are an order of magnitude cheaper ($130).

Much more significantly, though, Spectacles have the critical ecosystem and use case components in place: Snapchat has over 150 million daily active users sending over a billion snaps a day and watching an incredible 10 billion videos a day. All of them are exclusive to Snapchat. Making it easier to add videos — memories, according to Spiegel (and I’m sure it’s not an accident that Snapchat recently added a feature called exactly that) — is not so much strange as it is an obvious step on Snapchat’s Ladder.

Snapchat Versus Apple

Obvious. That’s another word I already used, in the context of Apple’s AirPods, and what is perhaps the most fascinating implication of Spectacles is what it says about the potential of a long-term rivalry between Snapchat and Apple. Snapchat CEO Evan Spiegel has said that Snap née Snapchat is a camera company, not a social network. Or, perhaps more accurately, the company is both: it is a fully contained ecosystem that is more perfectly optimized for the continual creation and circulation of content than even Facebook.7 What matters from Apple’s perspective is that Snapchat, like Facebook or WeChat or other apps that users live in, is one layer closer to their customers. For now that is not a threat — you still need an actual device to run those apps — but then again most people used Google on Windows, which made Microsoft a lot of money even as it froze them out of the future.

This is exactly why Apple is right to push forward into the wearable space even though it is an area, thanks to the important role of services like Siri, in which they have less of an advantage. Modern moats are not about controlling distribution but about owning consumer touch points — in the case of wearables, quite literally.

To be clear, I am peering into a very hazy future; for one thing Snapchat still has to build an actual business on top of that awesome engagement, although I think the company’s prospects are bright. And it goes without saying that the technology still matters: chips need to get faster (a massive Apple advantage), batteries need to improve (also an Apple specialty), and everything needs to get smaller. This, though, is the exact path taken by every piece of hardware since the advent of the industry. They are hard problems, but they are known problems, which is why smart engineers solve them. What is more difficult to appreciate is that creating a market for that smart technology takes an even more considered approach, and right now it’s difficult to imagine a better practitioner than the one on Venice Beach, far from Silicon Valley.

  1. I discussed the odd timing of the announcement in yesterday’s Daily Update
  2. Notifications obviously isn’t that new; relatedly, it’s one of the most compelling non-fitness-related reasons to buy the Watch
  3. Except for the fact that it’s not yet available
  4. If we follow the iPod timeline (the accessory that led to the iPhone), then we’re looking at 2020 or 2021, presuming the Watch is the next centerpiece, and that’s for the minimum viable product
  5. I couldn’t ascertain what pictures were on the Google Glass page as Google has excluded it from the Internet Archive
  6. And note the gender ratio
  7. Facebook’s status as users’ public profile inhibits sharing; that is why the company is more dependent on 3rd-party content than most realize

Oracle’s Cloudy Future

Everyone knows the story of how IBM gave away the castle to Microsoft (and Intel): besieged by customers demanding low-powered personal computers, the vertically-integrated mainframe-centric company tasked a team in Boca Raton, Florida, far from the company’s headquarters in Armonk, New York, to create something quickly to appease these low-margin requests. Focused on speed and cost, said team decided to outsource nearly everything, including the operating system and processor. The approach paid off, at least when it came to IBM’s goals: while IBM’s integrated products normally took half a decade to develop and launch, the Boca Raton team moved from concept to shipping product in only 12 months. However, the focus on standard parts meant that all of the subsequent value in the PC, which massively exceeded the mainframe business, went to the two exclusive suppliers: Microsoft and Intel.1

Fewer are aware that the PC wasn’t IBM’s only internal-politics-driven value giveaway; one of the most important software applications on those mainframes was IBM’s Information Management System (IMS). This was a hierarchical database, and let me pause for a necessary caveat: for those that don’t understand databases, I’ll try to simplify the following explanation as much as possible, and for those that do, I’m sorry for bastardizing this overview!

Database Types

A hierarchical database is, well, a hierarchy of data:


Any particular piece of data in a hierarchical database can be found by one of two methods: know the parent and find its children, or know a child and find its parent. This is the easiest sort of database to understand, and, at least for early computers, it was the easiest to implement: define the structure, enter data, and find that data by traversing the hierarchy until you reach the relevant parent or child. Or, more realistically, leverage your knowledge of the hierarchy to go to a specific spot.

However, there were two big limitations with hierarchical databases: first, relationships were pre-determined; what was a parent and what was a child were decisions made before any data was actually entered. This made it extremely difficult to alter a database once it was in use. Second, queries analyzing the children of different parents were impractical: you would need to traverse the hierarchy to retrieve information for every potential item before discarding the vast majority to get the data set you wished to analyze.
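To make the trade-off concrete, here is a minimal sketch in Python of a toy hierarchical store (the region/store/order structure and all names are invented for illustration; this is not IMS itself): lookups that follow the hierarchy are trivial, while a question that cuts across different parents degenerates into a traversal of the entire tree.

```python
# A toy "hierarchical database": every record lives at a fixed spot in a tree
# whose shape (region -> store -> order) was decided before any data went in.
tree = {
    "east": {
        "store_12": {"order_1": {"item": "book", "amount": 20},
                     "order_2": {"item": "lamp", "amount": 35}},
    },
    "west": {
        "store_47": {"order_3": {"item": "book", "amount": 18}},
    },
}

# Easy: if you know the parent, finding its children is a walk straight down.
def orders_for_store(region, store):
    return tree[region][store]

# Hard: a question that spans parents ("all book orders, in any region or store")
# forces a visit to every record before discarding most of them.
def all_book_orders():
    results = []
    for region, stores in tree.items():
        for store, orders in stores.items():
            for order_id, order in orders.items():
                if order["item"] == "book":
                    results.append((region, store, order_id, order["amount"]))
    return results

print(orders_for_store("east", "store_12"))
print(all_book_orders())
```

Note, too, that adding a new level to the tree (say, grouping stores by country) means reshaping the structure itself, which is exactly the rigidity described above.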

In 1969, an IBM computer scientist named Edgar F. Codd wrote a seminal paper called A Relational Model of Data for Large Shared Data Banks that proposed a new approach. The introduction is remarkably lucid, even for laypeople:

Future users of large data banks must be protected from having to know how the data is organized in the machine (the internal representation). A prompting service which supplies such information is not a satisfactory solution. Activities of users at terminals and most application programs should remain unaffected when the internal representation of data is changed and even when some aspects of the external representation are changed. Changes in data representation will often be needed as a result of changes in query, update, and report traffic and natural growth in the types of stored information.

This paper was the foundation of what became known as relational databases: instead of storing data in a hierarchy, where the relationship between said data defines its location in the database, relational databases contain tables; every piece of data is thus defined by its table name, column name, and key value, not by the data itself (which is stored elsewhere). That by extension means that you can understand data according to its relationship to all of the other data in the database; a table name could also be a column name, just as a key value could also be a table name.


This approach had several huge benefits: first, databases could be expanded with new data categories without any impact on previous data, or needing to completely rewrite the hierarchy; just add new tables. Second, databases could scale to accommodate arbitrary amounts and types of data because the data wasn’t actually in the database; remember it was abstracted away in favor of integers and strings of text. Third, using a “structured query language” (SQL) you could easily generate reports on those relationships (What were the 10 most popular books ordered by customers over 40?), and because said queries were simply examining the relationship between integers and strings you could ask almost anything. After all, figuring out the relationship between locations in the database is no longer scanning a tree — which is inherently slow and mostly blind, if you don’t know what you’re looking for — but is math. Granted, it was very hard math — many at the time thought it was too hard — but the reality of Moore’s Law was slowly being realized; it wouldn’t be hard math forever.
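As a concrete sketch of the same idea, the example below uses Python’s built-in sqlite3 module and an invented customers/books/orders schema to run the exact kind of question posed above (the most popular books ordered by customers over 40) as a single declarative query rather than a tree traversal. The schema and data are hypothetical, purely for illustration.

```python
import sqlite3

# Invented schema for illustration; any relational database works the same way.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, age INTEGER);
    CREATE TABLE books     (id INTEGER PRIMARY KEY, title TEXT);
    CREATE TABLE orders    (id INTEGER PRIMARY KEY,
                            customer_id INTEGER REFERENCES customers(id),
                            book_id     INTEGER REFERENCES books(id));
""")
conn.executemany("INSERT INTO customers VALUES (?, ?, ?)",
                 [(1, "Ada", 52), (2, "Ben", 29), (3, "Cid", 44)])
conn.executemany("INSERT INTO books VALUES (?, ?)",
                 [(1, "Dune"), (2, "Emma")])
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(1, 1, 1), (2, 3, 1), (3, 2, 2), (4, 3, 2)])

# "What were the 10 most popular books ordered by customers over 40?"
# The answer is computed purely from relationships between keys; the user
# never needs to know how or where the rows are physically stored.
query = """
    SELECT b.title, COUNT(*) AS times_ordered
    FROM orders o
    JOIN customers c ON c.id = o.customer_id
    JOIN books     b ON b.id = o.book_id
    WHERE c.age > 40
    GROUP BY b.title
    ORDER BY times_ordered DESC
    LIMIT 10;
"""
for title, times_ordered in conn.execute(query):
    print(title, times_ordered)
```

Adding a new category of data (say, a publishers table) touches nothing above; that is the flexibility the hierarchical model lacked.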

Phew. I’d imagine that was as painful to read as it was to write, but this is the takeaway: hierarchical databases are limited in both capability and scalability by virtue of being pre-defined; relational databases are far more useful and scalable by virtue of abstracting away the data in favor of easily computable values.

Oracle’s Rise

Dr. Codd’s groundbreaking idea was almost completely ignored by IBM for several years in part because of the aforementioned IMS; Codd was basically saying that one of IBM’s biggest moneymakers was obsolete for many potential database applications, and that was a message IBM’s management was not particularly interested in hearing. In fact, even when IBM finally did build the first-ever relational database in 1977 (it was called System R and included a new query language called SQL),2 they didn’t release it commercially; only in 1982 did the company release its first relational database software called SQL/DS. Naturally it only ran on IBM mainframes, albeit small ones; IMS ran on the big iron.

Meanwhile, a young programmer named Larry Ellison had formed a company called Software Development Laboratories, originally to do contract work, but quickly decided that selling packaged software was a far better proposition: doing the work once and reselling it multiple times was an excellent way to get rich. They just needed a product, and IBM effectively gave it to them; because the System R team was being treated as a research project, not a commercial venture, they happily wrote multiple papers explaining how System R worked, and published the SQL spec. Software Development Laboratories implemented it and called it Oracle, and in 1979 sold it to the CIA; a condition of the contract was that it run on IBM mainframes.3

In other words, IBM not only created the conditions for the richest packaged software company ever to emerge (Microsoft), they basically gave an instruction manual to the second.

The Packaged Software Business

The packaged software industry was a bit of a hybrid between the traditional businesses of the past and the pure digital businesses of the Internet era (after all, there was no Internet). On the one hand, as Ellison quickly realized, software had zero marginal costs: once you had written a particular program, you could make an infinite number of copies. On the other hand, distribution was as much a challenge as ever; in the case of Oracle’s relational database, Relational Software Inc. (née Software Development Laboratories; the company would name itself “Oracle Systems Corporation” in 1982, and then today’s Oracle Corporation in 1995) had to build a sales force to get their product into businesses that could use it (and then ship the actual product on tape).

The most economical way to do that was to build the sort of product that was mostly what most customers wanted, and then work with them to get it actually working. Part of the effort was on the front-end — Oracle was quickly rewritten in the then-new programming language C, which had compilers for most platforms, allowing the company to pitch portability — but even more came after the sale: the customer had to get Oracle installed, get it working, import their data, and only then, months or years after the original agreement, would they start to see a return.

Eventually this became the business model: Oracle’s customers didn’t just buy software, they engaged in a multi-year relationship with the company, complete with licensing, support contracts, and audits to ensure Oracle was getting their just dues. And while customers grumbled, they certainly weren’t going anywhere: those relational databases and the data in them were what made those companies what they were; they’d already put in the work to get them up-and-running, and who wanted to go through that again with another company? Indeed, given that they were already running Oracle databases and had that existing relationship, it was often easier to turn to Oracle for the applications that ran on top of those databases. And so, over the following three decades, Oracle leveraged their initial advantage to an ever-increasing share of their customers’ IT spend. Better the devil you know!

Amazon’s Optionality

The proposition behind Amazon Web Services (AWS) could not be more different: companies don’t make up-front commitments or engage in years-long integration projects. Rather, you sign up online and you’re off. To be fair, this is an oversimplification when it comes to Amazon’s biggest customers, who negotiate prices and make long-term commitments, but that’s a recent development; AWS’ core constituency has always been startups that use server infrastructure that used to cost millions of dollars up front to build minimum viable products where all of their costs are variable: use AWS more (because you’re gaining customers) and pay more; use it hardly at all, because you can’t find product-market fit, and you’re out little more than the opportunity cost of not doing something else.

It’s the option value that makes AWS so valuable: need more capacity? Just press a button. Need to build a new feature? AWS likely has a pre-built service for you to incorporate. Sure, it can get expensive — a common myth is that AWS is winning on price, but actually Amazon is among the more expensive options — but how much is it worth to have exactly what you need when you need it?

Ellison, meanwhile, got up on stage at the company’s OpenWorld conference this week and declared that “Amazon’s lead is over” when it comes to Infrastructure-as-a-Service, all because Oracle’s top-of-the-line server instance was faster and cheaper than Amazon’s. Well sure, but hierarchical databases were faster than relational databases too; speed isn’t everything, nor is price. Optionality and scalability matter just as much as they always have, and in this case Oracle’s quite basic offering isn’t remotely competitive.

Ellison’s statement is even more ridiculous when you look at the number that really matters when it comes to cloud services: capital expenditures. Over the last twelve months Oracle has totaled $1.04 billion in capital expenditures; Amazon spent $3.36 billion in the last quarter,4 and $10.9 billion in the last twelve months.5 Infrastructure-as-a-Service is not something you build-to-order; it’s the fact that the infrastructure and all the attendant services that rest on top of that infrastructure are already built that makes AWS’s offering so alluring. Oracle is not only not catching up, they are falling further behind.

SaaS Focus

In his keynote Ellison argued that infrastructure spending wasn’t necessarily the place to gauge Oracle’s cloud commitment; instead he pointed out that the company has spent a decade moving its various applications to the cloud. Indeed, the company spent a significant 17% of revenues last quarter on research-and-development, and Ellison bragged that Oracle now had 30+ SaaS applications and that the sheer number mattered:

What is Oracle’s strategy: what do we think customers want, what do we do in SaaS? It’s the same thing: if we can figure out what customers want and deliver that, customers are going to pick our stuff and buy our stuff. And we think what they want is complete and integrated suites of products, not one-off products. Customers don’t want to have to integrate fifty different products from fifty different vendors. It’s just too hard. It’s simply too hard and the associated security risks and labor costs and reliability problems is just too much. So our big focus is not delivering one, two, three, four applications, but delivering complete suites of applications, for ERP, for human capital management, for customer relationship management, sometimes called customer experience, or CX. That’s our strategy in SaaS: complete and integrated suites.

What Ellison is arguing was absolutely correct when it came to on-premise software; I wrote about exactly this dynamic with regards to Microsoft in 2015:

Consider your typical Chief Information Officer in the pre-Cloud era: for various reasons she has bought in to some aspect of the Microsoft stack (likely Exchange). So, in order to support Exchange, the CIO must obviously buy Windows Server. And Windows Server includes Active Directory, so obviously that will be the identity service. However, now that the CIO has parts of the Microsoft stack in place, she is likely to be much more inclined to go with other Microsoft products as well, whether that be SQL Server, Dynamics CRM, SharePoint, etc. True, the Microsoft product may not always be the best in a vacuum, but no CIO operates in a vacuum: maintenance and service costs are a huge concern, and there is a lot to be gained by buying from fewer vendors rather than more. In fact, much of Microsoft’s growth over the last 15 years can be traced to Ballmer’s cleverness in exploiting this advantage through both new products and also new pricing and licensing agreements that heavily incentivized Microsoft customers to buy ever more from the company.

As noted above, this was the exact same strategy as Oracle. However, enterprise IT decision-making is undergoing dramatic changes: first, without the need for significant up-front investment, there is much less risk in working with another vendor, particularly since trials usually happen at the team or department level. Second, without ongoing support and maintenance costs there is much less of a variable cost argument for going with one vendor as well. True, that leaves the potential hassle of incorporating those fifty different vendors Ellison warned about, but it also means that things like the actual quality of the software and the user experience figure much more prominently in the decision-making — and the point about team-based decision-making makes this even more important, because the buyer is also the user.

Oracle in the Middle

In short, what Ellison was selling as the new Oracle looks an awful lot like the old Oracle: a bunch of products that are mostly what most customers want, at least in theory, but with neither the flexibility and scalability of AWS’ infrastructure on one side nor the focus and commitment to the user experience of dedicated SaaS providers on the other. To put it in database terms, like a hierarchical database Oracle is pre-deciding what its customers want and need with no flexibility. Meanwhile, AWS and dedicated SaaS providers are the relational databases, offering enterprises optionality and scalability to build exactly what they need for their business when they need it; sure, it may not all be working yet, but the long-term trends couldn’t be more obvious.

It should be noted that much of this analysis primarily concerns new companies that are building out their IT systems for the first time; Oracle’s lock on its existing customers, including the vast majority of the largest companies and governments in the world, remains very strong. And to that end its strategy of basically replicating its on-premise business in the cloud (or even moving its cloud hardware on-premise) makes total sense; it’s the same sort of hybrid strategy that Microsoft is banking on: give their similarly old-fashioned customers the benefit of reduced capital expenditures (and thus a higher return on invested capital), and hopefully buy enough time to adapt to a new world where users actually matter and flexible and focused clouds are the best way to serve them.

  1. IBM did force Intel to share its design with AMD to ensure dual suppliers []
  2. Amazingly, IBM kept Codd separate from the engineering team []
  3. To be fair to IBM, SQL/DS and their later mainframe product, DB2, were far more reliable than Oracle’s earliest versions []
  4. Specifically, Amazon spent $1.7 billion in capital expenditures and $1.7 billion in capital lease commitments []
  5. This expenditure includes distribution centers for the retail business; however, no matter how you split it, Amazon is spending a lot more []

Facebook Versus the Media

Facebook found itself in the middle of another media controversy last week. Here’s the New York Times:

The image is iconic: A naked, 9-year-old girl fleeing napalm bombs during the Vietnam War, tears streaming down her face. The picture from 1972, which went on to win the Pulitzer Prize for spot news photography, has since been used countless times to illustrate the horrors of modern warfare.

But for Facebook, the image of the girl, Phan Thi Kim Phuc, was one that violated its standards about nudity on the social network. So after a Norwegian author posted images about the terror of war with the photo to Facebook, the company removed it.

The move triggered a backlash over how Facebook was censoring images. When a Norwegian newspaper, Aftenposten, cried foul over the takedown of the picture, thousands of people globally responded on Friday with an act of virtual civil disobedience by posting the image of Ms. Phuc on their Facebook pages and, in some cases, daring the company to act. Hours after the pushback, Facebook reinstated the photo across its site.

This, like many of Facebook’s recent run-ins with the media, has been like watching an old couple fight: they are nominally talking about the same episode, but in reality both are so wrapped up in their own issues and grievances that they are talking past each other.

Facebook Owns Facebook.com

Start with the media. Aftenposten Editor-in-chief Espen Egil Hansen wrote an open letter to Facebook CEO Mark Zuckerberg that was, well, pretty amazing, and I’m not sure that’s a compliment:

Facebook has become a world-leading platform for spreading information, for debate and for social contact between persons. You have gained this position because you deserve it. But, dear Mark, you are the world’s most powerful editor. Even for a major player like Aftenposten, Facebook is hard to avoid. In fact we don’t really wish to avoid you, because you are offering us a great channel for distributing our content. We want to reach out with our journalism.

However, even though I am editor-in-chief of Norway’s largest newspaper, I have to realize that you are restricting my room for exercising my editorial responsibility. This is what you and your subordinates are doing in this case.

Actually, no, that is not what is happening at all. Aftenposten is not Facebook, and Facebook is not “Norway’s largest newspaper”. Accordingly, Facebook — and certainly not Mark Zuckerberg — did not take the photo down from Aftenposten.no. They did not block the print edition. They did not edit dear Espen. Rather, Facebook removed a post on Facebook.com, which Aftenposten does not own, and which Hansen admits in his own open letter is something freely offered to the newspaper, one that they take because it is “a great channel for distributing our content.”

Let me foreshadow what I will say later: Facebook screwed this up. But that doesn’t change the fact that Facebook.com is a private site, and while Aftenposten is more than happy to leverage Facebook for its own benefit, that by no means suggests Aftenposten has a single iota of ownership over its page or anyone else’s.

The Freedom of the Internet

Unfortunately, Hansen’s letter gets worse:

The media have a responsibility to consider publication in every single case. This may be a heavy responsibility. Each editor must weigh the pros and cons. This right and duty, which all editors in the world have, should not be undermined by algorithms encoded in your office in California…

The least Facebook should do in order to be in harmony with its time is introduce geographically differentiated guidelines and rules for publication. Furthermore, Facebook should distinguish between editors and other Facebook-users. Editors cannot live with you, Mark, as a master editor.

I’ll be honest, this made me mad. Hansen oh-so-blithely presumes that he, simply by virtue of his job title, is entitled to special privileges on Facebook. But why, precisely, should that be the case? The entire premise of Facebook, indeed, the underpinning of the company’s success, is that it is a platform that can be used by every single person on earth. There are no gatekeepers, and certainly no outside editors. Demanding special treatment from Facebook because one controls a printing press is not only nonsensical, it is downright antithetical to not just the premise of Facebook but also the radical liberty afforded by the Internet. Hansen can write his open letter on Aftenposten.no, and I can say he’s being ridiculous on Stratechery, and there is not a damn thing anyone, including Mark Zuckerberg, can do about it.1

Make no mistake, I recognize the threats Facebook poses to discourse and politics; I’ve written about them explicitly. There are very real concerns that people are not being exposed to news that makes them uncomfortable, and Hansen is right that the photo in question is an example of exactly why making people feel uncomfortable is so important.

But it should also not be forgotten that the prison of engagement-driving news that people are locking themselves in is one of their own making: no one is forced to rely on Facebook for news, just as Aftenposten isn’t required to post its news on Facebook. And on the flipside, the freedom and reach afforded by the Internet remain so significant that the editor-in-chief of a newspaper I had never previously read can force the CEO of one of the most valuable companies in the world to accede to his demands by rousing worldwide outrage.

These two realities are inescapably intertwined, and as a writer who almost certainly would have never been given an inch of space in Aftenposten, I’ll stick with the Internet.

Facebook is Not a Media Company

One more rant, while I’m on a roll: journalists everywhere are using this episode to again make the case that Facebook is a media company. This piece by Peter Kafka was written before this photo controversy but is an excellent case-in-point (and, sigh, it is another open letter):

Dear Mark, We get it. We understand why you don’t want to call Facebook a media company. Your investors don’t want to invest in a media company, they want to invest in a technology company. Your best-and-brightest engineers? They don’t want to work at a media company. And we’re not even going to mention Trending Topicgate here, because that would be rude.

But here’s the deal. When you gather people’s attention, and sell that attention to advertisers, guess what? You’re a media company. And you’re really good at it. Really, really good. Billions of dollars a quarter good.

Let’s be clear: Facebook could call themselves a selfie-stick company and their valuation wouldn’t change an iota. As Kafka notes later in the article, Facebook gets all their content for free, which is a pretty big deal.

Indeed, I think one of the (many) reasons the media is so flummoxed with Facebook is that the company has stolen their business model and hugely improved on it. Remember, the entire reason why the media was so successful was because they made massive fixed cost investments in things like printing presses, delivery trucks, wireless spectrum, etc. that gave them monopolies or at worst oligopolies on local attention and thus advertising. The only fly in the ointment was that actual content had to be created continuously, and that’s expensive.

Facebook, like all Internet companies, takes the leverage of fixed costs to an exponentially greater level and marries that with free content creation that is far more interesting to far more people than old media ever was, which naturally attracts advertisers. To put it in academic terms, the Internet has allowed Facebook to expand the efficient frontier of attention gathering and monetization, ruining most media companies’ business model.

In other words, had Kafka insisted that Facebook is an advertising company, just like media companies, I would nod in agreement. That advertising, though, doesn’t just run against journalism: it runs against baby pictures, small businesses, cooking videos and everything in between. Facebook may be everything to the media, but the media is one of many types of content on Facebook.

In short, as long as Facebook doesn’t create content I think it’s a pretty big stretch to say they are a media company; it simply muddies the debate unnecessarily, and this dispute with Aftenposten is a perfect example of why being clear about the differences between a platform and a media company is important.

The Facebook-Media Disconnect

The disconnect in this debate reminds me of this picture:

[Photo: Mark Zuckerberg walking past an audience wearing VR headsets]
Ignore the fact that Facebook owns a VR company; the point is this: Facebook is, for better or worse, running a product that is predicated on showing people exactly what they want to see, all the way down to the individual. And while there is absolutely editorial bias in any algorithm, the challenge is indeed a technical one being worked out at a scale few can fully comprehend.

That Norwegian editor-in-chief, meanwhile, is still living in a world in which he and other self-appointed gatekeepers controlled the projector for the front of the room, and the facts of this particular case aside, it is awfully hard to avoid the conclusion that he and the rest of the media feel entitled to individuals’ headsets.

Facebook’s Mistake

Still, the facts of this case do matter: first off, quite obviously this photo should never have been censored, even if the initial flagging was understandable. What was really concerning, though, was the way Facebook refused to back down, not only continuing to censor the photo but actually barring the journalist who originally posted it from the platform for three days. Yes, this was some random Facebook staffer in Hamburg, but that’s the exact problem! No one at Facebook’s headquarters seems to care about this stuff unless it turns into a crisis, which means crises are only going to continue, with potentially unwanted effects.

The truth is that Facebook may not be a media company, but users do read a lot of news there; by extension, the company may not have a monopoly on news distribution, but the impact of so many people self-selecting Facebook as their primary news source has significant effects on society. And, as I’ve noted repeatedly, society and its representatives may very well strike back; this sort of stupidity via apathy will only hasten the reckoning.2

  1. It should be noted that this is exactly why the Peter Thiel-Gawker episode was so concerning. []
  2. And, I’d add, this is exactly why I think Facebook should have distanced itself from Thiel []

Beyond the iPhone

I enjoy the writing of Farhad Manjoo, tech columnist at The New York Times, but I was prepared for the hottest of hot takes when I saw his latest column, penned just hours after Apple’s latest product unveiling, was titled What’s Really Missing From the New iPhone: Dazzle.1 Once I read it, though, I found a lot to agree with:

Apple has squandered its once-commanding lead in hardware and software design. Though the new iPhones include several new features, including water resistance and upgraded cameras, they look pretty much the same as the old ones. The new Apple Watch does too. And as competitors have borrowed and even begun to surpass Apple’s best designs, what was iconic about the company’s phones, computers, tablets and other products has come to seem generic…

It’s not just that a few new Apple products have been plagued with design flaws. The bigger problem is an absence of delight.

Indeed, it sure seemed to me while watching yesterday’s keynote that the level of excitement and, well, delight peaked early and gradually ebbed away as Apple CEO Tim Cook and his team of presenters got deeper into the details of Apple’s new hardware. Of course a lot of that had to do with the shocking appearance of the legendary Shigeru Miyamoto of Nintendo, on hand to announce Super Mario Run exclusively for iOS.2

The Nintendo news was certainly a surprise,3 but it actually fit in quite well thematically with the opening of the keynote: first was a video of Tim Cook and Carpool Karaoke creator James Corden in a funny skit and a not too subtle reminder that Apple recently bought the upcoming Carpool Karaoke series as an exclusive for Apple Music. That was followed by touting the success of Apple Music and the fact it has “content no one else has” — i.e. more exclusives. Following that up with Mario sure seemed to suggest that Apple was increasingly going to leverage its war chest to differentiate its “good-enough” phones.

The Threat of “Good Enough”

It has been clear for many years that the threat to iPhone growth is not modular Android but the iPhones people already have. The hope with last year’s iPhone 6S launch was that new features like 3D Touch and Live Photos would be compelling enough to drive upgrades, but it turned out that many would-be upgraders had already bought the iPhone 6 and the rest didn’t care; the result was the first iPhone that sold fewer units than its predecessor.

At first glance, as Manjoo noted, the iPhone 7 doesn’t seem like it will do much to reverse that trend: it’s mostly the same as the two-year-old iPhone 6 people bought instead of the iPhone 6S, and folks still using older iPhones may very well upgrade — if they upgrade at all — to the cheaper iPhone SE or the newly discounted 6S. After all, as multiple commentators have noted, the most talked-about feature of the iPhone 7 is what it doesn’t have — the headphone jack. Surely no headphone jack + no dazzle = no growth, right?

Well, probably. I have been and remain relatively pessimistic about this iPhone cycle (perhaps because I was overly optimistic last year). However, I was actually very impressed by what Apple introduced yesterday: many of the products and features introduced didn’t make for flashy headlines, but they laid the foundation for both future iPhone features and, more importantly, a future beyond the iPhone.

The iPhone 7 Plus Camera

The annual camera upgrade is always one of the best reasons to upgrade an iPhone, especially if you have little kids creating irreplaceable memories that you want to capture in as high a fidelity as possible. And, as usual, Apple and its suppliers have delivered a better lens, a better sensor, and a better image processor, along with image stabilization on both the iPhone 7 and the iPhone 7 Plus (the iPhone 6S did not have image stabilization, while the iPhone 6S Plus did).

The iPhone 7 Plus, though, retains a photographic advantage over its smaller sibling thanks to the fact it actually has two cameras:

[Image: the iPhone 7 Plus’ dual rear cameras]
One camera uses the familiar 28mm-equivalent lens found on the iPhone 7, while the second has a 56mm-equivalent lens for superior zooming capabilities (2x optical, which also means digital zooming is viable at longer distances). Apple also demonstrated an upcoming software feature that recreates the shallow depth-of-field that is normally the province of large-sensored cameras with very fast lenses:

[Keynote screenshot: the iPhone 7 Plus’ shallow depth-of-field effect]

This effect is possible because of those two lenses; because they are millimeters apart, each lens “sees” a scene from a slightly different perspective. By comparing the two perspectives, the iPhone 7 Plus’ image processor can build a depth map that identifies which parts of the scene are in the foreground and which are in the background, and then artificially apply the bokeh that makes a shallow depth-of-field so aesthetically pleasing.
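For the curious, here is a heavily simplified sketch of the underlying geometry, emphatically not Apple’s actual pipeline; the focal length, baseline, disparity values, and the one-meter cutoff are all invented for illustration. The shift (disparity) of a feature between the two lenses’ views is inversely proportional to its distance, which yields a per-pixel depth estimate, and the pixels judged to be background are the ones that get the artificial blur:

```python
import numpy as np

# Stereo geometry: depth = focal_length * baseline / disparity.
# All of these numbers are made up for illustration.
focal_px = 1500.0   # focal length expressed in pixels
baseline_m = 0.01   # roughly 1 cm between the two lenses

# Pretend per-pixel disparities (in pixels) from matching the two images:
# a large disparity means a close subject, a small one a distant background.
disparity = np.array([
    [30.0, 30.0,  2.0,  2.0],
    [28.0, 29.0,  2.5,  2.0],
    [27.0, 28.0,  3.0,  2.5],
])

depth_m = focal_px * baseline_m / disparity   # per-pixel depth estimate

# Classify pixels: anything beyond ~1 m is treated as "background" here.
# A real pipeline would apply a disc-shaped blur whose radius grows with
# depth; this flag simply marks which pixels would receive that blur.
background = depth_m > 1.0
print(np.round(depth_m, 2))
print(background)
```

A production system does far more sophisticated matching, segmentation, and blurring than this, but the principle, two views a known distance apart yielding depth, is the same.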

Bokeh, though, is only the tip of the iceberg: what Apple didn’t say was that they may be releasing the first mass-market4 virtual reality camera. The same principles that make artificial bokeh possible also go into making imagery for virtual reality headsets. Of course you probably won’t be able to use the iPhone 7 Plus camera in this way — Apple hasn’t released a headset, for one — but when and if they do, the ecosystem will already have been primed, and you can bet FaceTime VR would be an iPhone seller.

Apple’s willingness and patience to lay the groundwork for new features over multiple generations remains one of its most impressive qualities. Apple Pay, for example, didn’t come until the iPhone 6, but the groundwork had already been laid by the introduction of Touch ID and the secure element in the iPhone 5S. Similarly, while Apple introduced Bluetooth beacons in 2013 and the Apple Watch in 2014, the company had actually been shipping the necessary hardware since 2011’s iPhone 4S.5 I wouldn’t be surprised if we look back at the iPhone 7 Plus’ dual cameras with similar appreciation.

The Headphone Jack

In one of the more tone-deaf moments in Apple keynote history, Senior Vice President of Worldwide Marketing Phil Schiller justified the aforementioned removal of the headphone jack this way:

The reason to move on — I’m going to give you three of them — but it really comes down to one word: courage. The courage to move on, do something new, that betters all of us, and our team has tremendous courage.

Schiller should have stuck to the three reasons: the superiority of Lightning (meh), the space gained by eliminating the headphone jack, and Apple’s vision for audio on mobile devices.

Start with the space: getting rid of the headphone jack fits with the multi-year feature creation I detailed above; current rumors are that next year’s 10th-anniversary iPhone will be nothing but a screen. Making such a phone, though, means two fundamental changes to the iPhone: first, the home button needs, well, a new home; last year’s introduction of 3D Touch and the iPhone 7’s force touch home button lay the groundwork for that. That headphone jack, though, is just as much of an impediment: try to find one phone or music player of modern thinness that has the headphone jack under the screen (the best example of the space issues I’m referring to: the 6th generation iPod nano).

Still, that’s speculation; Apple insists iPhone 7 users will see the benefits right away. Apple executives told BuzzFeed that removing the headphone jack made it possible to bring that image stabilization to the smaller iPhone 7, gave room for a bigger battery, and eliminated a trouble-spot when it came to making the iPhone 7 water-resistant. It’s a solid argument, albeit one not quite worth Schiller’s hubris.

That said, the third reason — Apple’s vision of the future — is such a big deal for Apple in particular that I just might be willing to give Schiller a pass.

AirPods and the Future

Jony Ive, in his usual pre-recorded video, introduced the AirPods like this:

We believe in a wireless future. A future where all of your devices intuitively connect. This belief drove the design of our new wireless AirPods. They have been made possible with the development of the new Apple-designed W1 chip. It is the first of its kind to produce intelligent, high efficiency playback while delivering a consistent and reliable connection…

The W1 chip enables intelligent connection to all of your Apple devices and allows you to instantly switch between whichever one you are using. And of course the new wireless AirPods deliver incredible sound. We’re just at the beginning of a truly wireless future we’ve been working towards for many years, where technology enables the seamless and automatic connection between you and your devices.

Putting aside the possibility of losing the AirPods — and the problem that not everyone’s ears can accommodate one shape6 — it really looks like Apple is on to something compelling. By ladling a bit of “special sauce” on top of the Bluetooth protocol, Apple has made the painful process of pairing as simple as pushing a button. Even more impressive is that said pairing information immediately propagates to all of your Apple devices, from MacBooks to Watch. As someone who has long since moved to Bluetooth headphones almost exclusively7 I can absolutely vouch for Apple’s insistence that there is a better way than wires,8 and the innovations introduced by the AirPods (which are also coming to Beats) help the headphone jack medicine go down just a bit more easily.

What is most intriguing, though, is that “truly wireless future” Ive talked about. What happens if we presume that the same sort of advancement that led from Touch ID to Apple Pay will apply to the AirPods? Remember, one of the devices that pairs with AirPods is the Apple Watch, which received its own update, including GPS. The GPS addition was part of a heavy focus on health-and-fitness, but it is also another step down the road towards a Watch that has its own cellular connection, and when that future arrives the iPhone will quite suddenly shift from indispensable to optional. Simply strap on your Watch, put in your AirPods, and, thanks to Siri, you have everything you need.

Ah, but there is the catch: I have long held up this vision, of pure voice computing, as Apple’s Waterloo. I wrote about it when the Watch came out:

I also think that when the Watch inevitably gains cellular functionality I will carry my iPhone far less than I do today. Indeed, just as the iPhone makes far more sense as a digital hub than the Mac, the Watch will one day be the best hub yet. Until, of course, physical devices disappear completely:


That is the ultimate Apple bear case.

The truly wireless future that Ive hinted at doesn’t just entail cutting the cord between your phone and your headphones, but eventually a future where phones may not even be necessary. Given that Apple’s user experience advantages are still the greatest when it comes to physically interacting with your device, and the weakest when it comes to service-dependent interactions like Siri, that is a frightening prospect.

And that is why I ultimately forgive Schiller for his “courage” hubris. To Apple’s credit they are, with the creation of AirPods, laying the foundation for a world beyond the iPhone. It is a world where, thanks to their being a product — not services — company, Apple is at a disadvantage; however, it is also a world that Apple, thanks to said product expertise, especially when it comes to chips, is uniquely equipped to create. That the company is running towards it is both wise — the sooner they get there, the longer they have to iterate and improve and hold off competitors — and also, yes, courageous. The easy thing would be to fight to keep us in a world where phones are all that matters, even if, in the long run, that would only defer the end of Apple’s dominance.

  1. The story has since been updated to have the headline “What’s Really Missing From the New iPhone: Cutting-Edge Design” []
  2. Although an Android version will come at some point in the future []
  3. I will cover this news from Nintendo’s perspective in a future Daily Update []
  4. Sorry Lucid []
  5. That the Apple Watch required the iPhone 5 or later was likely a strategic decision to drive upgrades, although I don’t know for certain []
  6. ? []
  7. Beats PowerBeats around town, and the new Bose QC35 noise-cancelling headphones for trips []
  8. That said, the Beats in particular are terrible for music, but I mostly listen to podcasts []