Stratechery Plus Update

  • Open, Closed, and Privacy

    Note: This article has nothing to do with open or closed source code

    It was eight years ago next month that Vic Gundotra, then-VP of Engineering at Google, delivered a blistering attack on Apple for not being open:1

    A slide from Google's 2010 I/O keynote criticizing Apple

    On [my] first day I met a man named Mr. Andy Rubin. Now I suspect most of you know who Andy Rubin is. At the time he was responsible for what was then a secret project codenamed Android, and on that first day Andy enthusiastically described to me the team’s mission and purpose. And as he spoke — I’ll level with you — I was skeptical. In fact, I interrupted Andy, and I said, “Andy, I don’t get it. Does the world really need another mobile operating system? Google is about advertising — shouldn’t we be on every phone?”

    To this day I remember Andy’s response, and he made two points. The first point Andy made was that it was critically important to provide a free mobile operating system — an open-source operating system — that would enable innovation at every level of the stack. In other words, OEMs should be free to build all kinds of devices — devices with keyboards, without keyboards, with front-facing cameras, two inches, three inches, four inches — that operators should be able to compete on the strength and coverage of their network — 2G, 3G, 4G, LTE, CDMA — and that in the end, with innovation coming at every layer, it would be the consumer who would be able to benefit by getting the best device on the best network for them.

    I remember Andy’s second point: he argued that if Google did not act, we faced a draconian future, a future where one man, one company, one device, one carrier, would be our only choice. That’s a future we don’t want! So if you believe in openness, if you believe in choice, if you believe in innovation from everyone, then welcome to Android.

    Gundotra repeated the word “open” like a mantra, appealing to the sensibilities not just of people in technology but also of its critics, who opposed so-called “walled gardens”; the two primary offenders were deemed to be Apple and Facebook.

    This is what made Google’s low-key announcement of its latest plans for messaging on Android phones — an exclusive with The Verge about what it calls Chat — so striking: the company is introducing an open alternative to products like iMessage and WhatsApp, but only as a last resort, and the effort is being pilloried by critics to boot; Walt Mossberg was representative.

    Of course Google’s critics are not criticizing Chat for being open; they are, like Mossberg, criticizing it for being “insecure” — that is, not end-to-end encrypted like iMessage or WhatsApp. That, though, is the rub: being “secure” and being “open” are incompatible.

    How End-to-End Encryption Works

    A quick primer on how end-to-end encryption works, using iMessage as an example; I’m going to dramatically simplify this explanation, but you can read Apple’s security white paper to get the specifics:

    • When iMessage is turned on, “keys” are generated; these are produced in pairs, one private and one public. These two keys are related: the public key encrypts content such that it can only be decrypted by a private key; to analogize them to a safe, the public key locks the door, and the private key unlocks it.
    • The relationship between these two keys is, well, the key to understanding how encryption works in messaging (and all communications): anyone sending an encrypted message “locks” the content using a public key, which means that the only person that can “unlock” and read the message is whoever has the corresponding private key.
    • To that end, the private key is, as the name implies, private: it is kept on the device that generated it (in fact, every device with iMessage generates its own encryption keys). The public key, meanwhile, is public: for anyone to be able to send you an encrypted message, everyone must be able to find the public key that corresponds to your private key.
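The lock-and-unlock relationship in that list can be sketched with textbook RSA. This is a toy illustration with tiny primes and no padding — real systems like iMessage use far larger keys and vetted cryptographic libraries — so treat it as a demonstration of the asymmetry, nothing more:

```python
# Toy "textbook RSA" sketch of a public/private key pair.
# WARNING: illustrative only -- tiny primes, no padding; never use for real security.

def make_keypair(p: int, q: int, e: int = 17):
    """Generate a (public, private) key pair from two primes."""
    n = p * q
    phi = (p - 1) * (q - 1)      # Euler's totient of n
    d = pow(e, -1, phi)          # private exponent: modular inverse of e mod phi
    return (n, e), (n, d)        # public key, private key

def encrypt(public_key, message: int) -> int:
    """Anyone can 'lock' a message with the public key."""
    n, e = public_key
    return pow(message, e, n)

def decrypt(private_key, ciphertext: int) -> int:
    """Only the holder of the private key can 'unlock' it."""
    n, d = private_key
    return pow(ciphertext, d, n)

public, private = make_keypair(61, 53)  # keys stay on the device that made them
ciphertext = encrypt(public, 65)        # the sender only ever sees the public key
assert decrypt(private, ciphertext) == 65
```

Note that `encrypt` uses only the public half: anyone can lock, but without `d` — which never leaves the device — no one can unlock.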

    This is the precise spot where “open” breaks down: you can, in fact, send encrypted content over open protocols like email. The problem is that the sender cannot just unilaterally decide to encrypt a message; rather, the receiver has to first generate a public-private key pair, then share the public key with the sender so that the email can be encrypted in a way that only the recipient — thanks to their private key — can read it. This is, needless to say, far beyond the capabilities of most users: not only do they not understand that there needs to be a conversation before the conversation, they don’t even know the language they need to use.

    And yet, over 100 billion messages are sent per day on WhatsApp and iMessage alone, and the reason is that both are closed. To continue with the iMessage explanation, public keys are sent to Apple’s servers to be stored in a directory service; there they (along with the public keys from all of the user’s devices) are associated with the user’s phone number or email address. This is the critical piece to making iMessage encryption easy-to-use: senders need only know the recipient’s phone number or email address; Apple will silently pass the appropriate public keys to the sender to encrypt the message such that only the recipient can read it.2
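That directory flow can be sketched as follows. The function names (`register_device`, `send_message`) and the tiny-prime RSA scheme are illustrative assumptions, not Apple’s actual implementation; the point is only the shape of the protocol — devices upload public keys, senders look them up by address, and the message is encrypted once per device:

```python
# Toy sketch of a centralized key directory, loosely modeled on the iMessage
# flow described above. All names and the tiny-prime RSA are illustrative.

directory: dict[str, list[tuple[int, int]]] = {}          # address -> device public keys
private_keys: dict[tuple[int, int], tuple[int, int]] = {} # simulated on-device storage

def make_keypair(p: int, q: int, e: int = 17):
    n, phi = p * q, (p - 1) * (q - 1)
    d = pow(e, -1, phi)
    return (n, e), (n, d)

def register_device(address: str, p: int, q: int):
    """A new device generates its own keys and uploads only the public one."""
    public, private = make_keypair(p, q)
    directory.setdefault(address, []).append(public)
    private_keys[public] = private    # the private key never leaves the device
    return public

def send_message(address: str, message: int) -> list[int]:
    """The sender needs only the address; the server supplies every device's
    public key, and the message is encrypted once per registered device."""
    return [pow(message, e, n) for (n, e) in directory[address]]

# One user with two devices, each with its own key pair:
register_device("user@example.com", 61, 53)
register_device("user@example.com", 53, 59)
ciphertexts = send_message("user@example.com", 72)
for public, c in zip(directory["user@example.com"], ciphertexts):
    n, d = private_keys[public]
    assert pow(c, d, n) == 72         # each device decrypts its own copy
```

The sender never performs a key exchange: the “conversation before the conversation” is handled entirely by the server in the middle, which is exactly why the server’s owner must be trusted — and why the scheme only works as a closed service.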

    In short, encryption is viable for the public at scale precisely because Apple controls everything: clients on both ends, and the server in the middle. It’s the same story with WhatsApp or any of the other encrypted messaging services: being closed makes end-to-end encryption actually usable at scale.

    And, as I explained on Monday, this option is not available to Google when it comes to Android: OEMs don’t want to deepen their Google dependence, and carriers do not want to undercut their lucrative SMS business (and Google can’t force the issue because of its looming antitrust problems). The only option was the one Gundotra lauded in 2010: an open standard that no one controls, for better or, in the case of the desire for end-to-end encryption, worse.3

    Encryption and Privacy

    The ongoing debate about data and privacy is directly related to the question of encryption in some important ways, as Mossberg’s tweet notes: messaging content is data that users would like to keep private, and encryption accomplishes that.

    Of course it is not the only data generated by messaging: entailed in the ease-of-use that comes from relying on centralized servers for key exchange is the necessary collection by those servers of metadata. Obviously email addresses and/or phone numbers and/or usernames have to be stored (so that they can be associated with public keys), and the very act of connecting two accounts will generate logs of who was communicating with whom and when, and often from where (through IP addresses). Services can and do differentiate based on how long they keep that metadata; Signal,4 for example, promises to flush metadata as soon as possible, whereas WhatsApp — which uses encryption developed by Signal — keeps such data indefinitely.

    That gets at the more important way that the relationship between open/closed and encryption is relevant to data and privacy: just as encryption at scale is only possible with a closed service, so it is with privacy. That is, to the extent we as a society demand privacy, the more we are by implication demanding ever more closed gardens, with ever higher walls. Just as a closed garden makes the user experience challenge of encryption manageable, so does the centralization of data make privacy — of a certain sort — a viable business model.

    The reality of digital services is that the amount of data each of us generates at basically all times is astronomical; your phone always knows where you are, but so does every app you use and every website you visit.

    A map of Stratechery readers

    Google, of course, knows one’s every search, for many people their every email, and thanks to the company’s ad network, control of Chrome and Google analytics, and, of course Android, pretty much everything else one does online. Facebook’s knowledge is slightly less broad but arguably deeper: your friends, your interests — both stated and revealed — and thanks to its ‘Like’ button, your web activity as well.

    To focus on simply Google and Facebook, though, is to miss how much other data collection is going on: ad networks are tracking you on nearly every website you visit, your credit card company is tracking your purchases (and by extension your location), your grocery store is tracking your eating habits, the list goes on and on. Moreover, the further you go down the data food chain, the more likely it is that data is bought and sold. That, of course, is as open as it gets.

    Data Collection Versus Data Leakage

    Still, the contrast between Google and Facebook is worth considering: Facebook is in hot water thanks to the revelation that some amount of the data it collects was sold to Cambridge Analytica, which bragged it helped elect Donald Trump president. One does wonder how much that allegation drives the outrage about the fact that Facebook shared that data to begin with, but leaving that aside, what is noteworthy is that the outrage stems from the sharing of the data, not its collection. Yes, some are outraged by that collection — but they were outraged before the current scandal, and their objections simply didn’t register with the broader public.

    This view is buttressed by the fact that Google has been largely unscathed by the current controversy; what seems significant is not the fact that the company collects data, but rather that it has been careful to keep that data inside its walled garden. Indeed, that was always the irony with Gundotra’s attack on Apple: Google has always been anything but open when it came to its proprietary technology or its money-making ad apparatus (of which user data plays an important part). Its insistence that Android be open was based not on principle but on sound strategy: challengers always want to commoditize their complements, and for Google, smartphones themselves were complements to Search and ads.

    The implication is quite far-reaching: being open, at least to the extent that openness involved user data of any sort, is increasingly unacceptable; that new companies and user benefits might result from that data no longer matters, a fate that all-too-often befalls the not-yet-created.

    The Entrenchment of Google and Facebook

    This entrenches Facebook and Google in three ways:

    • First, it is even more unlikely that a challenger to either will arise without meaningful access to their proprietary data. This, to be fair, was already quite unlikely: the entire industry learned from Instagram’s piggy-backing on Twitter’s social graph that sharing data with a potential competitor was a bad idea from a business perspective.
    • Second, Google and Facebook will increasingly be the only source of innovations that leverage their data; it will be too politically risky for either to share anything with third parties. That means new features that rely on user data must be built by one of the two giants, or, as is always the case in a centrally-planned system relative to a market, not built at all.
    • Third, Google and Facebook’s advertising advantage, already massive, is going to become overwhelming. Both companies generate the majority of their user data on their own platforms, which is to say their data collection and advertising businesses are integrated. Most of their competitors for digital advertising, on the other hand, are modular: some companies collect data, and others sell ads; such a model, in a society demanding ever more privacy, will be increasingly untenable.

    There are increasing expectations that this is exactly what will happen with the European Union’s General Data Protection Regulation (GDPR). From the Wall Street Journal:

    Brussels wants its new General Data Protection Regulation, or GDPR, to stop tech giants and their partners from pressuring consumers to relinquish control of their data in exchange for services. The EU would like to set an example for legislation around the world. But some of the restrictions are having an unintended consequence: reinforcing the duopoly of Facebook Inc. and Alphabet Inc.’s Google…

    Digital advertising companies, known as ad tech firms, say Google and Facebook’s strict interpretation of GDPR squeezes their business. The ad tech firms embed their own technology in publishers’ websites and apps, putting them in competition with the tech giants. Unlike the giants, the ad tech firms have no direct relationship with consumers. They say Google’s and Facebook’s response pressures publishers to seek consent on behalf of dozens of ad tech firms that people have never heard of.

    This is hardly a surprise — I predicted this months ago. And, while GDPR advocates have pointed to the lobbying Google and Facebook have done against the law as evidence that it will be effective, that is to completely miss the point: of course neither company wants to incur the costs entailed in such significant regulation, which will absolutely restrict the amount of information they can collect. What is missed is that the increase in digital advertising is a secular trend driven first-and-foremost by eyeballs: more-and-more time is spent on phones, and the ad dollars will inevitably follow. The calculation that matters, then, is not how much Google or Facebook are hurt in isolation, but how much they are hurt relative to their competitors, and the obvious answer is “a lot less”, which, in the context of that secular increase, means growth.

    Privacy and Regulation

    There is a broader question from GDPR specifically and the idea that the tide is pushing towards walled gardens generally: what should the seemingly inevitable regulation of tech companies look like? It seems increasingly certain that privacy will be a major focus (it obviously already is in the European Union), but to stop there would be a mistake.

    Specifically, if an emphasis on privacy and the non-leakage of data is a priority, it follows that the platforms that already exist will be increasingly entrenched. And, if those platforms will be increasingly entrenched, then the more valuable might regulation be that ensures an equal playing field on top of those platforms. The reality is that an emphasis on privacy will only raise the walls of those gardens; regulation may be most fruitful if it also forecloses the possibility of unfair expansion by the platforms themselves.

    Note: I wrote a follow-up in the Daily Update that you can read in this footnote:5


    1. The picture is from his presentation 

    2. Because private keys are associated with devices, iMessage actually encrypts a single message multiple times, each time using the public key for a different recipient device 

    3. To be very clear, it is technically possible to layer encryption onto RCS, but it requires the cooperation of the carriers collectively and the addition of a trusted entity like the certificate authorities for HTTPS; the entire point, though, is that carriers refuse to do this. 

    4. An example of open-source software that is a closed service 

    5. So, I definitely messed up with yesterday’s article in a way none of you noticed; given that on Monday I wrote in-depth about Google’s new Chat initiative, I kind of skirted over the details in yesterday’s article, Open, Closed, and Privacy. Unfortunately, that meant I got a whole bunch of tweets and email from non-subscribers taking me to task for items, well, that I already explained (I didn’t get any from subscribers). The perils of paywalls!

      Probably the two biggest points of pushback were that Google could build an encrypted system if they wanted to (as I explained on Monday, they already tried, and they can’t really exercise Android leverage right now), and that carriers could build a federated key exchange system and/or something akin to the certificate authority framework that undergirds HTTPS. That is all true!

      My point, though — and the reality that Google had to accept, as The Verge feature explained — is that the carriers are not going to do that, full stop. The only way to achieve end-to-end encryption in the real world as it exists today is to build a separate centralized service that sits on top of phones (via apps) and runs over the Internet. To put it another way, Google wasn’t choosing whether to build an encrypted service or an open one; they were choosing whether to build something better than SMS or nothing at all.

      Now, does Google have a business interest in message content being unencrypted? I suppose, and as I noted on Monday, making Allo unencrypted by default was a bad look (although understandable for non-advertising related reasons, specifically the deep integration with Google Assistant). The truth, though, is that Google already knows plenty about everyone, especially those using Android. One could argue that Google didn’t fight hard enough for encryption, but to say the company actively didn’t want encryption isn’t quite right in my opinion.

      Still, the clarification is useful given the comparison I was trying to draw between encryption and privacy: just as one can, in theory, envision a standard that is both open and includes encryption (like HTTPS!), one can also envision a world where users truly own their data in a secure way and carry it from service-to-service. In reality, such systems are far more viable if built into the foundation of the technology (like HTTPS!) as opposed to being retrofitted over the objection of entrenched incumbents.

      Two more points of follow-up:

      • While I didn’t say so explicitly, I think I at least strongly implied on Monday that I would not expect Apple to support Chat. They certainly could — remember, this is basically SMS 2.0, and Apple obviously supports SMS — but it is difficult for me to imagine any scenario where Apple doesn’t hold its ground with the (very legitimate!) excuse that Chat is not encrypted. More importantly, it is even more difficult for me to see any way that carriers could exert leverage on Apple; their lack of leverage is why iMessage exists in the first place.
      • The blockchain is, of course, a theoretical solution, but as I’ve noted previously, the real blockchain upside with regards to this debate is the entire undoing of aggregators through decentralization. To be sure, that is by no means a sure thing, for many of the reasons laid out in this article, particularly the trade-off between a user experience that scales and such decentralization. Regardless, any such solution is quite a ways in the future.

      As for the final bit about regulation, stay tuned. It has been top-of-mind for a long time. 


  • Zillow, Aggregation, and Integration

    Last Friday something truly remarkable happened: a public company that had grown its valuation from $539 million to nearly $7 billion in seven years announced it was changing its business model. The company was Zillow, and the stock market quickly put a price on how big of a risk the company was taking; from CNBC:

    Zillow shares plunged 9 percent on Friday after the online real estate database company announced it will begin buying and selling homes, a capital-intensive endeavor. With Zillow’s new program, announced on Thursday, home sellers in the test markets of Phoenix and Las Vegas will be able to use Zillow’s platform to compare offers from potential buyers — and Zillow. When Zillow purchases a home, it will aim to quickly flip the home, making updates and repairs and listing it as soon as possible. An agent will represent Zillow in each transaction.

    “We’re entering that market and think we have huge advantages because we have access to the huge audience of sellers and buyers,” Zillow CEO Spencer Rascoff said on CNBC’s “Squawk Alley.” “After testing for a year in a marketplace model, we’re ready to be an investor in our own marketplace.”

    But investors are less enthusiastic. Flipping homes, a model that’s being utilized by start-up Opendoor, is very different than operating an internet marketplace. It carries additional risk associated with buying and selling homes and requires a hefty investment in operations. And it also potentially puts Zillow in direct competition with the realtors on its platform. Zillow sank $5, or 9.3 percent, to $48.77 as of mid-day on Friday, knocking more than $900 million off its stock market value.

    That’s a lot of money to bet on…well, what exactly? What kind of company is Zillow today, and what kind of company does it hope to be in the future?

    Zillow and Aggregation Theory

    Last fall I refined Aggregation Theory by Defining Aggregators. To quickly summarize, I wrote that Aggregators as a whole share three characteristics:

    • A direct relationship with users
    • Zero marginal costs to serve those users
    • Demand-driven multi-sided networks that result in decreasing acquisition costs

    This allows Aggregators to leverage an initial user experience advantage with a relatively small number of users into power over some number of suppliers, which come onto the platform on the Aggregator’s terms, enhancing the user experience and attracting more users, setting off a virtuous cycle of an ever-increasing user base leading to ever-increasing power over suppliers.

    Not all Aggregators are the same, though; they vary based on the cost of supply:

    • Level 1 Aggregators have to acquire their supply and win by leveraging their user base into superior buying power (e.g. Netflix).
    • Level 2 Aggregators do not own their supply but incur significant marginal costs in scaling supply (e.g. Airbnb or Uber).
    • Level 3 Aggregators have zero supply costs (e.g. App Stores or social networks)

    Where, then, does Zillow fit? It certainly has the hallmarks of an Aggregator: users go to Zillow directly to look for homes, Zillow incurs zero marginal costs to serve those users, and the company has created a two-sided market where its suppliers (home sellers) are incentivized to come onto the platform on Zillow’s terms in order to reach Zillow’s end users, thus making the platform more attractive to those end users.

    The question of supply is more complicated; in North America real estate listings are gathered in hundreds of local multiple listing services (MLSs) run by local realtor associations, and access is restricted to brokers in that local region. Redfin got access to those listings by becoming a broker itself, but Zillow, at least at the beginning, relied on brokers uploading listings themselves — which they were willing to do, given the user base Zillow had already built up, thanks in part to its Zestimate house valuation tool.

    This was Aggregation Theory in action: gain users with a new kind of user experience, then leverage that user base to get suppliers to come onto your platform on your terms, further improving the user experience. And, eventually, Zillow was able to parlay that user base into direct access to those MLS services, first via the owners of Realtor.com, and then, when they pulled the agreement, via local MLSs and brokers directly who understood how important it was to stay on Zillow.

    Interestingly, this means that Zillow arguably started out as a Level 3 Aggregator, and then stepped down to a hybrid of Level 1 and Level 2: cutting all of those deals is expensive, and the company does pay for the data, but it’s not exclusive by any means. And this, by extension, gets at why Zillow, despite having so many of the characteristics of an Aggregator, just doesn’t seem nearly as important as companies like Netflix or Airbnb or Facebook: it has accommodated itself to the real estate industry; it hasn’t transformed it.

    The Real Estate Media Company

    The first sentence in Zillow’s S-1 was its mission statement: “Our mission is to build the most trusted and vibrant home-related marketplace to empower consumers with information and tools to make intelligent decisions about homes.” In 2014, though, the company coined a new description for itself: a “real-estate media company.”

    The occasion was the purchase of Trulia: both companies made money selling ads to real estate agents eager to get their listings at the top of the two aggregators that were the top starting points for real estate searches; by emphasizing that they were both media companies, Zillow could claim it had many competitors and wasn’t competitive with real estate agents, all at the same time.

    It also had the benefit of being true (until last week). The real estate business in North America has long been an expensive quagmire, for reasons I laid out when Zillow bought Trulia:

    • While real estate transactions in the aggregate are very frequent, for individual buyers and sellers they are very rare. Thus there is little incentive to push for a simpler solution.
    • A real estate transaction is usually the largest transaction most buyers and sellers will undertake, which makes them very risk averse and unwilling to try an unconventional service.
    • There is a lot of regulation and paperwork associated with a real estate transaction, where assistance is very valuable. And, as just noted, transactions are rare, which means there is little incentive to learn how to deal with said regulations and paperwork on your own.

    Combine the reticence of consumers to push for change with the local realtor association-controlled MLSs, and a willingness by realtors to punish anyone changing the status quo (by not showing a house, or pointing out flaws that would kill a sale), and the best outcome for Zillow was to be an aggregator but not an integrator: the company was completely removed from the purchase process.

    Integration and Aggregation

    This gets at why Zillow, for all of its success, seems so underwhelming compared to other Aggregators. One of the key theories underpinning Aggregation Theory is Clayton Christensen’s Conservation of Attractive Profits, which I explored in the context of Netflix while developing the theory:

    The Law of Conservation of Attractive Profits1 [was] first explained by Clayton Christensen in his 2003 book The Innovator’s Solution:

    Formally, the law of conservation of attractive profits states that in the value chain there is a requisite juxtaposition of modular and interdependent architectures, and of reciprocal processes of commoditization and de-commoditization, that exists in order to optimize the performance of what is not good enough. The law states that when modularity and commoditization cause attractive profits to disappear at one stage in the value chain, the opportunity to earn attractive profits with proprietary products will usually emerge at an adjacent stage.

    That’s a bit of a mouthful, but the example that follows in the book shows how powerful this observation is:

    If you think about it in a hardware context, because historically the microprocessor had not been good enough, then its architecture inside was proprietary and optimized and that meant that the computer’s architecture had to be modular and conformable to allow the microprocessor to be optimized. But in a little hand held device like the RIM BlackBerry, it’s the device itself that’s not good enough, and you therefore cannot have a one-size-fits-all Intel processor inside of a BlackBerry, but instead, the processor itself has to be modular and conformable so that it has on it only the functionality that the BlackBerry needs and none of the functionality that it doesn’t need. So again, one side or the other needs to be modular and conformable to optimize what’s not good enough.

    Did you catch that? That was Christensen, a full four years before the iPhone, explaining why it was that Intel was doomed in mobile even as ARM would become ascendant.2 When the basis of competition changed away from pure processor performance to a low-power system, the chip architecture needed to switch from being integrated (Intel) to being modular (ARM), the latter enabling an integrated BlackBerry then, and an integrated iPhone four years later.3

    The PC is a modular system whose integrated parts earn all the profit. BlackBerry (and later the iPhone), on the other hand, was an integrated system that used modular pieces. Do note that this is a drastically simplified illustration.

    More broadly, breaking up a formerly integrated system — commoditizing and modularizing it — destroys incumbent value while simultaneously allowing a new entrant to integrate a different part of the value chain and thus capture new value.

    Commoditizing an incumbent’s integration allows a new entrant to create new integrations — and profit — elsewhere in the value chain.

    This is exactly what is happening with Airbnb, Uber, and Netflix too.

    This is the original piece of Aggregation Theory that was missing from last year’s Defining Aggregators: it is one thing to sit on top of an existing industry and, well, be a media company/lead generation tool. There have been a whole host of businesses that did exactly that, and while there is plenty of money to be made, without some sort of integration into the value chain of the industry itself they simply aren’t transformative. To put it another way, aggregation doesn’t transform value chains; integration does.

    Why aggregation matters is that it is the means by which new integrations are achieved:

    • Netflix leveraged its position as an aggregator of video content into the integration of the customer relationship and content creation, undoing the integration of linear channels and content creation
    • Airbnb/Uber and other similar services integrate the customer relationship with the driver/homeowner relationship, undoing the integration of cars/property with payment
    • Google and Facebook integrated content discovery with advertising, undoing the integration of editorial and advertising

    More broadly — and this really gets at why Zillow is different — Aggregators that change industries (including Aggregator-like Amazon and Apple that deal with physical goods) integrate the customer relationship with however it is their industry generates revenue; Zillow, on the other hand, was completely divorced from the home selling-and-buying process.

    The Threat to Zillow — and the Opportunity

    Again, not all companies need to be Aggregators, and as I noted at the beginning, Zillow has become a very successful company by getting half-way there. And, to return to that Daily Update about their purchase of Trulia, I didn’t think it was even possible for them to go all the way:

    So then, perhaps this deal isn’t anticompetitive, but rather the key to building a company big enough to finally shake up the homebuying process? That’s Brad Stone’s argument in Bloomberg Businessweek…But remember, Zillow/Trulia are marketing tools; who is paying for that tool? Stone has the answer in the next paragraph:

    The companies, which rely on advertising from real estate agents for the bulk of their revenues, are being careful about how they discuss the future of their combined efforts.

    What Stone characterizes as “careful” I characterize as “prudent” and “truthful”, because let’s be honest: Zillow/Trulia are not going to bite the hand that feeds them. Nor should they! It would be irresponsible to their shareholders, employees, and all their other stakeholders. It’s very easy to fantasize about disruption; it’s much more productive to simply follow the money. (This is why Redfin is the more interesting company in this space; they use their own network of real estate agents. It’s also why they are much smaller, despite having had a head start.)

    This is why last week’s news was such a surprise, to me anyways; granted, Zillow had been experimenting with facilitating sales to investors, but to fundamentally change your capital structure and margin profile, and to compete with your customers, all in one fell swoop feels like something else entirely — and Wall Street agreed!

    I can, though, see where Zillow is coming from: no one thinks the North American real estate market is the way it is because that is somehow optimal or good for consumers; the only folks that benefit from the status quo are real estate agents that continue to collect 6% of the purchase price even as their responsibilities, particularly in the case of the buying agent, run in the opposite direction of their incentives. Zillow did well to capture a portion of that 6% for itself through its realtor ad model, but that only meant that Zillow was as dependent on the status quo as the realtors.

    To be sure, Zillow has long been a better bet than Redfin, which has admirably IPO’d with a business that basically adds a tech layer (and thus superior lead generation) to a traditional real estate agency; the reality is that simply adding a tech layer doesn’t change industries — that requires new business models. This, though, is where Opendoor, the startup I wrote about in 2016, is compelling: buying houses with the click-of-a-button solves a major problem for sellers, the most disadvantaged party in the entire value chain under the status quo (and thus the most open to something new). And, by definition, it means the company (and competitors like OfferPad) are involved with the transaction that drives the value chain — the actual buying and selling of homes.

    Make no mistake, the business model is risky, but that is another way of saying the potential return is massive as well: truly becoming a market maker for an industry that does $900 billion worth of transactions every year has massive upside. And, by extension, massive downside for the status quo — which again, includes Zillow. That is one reason to act.
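
    To put a rough number on that upside, here is a back-of-the-envelope sketch using the two figures cited above — $900 billion in annual transactions and the standard 6% commission. The calculation is purely illustrative:

    ```python
    # Back-of-the-envelope: the commission pool implied by the figures above.
    # Assumptions: $900 billion in annual home-sale transactions, 6% total commission.
    annual_transactions = 900e9   # $900 billion in home sales per year
    commission_rate = 0.06        # the standard 6% agent commission

    commission_pool = annual_transactions * commission_rate
    print(f"Annual commission pool: ${commission_pool / 1e9:.0f} billion")
    # → Annual commission pool: $54 billion
    ```

    A roughly $54 billion annual pool is the prize: capturing even a few percentage points of it as a market maker would dwarf Zillow’s existing advertising revenue.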

    Even so, that might not have been enough for Zillow to make such a shift: remember, this is a public company accountable to shareholders, and sometimes doubling down is the most prudent course of action. That, though, is why I spent so much time discussing integration: there is a massive amount of upside for Zillow in this move as well.

    Remember, Zillow is in nearly every respect already an Aggregator: it is by far the number one place people go when they want to look for a new house, and at a minimum the starting point for research when they want to sell one. They own the customer relationship! What has always been missing is the integration with the purchase itself — until last week. Zillow is making a play to be a true Aggregator — one that transforms its industry by integrating the customer relationship with the most important transaction in its respective value chain — by becoming directly involved in the buying and selling of houses.

    The Zillow Experiment

    This absolutely could go sideways: Zillow is already being hammered in the stock market — investors aren’t generally fans of high-margin companies entering low-margin businesses, with huge amounts of volatility risk to boot. Moreover, Zillow is embracing a model that, should it be successful, tears down the status quo: this will not only enrage Zillow’s customers, but also endanger Zillow’s primary revenue stream.

    Here, though, Zillow’s status as an almost-Aggregator looms large: we now have years’ worth of evidence that realtors will do what it takes to ensure their listings appear on Zillow, because Zillow controls end users. It very well may be the case that realtors will find themselves with no choice but to continue giving Zillow the money the company needs to disrupt their industry.

    I will certainly be watching closely: how Zillow fares will result in lessons that may be applicable broadly. Think of Spotify, for example: I was a bit bearish on the company last month because of the power of Spotify’s suppliers; the bull case is that Spotify’s ownership of the customer relationship will allow the company to build out the capability to sidestep the record labels even as the record labels can’t punish Spotify because they need them. That’s exactly what Zillow is testing right now: just how much power comes from being an Aggregator, and how much an industry can be transformed when that power is wielded.


    1. Later renamed the Law of Conservation of Modularity 

    2. I have my differences with Christensen, but as I’ve said repeatedly my criticism comes from an attempt to build on his brilliant work, not tear it down 

    3. As I’ve noted, the iPhone is in fact modular at the component level; the integration is between the completed phone and the software. Not appreciating that the point of integration (or modularity) can be anywhere in the value chain is, I believe, at the root of a lot of mistaken analysis about the iPhone in particular 


  • The Facebook Current

    “I thought something was going to get done,” lamented a friend, in reference to yesterday’s Senate hearing that featured a single witness: Facebook Founder and CEO Mark Zuckerberg. “This was the moment of reckoning, but it just turned out to be a whimper — it’s just for show.”

    The sentiment seemed widespread on tech and media Twitter: there was a lack of specificity in terms of questions about privacy (this allowed Zuckerberg to turn nearly every question about the ownership of data to a discussion about user interface controls that limit where data is shown to other Facebook users), plenty of dodged questions (every time there was a question about the data Facebook generates about users beyond what they themselves enter into the system Zuckerberg needed to “check with his team”), and bad questions that presumed Facebook sells data, letting Zuckerberg run out the clock at least three times by explaining the basics of Facebook’s business model (this is precisely why I have been so outspoken about the problem of perpetrating this falsehood: it lets Facebook off the hook).

    In fact, though, I thought the hearing was quite revelatory — a “show”, if you will. First, the fact that Zuckerberg appeared at all is the most meaningful news; the nature of the American political system is that changes happen extremely gradually, and only then in response to fundamental shifts in underlying political opinion. This can certainly be frustrating if one wants faster change — or a relief if one fears those in power — but that is precisely why Zuckerberg’s appearance was noteworthy: there is a current moving against Facebook, and while it is not realistic to expect that current to already be a wave, it was strong enough to sweep him to Washington D.C. for the week.1

    Secondly — and count this as another indication that that current is stronger than it seems — there was a significant amount of agreement amongst the Senators in yesterday’s hearings that something needed to be done about Facebook. Forget the specifics, for a paragraph, because this is a notable development: while these hearings usually devolve into partisan cliches with the same talking points — Democrats want regulations, and Republicans don’t — yesterday Senators from both sides of the aisle expressed unease with Facebook’s handling of private data; obviously Democrats tried to tie the issue to the last election, but that made the Republicans’ shared concern all-the-more striking.

    Here is where the partisan divide does matter: the most important takeaway from yesterday’s hearing was the emergence of two distinct viewpoints on what the problem with Facebook actually is, and what to do about it. That these two viewpoints are in opposition is precisely why their emergence is so compelling: a current has to be very strong indeed for there to be two clearly articulable sides.

    Viewpoint One: Facebook Needs Regulation

    OK, so maybe one of the viewpoints fits the partisan cliche, but the idea that Facebook might need regulation was a frequent talking point, particularly from Democrats pushing already-proposed legislation. After detailing how, in his view, Facebook violated its 2011 Consent Decree with the FTC, Senator Richard Blumenthal distilled this viewpoint to its essence here:

    Senator Blumenthal: What happened here was willful blindness. It was heedless and reckless and in fact amounted to a violation of the FTC consent decree. Would you agree?

    Mark Zuckerberg: No, Senator. My understanding is not that this was a violation of the consent decree. But as I have said a number of times today, I think we need to take a broader view of our responsibility around privacy than just what is mandated in the current laws.

    SB: Well here is my reservation Mr. Zuckerberg…we’ve seen the apology tours before. You have refused to acknowledge even an ethical obligation to have reported this violation of the FTC consent decree, and we have letters, we’ve had contacts with Facebook employees…that indicates not only a lack of resources but lack of attention to privacy. And so, my reservation about your testimony today is that I don’t see how you can change your business model unless there are specific rules of the road. Your business model is to monetize user information, to maximize profit over privacy, and unless there are specific rules and requirements — enforced by an outside agency — I have no assurance that these kinds of vague commitments are going to produce action.

    This view is clearly gaining traction in certain political circles. For example, here is Matthew Yglesias in Vox:

    Online social networks obviously pose some novel legal and regulatory issues. But broadly speaking, the question of how to ensure that companies discharge their responsibilities is not a brand new one. Companies involved in the provision of health care are responsible — not just morally but legally and financially — to abide by the terms of the Health Insurance Portability and Accountability Act of 1996. That law hasn’t eliminated all privacy violations in the health care space, by any means, but when violations occur, they are punished, and the punishment gives actors in that space real reason to avoid them. Financial institutions, similarly, must comply with the privacy rules set out in the Gramm-Leach-Bliley Act. GLBA compliance has thus become its own somewhat tedious mini industry, with lawyers and specialized GLBA compliance firms you can hire…

    Once upon a time, the US government wisely believed that it would be a bad idea to subject promising young internet startups to the bureaucratic morass involved in things like HIPAA or GLBA compliance. But the young internet startups are all grown up now, and can easily afford to hire vast armies of lawyers and compliance experts who will help them avoid breaches that lead to massive fines. There is no longer a need to treat Facebook like a delicate flower whose agility will vaporize if it is held legally accountable for its actions.

    That means disclosure rules for advertising, it means financial consequences for privacy violations, it means firm antitrust action to restrain further acquisitions and try to uphold some semblance of competition in this marketplace, and it means taking a close look at whether the development of ever more sophisticated ad targeting algorithms is being done in a way that serves the public’s interest in creating a robust media infrastructure.

    What is worth noting is the extent to which Zuckerberg was open to, if not something as specific as Yglesias’ proposal, regulation of some sort. Zuckerberg told Senator Dan Sullivan:

    I’m not the type of person who thinks that all regulation is bad, so I think the Internet is becoming increasingly important in people’s lives, and I think we need to have a full conversation about what is the right regulation, not whether it should be or shouldn’t be.

    This isn’t a surprise: Zuckerberg said in his opening remarks that Facebook was “going through a broader philosophical shift in how we approach our responsibility as a company”, which he meant as an indication that the company would be taking more responsibility, but which could easily be interpreted as the company locking the doors to its closed garden and throwing away the key. In this, regulation is actually helpful, a point made by Senator Sullivan in response to Zuckerberg’s statement:

    Senator Sullivan: One of my worries on regulation with a company of your size saying “Hey, we might be interested in being regulated”, but as you know, regulations can also cement the dominant power. So what do I mean by that? You have a lot of lobbyists, I think every lobbyist in town is involved in this hearing in some way or another, a lot of powerful interests. You look at what happened with Dodd-Frank: that was supposed to be aimed at the big banks, the regulations ended up empowering the big banks and keeping the small banks down. Do you think that that’s a risk given your influence that if we regulate, we’re actually going to regulate you into a position of cemented authority, when one of my biggest concerns about what you guys are doing is that the next Facebook, which we all want, the guy in the dorm room, we all want that to be started, that you are becoming so dominant that we’re not able to have that next Facebook? What are your views on that?

    MZ: Senator I agree with the point that when you’re thinking through regulation across all industries you need to be careful that it doesn’t cement in the current companies that are winning…I think part of the challenge with regulation in general is that when you add more rules that companies need to follow, that’s something that a larger company like ours inherently just has the resources to go do, and that might just be harder for a company getting started to comply with.

    That Sullivan, a Republican, would be suspicious of regulation is hardly a surprise — that’s the cliche I referenced above. There’s more context to Sullivan’s comments though: he hinted at an alternative to regulation.

    Viewpoint Two: Facebook is Too Big

    Here is Sullivan’s lead-up to Zuckerberg’s embrace of regulation quoted above:

    Your testimony, you have talked about a lot of power, you’ve been involved in elections, I thought your testimony was very interesting, really all over the world, 2 billion users, over 200 million Americans, $40 billion in revenue, I believe you and Google have almost 75% of the digital advertising in the U.S. One of the key issues here: is Facebook too powerful? Are you too powerful?…

    When you look at the history of this country, and you look at the history of these kinds of hearings…when companies become big and powerful and accumulate a lot of wealth and power, what typically happens from this body is there’s an instinct to either regulate or break-up. Look at the history of this nation. Do you have any thoughts on those two policy approaches?

    No wonder Zuckerberg was so eager to talk about regulation: it’s not simply that it benefits incumbents, it’s that it is a whole lot more attractive than discussing a potential break-up!

    Note, though, that Sullivan wasn’t alone in pushing this idea that Facebook might be too big (a sentiment that Senator John Kennedy also raised last fall). The most fascinating Republican line of questioning came from Senator Lindsey Graham:

    Senator Graham: Who’s your biggest competitor?

    MZ: We have a lot of competitors.

    SG: Who’s your biggest?

    MZ: Hmm, I think the categories — did you want just one? I’m not sure I could give one — could I give a bunch?

    SG: Uh-huh.

    MZ: So there are three categories I would focus on. One are the other tech platforms, so Google, Apple, Amazon, Microsoft. We overlap with them in different ways.

    SG: Do they provide the same service you provide?

    MZ: Uhm, in different ways, different parts of it, yes.

    SG: Let me put it this way. If I buy a Ford and it doesn’t work well and I don’t like it, I can buy a Chevy. If I’m upset with Facebook, what’s the equivalent product that I can go and sign up for?

    MZ: Well, the second category that I was going to talk about…

    SG: I’m not talking about categories. What I’m talking about is real competition that you face, because car companies face a lot of competition, that if they make a defective car, it gets out in the world, people stop buying that car and buy another one. Is there an alternative to Facebook in the private sector?

    MZ: Yes Senator. The average American uses eight different apps to communicate with their friends and stay in touch with people, ranging from texting to email…

    SG: That is the same service you provide?

    MZ: Well we provide a number of different services.

    SG: Is Twitter the same as what you do?

    MZ: It overlaps with a portion of what we do.

    SG: You don’t think you have a monopoly?

    MZ: It certainly doesn’t feel like that to me!

    SG: So it doesn’t. So, Instagram, you bought Instagram, why did you buy Instagram?

    MZ: Because they were very talented app developers who were making good use of our platform and understood our values.

    SG: It was a good business decision. My point is that one way to regulate a company is through competition, another is through government regulation. Here’s the question all of us have to answer: what do we tell our constituents, given what’s happened here, why we should let you self-regulate? What would you tell people in South Carolina, that given all the things we just discovered here, is a good idea for us to rely upon you to regulate your own business practices?

    Zuckerberg quickly articulated that he would be in favor of regulation, using much the same language he would return to later in his response to Senator Sullivan, but the implication of Graham’s line of questioning was more profound than that: perhaps the real problem is the monopolistic nature of the company, because the normal checks that come from competition were missing.

    This is, I would note, quite consistent with the skepticism about regulation voiced by Senator Sullivan: if the concern is that a bunch of rules limit competition, then a better response, if there must be one, would seek to empower competition by undoing the monopoly entirely.

    The Shifting Debate

    The most likely outcome of Facebook’s current scandal continues to be that nothing will happen, for all of the inherent lethargy in our political system noted above. And, if something does, European-style data regulation seems the more likely outcome, as I noted last month. No wonder Facebook’s stock was up after the hearing!

    It’s worth keeping in mind, though, that because Facebook is so dominant, the question of its governance is ultimately a political question, and to that end the shifts in the terms of debate, if not yet its outcome, have been striking. Zuckerberg is in Washington D.C., everyone says something must be done, and critically, both sides have ideas about what that should be; while this certainly may be mostly a Facebook problem, the rest of the industry should take note.


    1. Zuckerberg will testify again later today, this time in front of the House of Representatives’ Energy and Commerce Committee 


  • The End of Windows

    The story of Windows’ decline is relatively straightforward and a classic case of disruption:

    • The Internet dramatically reduced application lock-in
    • PCs became “good enough”, elongating the upgrade cycle
    • Smartphones first addressed needs the PC couldn’t, then over time started taking over PC functionality directly

    What is more interesting, though, is the story of Windows’ decline in Redmond, culminating with last week’s reorganization that, for the first time since 1980, left the company without a division devoted to personal computer operating systems (Windows was split, with the core engineering group placed under Azure, and the rest of the organization effectively under Office 365; there will still be Windows releases, but it is no longer a standalone business). Such a move didn’t seem possible a mere five years ago, when, in the context of another reorganization, former-CEO Steve Ballmer wrote a memo insisting that Windows was the future (emphasis mine):

    In the critical choice today of digital ecosystems, Microsoft has an unmatched advantage in work and productivity experiences, and has a unique ability to drive unified services for everything from tasks and documents to entertainment, games and communications. I am convinced that by deploying our smart-cloud assets across a range of devices, we can make Windows devices once again the devices to own. Other companies provide strong experiences, but in their own way they are each fragmented and limited. Microsoft is best positioned to take advantage of the power of one, and bring it to our over 1 billion users.

    That memo prompted me to write a post entitled Services, Not Devices that argued that Ballmer’s strategic priorities were exactly backwards: Microsoft’s services should be businesses in their own right, not Windows’ differentiators. Ballmer, though, followed through on his memo by buying Nokia; it speaks to Microsoft’s dysfunction that he was allowed to spend billions on a deal that allegedly played a large role in his ouster.

    That dysfunction was The Curse of Culture:

    Culture is not something that begets success, rather, it is a product of it. All companies start with the espoused beliefs and values of their founder(s), but until those beliefs and values are proven correct and successful they are open to debate and change. If, though, they lead to real sustained success, then those values and beliefs slip from the conscious to the unconscious, and it is this transformation that allows companies to maintain the “secret sauce” that drove their initial success even as they scale. The founder no longer needs to espouse his or her beliefs and values to the 10,000th employee; every single person already in the company will do just that, in every decision they make, big or small.

    As with most such things, culture is one of a company’s most powerful assets right until it isn’t: the same underlying assumptions that permit an organization to scale massively constrain the ability of that same organization to change direction. More distressingly, culture prevents organizations from even knowing they need to do so.

    Thus my assertion at the top, that the story of how Microsoft came to accept the reality of Windows’ decline is more interesting than the fact of Windows’ decline; this is how CEO Satya Nadella convinced the company to accept the obvious.

    The Easy Win: Office on iPad

    A month after taking over as CEO, Nadella introduced Office for iPad. Quite obviously, given the timing, the work had been done under Ballmer; some reports suggest the initiative in fact started years previously. Ballmer, though, wouldn’t release it until there was a touch version for Windows 8; some wonder if he would have ever released it at all.

    It’s all a bit of a moot point; in the end Ballmer’s delay gave Nadella an easy win that symbolized the exact shift in mindset Microsoft needed: non-Windows platforms would be targets for Microsoft services, not competitors for Windows.

    That wasn’t the only news that week: Microsoft also renamed its cloud service from Windows Azure to Microsoft Azure. The name change was an obvious one — by then customers could already run a whole host of non-Windows related software, including Linux — but the symbolism tied in perfectly with the Office on iPad announcement: Windows wouldn’t be forced onto Microsoft’s future.

    The Demotion: Nadella’s First Strategy Memo

    It was another three months before Nadella wrote his first company-wide strategy memo explicitly departing from his predecessor:

    More recently, we have described ourselves as a “devices and services” company. While the devices and services description was helpful in starting our transformation, we now need to hone in on our unique strategy. At our core, Microsoft is the productivity and platform company for the mobile-first and cloud-first world. We will reinvent productivity to empower every person and every organization on the planet to do more and achieve more.

    What is striking about this articulation of “productivity and platforms” is that it is exactly how Nadella reorganized the company last week; the “Experiences & Devices” team is focused on end-user productivity, while the “Cloud + AI” team is all about building the platform of the future. The reason it took so long is the point of this article — Nadella had a Windows problem.

    To that end, the most important aspect of Nadella’s memo was not what he said about Windows, but where he said it. I wrote in a Daily Update breaking down the memo:

    Trust me when I say demoting Windows all the way to this point in the letter is a dramatic shift. Remember, it wasn’t that long ago that Steve Ballmer said “Nothing is More Important at Microsoft than Windows”; Nadella not even mentioning the OS for the first 2,000 words sends a very different message. Similarly, spending nothing more than a sentence on Surface and Nokia — in the entire email, the word “Surface” appears twice and “Nokia” once — makes it as clear as can be that neither is the future.

    This was the next step after the initial symbolism of Office on iPad and the Azure name change: actually articulating a future where Windows didn’t matter.

    The Retreat: Love Windows

    Nadella, though, had a short-term problem: Microsoft’s most important customers — enterprises — hated Windows 8. The operating system may not have been Microsoft’s future, but it was still a massive cash cow, and the linchpin for all of Microsoft’s legacy products. To that end the company needed Windows 10 to get out the door sooner-rather-than-later.

    This, I think, is the context for Nadella’s presentation at a January 2015 event about Windows 10, where he said:

    We absolutely believe that Windows is home for the very best of Microsoft experiences. There’s nothing subtle about this strategy. It’s a practical approach which is customer first. We want to give ourselves the best opportunity to serve our customers everywhere and give ourselves the best chance to help customers find Windows as their home. That’s what we plan to do…We need to move from people needing Windows to choosing Windows to loving Windows…We want to make Windows 10 the most loved release of Windows.

    At the time I was very disappointed; the insistence that Microsoft experiences needed to be “best” on Windows implied that Windows was dictating the direction of Microsoft services. A few months later, though, once Windows 10 shipped, Nadella made clear this was only a temporary retreat.

    The Quarantine: Nadella’s First Reorganization

    That summer Nadella undertook his first reorganization, separating the company into three divisions: Cloud and Enterprise, Applications and Services, and Windows and Devices. I wrote in a Daily Update:

    This explicitly undoes Ballmer’s ill-considered reorganization from a divisional company to an allegedly functional organization. At the time Ballmer wrote:

    We are rallying behind a single strategy as one company — not a collection of divisional strategies…

    This was exactly wrong: by that point Microsoft had already lost the devices war and needed to focus on services that worked on iOS and Android. A “One Microsoft” strategy, on the other hand, kept all of those services subservient to Windows. However, with this new reorganization, Windows is off in the corner where it belongs, leaving the Cloud and Enterprise team and Applications and Services Group free to focus on building their businesses on top of all platforms.

    I believe this reorganization was the turning point: not only were the two teams Nadella announced last week basically formed at this time, but more importantly, Windows was left to fend for itself.

    The Inception: The Death of Windows Phone

    Nadella’s most impressive bit of jujitsu was how he killed Windows Phone; while the platform had obviously been dead in the water for years, Nadella didn’t imperiously axe the program. Instead, by isolating Windows, he let the division’s leadership come to that conclusion on their own.

    Naturally, departing Windows-head Terry Myerson blamed the rest of the company, stating, “When I look back on our journey in mobility, we’ve done hard work and had great ideas, but have not always had the alignment needed across the company to make an impact.” I wrote at the time:

    This is such an utterly clueless explanation of why Windows Phone failed that it’s kind of stunning. Until, of course, you remember the culture-induced myopia I described yesterday: Myerson still has the Ballmer-esque presumption that Microsoft controlled its own destiny and could have leveraged its assets (like Office) to win the smartphone market, ignoring that by virtue of being late Windows Phone was a product competing against ecosystems, which meant no consumer demand, which meant no developers, topped off by the arrogance to dictate to OEMs and carriers what they could and could not do to the phone, destroying any chance at leveraging distribution to get critical mass…

    Interestingly, though, Myerson’s ridiculous assertion in a roundabout way shows how you change culture…In this case, Nadella effectively shunted Windows to its own division with all of the company’s other non-strategic assets, leaving Myerson and team to come to yesterday’s decision on their own. Remember, Nadella opposed the Nokia acquisition, but instead of simply dropping the axe on day one, thus wasting precious political capital, he hung the Windows team out to dry, letting them give it their best shot and come to that conclusion on their own.

    Nadella did the same thing with Windows proper: when Windows 10 launched Myerson claimed that the operating system would be on 1 billion devices by mid-2018; the company had to walk that back a year later, not because Nadella said so, but because the market did.

    The Division: The End of Windows

    And so we reach last week’s announcements: the Windows division is no more. It is an incredibly meaningful milestone, yet anticlimactic at the same time, thanks to Nadella’s careful management. It is worth noting, though, that Nadella had one critical ally in this journey: Wall Street.

    Microsoft’s stock price since Satya Nadella became CEO.

    If culture flows from success, then it follows that an attempt to change culture is far easier to accomplish when the most obvious indicator of success — one that has a direct impact on employee pocketbooks — is moving up-and-to-the-right. What is fascinating to consider, though, is that Microsoft’s stock is up not only because the company has a vision that it is delivering on quarter-after-quarter, but also because the stock was depressed in the first place.

    To put it another way, Nadella’s shift to a post-Windows Microsoft is the right one; to have done the same a decade sooner would have been better. It also, though, may have been impossible: Windows was still the biggest part of the business, and it’s not clear the markets would have tolerated an explicit shift before it was painfully obvious it was necessary. Without a rising stock price, Nadella’s mission would have been far more difficult.

    The Future: Why Microsoft?

    It’s important to note that Windows persisted as the linchpin of Microsoft’s strategy for over three decades for a very good reason: it made everything the company did possible. Windows had the ecosystem and the lock-in, and provided the foundation for Office and Windows Server, both of which were built with the assumption of Windows at the center.

    Office 365 and Azure are comparatively weaker strategically: Office 365 has document lock-in, but the exact same forces that weakened Windows in the first place weaken the idea of documents as well. It’s not clear why new companies in particular would even care. Azure, meanwhile, is chasing AWS, with a huge amount of business coming from Linux VMs that could run anywhere.

    Unsurprisingly, both are still benefiting from Windows: Office 365 really does, as Nadella noted in his retreat, work better on Windows, and vice versa; it is seamless for organizations that have been using Office for years to move to Office 365. Azure’s biggest advantage, meanwhile, is that it allows for hybrid deployments, where workloads are split between legacy on-premise Windows servers and Azure’s public cloud; that legacy was built on Windows.

This, then, is Nadella’s next challenge: to understand that Windows is not and will not drive future growth is one thing; identifying future drivers of said growth is another. Even in its division, Windows remains the best thing Microsoft has going — it had such a powerful hold on Microsoft’s culture precisely because it was so successful.


  • Stratechery 4.0

Five years ago last Sunday, I launched Stratechery 1.0 with a picture of sailboats:1

    A screenshot of Stratechery 1.0

    A simple image. Two boats, and a big ocean. Perhaps it’s a race, and one boat is winning — until it isn’t, of course. Rest assured there is breathless coverage of every twist and turn, and skippers are alternately held as heroes and villains, and nothing in between.

    Yet there is so much more happening. What are the winds like? What have they been like historically, and can we use that to better understand what will happen next? Is there a major wave just off the horizon that will reshape the race? Are there fundamental qualities in the ships themselves that matter far more than whatever skipper is at hand? Perhaps this image is from the America’s Cup, and the trailing boat is quite content to mirror the leading boat all the way to victory; after all, this is but one leg in a far larger race.

    It’s these sort of questions that I’m particularly keen to answer about technology. There are lots of (great!) sites that cover the day-to-day. And there are some fantastic writers who divine what it all means. But I think there might be a niche for context. What is the historical angle on today’s news? What is happening on the business side? Where is value being created?

    Since then I have written 308 Weekly Articles and 659 Daily Updates (and recorded 159 podcasts) answering exactly those questions, and, thankfully, have managed to create some value of my own: in 2014 I launched the Daily Update and have been supported by subscriptions ever since.

    For a long time, though, I have wished Stratechery did a better job of providing value not just through daily emails and posts, but to the new user stumbling across the site for the first time, or the long-time reader hoping to find that one post they remember reading. This update is all about those two use cases — and yes, a new logo and visual refresh.

    Explore Stratechery

    There are now three ways to explore Stratechery:

    Concepts: The Concepts page distills Stratechery’s archive into seven categories:

    • Aggregation Theory
    • Disruption Theory
    • Incentives
    • Media
    • Strategy and Product Management
    • Technology and Society
    • The Evolution of Technology

    Each category has five or so sub-categories, each with a selection of relevant Stratechery articles from the last five years. This is the best place to start if you are new to Stratechery.2

    Companies: The Companies page lets you quickly jump to a specific archive page for every company I have written about on Stratechery (there have been 309 of them!). Full disclosure: this section isn’t completely finished — soon every company will have featured articles that I consider my most important work about the company (right now the top eight by post count do). For now, here is what Apple looks like:

    Topics: The Topics page is just like the Companies page, but about, well, topics! Things like earnings, or cryptocurrencies, or Taylor Swift (and Kanye West!). Right now there are 121 topics in Stratechery’s taxonomy.

    Search Stratechery

    The second major addition to Stratechery is dramatically improved search, powered by Algolia. Better indexing is certainly the most important feature, but there are others:

Autocomplete: The search box in the sidebar will now autocomplete as you type, taking a first crack at getting you the exact article you were looking for. In addition, you can quickly jump to the relevant Concept, Company, or Topic page:

    Instant Search: Once on the search page you can get results instantly, helping you quickly iterate on your search terms without waiting for a refresh, all with typo-tolerance and synonym search.

    Facets: You can filter search results (or simply all posts) by:

    • Category (Articles, Daily Updates, Podcasts)
    • Company
    • Topic
    • Concept

    This should make it far easier to find that post you remember reading way back when.
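Conceptually, the facet filtering described above works by intersecting, for each selected filter, the set of posts carrying that facet value. Here is a minimal sketch of that idea; the post titles, facet values, and function names are hypothetical illustrations, not Stratechery’s actual Algolia index:

```python
# A minimal, illustrative sketch of faceted filtering as search services
# implement it conceptually. All posts and facet values here are hypothetical.
from collections import defaultdict

posts = [
    {"title": "Aggregation Theory",  "category": "Articles",      "company": "Google"},
    {"title": "The End of Windows",  "category": "Articles",      "company": "Microsoft"},
    {"title": "Microsoft earnings",  "category": "Daily Updates", "company": "Microsoft"},
    {"title": "Exponent 151",        "category": "Podcasts",      "company": "Apple"},
]

def build_facet_index(docs, facet_fields):
    """Map each (field, value) pair to the set of matching document ids."""
    index = defaultdict(set)
    for doc_id, doc in enumerate(docs):
        for field in facet_fields:
            index[(field, doc[field])].add(doc_id)
    return index

def facet_search(docs, index, filters):
    """Intersect the posting sets for every requested facet filter."""
    ids = set(range(len(docs)))
    for field, value in filters:
        ids &= index[(field, value)]
    return [docs[i]["title"] for i in sorted(ids)]

index = build_facet_index(posts, ["category", "company"])
print(facet_search(posts, index, [("category", "Articles"), ("company", "Microsoft")]))
# ['The End of Windows']
```

Real search engines layer typo-tolerance and ranking on top, but the filtering step is essentially this set intersection, which is why combining facets narrows results instantly.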

    A New Logo and Visual Refresh

    I am tempted to say this is the least important feature, but after all of the time I just spent reading through my archives, I know that little things like logos and look-and-feel matter just as much as the words on the page. To that end, I am extremely excited about Stratechery’s new logo:

Designed by Brad Ellis of Tall West, the new mark represents Stratechery’s emphasis on writing, the focus on technology, and, of course, my drawings.3 The Archer typeface is a callback to Stratechery’s original Courier, and the feeling of a typewriter. I’m extremely excited, and hope you are as well.

    In addition, there are now related articles under posts, the all-caps headlines are gone, and the sidebar (drop-down menu on mobile) has been reconfigured. Frankly, I remain very happy with the rest of the site: that is how good of a job Philip Arthur Moore did when he re-built my original version from scratch in 2015; he did an equally fantastic job on 4.0.

    New Email

    In addition, today I am launching a new template for Stratechery emails. The most apparent change for readers will be a new logo (of course), and a general clean-up of the layout. What is far more important, though, is what readers won’t see. Allow me a quick backstory:

    For the first few years of the Daily Update I used a modified MailChimp template; it was functional, and most importantly, it was very easy to prepare and send emails. Unfortunately, those templates didn’t work so well in Gmail on mobile; that’s why I launched a new template last summer. It rendered correctly everywhere, but preparing each email was a laborious process that took as long as 30 minutes a day. Worse, it introduced multiple opportunities to make mistakes.

That is when Yellow Brim came to the rescue. The just-launched company built not just a template, but rather a converter and template combo that renders perfect emails with nothing more than the push of a button. I am very pleased at how much better it is going to make my day-to-day life. What was most impressive of all, though, was the way CEO Jacqueline Boltik took the time to deeply understand my workflow and needs, and only then came up with a solution. I can’t wait to see what she and Yellow Brim build next.

    Thank You

    Normally I would close this post by thanking all of you, my readers, for making this possible. That is absolutely true, and I am more grateful for your support than you can know.

On this occasion, though, I need to save my final and most fervent thanks for Daman Rangoola (warning: excessive amounts of Lakers talk behind that link). Stratechery 4.0 has been months in the making, and Daman has shouldered the biggest load by far, particularly the rich taxonomy applied to those 1,128 pieces of content. As you can tell by looking at the new features above, it is a plain fact that without Daman, Stratechery 4.0 would not exist.

    For now, though, I hope you enjoy the new site: explore the Concepts, Companies, Topics, and play with Search, and do let me know (via Member forum or email, not Twitter) if you find any bugs. We’ll be fixing things up for the next little bit I’m sure.

    Here’s to five more years!


1. Please ignore the terrible pronunciation decisions; it’s Struh-TECH-er-ee, as in the industry that I cover

2. I want to give recognition to Sonya Mann, who came up with the original outline for the Stratechery Conceptual Framework nearly two years ago.

    3. A gallery is coming in 4.1 


  • The Facebook Brand

    Last week Reuters reported on the Harris Brand Survey:

Apple Inc and Alphabet Inc’s Google corporate brands dropped in an annual survey while Amazon.com Inc maintained the top spot for the third consecutive year, and electric carmaker Tesla Inc rocketed higher after sending a red Roadster into space.

The headline of the piece was “Apple, Google, see reputation of corporate brands tumble in survey”; one would note that the editors at Reuters apparently disagree with the survey respondents about which brands move the needle. But I digress.

    So why are Apple and Google lower?

    John Gerzema, CEO of the Harris Poll, told Reuters in an interview that the likely reason Apple and Google fell was that they have not introduced as many attention-grabbing products as they did in past years, such as when Google rolled out free offerings like its Google Docs word processor or Google Maps and Apple’s then-CEO Steve Jobs introduced the iPod, iPhone and iPad.

    Ah, no Google Docs updates. Got it!

I’m obviously snarking a bit, and it is worth noting that notoriety clearly plays a role in these survey results (look no further than spot 99, where The Weinstein Company makes its debut in the list). What is indisputable, though, is that brand matters — and that includes the regulatory future for Google and Facebook.

    YouTube and Wikipedia

    Start with Google, specifically YouTube. From The Verge:

    YouTube will add information from Wikipedia to videos about popular conspiracy theories to provide alternative viewpoints on controversial subjects, its CEO said today. YouTube CEO Susan Wojcicki said that these text boxes, which the company is calling “information cues,” would begin appearing on conspiracy-related videos within the next couple of weeks…

    The information cues that Wojcicki demonstrated appeared directly below the video as a short block of text, with a link to Wikipedia for more information. Wikipedia — a crowdsourced encyclopedia written by volunteers — is an imperfect source of information, one which most college students are still forbidden from citing in their papers. But it generally provides a more neutral, empirical approach to understanding conspiracies than the more sensationalist videos that appear on YouTube.

    Your average college student surely knows that the real trick is to use Wikipedia to find the sources that are actually allowed by college professors: they are helpfully linked at the bottom of every article. Indeed, Wikipedia’s citation policy arguably makes it one of the more reliable sources of information out there, at least in terms of conventional wisdom. Moreover, crowd-sourcing facts, at least in theory, seems like a more scalable solution to the sheer amount of video YouTube has to deal with.

It’s also a very Google-y solution: it makes sense that a company with the motto “Organize the world’s information and make it universally accessible and useful” would, confronted with questionable information, seek to remedy it with more information. Not bothering to tell Wikipedia fits as well; Google treats the web as its fiefdom, and for good reason. Search is built on links, the fabric of the web, and is the entry-point for nearly everyone, leading websites everywhere to do Google’s bidding; excluding oneself from search is like going on a hunger strike while fed by robots — one withers away and no one even notices. Google probably thinks Wikipedia should say “thank you”!

    That noted, it’s hard to see this having any meaningful impact: conspiracy theories and fake news generally tend to appeal primarily to people who already want them to be true; it’s hard to see a Wikipedia link making a big difference. And, of course, there are the conspiracy theories that turn out to be true, or, perhaps more commonly, the conventional wisdom that proves to be wrong.

    Facebook and Cambridge Analytica

    So which is Cambridge Analytica and Facebook? A year ago the New York Times reported that Cambridge Analytica’s impact on the election of Donald Trump as president was overrated:

    Cambridge Analytica’s rise has rattled some of President Trump’s critics and privacy advocates, who warn of a blizzard of high-tech, Facebook-optimized propaganda aimed at the American public, controlled by the people behind the alt-right hub Breitbart News. Cambridge is principally owned by the billionaire Robert Mercer, a Trump backer and investor in Breitbart. Stephen K. Bannon, the former Breitbart chairman who is Mr. Trump’s senior White House counselor, served until last summer as vice president of Cambridge’s board.

    But a dozen Republican consultants and former Trump campaign aides, along with current and former Cambridge employees, say the company’s ability to exploit personality profiles — “our secret sauce,” Mr. Nix once called it — is exaggerated. Cambridge executives now concede that the company never used psychographics in the Trump campaign. The technology — prominently featured in the firm’s sales materials and in media reports that cast Cambridge as a master of the dark campaign arts — remains unproved, according to former employees and Republicans familiar with the firm’s work.

    Over the weekend the New York Times was out with a new story, entitled How Trump Consultants Exploited the Facebook Data of Millions:

    [Cambridge Analytica] harvested private information from the Facebook profiles of more than 50 million users without their permission, according to former Cambridge employees, associates and documents, making it one of the largest data leaks in the social network’s history. The breach allowed the company to exploit the private social media activity of a huge swath of the American electorate, developing techniques that underpinned its work on President Trump’s campaign in 2016.

    Facebook executives — on Twitter, naturally — took exception to the use of the word “breach”:

    Everything was working as intended, thanks to the Graph API.

    Facebook versus Google and the Graph API

    Facebook introduced what it called the “Open Graph” back in 2010; CEO Mark Zuckerberg led off Facebook’s f8 developer conference thusly:

    We think that what we have to show you today will be the most transformative thing we’ve ever done for the web. There are a few key themes that we are going to be talking about today. The first is the Open Graph that we’re all building together. Today, the web exists mostly as a series of unstructured links between pages, and this has been a powerful model, but it’s really just the start. The Open Graph puts people at the center of the web. It means the web can become a set of personally and semantically meaningful connections between people and things. I am FRIENDS with you. I am ATTENDING this event. I LIKE this band. These connections aren’t just happening on Facebook, they’re happening all over the web, and today, with the Open Graph, we’re going to bring all of these together.

    The reference to “unstructured links” was clearly about Google, and while it’s easy to think of the two companies as a duopoly astride the web, Facebook was at the time a much smaller entity than it is today: 400 million users, still private, and a tiny advertising business relative to Google.

    The challenge from Facebook’s perspective is the one I outlined above: Google got data from everywhere on the web because sites and applications were heavily incentivized to give it to Google so as to have a better chance of reaching end users aggregated by Google:

    Sites need Google to reach users, so they give Google all their data

    Facebook, meanwhile, was a closed garden. This was an advantage in that users generated Facebook’s content for them, and that said content wasn’t available to Google, but there was no obvious way for Facebook to gather data on the greater web, which is where the Open Graph came in; Facebook would give away slices of its data in exchange for data from sites and apps around the web:

    To catch up with Google Facebook exchanged user data for site data

    Zuckerberg said as much in his keynote:

At our first F8, I introduced the concept of the Social Graph. The idea that if you mapped out all of the connections between people and things in the world it would form this massive interconnected graph that just shows how everyone is connected together. Now Facebook is actually only mapping out a part of this graph, mostly the part around people and the relationships that they have. You guys [developers] are mapping out other really important parts of the graph. For example, I know Yelp is here today. Yelp is mapping out the part of the graph that relates to small businesses. Pandora is mapping out the part of the graph that relates to music. And a lot of news sites are mapping out the part of the graph that relates to current events and news content. If we can take these separate maps of the graph and pull them all together, then we can create a web that is more social, personalized, smarter, and semantically aware. That’s what we’re going to focus on today.

    What followed was the introduction of the Graph API, which was the means by which Facebook would facilitate the data exchange, and as you can see on an old Facebook developer page, Facebook was willing to give away just about everything:

    Facebook's developer page showing all of the data given to third party apps

    Moreover, note that users could give away everything about their friends as well; this is exactly how the researcher implicated in the Cambridge Analytica story leveraged 270,000 survey respondents to gain access to the data of 50 million Facebook users.
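The amplification at work here is worth making concrete. Taking the figures reported above at face value, a quick back-of-the-envelope calculation shows how friend-sharing turned a modest survey into a massive dataset:

```python
# Back-of-the-envelope arithmetic on the friend-graph amplification
# described above: 270,000 consenting survey takers yielding ~50 million
# profiles implies each respondent exposed, on average, ~185 friends.
respondents = 270_000
profiles_harvested = 50_000_000

friends_per_respondent = profiles_harvested / respondents
print(f"~{friends_per_respondent:.0f} friends exposed per respondent")
# ~185 friends exposed per respondent
```

Since friend lists overlap, the average respondent’s actual friend count was likely higher than 185; the point is that every consenting user multiplied the dataset by roughly two orders of magnitude.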

    Facebook finally shut down the friend-sharing functionality five years later, after it was clearly ensconced with Google atop the digital advertising world, of course.

    Facebook’s Brand

    That Facebook pursued such a strategy is even less of a surprise than Google’s imperious adoption of Wikipedia as conspiracy theory debunker: Facebook’s motto was “Making the world more open and connected”, and the company has repeatedly demonstrated a willingness to do just that, whether users like it or not. That’s the thing with branding: what people think about your company is not so much what you say but what you do, and that many people immediately assume the worst about Facebook and privacy is Facebook’s own fault.

    To be sure, there seems to be a partisan angle as well — one didn’t see many complaints about the Obama campaign. From the Washington Post:

    Early in 2011, some Obama operatives visited Facebook, where executives were encouraging them to spend some of the campaign’s advertising money with the company. “We started saying, ‘Okay, that’s nice if we just advertise,’ ” Messina said. “But what if we could build a piece of software that tracked all this and allowed you to match your friends on Facebook with our lists, and we said to you, ‘Okay, so-and-so is a friend of yours, we think he’s unregistered, why don’t you go get him to register?’ Or ‘So-and-so is a friend of yours, we think he’s undecided. Why don’t you get him to be decided?’ And we only gave you a discrete number of friends. That turned out to be millions of dollars and a year of our lives. It was incredibly complex to do.”

    But this third piece of the puzzle provided the campaign with another treasure trove of information and an organizing tool unlike anything available in the past. It took months and months to solve, but it was a huge breakthrough. If a person signed on to Dashboard through his or her Facebook account, the campaign could, with permission, gain access to that person’s Facebook friends. The Obama team called this “targeted sharing.” It knew from other research that people who pay less attention to politics are more likely to listen to a message from a friend than from someone in the campaign. The team could supply people with information about their friends based on data it had independently gathered. The campaign knew who was and who wasn’t registered to vote. It knew who had a low propensity to vote. It knew who was solid for Obama and who needed more persuasion — and a gentle or not-so-gentle nudge to vote. Instead of asking someone to send a message to all of his or her Facebook friends, the campaign could present a handpicked list of the three or four or five people it believed would most benefit from personal encouragement.

    This, though, is hardly a defense for Facebook: what is the company going to say, that it was exporting friend data for everyone, not just Trump? To be sure, buying the data from an academic and allegedly holding onto it violated Facebook’s Terms of Service, but “We have terms of service!” isn’t exactly a powerful branding campaign, especially given that at that same 2010 f8 Facebook had dramatically loosened those terms of service:

    We’ve had this policy where you can’t store or cache data for any longer than 24 hours, and we’re going to go ahead and get rid of that policy.

    (Cheering)

    So now, if a person comes to your site, and a person gives you permission to access their information, you can store it. No more having to make the same API calls day-after-day. No more needing to build different code paths just to handle information that Facebook users are sharing with you. We think that this step is going to make building with Facebook platform a lot simpler.

    Indeed it was.

    Google, Facebook, and Regulation

Ultimately, the difference in Google and Facebook’s approaches to the web — and in the case of the latter, to user data — suggests how the duopolists will ultimately be regulated. Google is already facing significant antitrust challenges in the E.U., which is exactly what you would expect from a company in a dominant position in a value chain able to dictate terms to its suppliers. Facebook, meanwhile, has always seemed more immune to antitrust enforcement: its users are its suppliers, so what is there to regulate?

    That, though, is the answer: user data. It seems far more likely that Facebook will be directly regulated than Google; arguably this is already the case in Europe with the GDPR. What is worth noting, though, is that regulations like the GDPR entrench incumbents: protecting users from Facebook will, in all likelihood, lock in Facebook’s competitive position.

    This episode is a perfect example: an unintended casualty of this weekend’s firestorm is the idea of data portability: I have argued that social networks like Facebook should make it trivial to export your network; it seems far more likely that most social networks will respond to this Cambridge Analytica scandal by locking down data even further. That may be good for privacy, but it’s not so good for competition. Everything is a trade-off.

    I wrote a follow-up to this article in this Daily Update.


  • Qualcomm, National Security, and Patents

    From the New York Times:

    President Trump on Monday blocked Broadcom’s $117 billion bid for the chip maker Qualcomm, citing national security concerns and sending a clear signal that he was willing to take extraordinary measures to promote his administration’s increasingly protectionist stance. In a presidential order, Mr. Trump said “credible evidence” had led him to believe that if Singapore-based Broadcom were to acquire control of Qualcomm, it “might take action that threatens to impair the national security of the United States.” The acquisition, if it had gone through, would have been the largest technology deal in history.

    Mr. Trump’s decision to prohibit the blockbuster deal underscored the lengths that he is willing to go to shelter American companies from foreign competition. In recent weeks, the president has turned to an arsenal of tools — including tariffs and an obscure government review panel — to ward off foreign control in American industries and, in particular, thwart the rise of China. The president has focused many of these actions on the technology industry. While the United States has long claimed an advantage in tech, it is now facing emboldened rivals in China, where the government has heavily invested in everything from semiconductors to wireless networks to artificial intelligence. Through its recent actions, the White House has revealed its view that the country’s national security is tied to its advancement of those technologies.

I can see why the New York Times (and most other commentators) immediately attributed this decision to protectionism: not only does that match President Trump’s rhetoric both on the campaign trail and also in office, but it follows closely on the decision to impose tariffs on imported steel. Moreover, Broadcom is a Singapore-based company (and Singapore is a U.S. ally) that had promised to move back to the U.S. National security, at least at first glance, looks like a fig leaf.

    In fact, though, I think the Trump administration got this right.

    Understanding Qualcomm

    To understand why, go back to this Daily Update I wrote about Qualcomm in 2014:

    Qualcomm’s situation is a little hard to understand, so let me try to unpack it. Please note I’ll likely oversimplify a bit!

    • Qualcomm has two primary businesses: selling chips (they are most famous for their systems-on-a-chip, but actually most of their revenue is from their communications chips) and licensing (Qualcomm has the vast majority of patents used in CDMA, and a good portion of LTE)
    • Chips drive more revenue than do licenses (they sell for much higher prices, but they also cost money to make), but licenses drive the most profit (all of the costs are amortized)
    • Qualcomm’s chip business, particularly its SoCs, is threatened by the same headwinds that Samsung is facing: its chips are premium products in a market where prices are dropping rapidly. And, just as Apple is locking Samsung and others out of the premium market for smartphones as a whole, Apple’s use of its own chips means the exact same thing is happening to Qualcomm. Meanwhile, just as Chinese manufacturers are eating Samsung on the low end, other SoC makers — especially MediaTek — are doing the same to Qualcomm on the low end
    • This is not all bad news for Qualcomm: what makes their business so impressive is that they still make money on every phone for which they don’t supply chips because of their licensing business. Moreover, as I noted above, licensing has much higher margins, which helps drive Qualcomm’s profitability. This final point helps explain how Qualcomm’s earnings continue to increase while Samsung’s are starting to go down.
    • The problem for Qualcomm is that their licensing business is much riskier: unlike the chip business, where payment is very straightforward (I give you X, you pay me Y), licensing depends on contractual agreements, and contractual agreements depend on the regulatory environment in which they are struck. And needless to say, China’s regulatory environment – from which Qualcomm derives 50% of its revenue – is an uncertain one
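The asymmetry described in the excerpt above can be made concrete with entirely hypothetical numbers (these are illustrative, not Qualcomm’s actual figures): chip sales carry heavy manufacturing costs, while license royalties are nearly pure margin.

```python
# Entirely hypothetical figures to illustrate why licensing, despite lower
# revenue, can dominate profits: chips carry cost of goods sold, while
# royalties on already-issued patents are nearly pure margin.
chip_revenue,    chip_cost_ratio    = 17e9, 0.80   # hypothetical: 80% cost of revenue
license_revenue, license_cost_ratio = 8e9,  0.10   # hypothetical: near-pure margin

chip_profit    = chip_revenue    * (1 - chip_cost_ratio)
license_profit = license_revenue * (1 - license_cost_ratio)

print(f"chips:    ${chip_revenue/1e9:.0f}B revenue -> ${chip_profit/1e9:.1f}B profit")
print(f"licenses: ${license_revenue/1e9:.0f}B revenue -> ${license_profit/1e9:.1f}B profit")
# Licensing earns more profit on less than half the revenue.
```

Under these assumed ratios, licensing produces more than twice the profit of chips on less than half the revenue, which is exactly the dynamic that makes a licensing spin-off tempting to activist investors.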

    That Daily Update was about China’s investigation of Qualcomm’s licensing practices, but the takeaway is not specifically about China: rather, note that Qualcomm’s business model is two pronged, that one prong is far more profitable, and that the other is far more cash intensive. This division has attracted activist investors eager to split the company apart; from a 2015 Daily Update in the wake of pressure from Jana Partners:

    I don’t think there is any question that Qualcomm’s licensing unit in particular would be worth significantly more were it to be spun-out. That’s why, ultimately, I can’t really blame Jana Partners for pushing for a break-up…Qualcomm’s licenses by themselves would be a money gusher, at least for a few years, and while I think most investors are more long-term oriented than people think, I can absolutely understand the temptation — and associated price premium — associated with money in hand now.

    Again, these Daily Updates were written in 2014 and 2015, when Qualcomm’s position was stronger than it is today: Android price points were higher (which directly impacted Qualcomm’s royalties), the company hadn’t lost its antitrust case in China, and Apple had neither started sourcing modems from Intel nor sued Qualcomm about its licensing fees.

    Broadcom’s Plan

    This is the context of Broadcom’s proposed $117 billion acquisition, which was to be financed with $106 billion in debt; the way these deals work is that acquirers — usually private equity firms, but sometimes companies (although one could argue that the current iteration of Broadcom is a chip-focused private equity firm) — use debt to acquire cash flow-rich companies, use that cash flow to pay off the debt, and in the meantime strip out all of the parts that don’t contribute to said cash flow. Oftentimes this is justified for reasons that go beyond maximizing cash flow — lots of companies would do better to return profits to shareholders than pursue management fantasies for which the company is fundamentally unsuited — but I’m not sure Qualcomm falls in that category. To go back to that 2015 update:

    The bigger question, of course, is whether “maximizing shareholder value” is always the best course of action; more specifically, what is the proper time horizon? In the case of Qualcomm, licensing and chip-making may be very different from a financial perspective, but they’re wonderful complements from a strategic and sustainability point-of-view: chip-making produces patents, which in turn generate outsized profits that enable Qualcomm to invest significant resources into developing new chips. It’s a virtuous cycle. It’s also one that pays off over the very long run: licensing revenues are not maximized (because of potential antitrust issues), and some portion of the profits is funneled into the lower margin chip business with the promise that said investment will result in licenses in the future, a somewhat risky bet that itself won’t fully pay off because some of that profit will itself be reinvested…

    Again, as I noted in the beginning, management tends to be very biased towards spending profits for its own ends and calling it long-term thinking; I don’t think it’s the worst thing in the world when investors insert some more accountability into capital allocation decisions. I do think Qualcomm is an exception though: I believe its current struggles are largely unrelated to its structure, and while that structure may not be ideal for short term returns, it is responsible for a remarkably innovative and durable company. I suspect this viewpoint will win out in the end, and to be fair, Qualcomm does have a lot of fat to trim when you compare its cost structure to that of rivals like Texas Instruments.

    One can certainly make the argument that I got the balance wrong — as I noted above, Qualcomm is in even worse shape than they were when I wrote that. Perhaps, from a pure shareholder perspective, squeezing every last dime out of Qualcomm’s licenses was the best thing to do.
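The leveraged-acquisition math described above can be sketched with hypothetical figures. Only the $106 billion debt load comes from the reported deal structure; the interest rate and free-cash-flow numbers are illustrative assumptions, not Broadcom’s actual terms:

```python
# Stylized leveraged-buyout arithmetic: acquisition debt is serviced and
# retired out of the target's free cash flow. Only the debt figure comes
# from the reported deal; the rate and cash flow are hypothetical.
debt = 106e9          # acquisition debt (from the proposed deal structure)
interest_rate = 0.05  # hypothetical blended rate
annual_fcf = 12e9     # hypothetical free cash flow, maximized by cutting costs

years = 0
while debt > 0:
    interest = debt * interest_rate
    debt -= (annual_fcf - interest)   # cash left after interest retires principal
    years += 1

print(f"debt retired in ~{years} years")
# debt retired in ~12 years
```

The sketch shows why the acquirer’s incentives tilt toward maximizing near-term cash flow: every dollar diverted into long-horizon R&D extends the payoff schedule, which is precisely the dynamic CFIUS flagged.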

    That, though, is precisely where the national security concerns come in. From the Committee on Foreign Investment in the United States’ (CFIUS’s) letter to Broadcom:

    Reduction in Qualcomm’s long-term technological competitiveness and influence in standard setting would significantly impact U.S. national security. This is in large part because a weakening of Qualcomm’s position would leave an opening for China to expand its influence on the 5G standard-setting process. Chinese companies, including Huawei, have increased their engagement in 5G standardization working groups as part of their efforts to build out a 5G technology…While the United States remains dominant in the standards-setting space currently, China would likely compete robustly to fill any void left by Qualcomm as a result of this hostile takeover. Given well-known U.S. national security concerns about Huawei and other Chinese telecommunications companies, a shift to Chinese dominance in 5G would have substantial negative national security consequences for the United States.

    CFIUS, during the investigative period, will continue to assess the likelihood that acquisition of Qualcomm by Broadcom could result in a weakening of Qualcomm’s position in maintaining its long-term technological competitiveness. Specifically, Broadcom’s statements indicate that it is looking to take a “private-equity”-style direction if it acquires Qualcomm, which means reducing long-term investment, such as R&D, and focusing on short-term profitability.

    This is why the focus on Broadcom’s Singaporean domicile misses the point — and why Broadcom’s promise to re-domicile in the U.S. didn’t matter either (as for Broadcom’s further promise to not halt investment in 5G, well, that process is nearly over — the issue is really about 6G and beyond). The entire issue is that the structure of the deal itself said far more clearly than anything else that Broadcom wanted to feast off of Qualcomm’s past innovations and contribute far less to future ones than Qualcomm would on its own. And, given ever-increasing Chinese dominance of wireless, that is indeed a national security problem.

    The Patent Problem

    That noted, to the extent that Broadcom’s acquisition was a national security problem because of how future Qualcomm investment would be curtailed, the U.S. is lucky that Broadcom happened to be a foreign company — that is precisely why CFIUS’s review and President Trump’s order were even legal. Had Broadcom been a domestic entity CFIUS wouldn’t be involved at all, and President Trump would have much less discretionary power.

    To be sure, there would still be ways to block the deal, particularly the antitrust issues that would be raised by combining the two companies. The more significant issue, though, is that at least one company and a whole host of willing financiers agree with activist investors that Qualcomm would be better off milking licenses than inventing new technologies.

    Again, some of the issues are structural: Apple’s dominance of the high-end market leaves would-be differentiated suppliers like Qualcomm stuck in the low-end. It’s worth noting, though, that other structural issues result from the U.S. government — specifically, patents. One more time from that 2015 update:

    I’d also add that this entire episode is ultimately about the distorting power of patents: the entire reason why Qualcomm’s licensing unit is so valuable and such a reliable source of cash is because of government-granted monopolies. Jana Partners’ core dispute with the company is that it is using the results of its innovation to innovate more instead of simply collecting rent, an outcome certainly at odds with the reason the patent system was created in the first place.

    There is a certain amount of irony here: the government is intervening in the private market to stop the sale of a company that is being bought because of government-granted monopolies. Sadly, I doubt it will occur to anyone in government to fix the problem at its root, and Qualcomm would be the first to fight against the precise measures — patent overhaul — that would do more than anything to ensure the company remains independent and incentivized to spend even more on innovation, because its future would depend on innovation to a much greater degree than it does now.

    The reality is that technology has flipped the entire argument for patents — that they spur innovation — completely on its head. The very nature of technology — that costs are fixed and best amortized over huge user-bases, along with the presence of network effects — means there are greater returns to innovation than ever before. The removal of most technology patents would not reduce the incentive to innovate; indeed, given that a huge number of software patents in particular are violated by accident (unsurprising, given that software is ultimately math), their removal would spur more. And, as Qualcomm demonstrates, one could even argue such a shift would be good for national security.


  • Lessons From Spotify

    The two dominant business models for venture-backed startups are advertising for consumer-focused companies, and Software-as-a-Service (SaaS) for business-focused ones. On one level, these business models are quite different: the former gives away software for free with the hope of convincing a third party to pay for access to users; the latter charges some portion of users directly. The underlying economics of both, though, are more similar than you might think — indeed, both are very much in line with venture-backed startups of the past.

    Venture Outcomes

    Silicon Valley is, unsurprisingly given the name, built on silicon-based computer chips, and that goes for Silicon Valley venture capital, as well. Silicon-based chips have minimal marginal costs — sand is cheap! — but massive fixed costs: R&D on one hand, and the equipment to actually make the chips on the other. And while those two costs live on different parts of the income statement — the latter is a cost of revenue that impacts gross margins, while the former is “under the line” and an operational cost that only impacts overall profitability — the fundamental economic rationale for taking on venture capital is the same: spend a lot of money up-front to develop and build a product, and take advantage of minimal marginal costs to make it up in volume.

    You can see how this model translated perfectly to software: marginal costs were even lower, and an even greater percentage of costs were R&D. Companies needed lots of money to get started, but those that succeeded could generate returns that vastly exceeded the amount of investment. This is certainly the case for today’s business models.

    Advertising-based consumer companies spend huge amounts on R&D building products that appeal to users, although usually not a lot on sales and marketing to acquire users; consumer companies that break through to the scale necessary to support advertising rely on viral network effects. Where the sales and marketing spend does come in is in courting advertisers; however, the most valuable consumer companies of all — the super-aggregators — generate the same sort of network effects with advertisers, allowing them to add advertisers in a scalable way as well.

    This produces the ideal venture outcome: a company where users and revenue grow far more quickly than costs.

    Graph of a Venture Company's Costs

    Again, this is possible because there are minimal marginal costs — more users are not necessarily more expensive. Of course fixed costs grow over time, but they only grow linearly — earning ever-increasing revenue on a relatively stable cost basis is the definition of scale.

    SaaS businesses have the same sort of profile — the big difference is that revenue comes from users, and thus sales and marketing expenses are spent on gaining said users, not advertisers, but minimal marginal costs are the common thread.
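    The common economic profile of both models can be sketched as a toy calculation: revenue scales with users at near-zero marginal cost, while fixed costs grow only linearly, so profits eventually outrun spending. All figures below are illustrative assumptions, not data from any company.

    ```python
    # Toy model of the venture scaling dynamic described above: revenue grows
    # with users at near-zero marginal cost, while fixed costs grow linearly.

    def annual_profit(users, arpu=10.0, marginal_cost=0.10,
                      fixed_cost_base=5_000_000, fixed_cost_growth=1_000_000,
                      year=0):
        """Profit for a given user count and year of operation."""
        revenue = users * arpu
        variable_costs = users * marginal_cost            # tiny per-user cost
        fixed_costs = fixed_cost_base + fixed_cost_growth * year  # linear growth
        return revenue - variable_costs - fixed_costs

    # Users compounding 100% annually against linearly growing fixed costs:
    # an early loss turns into rapidly scaling profits.
    for year, users in enumerate([500_000, 1_000_000, 2_000_000, 4_000_000]):
        print(year, round(annual_profit(users, year=year)))
    ```

    The crossover from loss to profit happens as soon as user growth outpaces the linear fixed-cost growth, which is the entire venture bet in miniature.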

    Spotify’s Operational Costs

    In The Business of SaaS, one of the guides offered by Stripe Atlas, Patrick McKenzie writes:

    Margins, to a first approximation, don’t matter. Most businesses care quite a bit about their cost-of-goods-sold (COGS), the cost to satisfy a marginal customer. While some platform businesses (like AWS) have material COGS, at the typical SaaS company, the primary source of value is the software and it can be replicated at an extremely low COGS. SaaS companies frequently spend less than 5~10% of their marginal revenue per customer on delivering the underlying service.

    This allows SaaS entrepreneurs to almost ignore every factor of their unit economics except customer acquisition cost (CAC; the marginal spending on marketing and sales per customer added). If they’re quickly growing, the company can ignore every expense that doesn’t scale directly with the number of customers (i.e. engineering costs, general and administrative expenses, etc), on the assumption that growth at a sensible CAC will outrun anything on the expenses side of the ledger.
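    The unit-economics lens McKenzie describes reduces to a simple payback calculation: with COGS at only 5~10% of revenue, nearly every dollar a customer pays is gross profit, so customer acquisition cost (CAC) is the number that dominates. The figures below are hypothetical.

    ```python
    # A minimal sketch of CAC payback under SaaS-style margins.

    def months_to_recover_cac(cac, monthly_revenue, cogs_pct=0.10):
        """Months of gross profit needed to pay back the cost of acquiring a customer."""
        monthly_gross_profit = monthly_revenue * (1 - cogs_pct)
        return cac / monthly_gross_profit

    # A $900 CAC against a $100/month subscription with 10% COGS:
    print(months_to_recover_cac(900, 100))
    ```

    Everything after the payback point is margin to cover the non-scaling expenses, which is why a sensible CAC lets a growing SaaS company ignore almost everything else.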

    In other words, operational costs don’t matter in the long run, which is good news for Spotify, a venture-backed company with definite SaaS characteristics that filed for a direct listing last week. Spotify has increased monthly active users by 43% over the last three years and revenue by 448% over the last five; its fixed costs have largely tracked revenue:

    SPOTIFY REVENUE AND FIXED COSTS (IN MILLIONS OF EUROS)
    Year Revenue R&D (% Rev) S&M (% Rev) G&A (% Rev) Total (% Rev)
    2013 746 73 (10%) 111 (15%) 42 (6%) 226 (30%)
    2014 1,085 114 (11%) 184 (17%) 67 (6%) 365 (34%)
    2015 1,940 136 (7%) 219 (11%) 106 (5%) 461 (26%)
    2016 2,952 207 (7%) 368 (12%) 175 (6%) 750 (25%)
    2017 4,090 396 (10%) 567 (14%) 264 (6%) 1,227 (30%)

    This looks like a well-managed SaaS company:

    Spotify Revenue and Operational Costs

    There’s just one problem: Spotify’s marginal costs.

    Spotify’s Marginal Cost Problem

    It is not exactly groundbreaking analysis to note that Spotify has significant marginal costs — specifically, the royalties it pays the music industry (not just record labels but also songwriters and publishers). Those are represented by Spotify’s Cost of Revenue:

    Spotify Revenue and Cost of Revenue

    Spotify negotiated new deals with the record labels last summer that resulted in lower royalty rates in exchange for guaranteed subscriber growth and the ability for the labels to make some releases exclusive to Spotify’s paid tier; you can see those lower rates reflected in Spotify’s increased margins.

    Spotify’s Missing Profit Potential

    That, though, is precisely the problem: Spotify’s margins are completely at the mercy of the record labels, and even after the rate change, the company is not just unprofitable, its losses are growing, at least in absolute euro terms:

    Spotify Gross and Net Profit
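    The dynamic of growing absolute losses can be sketched with a toy model: if royalties claim a roughly fixed share of revenue and operating costs run near 30% of revenue, the net margin is a fixed negative percentage, so every additional euro of revenue adds to the absolute loss. The percentages below are illustrative assumptions, not Spotify's actual figures.

    ```python
    # Why losses can grow in absolute terms even as revenue grows: marginal
    # costs (royalties) and operating costs both track revenue, leaving a
    # fixed negative net margin.

    def net_income(revenue, royalty_pct=0.79, opex_pct=0.30):
        """Net income when both cost lines scale with revenue."""
        gross_profit = revenue * (1 - royalty_pct)
        return gross_profit - revenue * opex_pct

    for revenue in [2_000, 4_000, 8_000]:   # revenue in millions of euros
        print(revenue, round(net_income(revenue)))   # losses widen with growth
    ```

    Contrast this with the zero-marginal-cost model sketched earlier, where growth shrinks losses instead of compounding them.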

    Moreover, it seems highly unlikely Spotify’s Cost of Revenue will improve much in the short-term: those record deals are locked in until at least next year, and they include “most-favored nation” provisions, which means that Spotify has to get Universal Music Group, Sony Music Entertainment, Warner Music Group, and Merlin (the representative for many independent labels), which own 85% of the music on Spotify as measured by streams, to all agree to reduce rates collectively. Making matters worse, the U.S. Copyright Royalty Board just increased the amount to be paid out to songwriters; Spotify said the change isn’t material, but it certainly isn’t in the right direction either.

    That leaves two options:

    • Most obviously Spotify could try and lower its operational costs. This, though, is harder than it might seem: Spotify is already a pretty frugal company; Dropbox, for example, which filed its S-1 the same week, spends 77% of revenue on operational costs as compared to Spotify’s 30%.
    • Spotify could grow its revenue without increasing its operational costs. How, though, will it grow revenue if it cannot increase its spending on R&D and Sales & Marketing? The typical pattern for non-social network companies is for Sales & Marketing to grow less efficient over time, which means it would need to increase as a percentage of revenue, not decrease (and remember, Spotify can’t afford to miss its growth numbers or its royalty rates go up).

    There is one more possibility: Spotify could one day cut out the labels altogether — the idea certainly makes sense on a conceptual level. Spotify is in one sense an aggregator, in that it increasingly controls access to music listeners, and to the company’s credit, it has demonstrated the ability to exercise power via its control of music discovery and popular playlists.

    The problem is that the music labels, as I wrote in The Great Unbundling, have been strengthened by Spotify as well:

    The music industry, meanwhile, has, at least relative to newspapers, come out of the shift to the Internet in relatively good shape; while piracy drove the music labels into the arms of Apple, which unbundled the album into the song, streaming has rewarded the integration of back catalogs and new music with bundle economics: more and more users are willing to pay $10/month for access to everything, significantly increasing the average revenue per customer. The result is an industry that looks remarkably similar to the pre-Internet era:

    Notice how little power Spotify and Apple Music have; neither has a sufficient user base to attract suppliers (artists) based on pure economics, in part because they don’t have access to back catalogs. Unlike newspapers, music labels built an integration that transcends distribution.

    Spotify is an impressive product and company, and CEO Daniel Ek and team deserve credit for reaching this point. Being a true aggregator, though, means gaining power over supply; Spotify doesn’t have that — the company doesn’t even have control over its marginal costs — and it’s hard to see where the profits come from.

    Lessons from Spotify

    The power of the record labels and the resultant linkage of Spotify’s marginal costs to its overall revenue certainly makes Spotify a unique case compared to most zero marginal cost venture-backed companies:

    Graph of Company with Marginal Costs Linked to Revenue

    It’s worth noting, though, that Spotify is hardly the only well-known startup that has its cost of revenue linked to total revenue — at least from a certain perspective. Over the last few years a third model of startup has emerged: the so-called sharing economy, or Assets-as-a-Service (AaaS). When you spend $10 on an Uber or Lyft ride, around $7 goes to the driver; when you spend $100 on an Airbnb, $85 goes to the host,1 and so on and so forth.

    This isn’t how these companies necessarily keep their books, to be clear: the top line number should exclude whatever is paid out to the driver or host etc. When thinking about how these companies should be managed, though, the situation isn’t much different than Spotify. Specifically:

    • AaaS companies can’t assume that operational expenses are “free”, because gross marginal costs are going to eat up a huge portion of gross revenue growth.
    • AaaS companies should focus Sales & Marketing spending on increasing demand, and allow demand to draw supply. Doing it the other way — spending Sales & Marketing to increase supply in the hope of drawing demand — may make sense competitively, but it is a disaster financially, as the company is basically spending to increase its costs (imagine if Spotify were paying millions to court the record labels!)
    • AaaS companies that can’t lower their operational costs or grow revenue relatively faster than Sales & Marketing will be left rolling the dice on eliminating marginal costs entirely. Granted, self-driving cars or owned-and-operated apartments may both be more viable than getting rid of the record labels, but it still seems a better bet to become far more disciplined when it comes to operational costs.
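    The pass-through arithmetic above can be made concrete: most of a gross booking goes to the supply side, so only the take rate is available to cover the platform's operating costs. The take rates below are the rough figures cited in the text; the booking amounts are hypothetical.

    ```python
    # Net revenue under AaaS economics: the platform keeps only the take rate.

    def net_revenue(gross_bookings, take_rate):
        """Revenue the platform keeps after paying out to drivers, hosts, etc."""
        return gross_bookings * take_rate

    ride_share = net_revenue(1_000_000, 0.30)   # ~$3 kept of every $10 ride
    home_share = net_revenue(1_000_000, 0.15)   # ~$15 kept of every $100 stay
    print(round(ride_share), round(home_share))
    ```

    Operating expenses have to fit inside that kept fraction, which is why the SaaS habit of treating them as "free" during growth does not transfer to this model.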

    I still believe in a future where Everything is a Service, and there’s no question that creating networks for everything will need a lot of venture capital. And make no mistake — there will continue to be capital available, because a network, once made, offers the sort of scalable revenue generation that makes significant profits possible.

    To that end, it is surely Spotify’s hope that the streaming market ends up being so big that the company’s low gross margin in percentage terms ends up large in absolute ones; even then those profits will come from operational excellence and efficient customer acquisition, not simply top-line growth.

    I wrote a follow-up to this article in this Daily Update.


    1. Minus service fees to cover payment processing 


  • The Dropbox Comp

    I am usually quite conservative when it comes to how much time, data, and effort I am willing to put into a product from a new startup: too many go out of business or are acquired-and-sunset, and who wants to go to the effort twice?

    Dropbox, though, was something else entirely: the initial release in 2008 was so good, and filled such a need, that I moved all of my most important data there immediately and I’ve never left, even though I have lots of free data storage included with other SaaS software plans. Indeed, I was so convinced that Dropbox wasn’t going anywhere that I felt no compunction about using Dropbox (plus a bit of AppleScript) as a de facto syncing system for a school I was working at; it has been ten years, the school has expanded to multiple locations, and every classroom still has the exact same set of files thanks to a product that does exactly what it promises. And now the company behind it is going public — I knew it!

    Still, even if the utility and durability of Dropbox’s product was immediately apparent, the long-run trajectory of its business is, even with the release of the company’s S-1, less so.

    Dropbox Versus Box and the Question of Lifetime Value

    Dropbox and Box have always been compared, and for a rather obvious reason: the core offering of both companies is cloud storage. Said comparison, though, mostly serves to highlight that while the two companies might have similar products, there are so many other ways to be different.

    First and foremost, Box has, since the earliest days of the company, been focused on enterprise customers, while Dropbox started out as a consumer product. I explained why this mattered in 2014’s Battle of the Box:

    Dropbox’s model makes sense theoretically, but it ignores the messy reality of actually making money. After all, notably absent from my piece on Business Models for 2014 was consumer software-as-a-service. I’m increasingly convinced that, outside of in-app game purchases, consumers are unwilling to spend money on intangible software. That is likely why Dropbox has spent much of the last year pivoting away from consumers to the enterprise.

    There are multiple reasons why the latter is a more attractive target for all software-as-a-service companies, especially those focused on data:

    • Consumers need to be convinced of the value of their data…
    • Consumers have multiple free options…
    • Consumers are hard to market to…
    • For consumers, collaboration is an edge case…
    • Building a platform for consumers is incredibly difficult…

    I concluded by arguing that $10 million invested in Box at its then-$2 billion valuation was a better bet than the same $10 million invested in Dropbox at its then-$10 billion valuation; given that Box has a $3.2 billion market capitalization while Dropbox is hoping its IPO will clear that same $10 billion mark, I’m (fake) rich!

    Dropbox, though, has indeed pivoted: the company said in its S-1:

    Of our 11 million paying users, approximately 30% use Dropbox for work on a Dropbox Business team plan, and we estimate that an additional 50% use Dropbox for work on an individual plan, collectively totaling approximately 80% of paying users.

    Still, significant differences remain: Dropbox’s customer base, thanks to all those consumers, is over 500 million users (Dropbox announced 500 million signups last March, but explained in its S-1 that it had culled what were apparently ~100 million inactive accounts over the last year), while Box, as of last quarter, had only 57 million registered accounts. On the other hand, 17% of Box’s users had paid accounts; only 2% of Dropbox’s did. This contrast in efficiency gets at the biggest difference between the two companies: to whom they sell, and how they go about doing so.

    Box sells to big companies using a traditional sales force; free accounts exist primarily to enable temporary collaboration with paid accounts, as well as trials. There is a self-serve option, but that’s not the point: Box notes in its financial filings that “Our marketing strategy also depends in part on persuading users who use the free version of our service to convince decision-makers to purchase and deploy our service within their organization”. In other words, when it comes to Box’s ideal customer, the CIO decides for everyone all at once.

    For Dropbox, on the other hand, self-serve is the most important channel by far. The company brags that “We generate over 90% of our revenue from self-serve channels — users who purchase a subscription through our app or website.” Dropbox has a sales team, but as it notes in its S-1, the team “focuses on converting and consolidating these separate pockets of usage into a centralized deployment. Nearly all of our largest outbound deals originated as smaller self-serve deployments.”

    There are pros and cons to both approaches. Start with the obvious difference: customer acquisition cost. While the two companies spent a comparable amount on sales and marketing in the third quarter of 2017 ($81.7 million for Box, and $74.7 million for Dropbox1), for Box that represented 63% of revenue; for Dropbox it was only 26%.2

    However, the two numbers aren’t as comparable as they seem: specifically, Box’s Sales and Marketing includes the infrastructure and support costs of those free users; Dropbox’s doesn’t. Rather, the company includes those costs in its Cost of Revenue, which is a big reason Dropbox’s gross margin of 68% trails Box’s 73%.3 And, by extension, we don’t really know what Dropbox’s customer acquisition cost is.
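    A hypothetical miniature of this comparability problem: booking the same free-user hosting cost in Cost of Revenue versus Sales & Marketing changes reported gross margin without changing total spend. The figures below are invented to echo the 68% versus 73% gap, not taken from either company's filings.

    ```python
    # Same total spend, different gross margins, depending on where
    # free-user costs are booked.

    revenue = 100.0
    free_user_costs = 5.0   # hosting/support for free users
    other_cogs = 27.0       # everything else in cost of revenue
    sm_spend = 26.0         # sales and marketing excluding free-user costs

    # Free-user costs booked inside Cost of Revenue:
    gross_margin_in_cogs = (revenue - other_cogs - free_user_costs) / revenue
    # Free-user costs booked inside Sales & Marketing instead:
    gross_margin_in_sm = (revenue - other_cogs) / revenue

    total_a = other_cogs + free_user_costs + sm_spend
    total_b = other_cogs + (sm_spend + free_user_costs)

    print(gross_margin_in_cogs, gross_margin_in_sm)  # 0.68 vs 0.73
    ```

    The lesson is simply that headline gross margins and sales-and-marketing ratios can't be compared across the two companies without reclassifying these costs first.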

    There is another advantage of selling to top-down decision-makers: the opportunity to build solutions for specific needs, and charge accordingly. This has enabled Box to achieve negative churn: in all of its cohorts the company is increasing its revenue-per-user by a faster rate than it is losing users overall, which means revenue-per-cohort increases over time. The company explained this in its amended S-1:

    Our business model focuses on maximizing the lifetime value of a customer relationship. We make significant investments in acquiring new customers and believe that we will be able to achieve a positive return on these investments by retaining customers and expanding the size of our deployments within our customer base over time…

    We experience a range of profitability with our customers depending in large part upon what stage of the customer phase they are in. We generally incur higher sales and marketing expenses for new customers and existing customers who are still in an expanding stage…For typical customers who are renewing their Box subscriptions, our associated sales and marketing expenses are significantly less than the revenue we recognize from those customers.

    Chart from Box’s amended S-1 illustrating customer lifetime value
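    Negative churn can be sketched numerically: a cohort sheds customers each year, but revenue per remaining customer expands faster, so cohort revenue still grows over time. The churn and expansion rates below are hypothetical, for illustration only.

    ```python
    # Toy model of negative churn: cohort revenue grows despite customer losses
    # because per-customer expansion outpaces churn.

    def cohort_revenue(customers, revenue_per_customer, years,
                       churn=0.10, expansion=0.20):
        """Cohort revenue after a number of years of churn and expansion."""
        for _ in range(years):
            customers *= (1 - churn)                   # lose 10% of customers
            revenue_per_customer *= (1 + expansion)    # remaining accounts grow 20%
        return customers * revenue_per_customer

    start = cohort_revenue(1_000, 100, years=0)
    later = cohort_revenue(1_000, 100, years=3)
    print(start, round(later))   # the cohort earns more despite shrinking
    ```

    Whenever the expansion rate exceeds the churn rate, net revenue retention is above 100% and revenue-per-cohort rises, which is exactly the pattern Box describes.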

    Box went on to give numbers for specific cohorts; Dropbox, unfortunately, was significantly less specific:

    As we continue to innovate and optimize our go-to-market strategy, we have successfully increased monetization for subsequent cohorts. Comparing January cohorts from the last three years, at virtually every point in time after signup, the January 2017 cohort generated a higher monthly subscription amount than the January 2016 cohort, which in turn generated a higher monthly subscription amount than the January 2015 cohort.

    This sounds good, until you actually try to figure out what it means. Is the January 2017 cohort monetizing more because users are paying more quickly, or because there are more users? How many of those users are churning, and is there an increase in revenue-per-customer to counteract that?

    Dropbox’s S-1 doesn’t give the answer to the first two questions, but the answer to the third seems to be “no”. Average revenue per paying user is actually down from 2015 ($113.54 to $111.91), although slightly up from 2016 ($110.54). Given the model, though, this isn’t a surprise: the only way to serve a massive user-base efficiently is to have a fairly standardized offering; creating and selling differentiating features that increase the average revenue per paying customer doesn’t scale.

    There is one other big advantage in terms of Dropbox’s model, at least from a founder and early investor perspective: the tradeoff of Box earning ever-increasing amounts of revenue per paying customer is the amount it takes to land that customer in the first place. This is why Box’s losses were so large, and why founder and CEO Aaron Levie was so diluted by the time the company finally IPO’d (Levie owned just over 5% of Box at the time of IPO). Dropbox founder and CEO Drew Houston, on the other hand, still owns 25%, and early investor Sequoia Capital another 23%; a founder retaining that much ownership is much more characteristic of a consumer company than an enterprise one — which is exactly how Dropbox started.

    Dropbox Versus Atlassian and the Question of Market Size

    Still, Houston’s ownership stake pales in comparison to those of Scott Farquhar and Mike Cannon-Brookes, co-founders and co-CEOs of Atlassian, who each owned 37.7% of the company when it IPO’d two years ago. Not coincidentally, Atlassian was very much a pioneer of the self-serve model in enterprise software, and as I wrote at the time of its S-1, it helped that the company was selling to developers:

    Agile was largely developer-driven, another factor that worked in JIRA and Atlassian’s favor. Developers are, quite obviously, much more willing to do their own research on products, download and trial software from the Internet, and if they like it, proselytize to other developers even if they don’t work for the same company. In other words, of all the different types of enterprise software, development tools are uniquely suited to spreading somewhat virally without the need for a traditional sales force.

    One of the big questions at the time of Atlassian’s IPO was just how big their market was — specifically, could the company start selling beyond its developer base? So far the results are encouraging: JIRA Service Desk, the company’s attempt to expand its JIRA project management software to non-developer teams, is in over 25,000 organizations, and the company overall continues to grow both by adding new customers and by selling more products to existing customers.

    This is the second question for Dropbox, beyond the uncertainty around its customer acquisition costs and churn: to what extent can it expand its market? On the positive side, those 500 million users are all potential customers; on the other, the vast majority of them have avoided paying for ten years — the proportion of paid users has barely budged over time. And again, Dropbox hasn’t developed ways for its already paying customers to pay it more.

    The potential is certainly there: note that Atlassian’s growth, with a similar model to Dropbox’s, is far out-pacing Box’s — 42% in 3Q 2017 (Atlassian’s FY Q1 2018), compared to 26% — but then again it is far out-pacing Dropbox’s 30% as well. That Dropbox’s revenue growth is slowing suggests the company is ultimately a niche player.

    Dropbox Versus Slack and the Question of the Enterprise OS

    I once thought that Dropbox — and Box, for that matter — could be more than that; in 2014 I wrote Box, Microsoft, and the Next Enterprise Platform:

    Pure storage isn’t a great business. The cost is trending towards zero, as noted by Levie himself. Data, though, is priceless; it can’t be replaced, and it’s the essence of what makes a particular organization unique…Just because the operating system is no longer the platform does not mean that the need – and opportunity – for a platform does not exist. Something needs to tie together all those computing devices, and data, which needs to be everywhere, is the logical place to start.

    Dropbox made a similar argument in its S-1:

    Our modern economy runs on knowledge. Today, knowledge lives in the cloud as digital content, and Dropbox is a global collaboration platform where more and more of this content is created, accessed, and shared with the world. We serve more than 500 million registered users across 180 countries…

    Our market opportunity has grown as we’ve expanded from keeping files in sync to keeping teams in sync. Today, Dropbox is well positioned to reimagine the way work gets done. We’re focused on reducing the inordinate amount of time and energy the world wastes on “work about work” — tedious tasks like searching for content, switching between applications, and managing workflows.

    The shift in focus from data to people is one I made myself in 2015; commenting on that Box OS article above, I wrote:

    I think, in retrospect, I outsmarted myself: companies aren’t made of data, they’re made of people, just like every other single institution on earth. And, as I noted in the context of Facebook, what people love to do, more than anything else in the world, is communicate. Why wouldn’t you start there?

    To that end Dropbox is marketing itself to investors as a collaboration company, and heavily emphasizing Dropbox Paper. In the meantime, though, another company — the one I was writing about in that excerpt — has entered the scene: Slack.

    It’s hard to see anyone — including Microsoft — having a bigger opportunity than Slack.4 The trend in every aspect of computing is higher and higher levels of abstraction, and that doesn’t apply just to things like programming languages. In the case of platforms, the operating system of the PC used to really matter, and then the Internet came along and it didn’t. Similarly, in mobile, the operating system, whether that be iOS or Android, used to really matter, but now it doesn’t. In the consumer space, Facebook or WeChat runs on both, and that is far more important to the day-to-day experience of the vast majority of people.

    It turns out that “mobile” is not about devices, but rather, at a fundamental level, about computing anywhere; to differentiate between PCs or phones is an ultimately meaningless exercise. They are simply different form factors of effectively identical devices, the purpose of which is to connect us to the cloud (consumer or enterprise). And, by extension, if the device is simply an implementation detail, then the operating system that runs on that device is a detail of a detail.

    What matters — what always matters! — is what actual users want to do, and what jobs they want to accomplish. And, whatever they want to do almost certainly involves communicating, which means Slack and its competitors are the best-placed to be the foundational platform of the cloud epoch. More broadly, humans are social creatures: why should we be surprised that social networks are primed to be the most important businesses of all?

    It’s been two years since I wrote that, and while Slack is still growing, albeit more slowly, the question of which company controls the future of enterprise computing remains an open one. Is it Amazon via infrastructure, Microsoft via infrastructure and identity and email, Slack via chat? Google via all-of-the-above?

    What seems clear is that it won’t be Dropbox — both because files weren’t the right route and also because the company spent far too much time and energy chasing a non-existent consumer opportunity — but that’s ok. There is still value — at least $10 billion in value, I’d bet — in doing a job and doing it well, whether that be as a startup in 2008 or a public company in 2018. We still need to share files (and yes, collaborate on them), and will need to do so for a very long time, and Dropbox does it better than anyone. I just wish Dropbox’s S-1 didn’t make it so difficult to figure out just how much value there might be.

    I wrote a follow-up to this article in this Daily Update.


    1. This number jumped to $102.9 million in the fourth quarter, which is a much larger jump than any previous fourth quarter, perhaps in anticipation of the IPO filing 

    2. Per the previous footnote, in the fourth quarter sales and marketing was 34% of revenue 

    3. More on Dropbox’s dropping Cost of Revenue tomorrow 

    4. Note that I said “opportunity”; opportunity means it’s possible, not that it’s necessarily going to happen 


  • The Aggregator Paradox

    Which one of these options sounds better?

    • Fast loading web pages with responsive designs that look great on mobile, and ads that are respectful of the user experience
    • The elimination of pop-up ads, ad overlays, and autoplaying videos with sounds

    Google is promising both; is the company’s offer too good to be true?

    Why Web Pages Suck Redux

    2015 may have been the nadir in terms of the user experience of the web, and in Why Web Pages Suck, I pinned the issue on publishers’ broken business model:

    If you begin with the premise that web pages need to be free, then the list of stakeholders for most websites is incomplete without the inclusion of advertisers…Advertisers’ strong preference for programmatic advertising is why it’s so problematic to only discuss publishers and users when it comes to the state of ad-supported web pages: if advertisers are only spending money — and a lot of it — on programmatic advertising, then it follows that the only way for publishers to make money is to use programmatic advertising…

    The price of efficiency for advertisers is the user experience of the reader. The problem for publishers, though, is that dollars and cents — which come from advertisers — are a far more scarce resource than are page views, leaving publishers with a binary choice: provide a great user experience and go out of business, or muddle along with all of the baggage that relying on advertising networks entails.

    My prediction at the time was that Facebook Instant Articles — the Facebook-native format that the social network promised would speed up load times and enhance the reading experience, thus driving more engagement with publisher content — would become increasingly important to publishers:

    Arguably the biggest takeaway should be that the chief objection to Facebook’s offer — that publishers are giving up their independence — is a red herring. Publishers are already slaves to the ad networks, and their primary decision at this point is which master — ad networks or Facebook — is preferable?

    In fact, the big winner to date has been Google’s Accelerated Mobile Pages (AMP) initiative, which launched later that year with similar goals — faster page loads and a better reading experience. From Recode:

    During its developer conference this week, Google announced that 31 million websites are using AMP, up 25 percent since October. Google says these fast-loading mobile webpages keep people from abandoning searches and by extension drive more traffic to websites.

    The result is that in the first week of February, Google sent 466 million more pageviews to publishers — nearly 40 percent more — than it did in January 2017. Those pageviews came predominantly from mobile and AMP. Meanwhile, Facebook sent 200 million fewer, or 20 percent less. That’s according to Chartbeat, a publisher analytics company whose clients include the New York Times, CNN, the Washington Post and ESPN. Chartbeat says that the composition of its network didn’t materially change in that time.

    This chart doesn’t include Instant Articles specifically, but most accounts suggest the initiative is faltering: the Columbia Journalism Review posited that more than half of Instant Articles’ launch partners had abandoned the format, and Jonah Peretti, the CEO of BuzzFeed, the largest publisher to remain committed to the format, has taken to repeatedly criticizing Facebook for not sharing sufficient revenue with publications committed to the platform.

    Aggregation Management

    The relative success of AMP versus Instant Articles is a reminder that managing an ecosystem is a different skill than building one. Facebook and Google are both super-aggregators:

    Super-Aggregators operate multi-sided markets with at least three sides — users, suppliers, and advertisers — and have zero marginal costs on all of them. The only two examples are Facebook and Google, which in addition to attracting users and suppliers for free, also have self-serve advertising models that generate revenue without corresponding variable costs (other social networks like Twitter and Snapchat rely to a much greater degree on sales-force driven ad sales).

    Super-Aggregators are the ultimate rocket ships, and during the ascent ecosystem management is easy: keep the rocket pointed up-and-to-the-right with regards to users, and publishers and suppliers will have no choice but to clamor for their own seat on the spaceship.

    The problem — and forgive me if I stretch this analogy beyond the breaking point — comes when the oxygen is gone. The implication of Facebook and Google effectively taking all digital ad growth is that publishers increasingly can’t breathe, and while that is neither company’s responsibility on an individual publisher basis, it is a problem in aggregate, as Instant Articles is demonstrating. Specifically, Facebook is losing influence over the future of publishing to Google in particular.

    A core idea of Aggregation Theory is that suppliers — in the case of Google and Facebook, that is publishers — commoditize themselves to fit into the modular framework that is their only route to end users owned by the aggregator. Critically, suppliers do so out of their own self-interest; consider the entire SEO industry, in which Google’s suppliers pay consultants to better make their content into the most Google-friendly commodity possible, all in the pursuit of greater revenue and profits.

    This is a point that Facebook seems to have missed: the power that comes from directing lots of traffic towards a publisher stems from the revenue that results from said traffic, not the traffic itself. To that end, Facebook’s too-slow rollout of Instant Articles monetization, and continued underinvestment in (if not outright indifference to) the Facebook Audience Network (for advertisements everywhere but the uber-profitable News Feed), has left an opening for Google: the search giant responded by iterating AMP far more quickly, not just in terms of formatting but especially monetization.

    Critically, that monetization was not limited to Google’s own ad networks: from the beginning AMP has been committed to supporting multiple ad networks, which sidestepped the trap Facebook found itself in. By not taking responsibility for publisher monetization Google made AMP more attractive than Instant Articles, which took responsibility and then failed to deliver.1

    I get Facebook’s excuse: News Feed ads are so much more profitable for the company than Facebook Audience Network ads that from a company perspective it makes more sense to devote the vast majority of the company’s resources to the former; from an ecosystem perspective, though, the neglect of Facebook Audience Network has been a mistake. And that, by extension, is why Google’s approach was so smart: Google has the same incentives as Facebook to focus on its own advertising, but it also has the ecosystem responsibility to ensure the incentives in place for its suppliers pay off. Effectively offloading that payoff to third-party networks ensures publishers get paid even as Google’s own revenue generation remains focused on the search results surrounding those AMP articles.

    Google’s Sticks

    Search, of course, is the far more important reason why AMP is a success: Google prioritizes the format in search results. Indeed, for all of the praise I just heaped on AMP with regards to monetization, AMP CPMs are still significantly lower than those of traditional mobile web pages; publishers, though, are eager to support the format because a rush of traffic from Google more than makes up for it.

    Here too Facebook failed to apply its power as an aggregator: if monetization is a carrot, favoring a particular format is a stick, and Facebook never wielded it. Contrary to expectations the social network never gave Instant Articles higher prominence in the News Feed algorithm, which meant publishers basically had the choice between more-difficult-to-monetize-but-faster-to-load Instant Articles or easier-to-monetize-and-aren’t-our-resources-better-spent-fixing-our-web-page? traditional web pages. Small wonder the latter won out!

    In fact, for all of the criticism Facebook has received for its approach to publishers generally and around Instant Articles specifically, it seems likely that the company’s biggest mistake was that it did not leverage its power in the way that Google was more than willing to.

    That’s not the only Google stick in the news: the company is also starting to block ads in Chrome. From the Wall Street Journal:

    Beginning Thursday, Google Chrome, the world’s most popular web browser, will begin flagging advertising formats that fail to meet standards adopted by the Coalition for Better Ads, a group of advertising, tech and publishing companies, including Google, a unit of Alphabet Inc…

    Sites with unacceptable ad formats—annoying ads like pop-ups, auto-playing video ads with sound and flashing animated ads—will receive a warning that they’re in violation of the standards. If they haven’t fixed the problem within 30 days, all of their ads — including ads that are compliant — will be blocked by the browser. That would be a major blow for publishers, many of which rely on advertising revenue.

    The decision to curtail junk ads is partly a defensive one for both Google and publishers. Third-party ad blockers are exploding, with as many as 615 million devices world-wide using them, according to some estimates. Many publishers expressed optimism that eliminating annoying ads will reduce the need for third-party ad blockers, raise ad quality and boost the viability of digital advertising.

    Nothing quite captures the relationship between suppliers and their aggregator like publishers’ optimism that one of the companies destroying the viability of digital advertising for them will actually save it; then again, that is why Google’s carrots, while perhaps less effective than its sticks, are critical to making an ecosystem work.

    Aggregation’s Antitrust Paradox

    The problem with Google’s actions should be obvious: the company is leveraging its monopoly in search to push the AMP format, and the company is leveraging its dominant position in browsers to punish sites with bad ads. That seems bad!

    And yet, from a user perspective, the options I presented at the beginning — fast loading web pages with responsive designs that look great on mobile and the elimination of pop-up ads, ad overlays, and autoplaying videos with sounds — sound pretty appealing!

    This is the fundamental paradox presented by aggregation-based monopolies: by virtue of gaining users through the provision of a superior user experience, aggregators gain power over suppliers, which come onto the aggregator’s platforms on the aggregator’s terms, resulting in an even better experience for users, resulting in a virtuous cycle. There is no better example than Google’s actions with AMP and Chrome ad-blocking: Google is quite explicitly dictating exactly how its suppliers will access its customers, and it is hard to argue that the experience is not significantly better because of it.

    At the same time, what Google is doing seems nakedly uncompetitive — thus the paradox. The point of antitrust law — both the consumer-centric U.S. interpretation and the European competitor-centric one — is ultimately to protect consumer welfare. What happens when protecting consumer welfare requires acting uncompetitively? Note that implicit in my analysis of Instant Articles above is that Facebook was not ruthless enough!

    The Ad Advantage

    That Google might be better for users by virtue of acting like a bully isn’t the only way in which aggregators mess with our assumptions about the world. Consider advertising: many commentators assume that user annoyance with ads will be the downfall of companies like Google and Facebook.

    That, though, is far too narrow an understanding of “user experience”; the “user experience” is not simply the user interface, but rather the totality of an app or web page. In the case of Google, it has superior search, it is now promising faster web pages and fewer annoying ads, and oh yeah, it is free to use. Yes, consumers are giving up their data, but even there Google has the user experience advantage: consumer data is far safer with Google than it is with random third party ad networks desperate to make their quarterly numbers.

    Free matters in another way: in disruption theory integrated incumbents are thought to lose not only because of innovation in modular competing systems, but also because modular systems are cheaper: the ad advantage, though, is that the integrated incumbents — Google and Facebook — are free to end users. That means potential challengers have to have that much more of a superior user experience in every other aspect, because they can’t be cheaper.2

    In other words, we can have our cake and eat it too — and it’s free to boot. Hopefully it’s not poisonous.

    I wrote a follow-up to this article in this Daily Update.


    1. Instant Articles allows publishers to sell their own ads directly, but explicitly bans third party ad networks 

    2. This, as an aside, is perhaps the biggest advantage of cryptonetworks: I’ve already noted in Tulips, Myths, and Cryptocurrencies that cryptonetworks are “probably the most viable way out from the antitrust trap created by Aggregation Theory”; that was in reference to decentralization, but that there is money to be made is itself an advantage when the competition is free. More on this tomorrow.