Zillow, Aggregation, and Integration

Last Friday something truly remarkable happened: a public company that had grown its valuation from $539 million to nearly $7 billion in seven years announced it was changing its business model. The company was Zillow, and the stock market quickly put a price on how big of a risk the company was taking; from CNBC:

Zillow shares plunged 9 percent on Friday after the online real estate database company announced it will begin buying and selling homes, a capital-intensive endeavor. With Zillow’s new program, announced on Thursday, home sellers in the test markets of Phoenix and Las Vegas will be able to use Zillow’s platform to compare offers from potential buyers — and Zillow. When Zillow purchases a home, it will aim to quickly flip the home, making updates and repairs and listing it as soon as possible. An agent will represent Zillow in each transaction.

“We’re entering that market and think we have huge advantages because we have access to the huge audience of sellers and buyers,” Zillow CEO Spencer Rascoff said on CNBC’s “Squawk Alley.” “After testing for a year in a marketplace model, we’re ready to be an investor in our own marketplace.”

But investors are less enthusiastic. Flipping homes, a model that’s being utilized by start-up Opendoor, is very different than operating an internet marketplace. It carries additional risk associated with buying and selling homes and requires a hefty investment in operations. And it also potentially puts Zillow in direct competition with the realtors on its platform. Zillow sank $5, or 9.3 percent, to $48.77 as of mid-day on Friday, knocking more than $900 million off its stock market value.

That’s a lot of money to bet on…well, what exactly? What kind of company is Zillow today, and what kind of company does it hope to be in the future?

Zillow and Aggregation Theory

Last fall I refined Aggregation Theory by Defining Aggregators. To quickly summarize, I wrote that Aggregators as a whole share three characteristics:

  • A direct relationship with users
  • Zero marginal costs to serve those users
  • Demand-driven multi-sided networks that result in decreasing acquisition costs

This allows Aggregators to leverage an initial user experience advantage with a relatively small number of users into power over some number of suppliers, which come onto the platform on the Aggregator’s terms, enhancing the user experience and attracting more users, setting off a virtuous cycle of an ever-increasing user base leading to ever-increasing power over suppliers.

Not all Aggregators are the same, though; they vary based on the cost of supply:

  • Level 1 Aggregators have to acquire their supply and win by leveraging their user base into superior buying power (e.g. Netflix).
  • Level 2 Aggregators do not own their supply but incur significant marginal costs in scaling supply (e.g. Airbnb or Uber).
  • Level 3 Aggregators have zero supply costs (e.g. App Stores or social networks).

Where, then, does Zillow fit? It certainly has the hallmarks of an Aggregator: users go to Zillow directly to look for homes, Zillow incurs zero marginal costs to serve those users, and the company has created a two-sided market where its suppliers (home sellers) are incentivized to come onto the platform on Zillow’s terms in order to reach Zillow’s end users, thus making the platform more attractive to those end users.

The question of supply is more complicated: in North America real estate listings are gathered in hundreds of local multiple listing services (MLSs) run by local realtor associations, and access is restricted to brokers in each local region. Redfin got access to those listings by becoming a broker itself, but Zillow, at least at the beginning, relied on brokers uploading listings themselves — which they were willing to do because of the user base Zillow had already built, thanks in part to its Zestimate home valuation tool.

This was Aggregation Theory in action: gain users with a new kind of user experience, then leverage that user base to get suppliers to come onto your platform on your terms, further improving the user experience. And, eventually, Zillow was able to parlay that user base into direct access to those MLS services, first via the owners of Realtor.com, and then, when they pulled the agreement, via local MLSs and brokers directly who understood how important it was to stay on Zillow.

Interestingly, this means that Zillow arguably started out as a Level 3 Aggregator, and then stepped down to a hybrid of Level 1 and Level 2: cutting all of those deals is expensive, and the company does pay for the data, but it’s not exclusive by any means. And this, by extension, gets at why Zillow, despite having so many of the characteristics of an Aggregator, just doesn’t seem nearly as important as companies like Netflix or Airbnb or Facebook: it has accommodated itself to the real estate industry; it hasn’t transformed it.

The Real Estate Media Company

The first sentence in Zillow’s S-1 was its mission statement: “Our mission is to build the most trusted and vibrant home-related marketplace to empower consumers with information and tools to make intelligent decisions about homes.” In 2014, though, the company coined a new description for itself: a “real-estate media company.”

The occasion was the purchase of Trulia: both companies made money selling ads to real estate agents eager to get their listings at the top of the two aggregators that were the leading starting points for real estate searches; by emphasizing that they were both media companies, Zillow could claim both that they had many competitors and that they were not competitive with real estate agents.

It also had the benefit of being true (until last week). The real estate business in North America has long been an expensive quagmire, for reasons I laid out when Zillow bought Trulia:

  • While real estate transactions in the aggregate are very frequent, for individual buyers and sellers they are very rare. Thus there is little incentive to push for a simpler solution.
  • A real estate transaction is usually the largest transaction most buyers and sellers will undertake, which makes them very risk averse and unwilling to try an unconventional service.
  • There is a lot of regulation and paperwork associated with a real estate transaction, where assistance is very valuable. And, as just noted, transactions are rare, which means there is little incentive to learn how to deal with said regulations and paperwork on your own.

Combine consumers’ reluctance to push for change with the local realtor association-controlled MLSs, and realtors’ willingness to punish anyone changing the status quo (by not showing a house, or by pointing out flaws that would kill a sale), and the best outcome for Zillow was to be an Aggregator but not an integrator: the company was completely removed from the purchase process.

Integration and Aggregation

This gets at why Zillow, for all of its success, seems so underwhelming compared to other Aggregators. One of the key theories underpinning Aggregation Theory is Clayton Christensen’s Conservation of Attractive Profits, which I explored in the context of Netflix while developing the theory:

The Law of Conservation of Attractive Profits[1] [was] first explained by Clayton Christensen in his 2003 book The Innovator’s Solution:

Formally, the law of conservation of attractive profits states that in the value chain there is a requisite juxtaposition of modular and interdependent architectures, and of reciprocal processes of commoditization and de-commoditization, that exists in order to optimize the performance of what is not good enough. The law states that when modularity and commoditization cause attractive profits to disappear at one stage in the value chain, the opportunity to earn attractive profits with proprietary products will usually emerge at an adjacent stage.

That’s a bit of a mouthful, but the example that follows in the book shows how powerful this observation is:

If you think about it in a hardware context, because historically the microprocessor had not been good enough, then its architecture inside was proprietary and optimized and that meant that the computer’s architecture had to be modular and conformable to allow the microprocessor to be optimized. But in a little hand held device like the RIM BlackBerry, it’s the device itself that’s not good enough, and you therefore cannot have a one-size-fits-all Intel processor inside of a BlackBerry, but instead, the processor itself has to be modular and conformable so that it has on it only the functionality that the BlackBerry needs and none of the functionality that it doesn’t need. So again, one side or the other needs to be modular and conformable to optimize what’s not good enough.

Did you catch that? That was Christensen, a full four years before the iPhone, explaining why it was that Intel was doomed in mobile even as ARM would become ascendant.[2] When the basis of competition changed away from pure processor performance to low-power systems, the chip architecture needed to switch from being integrated (Intel) to being modular (ARM), the latter enabling an integrated BlackBerry then, and an integrated iPhone four years later.[3]

The PC is a modular system whose integrated parts earn all the profit. Blackberry (and later iPhones) on the other hand was an integrated system that used modular pieces. Do note that this is a drastically simplified illustration.

More broadly, breaking up a formerly integrated system — commoditizing and modularizing it — destroys incumbent value while simultaneously allowing a new entrant to integrate a different part of the value chain and thus capture new value.

Commoditizing an incumbent’s integration allows a new entrant to create new integrations — and profit — elsewhere in the value chain.

This is exactly what is happening with Airbnb, Uber, and Netflix too.

This is the original piece of Aggregation Theory that was missing from last year’s Defining Aggregators: it is one thing to sit on top of an existing industry and, well, be a media company/lead generation tool. There have been a whole host of businesses that did exactly that, and while there is plenty of money to be made, without some sort of integration into the value chain of the industry itself they simply aren’t transformative. To put it another way, aggregation doesn’t transform value chains; integration does.

Why aggregation matters is that it is the means by which new integrations are achieved:

  • Netflix leveraged its position as an aggregator of video content into the integration of the customer relationship and content creation, undoing the integration of linear channels and content creation
  • Airbnb/Uber and other similar services integrate the customer relationship with the driver/homeowner relationship, undoing the integration of cars/property with payment
  • Google and Facebook integrated content discovery with advertising, undoing the integration of editorial and advertising

More broadly — and this really gets at why Zillow is different — Aggregators that change industries (including Aggregator-like companies such as Amazon and Apple that deal in physical goods) integrate the customer relationship with however it is their industry generates revenue; Zillow, on the other hand, was completely divorced from the home buying-and-selling process.

The Threat to Zillow — and the Opportunity

Again, not all companies need to be Aggregators, and as I noted at the beginning, Zillow has become a very successful company by getting halfway there. And, to return to that Daily Update about its purchase of Trulia, I didn’t think it was even possible for the company to go all the way:

So then, perhaps this deal isn’t anticompetitive, but rather the key to building a company big enough to finally shake up the homebuying process? That’s Brad Stone’s argument in Bloomberg Businessweek…But remember, Zillow/Trulia are marketing tools; who is paying for that tool? Stone has the answer in the next paragraph:

The companies, which rely on advertising from real estate agents for the bulk of their revenues, are being careful about how they discuss the future of their combined efforts.

What Stone characterizes as “careful” I characterize as “prudent” and “truthful”, because let’s be honest: Zillow/Trulia are not going to bite the hand that feeds them. Nor should they! It would be irresponsible to their shareholders, employees, and all their other stakeholders. It’s very easy to fantasize about disruption; it’s much more productive to simply follow the money. (This is why Redfin is the more interesting company in this space; it uses its own network of real estate agents. It’s also why it is much smaller, despite having had a head start.)

This is why last week’s news was such a surprise, to me anyways; granted, Zillow had been experimenting with facilitating sales to investors, but to fundamentally change your capital structure, margin profile, and compete with your customers in one fell swoop feels like something else entirely — and Wall Street agreed!

I can, though, see where Zillow is coming from: no one thinks the North American real estate market is the way it is because that is somehow optimal or good for consumers; the only folks that benefit from the status quo are real estate agents that continue to collect 6% of the purchase price even as their responsibilities, particularly in the case of the buying agent, run in the opposite direction of their incentives. Zillow did well to capture a portion of that 6% for itself through its realtor ad model, but that only meant that Zillow was as dependent on the status quo as the realtors.

To be sure, Zillow has long been a better bet than Redfin, which has admirably IPO’d with a business that basically adds a tech layer (and thus superior lead generation) to a traditional real estate agency; the reality is that simply adding a tech layer doesn’t change industries — that requires new business models. This, though, is where Opendoor, the startup I wrote about in 2016, is compelling: buying houses with the click of a button solves a major problem for sellers, the most disadvantaged party in the entire value chain under the status quo (and thus the most open to something new). And, by definition, it means the company (and competitors like OfferPad) is involved in the transaction that drives the value chain — the actual buying and selling of homes.

Make no mistake, the business model is risky, but that is another way of saying the potential return is massive as well: truly becoming a market maker for an industry that does $900 billion worth of transactions every year has massive upside. And, by extension, massive downside for the status quo — which again, includes Zillow. That is one reason to act.

Even so, that might not have been enough for Zillow to make such a shift: remember, this is a public company accountable to shareholders, and sometimes doubling down is the most prudent course of action. That, though, is why I spent so much time discussing integration: there is a massive amount of upside for Zillow in this move as well.

Remember, Zillow is in nearly every respect already an Aggregator: it is by far the number one place people go when they want to look for a new house, and at a minimum the starting point for research when they want to sell one. They own the customer relationship! What has always been missing is the integration with the purchase itself — until last week. Zillow is making a play to be a true Aggregator — one that transforms its industry by integrating the customer relationship with the most important transaction in its respective value chain — by becoming directly involved in the buying and selling of houses.

The Zillow Experiment

This absolutely could go sideways: Zillow is already being hammered in the stock market — investors aren’t generally fans of high-margin companies entering low-margin businesses, with huge amounts of volatility risk to boot. Moreover, Zillow is embracing a model that, should it be successful, tears down the status quo: this will not only enrage Zillow’s customers, but also endanger Zillow’s primary revenue stream.

Here, though, Zillow’s status as an almost-Aggregator looms large: we now have years’ worth of evidence that realtors will do what it takes to ensure their listings appear on Zillow, because Zillow controls end users. It very well may be the case that realtors will find themselves with no choice but to continue giving Zillow the money the company needs to disrupt their industry.

I will certainly be watching closely: how Zillow fares will yield lessons that may be broadly applicable. Think of Spotify, for example: I was a bit bearish on the company last month because of the power of Spotify’s suppliers; the bull case is that Spotify’s ownership of the customer relationship will allow the company to build out the capability to sidestep the record labels, even as the labels can’t punish Spotify because they need its distribution. That’s exactly what Zillow is testing right now: just how much power comes from being an Aggregator, and how much an industry can be transformed when that power is wielded.

  1. Later renamed the Law of Conservation of Modularity [↩︎]
  2. I have my differences with Christensen, but as I’ve said repeatedly my criticism comes from an attempt to build on his brilliant work, not tear it down [↩︎]
  3. As I’ve noted, the iPhone is in fact modular at the component level; the integration is between the completed phone and the software. Not appreciating that the point of integration (or modularity) can be anywhere in the value chain is, I believe, at the root of a lot of mistaken analysis about the iPhone in particular [↩︎]

The Facebook Current

“I thought something was going to get done,” lamented a friend, in reference to yesterday’s Senate hearing that featured a single witness: Facebook Founder and CEO Mark Zuckerberg. “This was the moment of reckoning, but it just turned out to be a whimper — it’s just for show.”

The sentiment seemed widespread on tech and media Twitter:

  • There was a lack of specificity in the questions about privacy, which allowed Zuckerberg to turn nearly every question about the ownership of data into a discussion of the user interface controls that limit where data is shown to other Facebook users.
  • Plenty of questions were dodged: every time there was a question about the data Facebook generates about users beyond what they themselves enter into the system, Zuckerberg needed to “check with his team”.
  • There were bad questions that presumed Facebook sells data, letting Zuckerberg run out the clock at least three times by explaining the basics of Facebook’s business model (this is precisely why I have been so outspoken about the problem of perpetuating this falsehood: it lets Facebook off the hook).

In fact, though, I thought the hearing was quite revelatory — a “show”, if you will. First, the fact that Zuckerberg appeared at all is the most meaningful news; the nature of the American political system is that changes happen extremely gradually, and only then in response to fundamental shifts in underlying political opinion. This can certainly be frustrating if one wants faster change — or a relief if one fears those in power — but that is precisely why Zuckerberg’s appearance was noteworthy: there is a current moving against Facebook, and while it is not realistic to expect that current to already be a wave, it was strong enough to sweep him to Washington D.C. for the week.[1]

Secondly — and count this as another indication that that current is stronger than it seems — there was a significant amount of agreement amongst the Senators in yesterday’s hearings that something needed to be done about Facebook. Forget the specifics, for a paragraph, because this is a notable development: while these hearings usually devolve into partisan cliches with the same talking points — Democrats want regulations, and Republicans don’t — yesterday Senators from both sides of the aisle expressed unease with Facebook’s handling of private data; obviously Democrats tried to tie the issue to the last election, but that made the Republicans’ shared concern all the more striking.

Here is where the partisan divide does matter: the most important takeaway from yesterday’s hearing was the emergence of two distinct viewpoints on what the problem with Facebook actually is, and what to do about it. That these two viewpoints are in opposition is precisely why their emergence is so compelling: a current has to be very strong indeed for there to be two clearly articulable sides.

Viewpoint One: Facebook Needs Regulation

OK, so maybe one of the viewpoints fit the partisan cliche, but the idea that Facebook might need regulation was a frequent talking point, particularly from Democrats pushing already-proposed legislation. After detailing how, in his view, Facebook violated its 2011 Consent Decree with the FTC, Senator Richard Blumenthal distilled this viewpoint to its essence here:

Senator Blumenthal: What happened here was willful blindness. It was heedless and reckless and in fact amounted to a violation of the FTC consent decree. Would you agree?

Mark Zuckerberg: No, Senator. My understanding is not that this was a violation of the consent decree. But as I have said a number of times today, I think we need to take a broader view of our responsibility around privacy than just what is mandated in the current laws.

SB: Well here is my reservation Mr. Zuckerberg…we’ve seen the apology tours before. You have refused to acknowledge even an ethical obligation to have reported this violation of the FTC consent decree, and we have letters, we’ve had contacts with Facebook employees…that indicates not only a lack of resources but lack of attention to privacy. And so, my reservation about your testimony today is that I don’t see how you can change your business model unless there are specific rules of the road. Your business model is to monetize user information, to maximize profit over privacy, and unless there are specific rules and requirements — enforced by an outside agency — I have no assurance that these kinds of vague commitments are going to produce action.

This view is clearly gaining traction in certain political circles. For example, here is Matthew Yglesias in Vox:

Online social networks obviously pose some novel legal and regulatory issues. But broadly speaking, the question of how to ensure that companies discharge their responsibilities is not a brand new one. Companies involved in the provision of health care are responsible — not just morally but legally and financially — to abide by the terms of the Health Insurance Portability and Accountability Act of 1996. That law hasn’t eliminated all privacy violations in the health care space, by any means, but when violations occur, they are punished, and the punishment gives actors in that space real reason to avoid them. Financial institutions, similarly, must comply with the privacy rules set out in the Gramm-Leach-Bliley Act. GLBA compliance has thus become its own somewhat tedious mini industry, with lawyers and specialized GLBA compliance firms you can hire…

Once upon a time, the US government wisely believed that it would be a bad idea to subject promising young internet startups to the bureaucratic morass involved in things like HIPAA or GLBA compliance. But the young internet startups are all grown up now, and can easily afford to hire vast armies of lawyers and compliance experts who will help them avoid breaches that lead to massive fines. There is no longer a need to treat Facebook like a delicate flower whose agility will vaporize if it is held legally accountable for its actions.

That means disclosure rules for advertising, it means financial consequences for privacy violations, it means firm antitrust action to restrain further acquisitions and try to uphold some semblance of competition in this marketplace, and it means taking a close look at whether the development of ever more sophisticated ad targeting algorithms is being done in a way that serves the public’s interest in creating a robust media infrastructure.

What is worth noting was the extent to which Zuckerberg was open to, if not something as specific as Yglesias’ proposal, regulation of some sort. Zuckerberg told Senator Dan Sullivan:

I’m not the type of person who thinks that all regulation is bad, so I think the Internet is becoming increasingly important in people’s lives, and I think we need to have a full conversation about what is the right regulation, not whether it should be or shouldn’t be.

This isn’t a surprise: Zuckerberg said in his opening remarks that Facebook was “going through a broader philosophical shift in how we approach our responsibility as a company”, which he meant as an indication that the company would be taking more responsibility, but which could easily be interpreted as the company locking the doors to its closed garden and throwing away the key. In this regulation is actually helpful, a point made by Senator Sullivan in response to Zuckerberg’s statement:

Senator Sullivan: One of my worries on regulation with a company of your size saying “Hey, we might be interested in being regulated”, but as you know, regulations can also cement the dominant power. So what do I mean by that? You have a lot of lobbyists, I think every lobbyist in town is involved in this hearing in some way or another, a lot of powerful interests. You look at what happened with Dodd-Frank: that was supposed to be aimed at the big banks, the regulations ended up empowering the big banks and keeping the small banks down. Do you think that that’s a risk given your influence that if we regulate, we’re actually going to regulate you into a position of cemented authority, when one of my biggest concerns about what you guys are doing is that the next Facebook, which we all want, the guy in the dorm room, we all want that to be started, that you are becoming so dominant that we’re not able to have that next Facebook? What are your views on that?

MZ: Senator, I agree with the point that when you’re thinking through regulation across all industries you need to be careful that it doesn’t cement in the current companies that are winning…I think part of the challenge with regulation in general is that when you add more rules that companies need to follow, that’s something that a larger company like ours inherently just has the resources to go do, and that might just be harder for a company getting started to comply with.

That Sullivan, a Republican, would be suspicious of regulation is hardly a surprise — that’s the cliche I referenced above. There’s more context to Sullivan’s comments though: he hinted at an alternative to regulation.

Viewpoint Two: Facebook is Too Big

Here is Sullivan’s lead-up to Zuckerberg’s embrace of regulation quoted above:

Your testimony, you have talked about a lot of power, you’ve been involved in elections, I thought your testimony was very interesting, really all over the world, 2 billion users, over 200 million Americans, $40 billion in revenue, I believe you and Google have almost 75% of the digital advertising in the U.S. One of the key issues here: is Facebook too powerful? Are you too powerful?…

When you look at the history of this country, and you look at the history of these kinds of hearings…when companies become big and powerful and accumulate a lot of wealth and power, what typically happens from this body is there’s an instinct to either regulate or break-up. Look at the history of this nation. Do you have any thoughts on those two policy approaches?

No wonder Zuckerberg was so eager to talk about regulation: it’s not simply that it benefits incumbents, it’s that it is a whole lot more attractive than discussing a potential break-up!

Note, though, that Sullivan wasn’t alone in pushing this idea that Facebook might be too big (a sentiment that Senator John Kennedy also raised last fall). The most fascinating Republican line of questioning came from Senator Lindsey Graham:

Senator Graham: Who’s your biggest competitor?

MZ: We have a lot of competitors.

SG: Who’s your biggest?

MZ: Hmm, I think the categories — did you want just one? I’m not sure I could give one — could I give a bunch?

SG: Uh-huh.

MZ: So there are three categories I would focus on. One are the other tech platforms, so Google, Apple, Amazon, Microsoft. We overlap with them in different ways.

SG: Do they provide the same service you provide?

MZ: Uhm, in different ways, different parts of it, yes.

SG: Let me put it this way. If I buy a Ford and it doesn’t work well and I don’t like it, I can buy a Chevy. If I’m upset with Facebook, what’s the equivalent product that I can go and sign up for?

MZ: Well, the second category that I was going to talk about…

SG: I’m not talking about categories. What I’m talking about is real competition that you face, because car companies face a lot of competition, that if they make a defective car, it gets out in the world, people stop buying that car and buy another one. Is there an alternative to Facebook in the private sector?

MZ: Yes Senator. The average American uses eight different apps to communicate with their friends and stay in touch with people, ranging from texting to email…

SG: That is the same service you provide?

MZ: Well we provide a number of different services.

SG: Is Twitter the same as what you do?

MZ: It overlaps with a portion of what we do.

SG: You don’t think you have a monopoly?

MZ: It certainly doesn’t feel like that to me!

SG: So it doesn’t. So, Instagram, you bought Instagram, why did you buy Instagram?

MZ: Because they were very talented app developers who were making good use of our platform and understood our values.

SG: It was a good business decision. My point is that one way to regulate a company is through competition, through government regulation, here’s the question all of us have to answer. What do we tell our constituents given what’s happened here, why we should let you self-regulate? What would you tell people in South Carolina, that given all the things we just discovered here, is a good idea for us to rely upon you to regulate your own business practices?

Zuckerberg quickly articulated that he would be in favor of regulation, using much the same language he would return to later in his response to Senator Sullivan, but the implication of Graham’s line of questioning was more profound than that: perhaps the real problem is the monopolistic nature of the company, because the normal checks that come from competition were missing.

This is, I would note, quite consistent with the skepticism about regulation voiced by Senator Sullivan: if the concern is that a bunch of rules limit competition, then a better response, if there must be one, would seek to empower competition by undoing the monopoly entirely.

The Shifting Debate

The most likely outcome of Facebook’s current scandal continues to be that nothing will happen, for all of the inherent lethargy in our political system noted above. And, if something does, European-style data regulation seems the more likely outcome, as I noted last month. No wonder Facebook’s stock was up after the hearing!

It’s worth keeping in mind, though, that because Facebook is so dominant, the question of its governance is ultimately a political question, and to that end the shifts in the terms of debate, if not yet its outcome, have been striking. Zuckerberg is in Washington D.C., everyone says something must be done, and critically, both sides have ideas about what that should be; while this certainly may be mostly a Facebook problem, the rest of the industry should take note.

  1. Zuckerberg will testify again later today, this time in front of the House of Representatives’ Energy and Commerce Committee [↩︎]

The End of Windows

The story of Windows’ decline is relatively straightforward and a classic case of disruption:

  • The Internet dramatically reduced application lock-in
  • PCs became “good enough”, elongating the upgrade cycle
  • Smartphones first addressed needs the PC couldn’t, then over time started taking over PC functionality directly

What is more interesting, though, is the story of Windows’ decline in Redmond, culminating with last week’s reorganization that, for the first time since 1980, left the company without a division devoted to personal computer operating systems (Windows was split, with the core engineering group placed under Azure, and the rest of the organization effectively under Office 365; there will still be Windows releases, but it is no longer a standalone business). Such a move didn’t seem possible a mere five years ago, when, in the context of another reorganization, former CEO Steve Ballmer wrote a memo insisting that Windows was the future (emphasis mine):

In the critical choice today of digital ecosystems, Microsoft has an unmatched advantage in work and productivity experiences, and has a unique ability to drive unified services for everything from tasks and documents to entertainment, games and communications. I am convinced that by deploying our smart-cloud assets across a range of devices, we can make Windows devices once again the devices to own. Other companies provide strong experiences, but in their own way they are each fragmented and limited. Microsoft is best positioned to take advantage of the power of one, and bring it to our over 1 billion users.

That memo prompted me to write a post entitled Services, Not Devices that argued that Ballmer’s strategic priorities were exactly backwards: Microsoft’s services should be businesses in their own right, not Windows’ differentiators. Ballmer, though, followed through on his memo by buying Nokia; it speaks to Microsoft’s dysfunction that he was allowed to spend billions on a deal that allegedly played a large role in his ouster.

That dysfunction was The Curse of Culture:

Culture is not something that begets success, rather, it is a product of it. All companies start with the espoused beliefs and values of their founder(s), but until those beliefs and values are proven correct and successful they are open to debate and change. If, though, they lead to real sustained success, then those values and beliefs slip from the conscious to the unconscious, and it is this transformation that allows companies to maintain the “secret sauce” that drove their initial success even as they scale. The founder no longer needs to espouse his or her beliefs and values to the 10,000th employee; every single person already in the company will do just that, in every decision they make, big or small.

As with most such things, culture is one of a company’s most powerful assets right until it isn’t: the same underlying assumptions that permit an organization to scale massively constrain the ability of that same organization to change direction. More distressingly, culture prevents organizations from even knowing they need to do so.

Thus my assertion at the top, that the story of how Microsoft came to accept the reality of Windows’ decline is more interesting than the fact of Windows’ decline; this is how CEO Satya Nadella convinced the company to accept the obvious.

The Easy Win: Office on iPad

A month after taking over as CEO, Nadella introduced Office for iPad. Quite obviously, given the timing, the work had been done under Ballmer; some reports suggest the initiative in fact started years previously. Ballmer, though, wouldn’t release it until there was a touch version for Windows 8; some wonder if he would have ever released it at all.

It’s all a bit of a moot point; in the end Ballmer’s delay gave Nadella an easy win that symbolized the exact shift in mindset Microsoft needed: non-Windows platforms would be targets for Microsoft services, not competitors for Windows.

That wasn’t the only news that week: Microsoft also renamed its cloud service from Windows Azure to Microsoft Azure. The name change was an obvious one — by then customers could already run a whole host of non-Windows related software, including Linux — but the symbolism tied in perfectly with the Office on iPad announcement: Windows wouldn’t be forced onto Microsoft’s future.

The Demotion: Nadella’s First Strategy Memo

It was another three months before Nadella wrote his first company-wide strategy memo explicitly departing from his predecessor:

More recently, we have described ourselves as a “devices and services” company. While the devices and services description was helpful in starting our transformation, we now need to hone in on our unique strategy. At our core, Microsoft is the productivity and platform company for the mobile-first and cloud-first world. We will reinvent productivity to empower every person and every organization on the planet to do more and achieve more.

What is striking about this articulation of “productivity and platforms” is that it is exactly how Nadella reorganized the company last week; the “Experiences & Devices” team is focused on end-user productivity, while the “Cloud + AI” team is all about building the platform of the future. The reason it took so long is the point of this article — Nadella had a Windows problem.

To that end, the most important aspect of Nadella’s memo was not what he said about Windows, but where he said it. I wrote in a Daily Update breaking down the memo:

Trust me when I say demoting Windows all the way to this point in the letter is a dramatic shift. Remember, it wasn’t that long ago that Steve Ballmer said “Nothing is More Important at Microsoft than Windows”; Nadella not even mentioning the OS for the first 2,000 words sends a very different message. Similarly, spending nothing more than a sentence on Surface and Nokia — in the entire email, the word “Surface” appears twice and “Nokia” once — makes it as clear as can be that neither is the future.

This was the next step after the initial symbolism of Office on iPad and the Azure name change: actually articulating a future where Windows didn’t matter.

The Retreat: Love Windows

Nadella, though, had a short-term problem: Microsoft’s most important customers — enterprises — hated Windows 8. The operating system may not have been Microsoft’s future, but it was still a massive cash cow, and the linchpin for all of Microsoft’s legacy products. To that end the company needed Windows 10 to get out the door sooner rather than later.

This, I think, is the context for Nadella’s presentation at a January 2015 event about Windows 10; Nadella said:

We absolutely believe that Windows is home for the very best of Microsoft experiences. There’s nothing subtle about this strategy. It’s a practical approach which is customer first. We want to give ourselves the best opportunity to serve our customers everywhere and give ourselves the best chance to help customers find Windows as their home. That’s what we plan to do…We need to move from people needing Windows to choosing Windows to loving Windows…We want to make Windows 10 the most loved release of Windows.

At the time I was very disappointed; suggesting that Microsoft experiences needed to be “best” on Windows suggested that Windows was dictating the direction of Microsoft services. A few months later, though, once Windows 10 shipped, Nadella made clear this was only a temporary retreat.

The Quarantine: Nadella’s First Reorganization

That summer Nadella undertook his first reorganization, separating the company into three divisions: Cloud and Enterprise, Applications and Services, and Windows and Devices. I wrote in a Daily Update:

This explicitly undoes Ballmer’s ill-considered reorganization from a divisional company to an allegedly functional organization. At the time Ballmer wrote:

We are rallying behind a single strategy as one company — not a collection of divisional strategies…

This was exactly wrong: by that point Microsoft had already lost the devices war and needed to focus on services that worked on iOS and Android; a “One Microsoft” strategy kept all of those services subservient to Windows. With this new reorganization, though, Windows is off in the corner where it belongs, leaving the Cloud and Enterprise team and the Applications and Services Group free to focus on building their businesses on top of all platforms.

I believe this reorganization was the turning point: not only were the two teams Nadella announced last week basically formed at this time, but more importantly, Windows was left to fend for itself.

The Inception: The Death of Windows Phone

Nadella’s most impressive bit of jujitsu was how he killed Windows Phone; while the platform had obviously been dead in the water for years, Nadella didn’t imperiously axe the program. Instead, by isolating Windows, he let the division’s leadership come to that conclusion on their own.

Naturally, departing Windows-head Terry Myerson blamed the rest of the company, stating, “When I look back on our journey in mobility, we’ve done hard work and had great ideas, but have not always had the alignment needed across the company to make an impact.” I wrote at the time:

This is such an utterly clueless explanation of why Windows Phone failed that it’s kind of stunning. Until, of course, you remember the culture-induced myopia I described yesterday: Myerson still has the Ballmer-esque presumption that Microsoft controlled its own destiny and could have leveraged its assets (like Office) to win the smartphone market, ignoring that by virtue of being late Windows Phone was a product competing against ecosystems, which meant no consumer demand, which meant no developers, topped off by the arrogance to dictate to OEMs and carriers what they could and could not do to the phone, destroying any chance at leveraging distribution to get critical mass…

Interestingly, though, Myerson’s ridiculous assertion in a roundabout way shows how you change culture…In this case, Nadella effectively shunted Windows to its own division with all of the company’s other non-strategic assets, leaving Myerson and team to come to yesterday’s decision on their own. Remember, Nadella opposed the Nokia acquisition, but instead of simply dropping the axe on day one, thus wasting precious political capital, he let the Windows team give it their best shot and come to that conclusion on their own.

Nadella did the same thing with Windows proper: when Windows 10 launched Myerson claimed that the operating system would be on 1 billion devices by mid-2018; the company had to walk that back a year later, not because Nadella said so, but because the market did.

The Division: The End of Windows

And so we reach last week’s announcements: the Windows division is no more. It is an incredibly meaningful milestone, yet anticlimactic at the same time, thanks to Nadella’s careful management. It is worth noting, though, that Nadella had one critical ally in this journey: Wall Street.

Microsoft’s stock price since Satya Nadella became CEO.

If culture flows from success, then it follows that an attempt to change culture is far easier to accomplish when the most obvious indicator of success — one that has a direct impact on employee pocketbooks — is moving up-and-to-the-right. What is fascinating to consider, though, is that Microsoft’s stock is up not only because the company has a vision that it is delivering on quarter-after-quarter, but also because the stock was depressed in the first place.

To put it another way, Nadella’s shift to a post-Windows Microsoft is the right one; to have done the same a decade sooner would have been better. It also, though, may have been impossible, simply because Windows was still the biggest part of the business, and it’s not clear the markets would have tolerated an explicit shift before it was painfully obvious it was necessary; without a rising stock price, Nadella’s mission would have been much more challenging if not impossible.

The Future: Why Microsoft?

It’s important to note that Windows persisted as the linchpin of Microsoft’s strategy for over three decades for a very good reason: it made everything the company did possible. Windows had the ecosystem and the lock-in, and provided the foundation for Office and Windows Server, both of which were built with the assumption of Windows at the center.

Office 365 and Azure are comparatively weaker strategically: Office 365 has document lock-in, but the exact same forces that weakened Windows in the first place weaken the idea of documents as well. It’s not clear why new companies in particular would even care. Azure, meanwhile, is chasing AWS, with a huge amount of business coming from Linux VMs that could run anywhere.

Unsurprisingly, both are still benefiting from Windows: Office 365 really does, as Nadella noted in his retreat, work better on Windows, and vice versa; it is seamless for organizations that have been using Office for years to move to Office 365. Azure’s biggest advantage, meanwhile, is that it allows for hybrid deployments, where workloads are split between legacy on-premise Windows servers and Azure’s public cloud; that legacy was built on Windows.

This, then, is Nadella’s next challenge: to understand that Windows is not and will not drive future growth is one thing; identifying future drivers of said growth is another. Even in its division Windows remains the best thing Microsoft has going — it had such a powerful hold on Microsoft’s culture precisely because it was so successful.

Stratechery 4.0

Five years ago last Sunday, I launched Stratechery 1.0 with a picture of sailboats:1

A screenshot of Stratechery 1.0

A simple image. Two boats, and a big ocean. Perhaps it’s a race, and one boat is winning — until it isn’t, of course. Rest assured there is breathless coverage of every twist and turn, and skippers are alternately held as heroes and villains, and nothing in between.

Yet there is so much more happening. What are the winds like? What have they been like historically, and can we use that to better understand what will happen next? Is there a major wave just off the horizon that will reshape the race? Are there fundamental qualities in the ships themselves that matter far more than whatever skipper is at hand? Perhaps this image is from the America’s Cup, and the trailing boat is quite content to mirror the leading boat all the way to victory; after all, this is but one leg in a far larger race.

It’s these sorts of questions that I’m particularly keen to answer about technology. There are lots of (great!) sites that cover the day-to-day. And there are some fantastic writers who divine what it all means. But I think there might be a niche for context. What is the historical angle on today’s news? What is happening on the business side? Where is value being created?

Since then I have written 308 Weekly Articles and 659 Daily Updates (and recorded 159 podcasts) answering exactly those questions, and, thankfully, have managed to create some value of my own: in 2014 I launched the Daily Update and have been supported by subscriptions ever since.

For a long time, though, I have wished Stratechery did a better job of providing value not just through daily emails and posts, but to the new user stumbling across the site for the first time, or the long-time reader hoping to find that one post they remember reading. This update is all about those two use cases — and yes, a new logo and visual refresh.

Explore Stratechery

There are now three ways to explore Stratechery:

Concepts: The Concepts page distills Stratechery’s archive into seven categories:

  • Aggregation Theory
  • Disruption Theory
  • Incentives
  • Media
  • Strategy and Product Management
  • Technology and Society
  • The Evolution of Technology

Each category has five or so sub-categories, each with a selection of relevant Stratechery articles from the last five years. This is the best place to start if you are new to Stratechery.2

Companies: The Companies page lets you quickly jump to a specific archive page for every company I have written about on Stratechery (there have been 309 of them!). Full disclosure: this section isn’t completely finished — soon every company will have featured articles that I consider my most important work about the company (right now the top eight by post count do). For now, here is what Apple looks like:

Topics: The Topics page is just like the Companies page, but about, well, topics! Things like earnings, or cryptocurrencies, or Taylor Swift (and Kanye West!). Right now there are 121 topics in Stratechery’s taxonomy.

Search Stratechery

The second major addition to Stratechery is dramatically improved search, powered by Algolia. Better indexing is certainly the most important feature, but there are others:

Autocomplete: The search box in the sidebar will now auto-complete as you type, taking a first crack at getting you the exact article you were looking for. In addition, you can quickly jump to the relevant Concept, Company, or Topic page:

Instant Search: Once on the search page you can get results instantly, helping you quickly iterate on your search terms without waiting for a refresh, all with typo-tolerance and synonym search.

Facets: You can filter search results (or simply all posts) by:

  • Category (Articles, Daily Updates, Podcasts)
  • Company
  • Topic
  • Concept

This should make it far easier to find that post you remember reading way back when.
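Conceptually, faceted filtering just means keeping the records that match every selected value. Here is a minimal in-memory sketch of that idea; the real site uses Algolia, and the records and field names below are purely illustrative:

```python
# Tiny in-memory stand-in for a faceted search index.
posts = [
    {"title": "The End of Windows", "category": "Articles", "company": "Microsoft"},
    {"title": "Zillow, Aggregation, and Integration", "category": "Articles", "company": "Zillow"},
    {"title": "Microsoft Earnings", "category": "Daily Updates", "company": "Microsoft"},
]

def filter_by_facets(records, **facets):
    """Keep only the records that match every requested facet value."""
    return [r for r in records
            if all(r.get(field) == value for field, value in facets.items())]

# Filter to Articles about Microsoft:
print(filter_by_facets(posts, category="Articles", company="Microsoft"))
```

A real search engine layers ranking, typo-tolerance, and per-facet counts on top, but the filtering semantics are the same: selected facets are ANDed together.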

A New Logo and Visual Refresh

I am tempted to say this is the least important feature, but after all of the time I just spent reading through my archives, I know that little things like logos and look-and-feel matter just as much as the words on the page. To that end, I am extremely excited about Stratechery’s new logo:

Designed by Brad Ellis of Tall West, the new mark represents Stratechery’s emphasis on writing, the focus on technology, and, of course, my drawings.3 The Archer typeface is a call-back to Stratechery’s original Courier, and the feeling of a typewriter. I’m extremely excited, and hope you are as well.

In addition, there are now related articles under posts, the all-caps headlines are gone, and the sidebar (drop-down menu on mobile) has been reconfigured. Frankly, I remain very happy with the rest of the site: that is how good of a job Philip Arthur Moore did when he re-built my original version from scratch in 2015; he did an equally fantastic job on 4.0.

New Email

In addition, today I am launching a new template for Stratechery emails. The most apparent change for readers will be a new logo (of course), and a general clean-up of the layout. What is far more important, though, is what readers won’t see. Allow me a quick backstory:

For the first few years of the Daily Update I used a modified MailChimp template; it was functional, and most importantly, it was very easy to prepare and send emails. Unfortunately, those templates didn’t work so well in Gmail on mobile; that’s why I launched a new template last summer. It rendered correctly everywhere, but preparing each email was a laborious process that took as long as 30 minutes a day. Worse, it introduced multiple opportunities to make mistakes.

That is when Yellow Brim pretty much blew my mind. The just-launched company built not just a template, but rather a converter-and-template combo that renders perfect emails with nothing more than the push of a button.

I am honestly staggered at how much better it is going to make my day-to-day life. What was most impressive of all, though, was the way CEO Jacqueline Boltik took the time to deeply understand my workflow and needs, and only then came up with a solution. I can’t wait to see what she and Yellow Brim build next.

Thank You

Normally I would close this post by thanking all of you, my readers, for making this possible. That is absolutely true, and I am more grateful for your support than you can know.

On this occasion, though, I need to save my final and most fervent thanks for Daman Rangoola (warning: excessive amounts of Lakers talk behind that link). Stratechery 4.0 has been months in the making, and Daman has shouldered the biggest load by far, particularly the rich taxonomy applied to those 1,128 pieces of content. As you can tell by looking at the new features above, it is plain fact that without Daman, Stratechery 4.0 would not exist.

For now, though, I hope you enjoy the new site: explore the Concepts, Companies, Topics, and play with Search, and do let me know (via Member forum or email, not Twitter) if you find any bugs. We’ll be fixing things up for the next little bit I’m sure.

Here’s to five more years!

  1. Please, ignore the terrible pronunciation decisions; it’s Struh-TECH-er-ee, as in the industry that I cover [↩︎]
  2. I want to give recognition to Sonya Mann who came up with the original outline for the Stratechery Conceptual Framework nearly two years ago. [↩︎]
  3. A gallery is coming in 4.1 [↩︎]

The Facebook Brand

Last week Reuters reported on the Harris Brand Survey:

Apple Inc and Alphabet Inc’s Google corporate brands dropped in an annual survey while Amazon.com Inc maintained the top spot for the third consecutive year, and electric carmaker Tesla Inc rocketed higher after sending a red Roadster into space.

The headline of the piece was “Apple, Google, see reputation of corporate brands tumble in survey”; one would note that the editors at Reuters apparently disagree with the survey respondents about what brands move the needle. But I digress.

So why are Apple and Google lower?

John Gerzema, CEO of the Harris Poll, told Reuters in an interview that the likely reason Apple and Google fell was that they have not introduced as many attention-grabbing products as they did in past years, such as when Google rolled out free offerings like its Google Docs word processor or Google Maps and Apple’s then-CEO Steve Jobs introduced the iPod, iPhone and iPad.

Ah, no Google Docs updates. Got it!

I’m obviously snarking a bit, and it is worth noting that notoriety clearly plays a role in these survey results (look no further than spot 99, where The Weinstein Company makes its debut in the list). What is indisputable, though, is that brand matters — and that includes the regulatory future for Google and Facebook.

YouTube and Wikipedia

Start with Google, specifically YouTube. From The Verge:

YouTube will add information from Wikipedia to videos about popular conspiracy theories to provide alternative viewpoints on controversial subjects, its CEO said today. YouTube CEO Susan Wojcicki said that these text boxes, which the company is calling “information cues,” would begin appearing on conspiracy-related videos within the next couple of weeks…

The information cues that Wojcicki demonstrated appeared directly below the video as a short block of text, with a link to Wikipedia for more information. Wikipedia — a crowdsourced encyclopedia written by volunteers — is an imperfect source of information, one which most college students are still forbidden from citing in their papers. But it generally provides a more neutral, empirical approach to understanding conspiracies than the more sensationalist videos that appear on YouTube.

Your average college student surely knows that the real trick is to use Wikipedia to find the sources that are actually allowed by college professors: they are helpfully linked at the bottom of every article. Indeed, Wikipedia’s citation policy arguably makes it one of the more reliable sources of information out there, at least in terms of conventional wisdom. Moreover, crowd-sourcing facts, at least in theory, seems like a more scalable solution to the sheer amount of video YouTube has to deal with.

It’s also a very Google-y solution: it makes sense that a company with the motto “Organize the world’s information and make it universally accessible and useful” would, confronted with questionable information, seek to remedy it with more information. Not bothering to tell Wikipedia fits as well; Google treats the web as its fiefdom, and for good reason. Search is built on links, the fabric of the web, and is the entry-point for nearly everyone, leading websites everywhere to do Google’s bidding; excluding oneself from search is like going on a hunger strike while fed by robots — one withers away and no one even notices. Google probably thinks Wikipedia should say “thank you”!

That noted, it’s hard to see this having any meaningful impact: conspiracy theories and fake news generally tend to appeal primarily to people who already want them to be true; it’s hard to see a Wikipedia link making a big difference. And, of course, there are the conspiracy theories that turn out to be true, or, perhaps more commonly, the conventional wisdom that proves to be wrong.

Facebook and Cambridge Analytica

So which is Cambridge Analytica and Facebook? A year ago the New York Times reported that Cambridge Analytica’s impact on the election of Donald Trump as president was overrated:

Cambridge Analytica’s rise has rattled some of President Trump’s critics and privacy advocates, who warn of a blizzard of high-tech, Facebook-optimized propaganda aimed at the American public, controlled by the people behind the alt-right hub Breitbart News. Cambridge is principally owned by the billionaire Robert Mercer, a Trump backer and investor in Breitbart. Stephen K. Bannon, the former Breitbart chairman who is Mr. Trump’s senior White House counselor, served until last summer as vice president of Cambridge’s board.

But a dozen Republican consultants and former Trump campaign aides, along with current and former Cambridge employees, say the company’s ability to exploit personality profiles — “our secret sauce,” Mr. Nix once called it — is exaggerated. Cambridge executives now concede that the company never used psychographics in the Trump campaign. The technology — prominently featured in the firm’s sales materials and in media reports that cast Cambridge as a master of the dark campaign arts — remains unproved, according to former employees and Republicans familiar with the firm’s work.

Over the weekend the New York Times was out with a new story, entitled How Trump Consultants Exploited the Facebook Data of Millions:

[Cambridge Analytica] harvested private information from the Facebook profiles of more than 50 million users without their permission, according to former Cambridge employees, associates and documents, making it one of the largest data leaks in the social network’s history. The breach allowed the company to exploit the private social media activity of a huge swath of the American electorate, developing techniques that underpinned its work on President Trump’s campaign in 2016.

Facebook executives — on Twitter, naturally — took exception to the use of the word “breach”:

Everything was working as intended, thanks to the Graph API.

Facebook versus Google and the Graph API

Facebook introduced what it called the “Open Graph” back in 2010; CEO Mark Zuckerberg led off Facebook’s f8 developer conference thusly:

We think that what we have to show you today will be the most transformative thing we’ve ever done for the web. There are a few key themes that we are going to be talking about today. The first is the Open Graph that we’re all building together. Today, the web exists mostly as a series of unstructured links between pages, and this has been a powerful model, but it’s really just the start. The Open Graph puts people at the center of the web. It means the web can become a set of personally and semantically meaningful connections between people and things. I am FRIENDS with you. I am ATTENDING this event. I LIKE this band. These connections aren’t just happening on Facebook, they’re happening all over the web, and today, with the Open Graph, we’re going to bring all of these together.

The reference to “unstructured links” was clearly about Google, and while it’s easy to think of the two companies as a duopoly astride the web, Facebook was at the time a much smaller entity than it is today: 400 million users, still private, and a tiny advertising business relative to Google.

The challenge from Facebook’s perspective is the one I outlined above: Google got data from everywhere on the web because sites and applications were heavily incentivized to give it to Google so as to have a better chance of reaching end users aggregated by Google:

Sites need Google to reach users, so they give Google all their data

Facebook, meanwhile, was a closed garden. This was an advantage in that users generated Facebook’s content for it, and that said content wasn’t available to Google, but there was no obvious way for Facebook to gather data on the greater web, which is where the Open Graph came in; Facebook would give away slices of its data in exchange for data from sites and apps around the web:

To catch up with Google, Facebook exchanged user data for site data

Zuckerberg said as much in his keynote:

At our first F8, I introduced the concept of the Social Graph. The idea that if you mapped out all of the connections between people and things in the world it would form this massive interconnected graph that just shows how everyone is connected together. Now Facebook is actually only mapping out a part of this graph, mostly the part around people and the relationships that they have. You guys [developers] are mapping out other really important parts of the graph. For example, I know Yelp is here today. Yelp is mapping out the part of the graph that relates to small businesses. Pandora is mapping out the part of the graph that relates to music. And a lot of news sites are mapping out the part of the graph that relates to current events and news content. If we can take these separate maps of the graph and pull them all together, then we can create a web that is more social, personalized, smarter, and semantically aware. That’s what we’re going to focus on today.
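The structure Zuckerberg describes is easy to make concrete: the graph is a set of typed edges between people and things, with different services contributing different edges that are then merged. A toy sketch, with all names invented for illustration:

```python
# Each service maps out its own slice of the graph as (subject, relation, object) edges.
facebook_edges = [("alice", "FRIENDS", "bob")]
yelp_edges = [("alice", "LIKES", "Joe's Diner")]
pandora_edges = [("bob", "LIKES", "Daft Punk")]

# Pulling the separate maps together yields the combined Open Graph.
open_graph = facebook_edges + yelp_edges + pandora_edges

def connections(node, edges):
    """Everything a node is directly connected to, with the relation type."""
    return [(relation, obj) for subject, relation, obj in edges if subject == node]

print(connections("alice", open_graph))
```

The point of the merge is exactly what made the Graph API valuable: once Yelp’s and Pandora’s edges sit next to Facebook’s, queries can cross service boundaries.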

What followed was the introduction of the Graph API, which was the means by which Facebook would facilitate the data exchange, and as you can see on an old Facebook developer page, Facebook was willing to give away just about everything:

Facebook's developer page showing all of the data given to third party apps

Moreover, note that users could give away everything about their friends as well; this is exactly how the researcher implicated in the Cambridge Analytica story leveraged 270,000 survey respondents to gain access to the data of 50 million Facebook users.
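The arithmetic behind that amplification is worth spelling out; a back-of-the-envelope sketch using the figures reported in the Times story:

```python
# Figures from the New York Times report quoted above.
respondents = 270_000          # users who actually installed the survey app
profiles_reached = 50_000_000  # users whose profile data was harvested

# Because each consenting user also exposed their friends, the average
# respondent handed over data on roughly this many people:
friends_per_respondent = profiles_reached / respondents
print(round(friends_per_respondent))  # → 185
```

In other words, consent from roughly half a percent of the affected users was enough to reach all 50 million, which is exactly why the friend-sharing permission was so consequential.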

Facebook finally shut down the friend-sharing functionality five years later, after it was clearly ensconced with Google atop the digital advertising world, of course.

Facebook’s Brand

That Facebook pursued such a strategy is even less of a surprise than Google’s imperious adoption of Wikipedia as conspiracy theory debunker: Facebook’s motto was “Making the world more open and connected”, and the company has repeatedly demonstrated a willingness to do just that, whether users like it or not. That’s the thing with branding: what people think about your company is not so much what you say but what you do, and that many people immediately assume the worst about Facebook and privacy is Facebook’s own fault.

To be sure, there seems to be a partisan angle as well — one didn’t see many complaints about the Obama campaign. From the Washington Post:

Early in 2011, some Obama operatives visited Facebook, where executives were encouraging them to spend some of the campaign’s advertising money with the company. “We started saying, ‘Okay, that’s nice if we just advertise,’ ” Messina said. “But what if we could build a piece of software that tracked all this and allowed you to match your friends on Facebook with our lists, and we said to you, ‘Okay, so-and-so is a friend of yours, we think he’s unregistered, why don’t you go get him to register?’ Or ‘So-and-so is a friend of yours, we think he’s undecided. Why don’t you get him to be decided?’ And we only gave you a discrete number of friends. That turned out to be millions of dollars and a year of our lives. It was incredibly complex to do.”

But this third piece of the puzzle provided the campaign with another treasure trove of information and an organizing tool unlike anything available in the past. It took months and months to solve, but it was a huge breakthrough. If a person signed on to Dashboard through his or her Facebook account, the campaign could, with permission, gain access to that person’s Facebook friends. The Obama team called this “targeted sharing.” It knew from other research that people who pay less attention to politics are more likely to listen to a message from a friend than from someone in the campaign. The team could supply people with information about their friends based on data it had independently gathered. The campaign knew who was and who wasn’t registered to vote. It knew who had a low propensity to vote. It knew who was solid for Obama and who needed more persuasion — and a gentle or not-so-gentle nudge to vote. Instead of asking someone to send a message to all of his or her Facebook friends, the campaign could present a handpicked list of the three or four or five people it believed would most benefit from personal encouragement.

This, though, is hardly a defense for Facebook: what is the company going to say, that it was exporting friend data for everyone, not just Trump? To be sure, buying the data from an academic and allegedly holding onto it violated Facebook’s Terms of Service, but “We have terms of service!” isn’t exactly a powerful branding campaign, especially given that at that same 2010 f8 Facebook had dramatically loosened those terms of service:

We’ve had this policy where you can’t store or cache data for any longer than 24 hours, and we’re going to go ahead and get rid of that policy.

(Cheering)

So now, if a person comes to your site, and a person gives you permission to access their information, you can store it. No more having to make the same API calls day-after-day. No more needing to build different code paths just to handle information that Facebook users are sharing with you. We think that this step is going to make building with Facebook platform a lot simpler.

Indeed it was.

Google, Facebook, and Regulation

Ultimately, the difference between Google’s and Facebook’s approaches to the web — and in the case of the latter, to user data — suggests how the duopolists will ultimately be regulated. Google is already facing significant antitrust challenges in the E.U., which is exactly what you would expect from a company in a dominant position in a value chain able to dictate terms to its suppliers. Facebook, meanwhile, has always seemed more immune to antitrust enforcement: its users are its suppliers, so what is there to regulate?

That, though, is the answer: user data. It seems far more likely that Facebook will be directly regulated than Google; arguably this is already the case in Europe with the GDPR. What is worth noting, though, is that regulations like the GDPR entrench incumbents: protecting users from Facebook will, in all likelihood, lock in Facebook’s competitive position.

This episode is a perfect example: an unintended casualty of this weekend’s firestorm is the idea of data portability. I have argued that social networks like Facebook should make it trivial to export your network; it seems far more likely that most social networks will respond to the Cambridge Analytica scandal by locking down data even further. That may be good for privacy, but it’s not so good for competition. Everything is a trade-off.