My favorite part of keynotes is always the opening. That is the moment when the CEO comes on stage, not to introduce new products or features, but rather to create the frame within which new products and features will be introduced.
This is why last week’s Microsoft keynote was so interesting: CEO Satya Nadella spent a good 30 minutes on the framing, explaining a new world where the platform that mattered was not a distinct device or a particular cloud, but rather one that ran on all of them. In this framing Microsoft, freed from a parochial focus on its own devices, could be exactly that; the problem, as I noted earlier this week, is that platforms come from products, and Microsoft is still searching for an on-ramp other than Windows.
The opening to Google I/O couldn’t have been more different. There was no grand statement of vision, no mind-bending re-framing of how to think about the broader tech ecosystem, just an affirmation of the importance of artificial intelligence — the dominant theme of last year’s I/O — and how it fit in with Google’s original vision. CEO Sundar Pichai said in his prepared remarks:
It’s been a very busy year since last year, no different from my 13 years at Google. That’s because we’ve been focused ever more on our core mission of organizing the world’s information. And we are doing it for everyone, and we approach it by applying deep computer science and technical insights to solve problems at scale. That approach has served us very, very well. This is what has allowed us to scale up seven of our most important products and platforms to over a billion users…It’s a privilege to serve users at this scale, and this is all because of the growth of mobile and smartphones.
But computing is evolving again. We spoke last year about this important shift in computing, from a mobile-first, to an AI-first approach. Mobile made us re-imagine every product we were working on. We had to take into account that the user interaction model had fundamentally changed, with multitouch, location, identity, payments, and so on. Similarly, in an AI-first world, we are rethinking all our products and applying machine learning and AI to solve user problems, and we are doing this across every one of our products.
Honestly, it was kind of boring.
Google’s Go-to-Market Problem
After last year’s I/O I wrote Google’s Go-To-Market Problem, and it remains very relevant. No company benefited more from the open web than Google: the web not only created the need for Google search, but the fact that all web pages were on an equal footing meant that Google could win simply by being the best — and they did.
Mobile has been much more of a challenge: while Android remains a brilliant strategic move, its dominance is rooted more in its business model than in its quality (that’s not to denigrate its quality in the slightest, particularly the fact that Android runs on so many different kinds of devices at so many different price points). The point of Android — and the payoff today — is that Google services are the default on the vast majority of phones.
The problem, of course, is iOS: Apple has the most valuable customers (from a monetization perspective, to be clear), who mostly don’t bother to use different services than the default Apple ones, even if they are, in isolation, inferior. I wrote in that piece:
Yes, it is likely Apple, Facebook, and Amazon are all behind Google when it comes to machine learning and artificial intelligence — hugely so, in many cases — but it is not a fair fight. Google’s competitors, by virtue of owning the customer, need only be good enough, and they will get better. Google has a far higher bar to clear — it is asking users and in some cases their networks to not only change their behavior but willingly introduce more friction into their lives — and its technology will have to be special indeed to replicate the company’s original success as a business.
To that end, I thought there were three product announcements yesterday that suggested Google is on the right track:
Google Assistant was first announced last year, but it was only available through the Allo messenger app, Google’s latest attempt to build a social product; the company also pre-announced Google Home, which would not ship until the fall, alongside the Pixel phone. You could see Google’s thinking with all three products:
- Given that the most important feature of a messaging app is whether or not your friends or family also use it, Google needed a killer feature to get people to even download Allo. Enter Google Assistant.
- Thanks to the company’s bad bet on Nest, Google was behind Amazon in the home. Google Assistant being smarter than Alexa was the best way to catch up.
- A problem for Google with voice computing is that it is not clear what the business model might be; one alternative would be to start monetizing through hardware, and so the high-end Pixel phone was differentiated by Google Assistant.
All three approaches suffered from the same flaw: Google Assistant was the means to a strategic goal, not the end. The problem, though, is that unlike search, Google Assistant was not yet established as something people should jump through hoops to get: driving Google Assistant usage needs to be the goal; only then can it be leveraged for something else.
To that end Google has significantly changed its approach over the last 12 months.
- Google Assistant is now available as its own app, both on Android and iOS. No unwanted messenger app necessary.
- The Google Assistant SDK will allow Google Assistant to be built into just about anything. Scott Huffman, the VP of Google Assistant, said:
We think the assistant should be available on all kinds of devices where people might want to ask for help. The new Google Assistant SDK allows any device manufacturer to easily build the Google Assistant into whatever they’re building: speakers, toys, drink-mixing robots — whatever crazy device all of you think up now can incorporate the Google Assistant. We’re working with many of the world’s best consumer brands and their suppliers, so keep an eye out for the badge that says “Google Assistant Built-in” when you do your holiday shopping this year.
This is the exact right approach for a services company.
- That leads to the Pixel phone: earlier this year Google finally added Google Assistant to Android broadly — built-in, not an app — after having insisted just a few months earlier it was a separate product. The shifting strategy was a big mistake (as, arguably, is the entire program), but at least Google has ended up where they should be: everywhere.
Google Assistant has a long way to go, but there is a clear picture of what success will look like: Google Photos. Pichai bragged that Photos, launched only two years ago, now has over 500 million active users who upload 1.2 billion photos a day. This is a spectacular number for one very simple reason: Google Photos is not the default photo app for Android or iOS. Rather, Google has earned all of those photos simply by being better than the defaults, and the basis of that superiority is Google’s machine learning.
Moreover, much like search, Photos gets better the more data it gets, creating a virtuous cycle: more photos means more data which means a better experience which means more users which means more photos. It is already hard to see other photo applications catching up.
Yesterday Google continued to push forward, introducing suggested sharing, shared libraries, and photo books. All utilize vision recognition (for example, you can choose to automatically share pictures of your kids with your significant other) and all make Photos an even better app, which will lead to new users, which will lead to more data.
What is particularly exciting from Google’s perspective is that these updates add a social component: suggested sharing, for example, is self-contained within Google Photos, creating ad hoc private networks with you and your friends. Not only does this help spread Google Photos, it is also a much more viable and sustainable approach to social networking than something like Google Plus. Complex entities like social networks are created through evolution, not top-down design, and they must rely on their creator’s strengths, not weaknesses.
Google Lens was announced as a feature of Google Assistant and Google Photos. From Pichai:
We are clearly at an inflection point with vision, and so today, we are announcing a new initiative called Google Lens. Google Lens is a set of vision-based computing capabilities that can understand what you’re looking at and help you take action based on that information. We’ll ship it first in Google Assistant and Photos, and then other products.
How does it work? If you run into something and you want to know what it is, say a flower, you can invoke Google Lens, point your phone at it and we can tell you what flower it is…Or if you’re walking on a street downtown and you see a set of restaurants across you, you can point your phone, because we know where you are, and we have our Knowledge Graph, and we know what you’re looking at, we can give you the right information in a meaningful way.
As you can see, we are beginning to understand images and videos. All of Google was built because we started understanding text and web pages, so the fact that computers can understand images and videos has profound implications for our core mission.
The profundity cannot be overstated: by bringing the power of search into the physical world, Google is effectively increasing the addressable market of searchable data by a massive amount, and all of that data gets added back into that virtuous cycle. The potential upside is about more than data though: being the point of interaction with the physical world opens the door to many more applications, from things like QR codes to payments.
My one concern is that Google is repeating its previous mistake: that is, seeking to use a new product as a means instead of an end. Limiting Google Lens to Google Assistant and Google Photos risks handicapping Lens’ growth; ideally Lens will be its own app — and thus the foundation for other applications — sooner rather than later.
Make no mistake, none of these opportunities are directly analogous to Google search, particularly the openness of their respective markets or the path to monetization. Google Assistant requires you to open an app instead of using what is built-in (although the Android situation should improve going forward), Photos requires a download instead of the default photos app, and Lens sits on top of both. It’s a far cry from simply setting Google as the home page of your browser, with Google making more money the more people used the Internet.
All three apps, though, are leaning into Google’s strengths:
- Google Assistant is focused on being available everywhere
- Google Photos is winning by being better through superior data and machine learning
- Google Lens is expanding Google’s utility into the physical world
There were other examples too: Google’s focus with VR is building a cross-device platform that delivers an immersive experience at multiple price points, as opposed to Facebook’s integrated high-end approach that makes zero sense for a social network. And, just as Apple invests in chips to make its consumer products better, Google is investing in chips to make its machine learning better.
The Beauty of Boring
This is the culmination of a shift that happened two years ago, at the 2015 Google I/O. As I noted at the time, the event was two keynotes in one.
[The first hour was] a veritable smorgasbord of features and programs that [lacked a] unifying vision, just a sense that Google should do them. An operating system for the home? Sure! An Internet of Things language? Bring it on! Android Wear? We have apps! Android Pay? Obviously! A vision for Android? Not necessary!
In short, Google seemed to be doing these things simply because it was a big company that ought to do big things.
What was so surprising, though, was that the second hour of that keynote was completely different. Pichai gave a lengthy, detailed presentation about machine learning and neural nets, and tied it to Google’s mission, much like he did in yesterday’s introduction. After quoting Pichai’s monologue I wrote:
Note the specificity — it may seem too much for a keynote, but it is absolutely not BS. And no surprise: everything Pichai is talking about is exactly what Google was created to do…The next 30 minutes were awesome: Google Now, particularly Now on Tap, was exceptionally impressive, and Google Photos looks amazing. And, I might add, it has a killer tagline: Gmail for Photos. It’s so easy to be clear when you’re doing exactly what you were meant to do, and what you are the best in the world at.
This is why I think that Pichai’s “boring” opening was a great thing. No, there wasn’t the belligerence of early Google I/Os, insisting that Android could take on the iPhone. And no, there wasn’t the grand vision of Nadella last week, or the excitement of an Apple product unveiling. What there was was a sense of certainty and almost comfort: Google is about organizing the world’s information, and given that Pichai believes the future is about artificial intelligence, specifically the machine learning variant that runs on data, that means that Google will succeed in this new world simply by being itself.
That is the best place to be, for a person and for a company.