Apart from the myriad of new features, I also found the style of presentation refreshing. Instead of a slickly pre-produced show, Google actually streamed live from Mountain View – with all the rough edges that live productions bring. There were (very) brief hiccups and moments of silence here and there, and you could hear the California wind and planes flying overhead in the audio.
Here’s what today’s Google I/O opening keynote was all about:
Of course, Android 12 couldn’t be left out today! One focal point was the design, which is no longer called “Material” but “Material You”. Users will get far more options than before to customize their smartphone’s look, from creating personal color palettes down to the shape of individual buttons and elements.
After Apple’s recent strong showing with iOS 14.5, the second focus was no big surprise: privacy and security. At its heart is the so-called Private Compute Core, which is completely open source and, according to Google, ensures that users’ private data is always protected. Apps’ access to sensors and data will be restricted more tightly in the future, similar to iOS. In addition, Google wants to process more and more data locally on the device, for example in image and voice processing.
The third major focus: connectivity. Notifications and newly taken photos will be easier to synchronize between Android smartphones and Chromebooks in the future. The smartphone is also set to serve as a remote control for more and more devices, controlling televisions or unlocking cars, for example. Specifically, Google named BMW as a partner at I/O, with other manufacturers to follow.
Read more about Android 12 soon in a separate article on NextPit – stay tuned! And if you can’t wait to try out the design features: the Android 12 Beta is available now for various smartphones from twelve different manufacturers! You can find out all about it at the link below:
A bombshell for smartwatches: Google and Samsung are teaming up and will work together on wearables in the future. This will happen on a common platform that merges Tizen and Wear OS. The new “Unified Platform” is supposed to make smartwatches 30 percent faster, improve battery life, and more. We’ll probably find out exactly what that means for the tech platform in the coming hours and days.
In any case, I think the platform is very exciting. Wear OS really had a lot of rough spots, and with all its quirks and workarounds, it felt closer to Android 2.2 than Android 12. The revamped interface now relies more heavily on Tiles and is set to bring features like turn-by-turn navigation in Google Maps and native music downloads, which were only possible via workarounds after the end of Google Play Music.
And of course Fitbit, whose acquisition Google completed earlier this year, is on board. Since 2020, the manufacturer’s fitness trackers and smartwatches have required a pricey subscription to take full advantage of their features.
Still, Wear will hopefully benefit from Fitbit’s fitness features – Google Fit has been flaky on previous Wear OS devices. Fitbit also announced that it will offer smartwatches running Wear in the future.
It’s no secret that Google is working flat out on language models. Nevertheless, there were a few exciting demos to see at Google I/O 2021. The dialogue model LaMDA is supposed to enable natural conversations – and not only with the Google Assistant, but with all kinds of things.
At the keynote, for example, we saw a conversation with Pluto, which was a bit depressed that humans only see it as an inhospitable block of ice at the edge of the solar system that doesn’t even count as a proper planet. It had something of Marvin from The Hitchhiker’s Guide to the Galaxy about it.
A bit more cheerful was the paper airplane, which told of its longest flights and most important characteristics (large wings, stiff paper!), but also of an unpleasant encounter with a puddle.
In the future, LaMDA will be trained not only on text but also on multimedia content, i.e. audio, video, and images, as well as concepts such as weather or locations. A user could then ask a video, for example: “Show me the place where a lion roars in front of the setting sun” and jump directly to the matching scene.
Of course, web search is also set to benefit from LaMDA and become more interactive. In addition, Google wants to improve how it processes information. A key role here is played by a model called MUM, the Multitask Unified Model.
MUM’s special feature is that it processes information independently of medium and language. An example from the presentation: A user takes a photo of their hiking boots and asks, “Can I climb Mt. Fuji with these?” The answer is “Yes”, followed by a packing list for the tour. The sources include not only English-language content but also, for example, Japanese content, which Google translates on the fly.
Another important point in the age of disinformation: in the future, it will be easier to learn more about the sources behind search results. Anyone who clicks a button next to a search result will get background information about the website in question, such as the year it was founded, reviews, and alternative sources. This “Information Credibility Evaluation” feature is scheduled to roll out to English-language search results later this month.
There were also a few small updates for Google Maps. The maps will become more accurate and also show crosswalks, traffic islands, and the like. These “Detailed Street Maps” will be available in 50 cities this year.
Live View navigation will also get virtual street signs, highlight hotels and points of interest, and work indoors (e.g. in airports or train stations). Indoor Live View navigation is set to launch this week in Zurich and this month in Tokyo.
Also new: in the future, you will not only be able to see how busy certain shops are – Google Maps will also show whether entire parts of town are currently bustling. This so-called “Area Busyness” feature is set to roll out in the coming months. Google also wants to offer environmentally friendly and safe navigation routes that are particularly fuel-efficient or avoid dangerous roads.
Analogous to the Knowledge Graph, Google has introduced the Shopping Graph, which ties together many data points – prices, user reviews, and ratings, for example. At the same time, Google is courting more merchants and shops to integrate their offers into the Shopping Graph.
Users will also get more touchpoints in the future: shopping via Google Lens will become easier, and YouTube videos will directly offer the products being discussed for purchase. Last but not least, Chrome remembers open shopping carts across websites and reminds you to complete the purchase, ideally sweetened with coupons and offers.
The obligatory “fun fact” from the presentation: Google Photos currently stores four trillion photos and videos. To resurrect all those shots taken once and never looked at again, Google has come up with another highlight feature. With “Little Patterns”, Google looks for commonalities across a user’s library and tries to tell little stories.
In the presentation, for example, we saw the world journey of an orange backpack that was featured in many photos and videos of a Google employee. Or a family that always hung out on the same couch for years.
What else? With “Cinematic Moments”, Google Photos will turn several photos taken in quick succession into short video clips by interpolating the frames in between. Private photos and videos can also be stored in a hidden folder, safe from prying eyes and other apps.
It was a “whoa, am I old” moment when Sundar Pichai congratulated Google Docs and Sheets on their 15th birthdays. In the next breath, the Alphabet CEO introduced Smart Canvas, which sits somewhere between Microsoft Teams and various other project management tools. There are roadmaps with to-do lists, brainstorming spaces, surveys, and so on.
Finally, Google Meet has received a few updates. Documents can now be shared natively in calls. There is also a way to participate meaningfully in a call with external guests even while sitting together in one room – hopefully without echo and the like. In addition, views can be adjusted more flexibly in the future, so that presenters can also see the faces and reactions of the audience during a presentation if they want to.
- The Google password manager gets better. Passwords can be imported from other password managers and synced better across Chrome, Android and apps. There’s also an alert when a particular account has been compromised.
- In the future, it will be easier to clear history in various apps, such as Google Maps or Google Search.
- Google is revamping its computational photography algorithms to render people of color more faithfully. In the past, the algorithms depicted light-skinned people more accurately; that bias is now set to disappear.
- Google has unveiled a new Tensor Processing Unit that is said to be twice as fast as the previous, third generation. TPUs are specialized computing units for AI tasks.
- Project Starline is a glimpse into the future of video calling. High-resolution cameras and depth cameras record the people on the call and create detailed 3D models. These are then displayed on a 3D light field display, which should probably be quite close to holograms in terms of presentation.
That’s it – Google didn’t show off any hardware at this year’s Google I/O. There was no sign of a Pixel 5a 5G, nor of new Pixel Buds or a Pixel Watch. What do you think of the announcements? Are you looking forward to Android 12 and can’t wait to talk to Pluto? Or did you find the presentation rather lame?
Here you can find the recording of the opening keynote from Google I/O.